Samsung Networks and O2 Telefonica have announced the launch of Germany’s first commercial site to use Open RAN and vRAN technologies, which the two companies are using to make O2 Telefonica’s 4G and 5G networks more reliable. They have been testing the technologies together since October 2023.
Samsung’s Open RAN and vRAN technologies power O2 Telefonica’s faster and more reliable 4G and 5G networks
The Open RAN and vRAN technologies have been deployed in Landsberg am Lech, Bavaria, marking the first use of Samsung Networks’ Open RAN and vRAN technologies in Germany. Starting from this site, the two companies will upgrade more cellular towers across the country. The site went online just three months after Samsung Networks shipped the required hardware to O2 Telefonica, including the 4G and 5G vRAN 3.0 solution, O-RAN-compliant radios supporting low and mid bands (700MHz, 800MHz, 1.8GHz, 2.1GHz, 2.6GHz and 3.6GHz), and 64T64R Massive MIMO radios.
Open RAN improves the flexibility of a cellular network operator’s site by allowing the operator to mix hardware and software from different vendors. vRAN brings a cloud-native architecture that lets operators apply automation more effectively and introduce new services and technologies more quickly and efficiently. This helps them accelerate network buildouts and adopt new 5G applications; for example, an operator can create a dedicated instance for an AR/VR application that needs low response times.
The two companies will now use Samsung’s network automation solutions to manage the life cycle of their networks, from deployment to operation and maintenance.
Junehee Lee, EVP and Head of Global Sales & Marketing at Samsung Networks, said, “Samsung is setting new standards for excellence in the telecommunications industry with our innovative vRAN and Open RAN capabilities. Celebrating Telefónica’s 100th anniversary, we are proud to be the key partner for O2 Telefónica on their trailblazing journey to usher in a new era of connectivity in Germany.”
For Dritjon Gruda, artificial-intelligence chatbots have been a huge help in scientific writing and peer review. Credit: Vladimira Stavreva-Gruda
Confession time: I use generative artificial intelligence (AI). Despite the debate over whether chatbots are positive or negative forces in academia, I use these tools almost daily to refine the phrasing in papers that I’ve written, and to seek an alternative assessment of work I’ve been asked to evaluate, as either a reviewer or an editor. AI even helped me to refine this article.
I study personality and leadership at Católica Porto Business School in Portugal and am an associate editor at Personality and Individual Differences and Psychology of Leaders and Leadership. The value that I derive from generative AI is not from the technology itself blindly churning out text, but from engaging with the tool and using my own expertise to refine what it produces. The dialogue between me and the chatbot both enhances the coherence of my work and, over time, teaches me how to describe complex topics in a simpler way.
Whether you’re using AI in writing, editing or peer review, here’s how it can do the same for you.
Polishing academic writing
Ever heard the property mantra, ‘location, location, location’? In the world of generative AI, it’s ‘context, context, context’.
Context is king. You can’t expect generative AI — or anything or anyone, for that matter — to provide a meaningful response to a question without it. When you’re using a chatbot to refine a section of your paper for clarity, start by outlining the context. What is your paper about, and what is your main argument? Jot down your ideas in any format — even bullet points will work. Then, present this information to the generative AI of your choice. I typically use ChatGPT, made by OpenAI in San Francisco, California, but for tasks that demand a deep understanding of language nuances, such as analysing search queries or text, I find Gemini, developed by researchers at Google, to be particularly effective. The open-source large language models, such as Mixtral, made by Mistral AI, based in Paris, are ideal when you’re working offline but still need assistance from a chatbot.
Regardless of which generative-AI tool you choose, the key to success lies in providing precise instructions. The clearer you are, the better. For example, you might write: “I’m writing a paper on [topic] for a leading [discipline] academic journal. What I tried to say in the following section is [specific point]. Please rephrase it for clarity, coherence and conciseness, ensuring each paragraph flows into the next. Remove jargon. Use a professional tone.” You can use the same technique again later on, to clarify your responses to reviewer comments.
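If you prefer to run this step through a script rather than a chat window, the same template works over an API. Here is a minimal sketch, assuming the openai Python package with an API key set in your environment; the model name and the placeholder values are illustrative stand-ins, not details from this article.

```python
# Minimal sketch: send the rephrasing template above to a chat model.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "personality and leadership"          # hypothetical example
discipline = "management"                     # hypothetical example
specific_point = "trait stability over time"  # hypothetical example
draft_section = "..."                         # paste the section you want polished

prompt = (
    f"I'm writing a paper on {topic} for a leading {discipline} academic journal. "
    f"What I tried to say in the following section is {specific_point}. "
    "Please rephrase it for clarity, coherence and conciseness, ensuring each "
    "paragraph flows into the next. Remove jargon. Use a professional tone.\n\n"
    f"{draft_section}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

As in the chat interface, you would read the output critically and iterate on the prompt rather than accept the first answer.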
Remember, the chatbot’s first reply might not be perfect — it’s a collaborative and iterative process. You might need to refine your instructions or add more information, much as you would when discussing a concept with a colleague. It’s the interaction that improves the results. If something doesn’t quite hit the mark, don’t hesitate to say, “This isn’t quite what I meant. Let’s adjust this part.” Or you can commend its improvements: “This is much clearer, but let’s tweak the ending for a stronger transition to the next section.”
This approach can transform a challenging task into a manageable one, filling the page with insights you might not have fully gleaned on your own. It’s like having a conversation that opens new perspectives, making generative AI a collaborative partner in the creative process of developing and refining ideas. But importantly, you are using the AI as a sounding board: it is not writing your document for you; nor is it reviewing manuscripts.
Elevating peer review
Generative AI can be a valuable tool in the peer-review process. After thoroughly reading a manuscript, summarize key points and areas for review. Then, use the AI to help organize and articulate your feedback (without directly inputting or uploading the manuscript’s text, thus avoiding privacy concerns). For example, you might instruct the AI: “Assume you’re an expert and seasoned scholar with 20+ years of academic experience in [field]. On the basis of my summary of a paper in [field], where the main focus is on [general topic], provide a detailed review of this paper, in the following order: 1) briefly discuss its core content; 2) identify its limitations; and 3) explain the significance of each limitation in order of importance. Maintain a concise and professional tone throughout.”
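The same scripted approach works here too; below is a sketch under the same assumptions as before (the openai package, an API key, and hypothetical placeholder values). Note that only your own summary is sent, never the manuscript itself.

```python
# Sketch: ask a chat model to structure review feedback from your own notes.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

field = "organizational psychology"                     # hypothetical example
general_topic = "leader personality and team outcomes"  # hypothetical example
my_summary = "..."  # your own summary and notes -- not the manuscript text

review_prompt = (
    f"Assume you're an expert and seasoned scholar with 20+ years of academic "
    f"experience in {field}. On the basis of my summary of a paper in {field}, "
    f"where the main focus is on {general_topic}, provide a detailed review of "
    "this paper, in the following order: 1) briefly discuss its core content; "
    "2) identify its limitations; and 3) explain the significance of each "
    "limitation in order of importance. Maintain a concise and professional "
    f"tone throughout.\n\nMy summary: {my_summary}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": review_prompt}],
)
print(response.choices[0].message.content)
```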
I’ve found that AI partnerships can be incredibly enriching; the tools often offer perspectives I hadn’t considered. For instance, ChatGPT excels at explaining and justifying the reasons behind specific limitations that I had identified in my review, which helps me to grasp the broader implications of the study’s contribution. If I identify methodological limitations, ChatGPT can elaborate on these in detail and suggest ways to overcome them in a revision. This feedback often helps me to connect the dots between the limitations and their collective impact on the paper’s overall contribution. Occasionally, however, its suggestions are off-base, far-fetched, irrelevant or simply wrong. And that is why the final responsibility for the review always remains with you. A reviewer must be able to distinguish between what is factual and what is not, and no chatbot can reliably do that.
Optimizing editorial feedback
The final area in which I benefit from using chatbots is in my role as a journal editor. Providing constructive editorial feedback to authors can be challenging, especially when you oversee several manuscripts every week. Having personally received countless pieces of unhelpful, non-specific feedback — such as, “After careful consideration, we have decided not to proceed with your manuscript” — I recognize the importance of clear and constructive communication. ChatGPT has become indispensable in this process, helping me to craft precise, empathetic and actionable feedback without replacing human editorial decisions.
For instance, after evaluating a paper and noting its pros and cons, I might feed these into ChatGPT and get it to draft a suitable letter: “On the basis of these notes, draft a letter to the author. Highlight the manuscript’s key issues and clearly explain why the manuscript, despite its interesting topic, might not provide a substantial enough advancement to merit publication. Avoid jargon. Be direct. Maintain a professional and respectful tone throughout.” Again, it might take a few iterations to get the tone and content just right.
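For completeness, here is the same pattern applied to the editorial letter, under the same assumptions as the earlier sketches:

```python
# Sketch: turn editorial notes into a draft decision letter.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

pros_and_cons = "..."  # your own evaluation notes on the manuscript

letter_prompt = (
    "On the basis of these notes, draft a letter to the author. Highlight the "
    "manuscript's key issues and clearly explain why the manuscript, despite "
    "its interesting topic, might not provide a substantial enough advancement "
    "to merit publication. Avoid jargon. Be direct. Maintain a professional "
    f"and respectful tone throughout.\n\nNotes: {pros_and_cons}"
)

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": letter_prompt}],
)
print(draft.choices[0].message.content)  # edit before sending -- the decision stays yours
```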
I’ve found that this approach both enhances the quality of my feedback and helps to guarantee that I convey my thoughts supportively. The result is a more positive and productive dialogue between editors and authors.
There is no doubt that generative AI presents challenges to the scientific community. But it can also enhance the quality of our work. These tools can bolster our capabilities in writing, reviewing and editing. They preserve the essence of scientific inquiry — curiosity, critical thinking and innovation — while improving how we communicate our research.
Considering the benefits, what are you waiting for?
In a fascinating adoption of technology, a surgical team in the UK recently used Apple’s Vision Pro to help with a medical procedure.
It wasn’t a surgeon who donned the headset, but Suvi Verho, the lead scrub nurse (also known as a theater nurse) at the Cromwell Hospital in London. Scrub nurses help surgeons by providing them with all the equipment and support they need to complete an operation – in this case, it was a spinal surgery.
Verho told The Daily Mail that the Vision Pro used an app made by software developer eXeX to float “superimposed virtual screens in front of [her displaying] vital information”. The report adds that the mixed reality headset was used to help her prepare, keep track of the surgery, and choose which tools to hand to the surgeon. There’s even a photograph of the operation itself in the publication.
Verho sounds like a big fan of the Vision Pro, stating, perhaps somewhat hyperbolically, “It eliminates human error… [and] guesswork”. Even so, anything that ensures operations go as smoothly as possible is A-OK in our book.
Syed Aftab, the surgeon who led the procedure, also had several words of praise. He had never worked with Verho before. However, he said the headset turned an unfamiliar scrub nurse “into someone with ten years’ experience” working alongside him.
Mixed reality support
eXeX specializes in upgrading hospitals by implementing mixed reality, and this isn’t the first time one of its products has been used in an operating room. Last month, American surgeon Dr. Robert Masson used the Vision Pro with eXeX’s app to help him perform a spinal procedure. Again, it doesn’t appear that he physically wore the headset, although his assistants did. They used the device to follow procedural guides from inside a sterile environment, something that was previously deemed “impossible.”
Dr. Masson had his own words of praise, stating that the combination of the Vision Pro and the eXeX tool enabled an “undistracted workflow” for his team. The report doesn’t name the exact software, but the company’s website suggests that both Dr. Masson’s team and Nurse Verho used ExperienceX, a mixed-reality app that gives technicians “a touch-free heads-up display”.
Apple’s future in medicine
The Vision Pro’s future in medicine won’t be limited to spinal surgeries. In a recent blog post, Apple highlighted several other medical apps harnessing visionOS. Medical corporation Stryker created myMako to help doctors plan their patients’ joint replacement surgeries. For medical students, Cinematic Reality by Siemens Healthineers offers “interactive holograms of the human body”.
These apps and more are available for download from the App Store, although some of the software requires a connection to the developer’s platform to work. You can download them if you want to, but keep in mind they’re aimed primarily at medical professionals.
If you’re looking for a headset with a wider range of usability, check out TechRadar’s list of the best VR headsets for 2024.
If you own an Apple Watch, there is a good chance that you have more than one Apple Watch band. Some people have many bands and change them as often as they change their outfits, and this is where the Twelve South TimePorter comes in.
The Twelve South TimePorter is designed to be the ultimate accessory for those of us who have a collection of Apple Watch bands and straps. It helps you organize them neatly, and each TimePorter can hold and display up to six Apple Watch bands.
The TimePorter comes in a sleek white finish and has been designed to complement the interior of any space, whether as a focal point in a bedroom or neatly placed within a wardrobe or walk-in closet.
Moreover, setting up the TimePorter is a breeze. The provided 3M Command Strips allow for easy attachment to walls, eliminating the need for drills or nails. This feature is particularly beneficial for those in rental properties, ensuring the option to reposition without causing damage or leaving behind any unsightly blemishes.
It’s worth noting that the TimePorter is designed to be versatile. It pairs seamlessly with any Apple Watch strap and is fully compatible with all Apple Watch versions, including the latest Series 9 and Ultra 2 models.
The new Twelve South TimePorter will go on sale in the UK at the end of October for £29.99. You can find out more details about this new accessory for organizing your Apple Watch band collection at the link below.
Source: Twelve South
If you have noticed that your locally installed LLM slows down when you include larger prompts, you may be interested in StreamingLLM, a new solution for improving the speed and performance of large language models. It extends Llama 2 and Falcon to up to 4 million tokens and provides inference up to 22 times faster than a standard sliding-window baseline.
Check out the video below, created by AI Jason, who explains more about StreamingLLM and how it can be used to improve the performance of locally installed AI models. It explores these challenges and potential solutions, focusing on a new research project that aims to increase the data input capacity and efficiency of LLMs.
One of the primary challenges in deploying LLMs in streaming applications is the extensive memory consumption during the decoding stage. This is due to the caching of Key and Value states (KV) of previous tokens. This issue is further compounded by the fact that popular LLMs, such as Llama-2, MPT, Falcon, and Pythia, cannot generalize to longer texts than the training sequence length. This limitation is primarily due to GPU memory constraints and the computational time required by the complex Transformer architecture used in these models.
A common solution to manage large data inputs is the use of Window attention. This approach involves caching only the most recent KVs, effectively limiting the amount of data that needs to be stored. However, this method has a significant drawback: it loses context about the removed tokens. When the text length surpasses the cache size, the performance of window attention deteriorates, leading to a loss of context and a decrease in the quality of the generated content.
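To make that trade-off concrete, here is a toy Python sketch of a fixed-size window cache. It illustrates the eviction behaviour only; it is not the implementation used in any real inference engine.

```python
# Toy illustration of window attention: a fixed-size KV cache that
# evicts the oldest tokens, losing their context entirely.
from collections import deque

class WindowKVCache:
    def __init__(self, window_size: int):
        self.window_size = window_size
        self.cache = deque()  # one (key, value) pair per token

    def append(self, key, value):
        self.cache.append((key, value))
        if len(self.cache) > self.window_size:
            self.cache.popleft()  # oldest token's KV is dropped for good

cache = WindowKVCache(window_size=4)
for token_id in range(10):
    cache.append(f"k{token_id}", f"v{token_id}")
print([k for k, _ in cache.cache])  # ['k6', 'k7', 'k8', 'k9'] -- tokens 0-5 forgotten
```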
StreamingLLM helps improve the speed of your LLMs
This problem led researchers to observe an interesting phenomenon known as attention sink. They found that the model pays more attention to initial tokens than later ones, even if the initial tokens are not semantically important. This phenomenon, they discovered, could be leveraged to largely recover the performance of window attention.
Based on this analysis, the researchers introduced StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence length without any fine-tuning. This approach uses a combination of the first few tokens that have attention sink and a rolling cache of the latest tokens. This allows the LLM to maintain context about what has been discussed before, as well as recent conversation, effectively extending the effective context window.
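Here is a toy sketch of that eviction policy, following the paper’s description rather than its actual code; the sink and window sizes below are arbitrary illustrative values.

```python
# Toy sketch of StreamingLLM-style caching: pin the first few "attention
# sink" tokens permanently and keep a rolling window of recent tokens.
from collections import deque

class StreamingKVCache:
    def __init__(self, num_sink_tokens: int, window_size: int):
        self.num_sink_tokens = num_sink_tokens
        self.window_size = window_size
        self.sinks = []        # initial tokens, never evicted
        self.window = deque()  # rolling cache of the most recent tokens

    def append(self, key, value):
        if len(self.sinks) < self.num_sink_tokens:
            self.sinks.append((key, value))  # keep the first tokens forever
        else:
            self.window.append((key, value))
            if len(self.window) > self.window_size:
                self.window.popleft()  # only middle tokens are evicted

    def kv_for_attention(self):
        return self.sinks + list(self.window)  # sinks plus recent context

cache = StreamingKVCache(num_sink_tokens=4, window_size=4)
for token_id in range(12):
    cache.append(f"k{token_id}", f"v{token_id}")
print([k for k, _ in cache.kv_for_attention()])
# ['k0', 'k1', 'k2', 'k3', 'k8', 'k9', 'k10', 'k11'] -- start and recent kept
```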
The StreamingLLM approach has shown promising results, enabling LLMs to perform stable and efficient language modeling with up to 4 million tokens and more. In streaming settings, it outperforms the sliding-window recomputation baseline with up to a 22.2x speedup. This makes it particularly useful for applications such as long-form content generation and chatbots with long-term memory.
However, it’s important to note that StreamingLLM is not without its limitations. While it does maintain context about the beginning and end of a conversation, it still loses detailed context in the middle. This means it may not work well for summarizing large amounts of data, such as research papers.
The introduction of StreamingLLM and the concept of attention sink represent significant strides in overcoming the challenges of feeding unlimited data to LLMs. However, they are just one solution to the context limit problem. As the field of artificial intelligence continues to evolve, it’s likely that more creative concepts will emerge to further enhance the capacity and efficiency of LLMs.