
Apple is forging a path towards more ethical generative AI – something sorely needed in today’s AI-powered world


Copyright is something of a minefield right now when it comes to AI, and a new report claims that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only models to have been trained both legally and ethically. The report says Apple is trying to uphold privacy and legality standards by adopting innovative training methods.

Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues, which comes up again and again, is that many companies train their LLMs on copyrighted works, typically without disclosing whether they have licensed that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works.


Enabot Ebo SE pet robot review: the catsitter I didn’t know I needed but can’t live without


Two-minute review

Being a cat owner is a joy like no other, but I miss my cat so, so much when I’m away even just for a day. That’s why the Enabot Ebo SE pet robot is a must-have in my cat-crazy household. This small, sweet robot doesn’t have an adorable little ‘face’ like the Enabot Ebo X, but it operates similarly, offering features like mobile phone compatibility and the ability to take photos and videos.

It’s also not as stuffed with features as the Enabot Ebo X, which has built-in Alexa smart home functions and a 4K UHD camera. However, if you’re looking for a simple and much cheaper robot, the Enabot Ebo SE reigns supreme. This little orb is simple to set up right out of the box and is managed entirely through the app.


AI now beats humans at basic tasks — new benchmarks are needed, says major report


Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report (see ‘Speedy advances’). Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.

These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.

In particular, the report says, new ways of assessing AI systems — for example, evaluating their performance on complex tasks such as abstraction and reasoning — are increasingly necessary. “A decade ago, benchmarks would serve the community for 5–10 years” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”

Speedy advances: line chart showing the performance of AI systems on certain benchmark tests, relative to human performance, since 2012. Source: Artificial Intelligence Index Report 2024.

Stanford’s annual AI Index, first published in 2017, is compiled by a group of academic and industry specialists to assess the field’s technical capabilities, costs, ethics and more — with an eye towards informing researchers, policymakers and the public. This year’s report, which is more than 400 pages long and was copy-edited and tightened with the aid of AI tools, notes that AI-related regulation in the United States is sharply rising. But the lack of standardized assessments for responsible use of AI makes it difficult to compare systems in terms of the risks that they pose.

The rising use of AI in science is also highlighted in this year’s edition: for the first time, it dedicates an entire chapter to science applications, highlighting projects including Graph Networks for Materials Exploration (GNoME), a project from Google DeepMind that aims to help chemists discover materials, and GraphCast, another DeepMind tool, which does rapid weather forecasting.

Growing up

The current AI boom — built on neural networks and machine-learning algorithms — dates back to the early 2010s. The field has since rapidly expanded. For example, the number of AI coding projects on GitHub, a common platform for sharing code, increased from about 800 in 2011 to 1.8 million last year. And journal publications about AI roughly tripled over this period, the report says.

Much of the cutting-edge work on AI is being done in industry: that sector produced 51 notable machine-learning systems last year, whereas academic researchers contributed 15. “Academic work is shifting to analysing the models coming out of companies — doing a deeper dive into their weaknesses,” says Raymond Mooney, director of the AI Lab at the University of Texas at Austin, who wasn’t involved in the report.

That includes developing tougher tests to assess the visual, mathematical and even moral-reasoning capabilities of large language models (LLMs), which power chatbots. One of the latest tests is the Graduate-Level Google-Proof Q&A Benchmark (GPQA)1, developed last year by a team including machine-learning researcher David Rein at New York University.

The GPQA, consisting of more than 400 multiple-choice questions, is tough: PhD-level scholars could correctly answer questions in their field 65% of the time. The same scholars, when attempting to answer questions outside their field, scored only 34%, despite having access to the Internet during the test (randomly selecting answers would yield a score of 25%). As of last year, AI systems scored about 30–40%. This year, Rein says, Claude 3 — the latest chatbot released by AI company Anthropic, based in San Francisco, California — scored about 60%. “The rate of progress is pretty shocking to a lot of people, me included,” Rein adds. “It’s quite difficult to make a benchmark that survives for more than a few years.”

Cost of business

As performance is skyrocketing, so are costs. GPT-4 — the LLM that powers ChatGPT and that was released in March 2023 by San Francisco-based firm OpenAI — reportedly cost US$78 million to train. Google’s chatbot Gemini Ultra, launched in December, cost $191 million. Many people are concerned about the energy use of these systems, as well as the amount of water needed to cool the data centres that help to run them2. “These systems are impressive, but they’re also very inefficient,” Maslej says.

Costs and energy use for AI models are high in large part because one of the main ways to make current systems better is to make them bigger. This means training them on ever-larger stocks of text and images. The AI Index notes that some researchers now worry about running out of training data. Last year, according to the report, the non-profit research institute Epoch projected that we might exhaust supplies of high-quality language data as soon as this year. (However, the institute’s most recent analysis suggests that 2028 is a better estimate.)

Ethical concerns about how AI is built and used are also mounting. “People are way more nervous about AI than ever before, both in the United States and across the globe,” says Maslej, who sees signs of a growing international divide. “There are now some countries very excited about AI, and others that are very pessimistic.”

In the United States, the report notes a steep rise in regulatory interest. In 2016, there was just one US regulation that mentioned AI; last year, there were 25. “After 2022, there’s a massive spike in the number of AI-related bills that have been proposed” by policymakers, Maslej says.

Regulatory action is increasingly focused on promoting responsible AI use. Although benchmarks are emerging that can score metrics such as an AI tool’s truthfulness, bias and even likability, not everyone is using the same models, Maslej says, which makes cross-comparisons hard. “This is a really important topic,” he says. “We need to bring the community together on this.”


Apple didn’t give us the iPad update we wanted, it gave us what we needed instead


Go ahead and make fun of the Apple iPad on your favorite social network, I dare you. You will be swarmed by iPad fans, defending their favorite tablet to the death, which always seems to be just over the horizon for the tablet market. We got no new iPads in 2023, making it one of the hardest years ever for iPad fanatics, but I say fear not! The iPad is healthy, and I see a brighter future than ever for Apple’s tablet.

Is the iPad really healthy? Well, according to Canalys, iPad sales declined year-on-year by quite a bit, as much as 24%. That still left Apple in a distant first place among tablet makers. Samsung’s sales declined only 11%, but it still shipped fewer than half as many tablets as Apple, according to Canalys estimates.

The Samsung Galaxy Tab S9 Ultra is incredibly capable (Image credit: Future / Philip Berne)

That’s gotta be tough news for Samsung. The latest Galaxy Tab S9 series, including the more affordable Galaxy Tab S9 FE, includes some of Samsung’s best tablets ever. The entire lineup is IP68 water resistant, a first for tablets that aren’t sold as rugged business devices. Each comes with an S Pen, a better stylus than the Apple Pencil, a $79 / £79 / AU$139 implement that doesn’t even work with every iPad.

The iPad didn’t need an update to stay up-to-date


The beauty of what science can do when urgently needed


Cultivarium chief scientific officer Nili Ostrov works to make model organisms more useful and accessible for scientific research. Credit: Donis Perkins

Nili Ostrov has always been passionate about finding ways to use biology for practical purposes. So perhaps it wasn’t surprising that, when the COVID-19 pandemic hit during her postdoctoral studies, she went in the opposite direction from most people, moving to New York City to work as the director of molecular diagnostics in the Pandemic Response Lab, providing COVID-19 tests and monitoring viral variants. She was inspired by seeing what scientists could accomplish, and how much they could help, when under pressure.

Now the chief scientific officer at Cultivarium in Watertown, Massachusetts, Ostrov is bringing that sense of urgency to fundamental problems in synthetic biology. Cultivarium is a non-profit focused research organization, a structure that comes with a finite amount of time and funding to pursue ‘moonshot’ scientific goals, which would usually be difficult for academic laboratories or start-up companies to achieve. Cultivarium has five years of funding, which started in 2022, to develop tools to make it possible for scientists to genetically engineer unconventional model organisms — a group that includes most microbes.

Typically, scientists are limited to working with yeast, the bacterium Escherichia coli and other common lab organisms, because the necessary conditions to grow and manipulate them are well understood. Ostrov wants to make it easier to engineer other microbes, such as soil bacteria or microorganisms that live in extreme conditions, for scientific purposes. This could open up new possibilities for biomanufacturing drugs or transportation fuels and solving environmental problems.

What is synthetic biology and what drew you to it?

Synthetic biology melds biology and engineering — it is the level at which you say, “I know how this part works. What can I do with it?” Synthetic biologists ask questions such as, what is this part useful for? How can it benefit people or the environment in some way?

During my PhD programme at Columbia University in New York City, my team worked with the yeast that is used for brewing beer — but we asked, can you use these yeast cells as sensors? Because yeast cells can sense their environment, we could engineer them to detect a pathogen in a water sample. In my postdoctoral work at Harvard University in Cambridge, Massachusetts, we investigated a marine bacterium, Vibrio natriegens. A lot of time during research is spent waiting for cells to grow. V. natriegens doubles in number about every ten minutes — the fastest growth rate of any organism. Could we use it to speed up research?

But using V. natriegens and other uncommon research organisms is hard work. You have to develop the right genetic-engineering tools.

How did the COVID-19 pandemic alter your career trajectory?

It pushed me to do something that I otherwise would not have done. During my postdoctoral programme, I met Jef Boeke, a synthetic biologist at New York University. In 2020, he asked me whether I wanted to help with the city’s Pandemic Response Lab, because of my expertise in DNA technology. I’m probably one of the only people with a newborn baby who moved into Manhattan when COVID-19 hit.

That was an amazing experience: I took my science and skills and used them for something essential and urgent. In a couple of months, we set up a lab that supported the city’s health system. We monitored for new variants of the virus using genomic sequencing and ran diagnostic tests.

Seeing what science can do when needed — it was beautiful. It showed me how effective science can be, and how fast science can move with the right set-up.

How did that influence what you’re doing now with Cultivarium?

COVID-19 showed me how urgently needed science can be done. It’s about bringing together the right people from different disciplines. Cultivarium is addressing fundamental problems in science, work that is usually done in academic settings, but with the fast pace and dynamics of a start-up company.

We need to make progress on finding ways to use unconventional microbes to advance science. A lot of bioproduction of industrial and therapeutic molecules is done in a few model organisms, such as E. coli and yeast. Imagine what you could achieve if you had 100 different organisms. If you’re looking to produce a protein that needs to be made in high temperatures or at an extreme pH, you can’t use E. coli, because it won’t grow.

How is Cultivarium making unconventional microbes research-friendly?

It took my postdoctoral lab team six years to get to the point where we could take V. natriegens, which we initially didn’t know how to grow well or engineer, and knock out every gene in its genome.

At Cultivarium, we’re taking a more systematic approach to provide those culturing and engineering tools for researchers to use in their organism of choice. This kind of topic gets less funding, because it’s foundational science.

So, we develop and distribute the tools to reproducibly culture microorganisms, introduce DNA into them and genetically engineer them. Only then can the organism be used in research and engineering.

Developing these tools takes many years and a lot of money and skills. It takes a lot of people in the room: a biologist, a microbiologist, an automation person, a computational biologist, an engineer. As a non-profit company, we try to make our tools available to all scientists to help them to use their organism of choice for a given application.

We have funding for five years from Schmidt Futures, a non-profit organization in New York City. We’re already releasing and distributing tools and information online. We’re building a portal where all data for non-standard model organisms will be available.

Which appeals to you more — academic research or the private sector?

I like the fast pace of start-up companies. I like the accessibility of expertise: you can bring the engineer into the room with the biologists. I like that you can build a team of people who all work for the same goal with the same motivation and urgency.

Academia is wonderful, and I think it’s very important for people to get rigorous training. But I think we should also showcase other career options for early-career researchers. Before the pandemic, I didn’t know what it was like to work in a non-academic set-up. And once I got a taste of it, I found that it worked well for me.

This interview has been edited for length and clarity.


Microsoft reveals the hardware needed to run ChatGPT


In the fast-paced world of artificial intelligence (AI), having a robust and powerful infrastructure is crucial, especially when you’re working with complex machine learning models like those used in natural language processing. Microsoft Azure is at the forefront of this technological landscape, offering an advanced AI supercomputing platform that’s perfectly suited for the demands of sophisticated AI projects.

At the heart of Azure’s capabilities is its ability to handle the training and inference stages of large language models (LLMs), which can have hundreds of billions of parameters. This level of complexity requires an infrastructure that not only provides immense computational power but also focuses on efficiency and reliability to counter the resource-intensive nature of LLMs and the potential for hardware and network issues.
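
To see why models at that scale demand a supercomputing platform rather than a single machine, a back-of-the-envelope estimate helps. The sketch below is an assumption-laden illustration, not a Microsoft figure: it assumes mixed-precision training, where fp16 weights and gradients plus fp32 Adam optimizer states work out to roughly 16 bytes per parameter.

```python
# Rough memory estimate for *training* a large language model.
# Assumption (not from the article): fp16 weights + gradients and fp32 Adam
# states come to ~16 bytes per parameter, as in common mixed-precision setups.

def training_memory_gb(n_params: float, bytes_per_param: float = 16.0) -> float:
    """Approximate GPU memory for model weights plus optimizer state, in GB."""
    return n_params * bytes_per_param / 1e9

for n_params in (7e9, 70e9, 175e9):
    print(f"{n_params / 1e9:>4.0f}B parameters -> ~{training_memory_gb(n_params):,.0f} GB")

# Prints ~112 GB, ~1,120 GB and ~2,800 GB respectively, far more than any
# single GPU offers, which is why training is spread across GPU clusters.
```

Even before activations and data batches are counted, a model with hundreds of billions of parameters needs terabytes of fast memory, which only a networked cluster of accelerators can supply.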

Azure’s datacenter strength is built on state-of-the-art hardware combined with high-bandwidth networking. This setup is crucial for the effective grouping of GPUs, which are the cornerstone of accelerated computing and are vital for AI tasks. Azure’s infrastructure includes advanced GPU clustering techniques, ensuring that your AI models operate smoothly and efficiently.

What hardware is required to run ChatGPT?


Software improvements are also a key aspect of Azure’s AI offerings. The platform incorporates frameworks like ONNX, which ensures model compatibility, and DeepSpeed, which optimizes distributed machine learning training. These tools are designed to enhance the performance of AI models while cutting down on the time and resources required for training.
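
As a hedged illustration of the ONNX half of that story, the snippet below exports a small PyTorch model to the ONNX format so it can run on any ONNX-compatible runtime. The model, file name and tensor names are placeholders for illustration, not anything Azure-specific.

```python
import torch
import torch.nn as nn

# Stand-in model; any trained torch.nn.Module can be exported the same way.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

dummy_input = torch.randn(1, 128)  # example input used to trace the graph
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                             # placeholder output path
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}},  # allow variable batch sizes
)
```

Once exported, the same model file can be served by ONNX Runtime or other compatible engines, which is what makes the format useful for model portability.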

A shining example of Azure’s capabilities is the AI supercomputer built for OpenAI in 2020. This powerhouse system had over 285,000 CPU cores and 10,000 NVIDIA GPUs, using data parallelism to train models on a scale never seen before, demonstrating the potential of Azure’s AI infrastructure.
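
Data parallelism, mentioned above, means replicating the model on every GPU and giving each replica a different slice of each batch, with gradients averaged across replicas after every step. A minimal PyTorch sketch of the idea follows; it assumes a multi-GPU node launched with `torchrun` and is illustrative only, not OpenAI’s or Microsoft’s actual training code.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
dist.init_process_group(backend="nccl")       # NCCL: standard backend for GPUs
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(128, 10).cuda(local_rank)  # placeholder model
model = DDP(model, device_ids=[local_rank])        # replicates and syncs grads

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for step in range(100):
    batch = torch.randn(32, 128).cuda(local_rank)  # each rank gets its own shard
    loss = model(batch).square().mean()            # dummy loss for illustration
    optimizer.zero_grad()
    loss.backward()                                # gradients all-reduced here
    optimizer.step()
```

Scaled up to thousands of GPUs, the same pattern is what lets a cluster train one model on data volumes no single machine could handle.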

In terms of networking, Azure excels with its InfiniBand networking, which provides better cost-performance ratios than traditional Ethernet solutions. This high-speed networking technology is essential for handling the large amounts of data involved in complex AI tasks.


Azure continues to innovate, as seen with the introduction of the H100 VM series, which features NVIDIA H100 Tensor Core GPUs. These are specifically designed for scalable, high-performance AI workloads, allowing you to push the boundaries of machine learning.

Another innovative feature is Project Forge, a containerization and global scheduling service that effectively manages Microsoft’s extensive AI workloads. It supports transparent checkpointing and global GPU capacity pooling, which are crucial for efficient job management and resource optimization.
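
Project Forge’s internals are not public, but the checkpointing idea it relies on is easy to sketch: periodically persist the full training state so that a job that is preempted, or moved to spare GPU capacity elsewhere in the pool, can resume where it left off. The following is a minimal, hypothetical PyTorch version; the path and interval are placeholders.

```python
import os
import torch

CKPT_PATH = "checkpoint.pt"  # placeholder; a real service would use durable storage

def save_checkpoint(model, optimizer, step):
    """Persist everything needed to resume training after a restart."""
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "step": step}, CKPT_PATH)

def load_checkpoint(model, optimizer):
    """Resume from the last checkpoint if one exists; otherwise start fresh."""
    if not os.path.exists(CKPT_PATH):
        return 0
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"] + 1

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
start_step = load_checkpoint(model, optimizer)

for step in range(start_step, 1000):
    loss = model(torch.randn(4, 8)).square().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        save_checkpoint(model, optimizer, step)  # insurance against preemption
```

With transparent checkpointing, a scheduler can pause a job on one cluster and resume it on another without the job itself having to know, which is what makes global GPU capacity pooling practical.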

Azure’s AI infrastructure is flexible, supporting a wide range of projects, from small to large, and integrates seamlessly with Azure Machine Learning services. This integration provides a comprehensive toolkit for developing, deploying, and managing AI applications.

In real-world applications, Azure’s AI supercomputing is already making a difference. For instance, Wayve, a leader in autonomous driving technology, uses Azure’s large-scale infrastructure and distributed deep-learning capabilities to advance its innovations.

Security is a top priority in AI development, and Azure’s Confidential Computing ensures that sensitive data and intellectual property are protected throughout the AI workload lifecycle. This security feature enables secure collaborations, allowing you to confidently engage in sensitive AI projects.

Looking ahead, Azure’s roadmap includes the deployment of NVIDIA H100 GPUs and making Project Forge more widely available to customers, showing a dedication to continually improving AI workload efficiency.

To take advantage of Azure’s AI capabilities for your own projects, you should start by exploring the GPU-enabled compute options within Azure and using the Azure Machine Learning service. These resources provide a solid foundation for creating and deploying transformative AI applications that can lead to industry breakthroughs and drive innovation.
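
As a starting point, something like the following submits a training script to a GPU cluster using the Azure Machine Learning Python SDK (v2). Treat it as a sketch: the subscription, resource group, workspace, environment and compute names are placeholders you would replace with your own, and details may vary between SDK versions.

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Placeholder identifiers — substitute your own Azure resources.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Define a job that runs train.py on a GPU compute target you have created.
job = command(
    code="./src",                             # local folder containing train.py
    command="python train.py",
    environment="<environment-name>@latest",  # a curated or custom environment
    compute="gpu-cluster",                    # placeholder GPU cluster name
    display_name="gpu-training-sketch",
)

ml_client.jobs.create_or_update(job)          # submit the job to the workspace
```

From there, Azure Machine Learning handles provisioning, scheduling and logging, so the same pattern scales from a single experiment to the large distributed runs described above.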

Image Source: Microsoft


Government-Issued IDs Needed for Paid Subscriber Verification on Twitter

The procedure also requires a quick selfie, which a private company will cross-reference with the official ID using biometrics. However, the benefits of this kind of verification are currently rather minor.

Twitter’s new account verification process requires the submission of a government-issued photo ID as well as a live selfie.

The verification mechanism will be accessible only to paying customers. Twitter (now known as X) has been urging its premium subscribers to sign up for the “ID verification” system by presenting a pop-up message about it, according to TechCrunch, which first reported the change.

The prompt says that you need a government-issued photo ID to verify your account, and that the process should take no more than five minutes.

Twitter built the ID system, according to a company help document, to prevent account “impersonation” and to “increase the overall integrity and trust on our platform.” This matters because the company’s CEO, Elon Musk, changed the rules so that anybody who pays for X Premium (previously Twitter Blue) can acquire the blue verified checkmark on their account.

Because Twitter is requesting one of users’ most sensitive documents, the reveal of the verification process last month raised privacy concerns.

Users are asked to grant the verification system access to collect biometric data. Twitter is reportedly collaborating with an external Israeli company called Au10tix to extract facial data from both the official ID and the live selfie in order to validate a user’s identity. According to Twitter’s new privacy policy, users must consent to Au10tix retaining their data for up to 30 days.

Users who verify their account will “receive a visibly labeled ID verification in the pop-up that appears when clicking on your blue check mark.” They will also get “prioritized support” from the company, which implies they will receive answers to their issues more quickly.

Although the verification procedure seems intrusive, it is entirely optional. Users may dismiss the ID verification pop-up, though Twitter suggests that completing verification will ultimately unlock additional features.

The company’s help center states that “users who choose to participate in this optional ID verification may receive additional benefits associated with the specific X feature in the future.” In the future, verified accounts will get a blue check mark much more rapidly, and users will have “greater flexibility in making frequent changes to your profile photo, display name, or username (@handle).”

The support post also mentions that Twitter may request government-issued IDs from selected users “to ensure the safety and security of accounts on our platform.”