
Dear tech industry, we don’t need to follow behind gaming with terrible product trade-in values


Recently, a story made headlines concerning a would-be seller finding out just how bad Microcenter’s trade-in value is for an Nvidia GeForce RTX 4090 graphics card.

The retailer offered just $700 for a card that is currently priced at nearly $2,000 on its own online store: barely more than a third of that figure, and less than half the card’s original value. And keep in mind that this is a current-gen, high-end component, easily the best graphics card out there right now, not something from two generations ago.

(Image: screenshot of the GPU trade-in quote. Credit: Wccftech / Mr. Biggie Smallz)

Of course, several factors go into a trade-in value, including the condition of the product in question. However, Wccftech reported that this figure came from a simple lookup on Microcenter’s website, meaning it is the standard quote rather than a condition-based deduction. Compare that to Newegg’s offer of roughly $1,500, more than twice as much, and the discrepancy between the two amounts is stark.
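For perspective, here is a quick back-of-the-envelope comparison in Python. The prices are the article’s approximate figures rather than live quotes; the $1,599 launch MSRP is Nvidia’s published RTX 4090 price:

```python
# Back-of-the-envelope comparison of the trade-in offers quoted above.
# Prices are the article's approximate figures, not live quotes; $1,599
# is Nvidia's published launch MSRP for the RTX 4090.
current_price = 2000                      # approximate Microcenter listing, USD
launch_msrp = 1599                        # RTX 4090 launch price, USD
offers = {"Microcenter": 700, "Newegg": 1500}

for retailer, offer in offers.items():
    print(f"{retailer}: ${offer} is {offer / current_price:.0%} of the current "
          f"price and {offer / launch_msrp:.0%} of the launch MSRP")
# Microcenter: $700 is 35% of the current price and 44% of the launch MSRP
# Newegg: $1500 is 75% of the current price and 94% of the launch MSRP
```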




Google DeepMind’s new AI can follow commands inside 3D games it hasn’t seen before


Google DeepMind has unveiled new research highlighting an AI agent that’s able to carry out a swath of tasks in 3D games it hasn’t seen before. The team has long been experimenting with AI models that can win at the likes of Go and chess, and even learn games without being taught the rules. Now, for the first time, according to DeepMind, an AI agent has shown it’s able to understand a wide range of gaming worlds and carry out tasks within them based on natural-language instructions.

The researchers teamed up with studios and publishers such as Hello Games (No Man’s Sky), Tuxedo Labs (Teardown) and Coffee Stain (Valheim and Goat Simulator 3) to train the Scalable Instructable Multiworld Agent (SIMA) on nine games. The team also used four research environments, including one built in Unity in which agents are instructed to form sculptures using building blocks. This gave SIMA, described as “a generalist AI agent for 3D virtual settings,” a range of environments and settings to learn from, with a variety of graphics styles and perspectives (first- and third-person).

“Each game in SIMA’s portfolio opens up a new interactive world, including a range of skills to learn, from simple navigation and menu use, to mining resources, flying a spaceship or crafting a helmet,” the researchers wrote in a blog post. Learning to follow directions for such tasks in video game worlds could lead to more useful AI agents in any environment, they noted.

(Image: flowchart detailing how Google DeepMind trained its SIMA AI agent by matching gameplay video to keyboard and mouse inputs. Credit: Google DeepMind)

The researchers recorded humans playing the games and noted the keyboard and mouse inputs used to carry out actions. They used this information to train SIMA, which has “precise image-language mapping and a video model that predicts what will happen next on-screen.” The AI is able to comprehend a range of environments and carry out tasks to accomplish a certain goal.
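DeepMind hasn’t released SIMA’s training code, but the recipe described above is essentially instruction-conditioned behavioral cloning: learn to predict the human demonstrator’s keyboard/mouse action from the current frame and the text command. Here is a minimal PyTorch sketch of that idea, in which every module and dimension is an illustrative assumption rather than SIMA’s real architecture:

```python
import torch
import torch.nn as nn

NUM_ACTIONS = 32  # hypothetical discretized keyboard/mouse action set

class InstructionConditionedPolicy(nn.Module):
    """Toy behavioral-cloning policy: (frame, instruction) -> action logits."""
    def __init__(self, text_dim=256, feat_dim=256):
        super().__init__()
        # Stand-in vision encoder; assumes 96x96 RGB frames.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 10 * 10, feat_dim),
        )
        # Stand-in for a pretrained language encoder's output projection.
        self.text_proj = nn.Linear(text_dim, feat_dim)
        self.head = nn.Linear(2 * feat_dim, NUM_ACTIONS)

    def forward(self, frames, instruction_embedding):
        img = self.image_encoder(frames)
        txt = self.text_proj(instruction_embedding)
        return self.head(torch.cat([img, txt], dim=-1))

policy = InstructionConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One supervised step on a synthetic batch of "human demonstrations":
# the target is the key/mouse action the demonstrator actually took.
frames = torch.randn(8, 3, 96, 96)
instructions = torch.randn(8, 256)
human_actions = torch.randint(0, NUM_ACTIONS, (8,))

optimizer.zero_grad()
loss = nn.functional.cross_entropy(policy(frames, instructions), human_actions)
loss.backward()
optimizer.step()
```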

The researchers say SIMA doesn’t need a game’s source code or API access — it works on commercial versions of a game. It also needs just two inputs: what’s shown on screen and directions from the user. Since it uses the same keyboard and mouse input method as a human, DeepMind claims SIMA can operate in nearly any virtual environment.
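That contract is small enough to sketch. The loop below is a hypothetical illustration of “pixels and a command in, human-style inputs out”; every function is a stub, since no public SIMA interface exists:

```python
import random

# Hypothetical runtime loop: the only inputs are screen pixels and a
# natural-language command; the only output is a human-style key/mouse
# event. Every function here is a stub standing in for a real screen
# capture and OS-level input layer; no such public SIMA API exists.
ACTIONS = ["w", "a", "s", "d", "mouse_left"]

def capture_screen():
    # Stub: would return the current frame as a pixel array.
    return [[0] * 96 for _ in range(96)]

def choose_action(frame, instruction):
    # Stub policy: a trained model would map (frame, instruction)
    # to one of the same inputs a human player can produce.
    return random.choice(ACTIONS)

def send_input(action):
    # Stub: would inject the keypress or click at the OS level.
    print(f"pressing {action}")

def run_agent(instruction, steps=5):
    for _ in range(steps):
        frame = capture_screen()
        send_input(choose_action(frame, instruction))

run_agent("pick up mushrooms")
```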

The agent is evaluated on hundreds of basic skills that can be carried out within 10 seconds or so across several categories, including navigation (“turn right”), object interaction (“pick up mushrooms”) and menu-based tasks, such as opening a map or crafting an item. Eventually, DeepMind hopes to be able to order agents to carry out more complex and multi-stage tasks based on natural-language prompts, such as “find resources and build a camp.”
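The skill taxonomy is easy to picture as data. Here is an illustrative sketch built only from the examples quoted above; the real benchmark covers hundreds of skills, and the 10-second budget is the rough figure from the article:

```python
# Illustrative skill registry built from the examples quoted above; the
# real evaluation covers hundreds of skills, and the 10-second budget is
# the rough figure from the article.
MAX_SKILL_SECONDS = 10

SKILLS = {
    "navigation": ["turn right"],
    "object interaction": ["pick up mushrooms"],
    "menu use": ["open the map", "craft an item"],
}

def evaluate(run_skill):
    """Score an agent as the fraction of skills completed in time.

    run_skill(skill) stands in for actually driving the agent; it must
    return (succeeded, seconds_taken).
    """
    results = []
    for skills in SKILLS.values():
        for skill in skills:
            ok, seconds = run_skill(skill)
            results.append(ok and seconds <= MAX_SKILL_SECONDS)
    return sum(results) / len(results)

# Stub demo: an agent that always "succeeds" in 3 seconds scores 1.0.
print(evaluate(lambda skill: (True, 3)))
```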

In terms of performance, SIMA fared well across a range of training setups. The researchers trained an agent on a single game (say, Goat Simulator 3, for the sake of clarity) and had it play that same title, using the result as a performance baseline. A SIMA agent trained on all nine games performed far better than the agent trained on Goat Simulator 3 alone.

(Image: chart showing the relative performance of Google DeepMind’s SIMA AI agent under varying training data. Credit: Google DeepMind)

What’s especially interesting is that a version of SIMA trained on the other eight games and then dropped into the ninth, held-out title performed nearly as well on average as an agent trained on that title alone. “This ability to function in brand new environments highlights SIMA’s ability to generalize beyond its training,” DeepMind said. “This is a promising initial result, however more research is required for SIMA to perform at human levels in both seen and unseen games.”
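That comparison amounts to a standard leave-one-out generalization test. A compact sketch of the protocol, with a truncated game list and stubbed-out training and scoring:

```python
# Leave-one-out generalization test, per the comparison above: a
# generalist trained on the other games is scored zero-shot on the
# held-out title against a specialist trained only on that title.
# The game list is truncated and train/score are stubs.
GAMES = ["Goat Simulator 3", "No Man's Sky", "Teardown"]

def train(games):           # stub: would return a trained agent
    return {"trained_on": tuple(games)}

def score(agent, game):     # stub: would run the skill evaluation in-game
    return 0.0

for held_out in GAMES:
    specialist = train([held_out])
    generalist = train([g for g in GAMES if g != held_out])
    print(f"{held_out}: generalist={score(generalist, held_out):.2f} "
          f"vs specialist={score(specialist, held_out):.2f}")
```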

For SIMA to be truly successful, though, language input is required. In tests where an agent wasn’t provided with language training or instructions, it (for instance) carried out the common action of gathering resources instead of walking where it was told to. In such cases, SIMA “behaves in an appropriate but aimless manner,” the researchers said. So, it’s not just us mere mortals. Artificial intelligence models sometimes need a little nudge to get a job done properly too.

DeepMind notes that this is early-stage research and that the results “show the potential to develop a new wave of generalist, language-driven AI agents.” The team expects the AI to become more versatile and generalizable as it’s exposed to more training environments. The researchers hope future versions of the agent will improve on SIMA’s understanding and its ability to carry out more complex tasks. “Ultimately, our research is building towards more general AI systems and agents that can understand and safely carry out a wide range of tasks in a way that is helpful to people online and in the real world,” DeepMind said.




AI video characters can now follow the laws of physics and more

The world of video production is undergoing a significant transformation, thanks to the advent of artificial intelligence (AI) in video generation. This shift is not just a fleeting glimpse into what the future might hold; it’s a dynamic change that’s happening right now, reshaping the way we create and experience movies and videos. With AI, filmmakers are gaining an unprecedented level of flexibility and creative control, which is altering the landscape of the industry.

Imagine a tool that can produce videos so realistic they seem to obey the laws of physics. Such a tool now exists in the form of OpenAI’s Sora, an advanced AI video generation technology. Its outputs are incredibly lifelike, a clear indicator of the strides AI technology has made. Another company, Pika Labs, is making its mark with a lip-sync feature that allows AI-generated characters to speak with accurately timed mouth movements, enhancing the realism of digital actors.

The ability to convey emotions through video is crucial, and Alibaba Group’s Emote Portrait Alive research has taken this to a new level. This technology can create expressive portrait videos that are synchronized with audio, achieving realistic lip-syncing and emotional expressions. As a result, AI-generated characters can now establish an emotional connection with viewers, which is vital for storytelling.

AI Video Generation Advancements

Personalized movie experiences are another area where AI is making an impact. Anamorph has developed scene-reordering technology that can create different versions of a film for individual viewers, demonstrated with a documentary about the musician and artist Brian Eno. Such technology suggests a future where movies can provide a unique viewing experience every time, increasing their value for audiences.
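Conceptually, a per-viewer cut only requires a deterministic shuffle keyed to the viewer, constrained so the film still hangs together. Here is a toy sketch of that idea; the scene names and constraints are invented for illustration and are not Anamorph’s actual system:

```python
import random

# Toy per-viewer scene reordering: a viewer-specific seed produces a
# repeatable but unique cut, while fixed anchors keep the film coherent.
# Scene names and constraints are invented; Anamorph's real system is
# proprietary and surely far more sophisticated.
SCENES = ["opening", "studio_A", "interview_B", "archive_C", "studio_D", "finale"]

def personal_cut(viewer_id: str) -> list[str]:
    middle = SCENES[1:-1]                  # opening and finale stay fixed
    rng = random.Random(viewer_id)         # deterministic per viewer
    rng.shuffle(middle)
    return [SCENES[0], *middle, SCENES[-1]]

print(personal_cut("alice"))   # same viewer always gets the same cut
print(personal_cut("bob"))     # a different viewer gets a different one
```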


The process of filmmaking itself is also being redefined: Stability AI, in collaboration with Morph Studios, has introduced a platform that simplifies film production. It features a visual drag-and-drop storyboard builder, which streamlines the complex steps involved in creating a film. This innovation makes it easier for a broader range of creators to engage in filmmaking.

(Images: the Morph Studios and Stability AI drag-and-drop interface and video clip creation tools.)

LTX Studio has launched a comprehensive video creation platform that is altering the way we think about movie production. With this platform, you can produce entire movies from simple text prompts. It includes music, dialogue, and sound effects, and it ensures consistency in character portrayal. This platform is a prime example of the extensive capabilities of AI in video creation.

AI animators are also pushing boundaries by using AI-generated video clips to remake classic films. A team is currently working on a new version of “Terminator 2,” which is expected to make its Hollywood debut soon. This project showcases the potential of AI to reinterpret and breathe new life into beloved stories.

The Future of AI Video Creation

As we look ahead to 2024, the film industry is preparing for the introduction of more sophisticated AI technology that will continue to enhance the quality of AI-generated videos. Filmmaking is on the cusp of a major shift, with AI poised to offer personalized cinematic experiences that connect with audiences in ways we’ve never seen before. The potential of AI in video generation goes beyond just new tools; it’s about redefining the art of storytelling and the magic of cinema.

This new era in filmmaking is not just about the technology itself but about the possibilities it unlocks. AI is enabling creators to explore new narratives, experiment with different storytelling techniques, and engage with their audiences on a deeper level. As AI continues to evolve, we can expect to see more innovative applications in video production that will challenge our traditional notions of what’s possible in film and video content.

The implications of AI in video generation extend to various aspects of the industry, from the way we write scripts to the way we edit and produce films. It’s an exciting time for filmmakers, actors, and audiences alike, as the lines between reality and AI-generated content become increasingly blurred. The advancements in AI video generation are not just about creating content faster or more efficiently; they’re about expanding the creative horizons of filmmakers and offering viewers new and immersive experiences.

As we embrace this new technology, it’s important to consider the ethical implications and the impact it will have on the industry. Questions about authenticity, creativity, and the role of human actors in a world of AI-generated characters are becoming more relevant. The industry must navigate these challenges thoughtfully to ensure that AI serves as a tool for enhancing the art of filmmaking rather than diminishing the value of human creativity.

