Check Out This Apple Watch iPad Demo Unit From 2014

With the 10th anniversary of the Apple Watch approaching, we thought it would be fun to take a look back at an interesting bit of Apple Watch history.

After the Apple Watch was announced in 2014, and before it became available in 2015, Apple sent custom Apple Watch iPad demo kiosks to retail stores. The Apple Watch and iPad units used in these kiosks were specially designed, ran custom software, and represent the first and only time an Apple Watch was able to pair with an iPad.

AppleDemoYT, known for sourcing rare prototypes, acquired one of these now-rare demo units and shared images and detailed information about it with MacRumors, offering insight into the lengths Apple went to for this custom experience.

Jony Ive and his design team came up with the Apple Watch iPad kiosks as a way for customers to try out an Apple Watch without needing help from an employee. Apple used a modified iPad mini 2 running iOS 8.2 paired with an original Apple Watch running watchOS 1.0, with the two devices fused in a custom housing.

The iPad Apple used had multiple components removed, including the camera, microphone, and speakers, and the kiosk housing served as the body of the iPad. The Apple Watch was heavily modified as well, with a groove along the diagnostic port for cable routing, holes to affix it to the demo unit, and a special Sport Band that was shorter than normal.

The Apple Watch was paired to the iPad using a wired connection. A Lightning cable attached to the Apple Watch’s diagnostic port connected to a converter board inside the iPad, allowing the iPad to communicate with and charge the watch. A special app called Apple Watch Demo let the watch interface with the iPad, and a connection to Apple’s server was required.

The server that Apple used for the Apple Watch Demo app has long since gone offline, so the only way to see how the setup worked is through a demo unit that was paired in 2014 and has not been reset since. With a functional unit, the iPad is able to mirror the Apple Watch, offering transition animations and tips on the actions that can be performed on the watch. This functionality is demoed in AppleDemoYT’s video:

The custom iPad mini was not only the sole model able to connect to an Apple Watch, but also the only iPad that could be charged using MagSafe 2, a connector originally designed for the Mac. The MagSafe connection charged the iPad, the Apple Watch, and extra batteries inside the iPad. A Lightning port is present as well, but Apple’s documentation suggests it was meant only for data transfer.

Apple discontinued the demo unit in 2016 because it was riddled with issues. Updates to the iPad or Apple Watch would erase demo content, the front glass was prone to cracking because of the housing design, batteries degraded quickly from the always-on charging, and overheating and outright failure were continual problems. Pairing and syncing issues also led Apple to tweak the interactive part of the demo in 2015; after that change, the iPad provided Apple Watch information but no longer mirrored the watch’s screen.

Decommissioned demo units were supposed to be destroyed, so finding one that survives in good working condition is unusual. The Apple Watch iPad kiosk represents one of the most advanced custom devices Apple had designed at the time, and it offers a neat look back at the Apple Watch’s debut.

Rumors suggest that Apple has plans for the 10th anniversary of the Apple Watch, and as soon as this year, we may see a redesigned “Apple Watch X” with an updated magnetic band attachment system, new health features, and more.


I got a Dolby Atmos soundtrack mixing demo at Sony Pictures Studios, and now I know how Spider-Man sounds get made

Movie sound production is a complex and painstaking process, requiring multiple teams of specialists to create and mix the dialogue, music, and sound effects that make up a typical Dolby Atmos movie soundtrack.

How do I know this? I spent two days touring the Sony Pictures Studios lot in Culver City, California, where Sony produces the soundtracks for its films. I was at the studio to see the new Sony TV lineup for 2024, along with the new Sony Dolby Atmos soundbars, but Sony also gave me and other tech writers a behind-the-scenes look at the studio’s production processes.


NVIDIA’s AI personal assistant demo available for RTX GPU PCs

NVIDIA has recently unveiled a new technology demonstration that is set to enhance the way we interact with artificial intelligence. This new feature, known as “Chat With RTX,” is designed to work seamlessly on Windows RTX PCs, leveraging the power of NVIDIA RTX GPUs to deliver a personalized and efficient chatbot experience. The technology is aimed at providing users with quick, secure, and contextually relevant responses, drawing from their own documents and notes to ensure a private and customized interaction.

At the heart of “Chat With RTX” lies a sophisticated GPT large language model that is capable of tailoring conversations to the user’s specific needs. This is not your average chatbot; it’s an intelligent system that can process a variety of file types, including text documents, PDFs, Word documents, XML files, and even transcriptions from YouTube videos. This versatility allows the chatbot to provide assistance that is highly relevant to the user’s personal content.

One of the key features of NVIDIA’s new tech demo is the use of retrieval-augmented generation (RAG), which significantly enhances the quality of the chatbot’s responses. In addition, the demo incorporates TensorRT-LLM, a tool for optimizing large language models, ensuring that the chatbot operates at peak efficiency. Thanks to RTX acceleration, the chatbot is not only accurate but also incredibly fast, running directly on a user’s Windows RTX PC without the need for cloud processing.
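
The RAG pattern itself is straightforward to sketch. The Python below is a minimal, illustrative version of retrieval-augmented generation, not NVIDIA’s implementation: the sentence-transformers embedder, the sample documents, and the generate() stub (which stands in for the locally running, TensorRT-LLM-accelerated model) are all assumptions made for the sake of the example.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: Chat With RTX runs a TensorRT-LLM-optimized model
# locally on the GPU; here generate() is left as a stub, and the
# embedding model and documents are stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Meeting notes: the Q3 roadmap review is scheduled for Friday.",
    "Recipe: preheat the oven to 180C before shaping the dough.",
    "Travel notes: hotel checkout is at 11am on Sunday.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # vectors are normalized, so dot = cosine
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def generate(prompt: str) -> str:
    # Stand-in for the local LLM call (TensorRT-LLM in NVIDIA's demo).
    raise NotImplementedError

query = "When is the roadmap review?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = generate(prompt)
```

The key idea is that the model is only ever shown the handful of documents most relevant to the question, which is what keeps responses grounded in the user’s own files rather than in the model’s general training data.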

Developers, in particular, may find “Chat With RTX” intriguing as it builds upon the TensorRT-LLM RAG developer reference project available on GitHub. This provides them with a valuable opportunity to explore advanced AI models and potentially integrate similar technologies into their own projects.

For those interested in experiencing “Chat With RTX,” there are certain system requirements that must be met. The user’s PC should be equipped with a GeForce RTX 30 Series GPU or a more advanced model, with a minimum of 8GB of VRAM. Additionally, the PC must be running either Windows 10 or 11 and have the latest NVIDIA drivers installed to ensure compatibility with the demo.

NVIDIA has acknowledged a current installation issue with “Chat With RTX” and has promised to resolve it in a forthcoming update. In the meantime, users are advised to install the application in the default directory to avoid any complications.

Furthermore, NVIDIA is encouraging developers to push the boundaries of generative AI by hosting a contest. Participants are invited to create innovative applications using NVIDIA RTX GPUs, with the chance to win prizes. This contest not only stimulates creativity within the developer community but also showcases the potential of NVIDIA’s technology in driving forward AI applications.

The introduction of “Chat With RTX” is a testament to NVIDIA’s ongoing efforts to advance AI and GPU technology. By focusing on high-performance computing and data privacy, NVIDIA is making it possible to integrate sophisticated AI capabilities into everyday tasks. This technology allows users to benefit from a smart, responsive, and personalized AI assistant, all while keeping their data securely processed on their local machine. As NVIDIA continues to innovate and address any initial teething problems, “Chat With RTX” is poised to become an essential tool for those seeking a more intelligent and responsive computing experience.

Filed Under: Technology News, Top News


Millennia game demo available to play until February 12th

Paradox Interactive, a well-known publisher in the gaming industry, has recently unveiled a demo for their latest project, “Millennia,” crafted by the talented team at C Prompt Games. This new title is a turn-based strategy game that stands out for its unique blend of historical accuracy and imaginative scenarios. It allows players to delve into a world where they can rewrite history, offering a fresh and engaging experience for strategy enthusiasts.

At the heart of “Millennia” is the challenge of leading a civilization through the twists and turns of time. Players are given the power to make critical decisions that will shape their civilization’s destiny. The game is designed to provide a customizable journey, where the choices you make have a lasting impact on the world you’re building. Whether you’re following the footsteps of history or charting a new course, your strategic prowess will determine the legacy of your civilization.

One of the most intriguing aspects of “Millennia” is its structure, which is divided into ten distinct Ages. Each Age presents its own set of challenges and opportunities, pushing players to adapt their strategies as they progress. Advancing through the Ages unlocks new technologies, units, and buildings, which can significantly alter the course of your civilization’s development. This dynamic progression system ensures that no two playthroughs are the same, providing a rich and varied gaming experience.

Adding to the game’s depth is the National Spirits system, a feature that allows players to further customize their gameplay. This system offers unique technology trees that reflect the strengths and cultural attributes of your nation. Whether you’re aiming to dominate through military power or thrive through trade and diplomacy, the National Spirits system gives you the tools to steer your nation in the direction that best suits your style of play.

However, the path to success in “Millennia” is not just about military conquest or technological advancement. A strong economy is essential to support your ambitions. Players must carefully manage their resources to ensure that their economy can back up their military strategies. The game challenges you to strike a balance between production and consumption, fostering a thriving nation that is ready to face any challenge.

For those eager to dive into this world, the “Millennia” demo offers a taste of what the full game has in store. The demo is available to the public and provides access to the game up to the third Age, with a limit of 60 turns. It’s a single-player experience, currently available in English, and serves as an excellent opportunity for players to get acquainted with the game’s mechanics. Paradox Interactive is actively seeking feedback and bug reports through the game’s forum, which will help them polish “Millennia” to perfection.

The demo of “Millennia” is more than just a preview; it’s a chance for players to immerse themselves in a strategic adventure that promises to captivate and challenge. With its innovative features and the potential for endless replayability, “Millennia” is poised to become a favorite among those who love to strategize and shape history. If you’re a fan of strategy games, this is an opportunity you won’t want to miss. Join the ranks of early players and help shape the future of “Millennia” by trying out the demo today.

Filed Under: Gaming News, Top News


Homeworld 3 War Games demo available to play until Feb 12th 2024

Immerse yourself in the captivating world of Homeworld 3 with the “War Games Demo,” a limited-time event that promises to enthrall fans of strategy games. From February 5 to February 12, players can dive into a unique blend of real-time strategy and roguelike gameplay, available on Steam. This sneak peek into the much-anticipated Homeworld 3 universe is designed to challenge even the most seasoned strategists.

The demo introduces the War Games mode, where players face randomized fleet combat scenarios that test their tactical skills. As you engage in these battles, you’ll discover collectible Artifacts that can be used to enhance your ships, giving you an edge over your opponents. The excitement doesn’t stop there; the demo also supports multiplayer gameplay, allowing you to team up with friends and tackle the challenges together.

As you play through the War Games demo, you’ll find yourself progressing through a system that unlocks new fleets and Artifacts. This progression not only enriches your demo experience but also has lasting benefits. The rewards you earn will carry over to the full version of Homeworld 3, ensuring that your time invested in the demo is not lost.

The demo is playable until February 12 at 10am PT.

One of the standout features of the demo is the inclusion of four unique maps. Each map offers a different strategic environment, pushing you to adapt your tactics and think on your feet. Whether you prefer to play solo or join forces with others, the demo caters to both styles of play, supporting up to three players in online multiplayer matches.

Features:

– Multiplayer support for up to three players through Quick Match or private lobbies.
– Progression through the demo allows unlocking of new fleets and Artifacts.
– Persistence of unlocked Fleets and Artifacts into the full game, with a transfer process to be detailed later.
– The necessity for players not using Steam Cloud Saves to retain demo files for progress transfer.
– The demo includes four different maps and various unlockables to enhance gameplay.
– The option to play the War Games mode solo or with friends online, with a maximum of three players.

For those who are not Steam users, there’s no need to worry. The Homeworld 3 War Games demo is also accessible through the Epic Games Store, ensuring that a wider audience can participate in this exciting event. As you make your way through the War Games demo, rest assured that your achievements will not be in vain. A process will be put in place to transfer your progress to the full game. For those using Steam Cloud Saves, your progress is automatically stored. If you choose not to use this feature, be sure to keep your demo files safe for a smooth transition to the complete Homeworld 3 experience.

The Homeworld 3 “War Games Demo” is a must-play event for fans of real-time strategy and roguelike games. With its engaging features, multiplayer support, and a progression system that rewards your dedication, the demo offers a glimpse of the strategic depth and excitement that awaits in Homeworld 3. Mark your calendars and get ready for a week of intense strategic battles and cooperative gameplay.

Filed Under: Gadgets News


Runway Motion Brush AI video animation and creation demo

The world of digital animation is constantly evolving, and the latest development is making waves among creators. Runway Gen-2 has introduced a set of tools that are changing the way animators bring their static images to life. These motion brushes are a significant advancement, providing artists with an unprecedented level of precision and control. With these innovative tools, animators can now effortlessly breathe movement into their visuals, enhancing the art of visual storytelling.

At the heart of this new technology are the motion brushes, which have been meticulously crafted to cater to the unique needs of creators. These brushes allow animators to add detailed movement to elements within an image. Imagine being able to animate the gentle sway of trees, the subtle ripple of water, or the graceful flight of a bird with just a few simple brush strokes. This capability opens up a world of possibilities for animators, enabling them to create more dynamic and engaging scenes.

Another key aspect of storytelling is music, and Runway Gen-2 has made it easier than ever to integrate sound into animations. The platform’s workflow is designed for seamless audio synchronization, allowing creators to produce a unified and emotionally impactful viewing experience. This synchronization ensures that the visual and auditory elements of an animation complement each other perfectly.

Runway Gen-2 is versatile, accommodating the different approaches that creators take to animation. Whether an animator prefers to design all image assets before starting the animation process or likes to create as they go, the platform’s tools are flexible enough to fit any style. This adaptability is crucial for animators who have their unique way of bringing stories to life.


For animations that require the illusion of complex, real-life motion, the ability to mask specific areas for animation is essential. Runway Gen-2 offers this feature, allowing animators to create differentiated movements within different parts of the image. This adds depth and realism to the scenes, making the animations more lifelike and captivating.

Once the animation process is complete, video editing software plays a crucial role in refining the creation. After using Runway Gen-2 to animate, this software helps synchronize audio and video, resulting in a polished and compelling narrative. This final step is where all the elements come together to tell a story that resonates with viewers.

The introduction of motion brushes by Runway Gen-2 is not just an enhancement to the animation process; it represents a shift in how animators work. The platform provides a new level of control over movement, music integration, and workflow flexibility. It sets a new standard for creators, offering tools that cater to both experienced animators and those new to the field. With Runway Gen-2, the realm of creative potential has expanded, allowing artists to explore new ways to tell their stories through animation.

Filed Under: Guides, Top News


Runway AI text-to-video Ambient Motion Control feature demo

Runway, the text-to-video AI service, is transforming the way we create videos and animations with a powerful new feature that lets users add motion to static images with precision. This ambient control setting is a breakthrough for those who use the platform, offering a sophisticated way to animate AI-generated content. Whether you’re looking to add a gentle sway to trees in a landscape or subtle expressions to a character’s face, this tool makes it possible.

The Ambient Motion Control feature provides a refined way to animate AI-generated content. Imagine wanting to capture the subtle rustle of leaves, or the nuanced expressions that make a portrait appear almost alive: with the ambient slider, you adjust the intensity of the motion, customizing the animation to fit your vision. The feature also makes it quick to create several variations of a clip for comparison.

Features of Runway

  • Pre-trained AI models: These models cover a variety of tasks, like generating photorealistic images or videos from text prompts, manipulating existing media like changing the style of a video or adding special effects, and analyzing content to identify objects or people.
  • No coding required: RunwayML’s interface is designed to be user-friendly and intuitive, even for those with no coding experience. You can access and use the various AI models with simple clicks and drags.
  • Customizable tools: The platform also allows users to train their own AI models and import models from other sources, giving them even more control over their creative process.
  • Community-driven: RunwayML has a thriving community of creators who share their work and collaborate on projects. This fosters a sense of inspiration and learning for everyone involved.

When you adjust the ambient settings, the impact on your videos is clear. A slight tweak can add a gentle movement to foliage, while a stronger setting can create the illusion of a windy day. For portraits, the technology can mimic realistic movements, such as hair fluttering in the breeze or the natural blink of an eye, giving your animations a sense of authenticity and life.

But the ambient control is just one part of what Runway text-to-video AI service offers. Others include camera controls and text prompts, which help direct the viewer’s attention and add narrative to your animation. To further enhance your work, you can use post-processing techniques with tools like Adobe After Effects to achieve a professional finish.

RunwayML text-to-video

  • AI Magic Tools: These are pre-trained models that let you perform various tasks with just a few clicks, such as generating different artistic styles for an image, changing the lighting or weather in a video, or adding facial expressions to a still image.
  • AI Training: This feature allows you to train your own custom AI models using RunwayML’s platform. This is helpful if you need a model that performs a specific task that is not already available in the pre-trained model library.
  • Video Editor: RunwayML also includes a full-featured video editor that you can use to edit your videos and add special effects.
  • Community: The RunwayML community is a great place to find inspiration, learn new things, and share your work with others.

By mastering the ambient controls and incorporating camera movements, you can produce animations that not only draw the viewer in but also fully immerse them in the story you want to tell. These creations go beyond simple videos; they are experiences that draw audiences into the worlds you create.

RunwayML’s ambient control setting within the motion brush feature opens up new possibilities for creativity. By experimenting with different images, artistic styles, and additional tools like camera controls and Adobe After Effects, you can create animations that are visually and emotionally compelling. As you become more skilled with these features, your work will stand out in the world of AI-generated content, captivating viewers with every frame. RunwayML is a powerful and versatile AI text-to-video platform that can be used to create all sorts of amazing things, so give it a try for yourself for free.

Image Credit: RunwayML

Filed Under: Technology News, Top News


Real Gemini demo built using GPT4 Vision, Whisper and TTS

If, like me, you were a little disappointed to learn that the Google Gemini demonstration released earlier this month owed more to clever editing than to technological advancement, you will be pleased to know that we may not have to wait long before something similar is available to use.

After seeing the Google Gemini demonstration and the blog post revealing its secrets, Julien De Luca asked himself: “Could the ‘gemini’ experience showcased by Google be more than just a scripted demo?” He then set about creating a fun experiment to explore the feasibility of real-time AI interactions similar to those portrayed in the Gemini demonstration. Here are the restrictions he placed on the project to keep it in line with Google’s original demonstration:

  • It must happen in real time
  • User must be able to stream a video
  • User must be able to talk to the assistant without interacting with the UI
  • The assistant must use the video input to reason about the user’s questions
  • The assistant must respond by talking

Because GPT-4 Vision currently accepts only individual images, De Luca needed to capture screenshots from the video at regular intervals and combine them into a single composite so the model could follow what was happening.

“KABOOM! We now have a single image representing a video stream. Now we’re talking. I needed to fine tune the system prompt a lot to make it “understand” this was from a video. Otherwise it kept mentioning “patterns”, “strips” or “grid”. I also insisted on the temporality of the images, so it would reason using the sequence of images. It definitely could be improved, but for this experiment it works well enough,” explains De Luca. To learn more about this process, jump over to the Crafters.ai website or GitHub for more details.
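
For anyone curious to reproduce the trick, here is a rough sketch in Python of collapsing a video into a single grid image that GPT-4 Vision will accept. It illustrates the approach De Luca describes rather than his actual code; the sampling interval, tile size, and grid width are arbitrary choices.

```python
# Sample frames from a video at a fixed interval and tile them into one
# grid image, so a single GPT-4 Vision call can "see" the whole stream.
# Interval, tile size, and grid width are illustrative choices.
import cv2  # pip install opencv-python
import numpy as np

def video_to_grid(path: str, every_n_sec: float = 1.0, cols: int = 4,
                  out_path: str = "grid.jpg") -> str:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_sec))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(cv2.resize(frame, (320, 180)))  # keep tiles small
        i += 1
    cap.release()
    if not frames:
        raise ValueError(f"no frames decoded from {path}")
    while len(frames) % cols:  # pad the last row with black tiles
        frames.append(np.zeros_like(frames[0]))
    rows = [cv2.hconcat(frames[r:r + cols])
            for r in range(0, len(frames), cols)]
    cv2.imwrite(out_path, cv2.vconcat(rows))
    return out_path

grid = video_to_grid("clip.mp4")  # grid.jpg can then go to GPT-4 Vision,
                                  # with a prompt explaining the tile order
```

As De Luca notes, the system prompt matters as much as the image: the model has to be told explicitly that the tiles are sequential frames, or it will describe the grid itself.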

Real Google Gemini demo created

AI Jason has also created an example combining GPT-4, Whisper, and text-to-speech (TTS) technologies. Check out the video below for a demonstration and to learn more about building one yourself by combining different AI technologies.

To create a demo that emulates the original Gemini with the integration of GPT-4V, Whisper, and TTS, developers embark on a complex technical journey. This process begins with setting up a Next.js project, which serves as the foundation for incorporating features such as video recording, audio transcription, and image grid generation. The implementation of API calls to OpenAI is crucial, as it allows the AI to engage in conversation with users, answer their inquiries, and provide real-time responses.
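
As a sketch of how those pieces chain together, the snippet below runs one turn of the loop using the OpenAI Python SDK: Whisper transcribes the spoken question, GPT-4 Vision answers against the frame-grid image, and TTS speaks the reply. The file names are placeholders and the model identifiers are assumptions based on OpenAI’s public API naming at the time; the actual demo wires the equivalent calls into a Next.js app.

```python
# One turn of the loop: speech in -> vision reasoning -> speech out.
# File names are placeholders; model names reflect OpenAI's API naming.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Whisper: transcribe the user's spoken question.
with open("question.wav", "rb") as audio:
    question = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# 2. GPT-4 Vision: answer the question against the frame-grid image.
with open("grid.jpg", "rb") as img:
    image_b64 = base64.b64encode(img.read()).decode()

reply = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "The tiles in this image are sequential frames from "
                     f"a video, in reading order. {question}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
).choices[0].message.content

# 3. TTS: speak the answer back to the user.
client.audio.speech.create(
    model="tts-1", voice="alloy", input=reply
).stream_to_file("answer.mp3")
```

In a real-time build, each stage would run continuously and in parallel rather than once per turn, which is where most of the engineering effort in the Next.js version goes.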

The design of the user experience is at the heart of the demo, with a focus on creating an intuitive interface that facilitates natural interactions with the AI, akin to having a conversation with another human being. This includes the AI’s ability to understand and respond to visual cues in an appropriate manner.

The reconstruction of the Gemini demo with GPT-4V, Whisper, and Text-To-Speech is a clear indication of the progress being made towards a future where AI can comprehend and interact with us through multiple senses. This development promises to deliver a more natural and immersive experience. The continued contributions and ideas from the AI community will be crucial in shaping the future of multimodal applications.

Image Credit: Julien De Luca

Filed Under: Guides, Top News
