The “The Acolyte” lineup has already revealed figures for Master Sol and the assassin Mae, so it makes sense for Osha to join the ranks. Here’s the official description of the figure, straight from Hasbro:
Set at the end of the High Republic era, a former Padawan reunites with her Jedi Master to investigate a series of crimes, but the forces they encounter prove to be far more sinister and personal than expected. Fans can celebrate the legacy of Star Wars, an action-and-adventure space saga from a galaxy far, far away, with this premium 3.75-inch-scale OSHA ANISEYA figure (VC#327), inspired by the character’s appearance in Star Wars: The Acolyte. This figure features premium detail and design across the product and packaging inspired by the original Kenner line, as well as entertainment-inspired collectible deco. Includes figure and 4 accessories: blaster, communication accessory, pouch, and belt holster.
Although the figure’s description only lists the item as a “communication accessory,” as you can see, this device is none other than the helpful droid Pip. Better yet, it fits perfectly in Osha’s belt holster for when she has to head into danger. Considering how adorable Pip was in “The Acolyte,” can you imagine how cute it must be in this tiny form? That should tide us over until a Black Series version of the figure is (probably inevitably) announced, at which point we’ll get a slightly larger Pip.
And if you like Pip, you may also want to pick up the Retro Collection version of the figure, because it’s just as adorable in that collectible line.
While I hesitate to call the Rabbit R1 AI companion device useless, I would not describe it as useful. This is a cute, orange gadget that has spent much of its brief time in my pocket, and I have little reason to pull it out. Why would I? It does nothing better than my iPhone 15 Pro Max and the dozens of apps I have on it. It’s not even a better AI device than a smartphone running Gemini, Copilot, or ChatGPT.
Even the design, which gets points for solid construction and cute, retro looks, fails to inspire. The touchscreen-plus-physical-scroll-wheel navigation is one of the worst system interaction schemes I’ve ever encountered, and RabbitOS’s incredibly linear navigation only exacerbates the problem. I can’t remember the last piece of mobile consumer electronics that didn’t know to return to a home screen when you weren’t using it. I’d argue the developers took the “rabbit hole” metaphor a little too seriously and designed an operating system that is nothing but rabbit holes, and the only way out of them is to carefully back up.
The Rabbit R1 was supposed to be different. It was supposed to be special. It’s not a smartphone and was never intended to be one, or even to compete with one. Instead, Rabbit tossed traditional smartphone and app tropes out the window and developed something new: a way of connecting your intentions to action without the need for apps. A new kind of AI, the Large Action Model (LAM), would connect spoken requests to app logins and then handle all the interaction and execution for you.
Specs
What’s in the box: Rabbit R1
Weight: 115g
Dimensions: 3in. x 3in. x 0.5in.
Battery: 1000mAh
RAM: 4GB
Storage: 128GB
Display: 2.88in. TFT
Connectivity: WiFi (2.4GHz and 5GHz), Bluetooth 5, SIM card support
Location: GPS
Camera: 8MP
CPU: MediaTek MT 6765
In practice, this means logging into your Uber, DoorDash, Spotify, and Midjourney accounts through the Rabbit Hole desktop interface and then using the Rabbit R1 hardware, its push-to-talk system, and its on-board AI to request rides, food, music, and generated images.
Would it shock you to hear that most of that didn’t work for me? It’s not all Rabbit’s fault. Spotify won’t accept third-party music requests unless you have a paid account. DoorDash couldn’t complete the sign-in. Midjourney works, but the image generation happens in Discord, not inside the Rabbit.
The LAM turns out to be unimpressive and somewhat jerry-rigged. The built-in large language model that works with Rabbit Vision is somewhat better, but why would I buy another $199 piece of hardware to duplicate something I can do with a cheap phone, much less the best phone currently available? I wouldn’t, and neither should you.
Rabbit R1: Pricing & availability
Rabbit announced the Rabbit R1 AI companion at CES 2024 in January. It shipped in April, lists for $199 (about £160/AU$290), and is currently available in the US, Canada, United Kingdom, Denmark, France, Germany, Ireland, Italy, Netherlands, Spain, Sweden, South Korea, Japan, and Australia. The first run is done, and new orders are shipping in June 2024.
Rabbit R1: Design & features
You have to give Rabbit and design firm Teenage Engineering credit: the Rabbit R1 looks nothing like a traditional smartphone and that difference helps broadcast its intentions, which are ultimately nothing like your phone’s.
The Rabbit R1 is a fairly sturdy, orange-painted slab measuring 3 x 3 inches and half an inch thick. It has a tiny 2.88-inch color touch screen, an enclosed, rotatable 8MP camera, and, below that, a large, slick scroll wheel. On the edge adjacent to that wheel is a small gray push-to-talk button (for talking to the device) that runs right through it. On the opposite side is a USB-C charge port (the device does not ship with a cable or charge adapter). Below that is a SIM slot you can open with a fingernail, a nice change from all the phones that require a special pin.
There’s a pair of microphones along one edge, and on the back is a large speaker grille (about 1in by 0.5in).
Inside is 4GB of RAM, which doesn’t sound like much, but considering how little the Rabbit R1 does on board, it’s probably enough. There’s a surprising 128GB of storage that will mostly go unused. The MediaTek MT 6765 is a middling CPU, but it’s unclear how much that matters, since the Rabbit R1 is usually talking to the cloud. AI image generation through Midjourney, for example, is not performed on-device. Instead, the R1 sends prompts to the cloud, where Midjourney on Discord handles them, generates images, and then sends them back to the Rabbit R1 to be displayed on the tiny, albeit sharp, screen.
Considering how important that cloud connection is to the Rabbit R1’s operation, you’d think it would do a better job of maintaining it, but often when I picked up the Rabbit R1, it would say “establishing connection” while I waited. If I had it connected to my smartphone, the connection would often drop out. You can, by the way, buy and install a SIM card to deliver a constant, dedicated connection to your mobile network. Still, without the ability to make calls or even send and receive texts, what’s the point of that?
Design & features score: 3/5
Rabbit R1: Performance & Battery Life
Setup is mostly pain-free, though to use the Rabbit R1 I had to get it on a network, which meant typing a WiFi password on a really tiny virtual keyboard. The Rabbit R1 wouldn’t work, though, until I plugged it in and accepted the first of what would become a series of regular updates.
There isn’t much about Rabbit R1’s operation that I’d call familiar. If you pick it up, you’ll notice the screen is dark until you press the talk button. The default screen is a graphical rabbit (Rabbit’s logo) with battery life and time. There’s nothing else on the display. Touching or tapping the screen does nothing. It’s important that you get used to talking to Rabbit R1, as it’s the only way to access its limited feature set. At least Rabbit R1’s microphones are powerful enough to pick up my requests even when I whisper them.
Rabbit R1 doesn’t do much of anything on its own. There’s the cloud-based large-language model (LLM) that does a decent job of answering questions about the weather, history, and other general-interest topics. It’s also quite good at reading labels. I noticed that when I pointed it at a rocket model, it accurately identified it and then walked me through the bullet list of details on the box. The built-in camera is not for taking pretty pictures (what do you expect from an 8MP sensor?) and is instead used with Rabbit Vision.
The camera is usually hidden but when I double-click the Talk button, the camera swivels to face out from the back of the Rabbit R1 – you use the scroll wheel to flip the camera from front to back and vice versa. I can hold the button down to ask Rabbit R1 to, for instance, describe what it’s seeing. After a few seconds, it usually responds accurately and in surprising detail.
It did well identifying a banana, a camera, and me as a late middle-aged man. But when I asked it to help me plan a meal based on what it could see in my refrigerator, it only described what it saw in the fridge and told me there were many options. It did not describe a single dish, and when I followed up and asked it to suggest a meal based on what was in my fridge, it said it could not order food.
I don’t speak any other languages, so I tested the Rabbit R1’s real-time translation abilities by letting it listen to some Japanese-language videos on YouTube. I told it to translate Japanese to English and, when I held the talk button to let it listen and then released it, the Rabbit R1 quickly displayed the conversation on screen and repeated it back in English. That was pretty impressive, though the lack of on-screen guidance on how to make this work was frustrating; people less comfortable with technology might just give up.
I can relive all these interactions with Vision through the online “Rabbit Hole,” which keeps the text and images from each interaction in calendar order. There’s no search function but each entry includes a trashcan icon so you can delete it.
Rabbit R1 doesn’t include communication, email, messaging, social media, games, or anything that might prompt me to engage with it more regularly. It’s just an AI wrapped inside a device.
There are a few settings and controls for things like volume. To access them you have to press the Talk button and then, I kid you not, shake the Rabbit R1. To navigate the menu, you use the large orange scroll wheel. That wheel is one of Rabbit’s worst decisions: I found it slippery and hard to turn. I hate it.
Navigating the Settings menu required a series of turns and presses. You navigate down to a menu item and then reverse those steps to get back home. It’s almost as if the designers never used a smartphone. If I weren’t testing the Rabbit R1, I might’ve pitched it out a window.
Rabbit R1 gets points for cute graphics. This is what I saw when I recharged the handset. (Image credit: Future)
Initial battery life on the Rabbit R1 was not good, and I watched it lose a quarter of its charge in the space of an hour. Subsequent updates seemed to help a bit, but I still think the battery drains far too fast (even when you’re not using it). The average smartphone is more efficient and lasts far longer.
Performance and Battery Life Score: 2.5/5
Rabbit R1: Final verdict
If all it took to succeed in consumer electronics were an adorable design at a relatively affordable price, the Rabbit R1 might be a hit. But that’s not the real world.
The Rabbit R1 doesn’t do enough to replace your smartphone, or even to operate as a decent companion. It’s limited and poorly thought out, and much of the magic it promises happens – slowly – in the cloud before being delivered back to this underpowered orange product.
If Rabbit hopes to lead the AI gadget charge, it better go back to the drawing board for Rabbit R2.
If I’m being honest, I’m writing this just so I can show you a bizarre 52-second video that stopped me in my tracks: it’s a Boston Dynamics Spot robot in a dog costume.
The robotics firm didn’t unveil any new technology or robotics breakthroughs. Nope, this video is just of an unadorned Spot dancing next to another Spot wearing an elaborate, if somewhat cartoonish dog getup.
When I first saw the video, I assumed Boston Dynamics had hired an animation studio to CGI a cute dog next to the $75,000 robot dog, popular in factories and with well-heeled enthusiasts. I was wrong.
According to the video description, “Sparkles is a custom costume designed just for Spot to explore the intersections of robotics, art, and entertainment.” In other words, it’s a blue-and-white sheepdog costume fitted atop a standard Spot.
It’s not just the adorable and realistic costume, which leaves only Spot’s real legs showing – it’s “Sparkles’” moves. In this instance, Spot was choreographed using the same software employed two years ago for Spot’s dance routine with Korean supergroup BTS, which celebrated Boston Dynamics’ partnership with Hyundai Motor Group.
Sparkles’ costume face doesn’t move and its eyes are unblinking. Even so, moves I’ve seen Spot perform countless times before take on a far more canine aspect when done by the Sparkles costume. The combination of friendly pup costume and animation transforms Spot from a slightly scary and off-putting industrial robot into something you might want to hug and pet (though I am relieved Sparkles never extends his neck the way a typical Spot robot can).
Even though Sparkles could bust a move, the costume covers most of its sensors, meaning it might not be a very useful at-home companion. Plus, the head covers its highly useful grabber face, which probably means Sparkles can’t even use his costume mouth to pick up a fake dog bone.
Boston Dynamics dropped the video just days after it teased its most advanced humanoid robot yet, the fully reimagined and all-electric Atlas. It occurred to me that if Boston Dynamics is willing to put dog’s clothing on Spot, might it also sheath Atlas in fake human skin and even add a face?
Naturally, the response to that robot might be less “awww” and more like something approaching sheer horror.
In any case, enjoy Sparkles’ all-too-brief dance moves while we ponder how long it’ll be before this costumed Spot is starring in his own kids’ show.
There’s a new Nvidia GeForce RTX 4060 on the block and while it might not be the best graphics card out there, it has one massive advantage: it’s adorably cat-themed.
The card is currently available through one of Nvidia’s Chinese board partners, ASL, and can be shipped out to the US. Even better, the price is less than a normal RTX 4060. The design is a collaboration between ASL and SupremoCat, a Chinese cartoon brand, and features Wuhuang and Bazhahey (a cat and pug duo) wearing sunglasses and placed at the center of each fan. To top it off, the card is a pretty pastel pink, giving it even cuter vibes.
If you’re in the US, you can order this card on AliExpress for $374.99, which isn’t too shabby. And if you sign up for an AliExpress Choice subscription you can get free delivery on your order, which is an even better incentive considering how slow and expensive international shipping can be.
Make gaming PCs and peripherals cuter, please
I’ve been shouting from the digital rooftops for years that we need more cute tech to combat the dreaded gamer aesthetic that’s still popular with manufacturers. Seeing this adorable collab produce such a cute graphics card is music to my eyes, and I want to see more of this in the future.
Gaming setups are normally incredibly boring, a generic mix of RGB lighting trying to liven up black computers and accessories. But having unique colors, designs, shapes, and more does far more to add real personality and distinctiveness to a gamer space. I want people to immediately see my obsession with sickeningly cute animals the minute they lay eyes on my desk, not have strobe lights flash in their eyes.
So yes, I want more pink and purple laptops and keyboards with cat and dog keycaps, and there’s definitely a market out there for people like me.
After seven years of development, indie visual novel game The Hayseed Knight is now available in full for PC on both Steam and Itch.io.
Helmed by solo developer Maxi Molina, the game follows a one-eyed farm boy called Ader as he pursues his personal quest to become the most celebrated knight in Acazhor – a fictional kingdom inspired by the Muslim-ruled regions of medieval Spain.
The narrative was greatly influenced by the Picaresque literary genre, a storytelling tradition rooted in Spanish culture that depicts a plucky adventurer overcoming one scrape after another. It features an intriguing mix of comedy, romance, and more serious moments too, with your decisions affecting the overall trajectory of the story.
Rather unusually for an indie title, The Hayseed Knight is also fully voice-acted and absolutely bursting with gorgeous hand-drawn animation. This allows scenes to unfold in an almost cartoon-like fashion and you can see some of the animation for yourself in an animated trailer that debuted alongside the full release.
It contains five chapters (in addition to small epilogues) that are estimated to take around 20 hours to complete in total, with potential for further playthroughs in order to discover all three major endings.
Development first began in 2017, with the game entering Steam early access and receiving major updates on a chapter-by-chapter basis. A recent Steam post outlines some of the biggest changes in the full version, including the release of the final chapter plus plenty of overhauls and improved illustrations throughout. The post also describes this release as “the last big update” providing there isn’t “anything failing catastrophically and needing patches.”
In line with Molina’s aim to offer “newcomers a chance to add a high-profile paid project to their resume,” the voice cast includes actors with a wide range of experience levels. In addition to the voice actors, all external contributors including musicians, translators, and consultants are credited in the game no matter the scope of their work – which is certainly commendable.