One of the coolest characters in Adventure Time is Marceline the Vampire Queen, a laid-back, red-drinking, guitar-playing goth girl who ends up in a sweet romance with Princess Bubblegum (Hynden Walch) after the series' initial Cartoon Network run came to an end. She's quite the musician, playing all kinds of songs on her guitar and singing, and fans who listen closely may recognize her lovely voice as that of Olivia Olson, the actress who played Joanna in the movie "Love Actually." For those who haven't seen it in a while, she's the one who sings the rendition of "All I Want for Christmas Is You" in what may be the film's sweetest storyline, the one following the puppy-love romance between Joanna and a boy named Sam (Thomas Brodie-Sangster).
Every storyline in "Love Actually" follows some kind of hapless pair finding love during the holiday season. In Sam's story, he is grieving the loss of his mother (also, oddly enough, named Joanna) while his stepfather (Liam Neeson) tries to help him cope. He becomes obsessed with the idea of telling his classmate Joanna that he's in love with her before she heads back to the United States, and he finally confesses it to her at the airport terminal. It's very cute, even if the mother's name being the same is a wee bit Freudian.
When asked what he remembers about shooting "The Naked Time," Moss had one complaint that has clearly been bugging him for decades. First, he described the scene he was in as follows:
"My first scene was with Leonard Nimoy, beaming down onto an alien planet. [...] [W]e're in a station full of frozen corpses. Spock told me not to touch anything and went off while I looked around. While I'm reading something on a wall, my nose starts to itch, so I take off my glove and touch the wall itself, causing an alien germ to drop onto my hand and infect me with an alien disease that spreads to anyone who touches me, and then to anyone who touches anyone who touched me. We lose control of our emotional safeguards and, in short, we all go mad."
Moss clearly understood his role in the story and how the alien disease worked. For those unfamiliar, "The Naked Time" is the episode in which Sulu (George Takei) bounds through the corridors of the Enterprise shirtless, brandishing a fencing sword. He only did it because he was infected.
Moss also seems to have fully grasped what Starfleet was, and he began building a (surprisingly accurate) database of knowledge on the subject. This was long before obsessive Trekkies gathered in convention halls to pick over the details of individual episodes and piece together a working theory of the vital central events of "Star Trek." Marc Daniels directed "The Naked Time," and Moss recalls pulling him aside to question Tormolen's impertinent stupidity.
Take the strain, and three, two, one, pull! No, I'm not in the gym lifting weights, but in the woods with my Nikon DSLR, raising its optical viewfinder to my eye to compose a picture. It's my D800's first outing in years and it's quickly reminding me why I was so happy to switch to mirrorless. At 31.7oz / 900g, and combined with my Nikon 70-200mm AF-S f/2.8 VR lens (50.4oz / 1430g), it's well over 80oz / 2300g, and being cumbersome isn't even the worst part.
Don’t get me wrong, I’ll come away from this walk in my local woods that’s bursting with fragrant bluebells and wild garlic with some pictures I’m super-excited about (see below), but boy do I have to work that much harder to get the results I want. And without wanting to lug a tripod around, I actually can’t get the same degree of sharpness in my pictures from this day in the dim conditions under a dense tree canopy.
There are aspects of the Nikon D800’s handling that I really enjoy and mixing up creative tools keeps me fresh as a photographer, but overall my mirrorless camera is a much more streamlined experience and I’m still glad that I made the leap from a Nikon DSLR to the Z6 II. Let’s look at where my DSLR struggles begin.
1. Carrying the gear
My Nikon D800 from 2012 is around 50% heavier than the Nikon Z6 II I'm now used to, and also than the Z7 II that is arguably my DSLR's modern-day equivalent. The 70-200mm f/2.8 F-mount lens is also heavier than the mirrorless Z-mount version, although not by much. Overall, there's approximately a 20% weight saving in the mirrorless version of my DSLR camera and lens pairing.
The DSLR camera body is also bulkier, and I notice this quickly with the chunkier handgrip. In some ways it actually balances better with the fairly large telephoto lens than my mirrorless camera does, but in practice I find myself wanting to put the DSLR down sooner than the mirrorless.
When you’re repeating the motion of bringing the camera’s viewfinder up to your eye to compose a shot, the strain starts to take hold quite quickly.
2. Composing the shot
I like the D800’s optical viewfinder (OVF), a lot. It’s a bright and big display through which I can immerse myself in the scene. And it’s one less digital screen to look at, and I’m all for that.
However, what you don’t get with an OVF, like you do with a mirrorless camera’s electronic viewfinder (EVF), is exposure preview, which is supremely helpful as you go about taking photos. You get a bright display but potentially a very different looking final image, both in brightness and depth of field / bokeh.
That can cause a problem for me because I tend to fiddle with exposure compensation based on the mood I want in the picture. It’s all too easy to leave the camera at -2EV for a low-key effect and unwittingly carry on shooting dark pictures because the end result is not reflected in the OVF display. Overall, I prefer an optical viewfinder display for the feeling and an electronic viewfinder to meet my practical needs.
Another point regarding my D800 is that its screen is fixed, whereas my mirrorless camera has a tilting display that's super helpful for shooting at low angles, something I do often, especially in scenarios like this. Some DSLRs like the Nikon D850 also have a moveable screen, but most don't, and once you're used to working from a tilting or swivel screen, it's hard to go back to a fixed one.
3. Focusing issues
Focusing isn’t bad with the D800. It’s actually very good, but it’s not as refined as the Z6 II mirrorless camera. It’s evident as I pinpoint certain bluebells – the focus points simply aren’t small enough. I wrestle with autofocus as it hunts for the subject that’s right there, more so than with mirrorless.
If I were taking portraits today, I'd be much more relaxed with my mirrorless camera too, thanks to its reliable subject and eye detection autofocus, whereas my D800 has regular back-focusing issues.
I’ve also become accustomed to composing shots through the Z6 II’s LCD display, often instead of the viewfinder. If I try to do the same – focusing through the D800’s Live View – it is a significantly worse experience, too. Nikon DSLRs aren’t really designed to be used for photography with autofocus through Live View, though Canon DSLRs do a better job.
4. No image stabilization
When looking closely at the detail of the tree bark in sharp focus, there's a subtle softness that comes with shooting handheld using a high-resolution DSLR like the D800.
The single thing I miss the most when opting for my DSLR over mirrorless is in-body image stabilization, which in the Z6 II enables me to shoot handheld in more situations.
I remember when I first bought my D800 just how unforgiving its 36MP sensor was to camera shake and its resulting effect – softened detail. At the time, my golden rule for calculating the minimum acceptable shutter speed for sharp shots was the reciprocal of the focal length of your lens – for example, 1/200sec when shooting at 200mm.
That rule went out the window with the D800, whose sensor was the highest-resolution full-frame chip of its day, and I would have to be conservative by around 2EV. At the same 200mm focal length, a faster-than-normal 1/1000sec was really as slow as I could go. Or I could bring out the tripod to eliminate camera shake.
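To make that margin concrete, here's a minimal sketch of the reciprocal rule with an EV-based safety margin. The function name and structure are my own, used purely for illustration, not something from the article.

```python
# Reciprocal rule with a safety margin: minimum shutter speed is roughly
# 1/focal_length, and each extra stop (EV) of caution halves the exposure time.
def min_handheld_shutter(focal_length_mm: float, safety_margin_ev: float = 0.0) -> float:
    """Longest 'safe' handheld exposure time, in seconds."""
    base = 1.0 / focal_length_mm            # e.g. 1/200 s at 200mm
    return base / (2 ** safety_margin_ev)   # each EV of margin halves the exposure time

print(min_handheld_shutter(200))        # 0.005 s   -> 1/200 s
print(min_handheld_shutter(200, 2.0))   # 0.00125 s -> 1/800 s
```

Rounding that 2EV result (1/800sec) up to the next standard shutter speed gives the 1/1000sec figure mentioned above.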
A tripod doesn't suit shooting where I need maximum portability, like this day in the woods, nor do I want to damage the woodland and bluebells – I need a light footprint. No, I'm going handheld all the way.
Now I'm in these woods shaded by a dense tree canopy, and the shutter speed I need to use with the 70-200mm lens requires a high ISO, even with the f/2.8 aperture. Put simply, the quality of detail I can get in this scenario cannot match what I can get with my mirrorless camera, which is equipped with image stabilization and can shoot at slower shutter speeds and lower ISO because it compensates for camera shake.
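As a rough illustration of that trade-off: at a fixed aperture, ISO has to scale inversely with exposure time, so every stop of stabilization lets the ISO halve. The metered values below are invented for the example, not measurements from this shoot.

```python
# At a fixed aperture, ISO must scale inversely with exposure time to keep
# the same brightness; each stop of stabilization allows a 2x longer exposure.
def iso_needed(metered_iso: float, metered_shutter_s: float, actual_shutter_s: float) -> float:
    """ISO required to keep the same exposure at a fixed aperture."""
    return metered_iso * (metered_shutter_s / actual_shutter_s)

# Suppose the shaded scene meters at ISO 6400 with 1/1000 s at f/2.8 (illustrative numbers).
print(round(iso_needed(6400, 1/1000, 1/1000)))  # no stabilization: stuck at 1/1000 s -> 6400
print(round(iso_needed(6400, 1/1000, 1/60)))    # ~4 stops of IBIS allows 1/60 s -> ~384, near ISO 400
```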
The photos I came away with using my DSLR
Visually, most woodlands are messy. You have to search long and hard for tidy compositions, such as a single tree standing out from the rest. Or you can embrace and work with the chaos.
I’ve intentionally used a telephoto lens and shot through branches and leaves to add layers, a sense of depth and to bring in those elements that you otherwise have to work so hard to avoid. And I’m certainly not about to cut away branches or rip up flowers to get the shot I want.
My overall experience bringing my DSLR back out of retirement was fine, but it reminded me how mirrorless has changed the camera experience for the better. Ultimately, mirrorless is more refined than a DSLR in just about every department.
Images are better, too. I haven’t been able to shoot handheld at ISO 100 under dense tree cover like I could with mirrorless, and there’s just an edge of softness in my pictures caused by subtle camera shake that I don’t have with mirrorless. I’m less concerned with my DSLR’s inferior corner sharpness and pronounced vignetting compared to mirrorless.
I'm not about to sell my DSLR – I'll give it another run out soon. It's just that I've been reminded of the extra dedication the craft demands to come away with pictures I'm happy with. As I own both a DSLR and a mirrorless camera, opting for the DSLR feels like taking the hard path.
Apex Legends will soon offer a Solos mode for the first time since 2019, even though developer Respawn Entertainment said earlier this year it had no plans to let players run amok in battle by themselves again. When the next season starts, Solos will replace the Duos mode for six weeks.
The game is designed and tuned for squads of three, but Respawn recently told reporters that it “wanted to acknowledge the growing interest in Solos from our players,” many of whom were looking for new ways to play the game. Running the mode for half of season 21 will give the developers a chance to gain plenty of feedback from players. Perhaps that could help them figure out if Solos could become a more permanent fixture.
“With growing demand from players and a desire on the team to explore the concept again with everything we’ve learned since the mode’s last appearance in 2019, Upheaval felt like the right time to reintroduce a Solos experience to Apex,” events lead Mike Button said.
To compensate for the lack of support from teammates, the revived Solos mode will have three unique features. If you’re eliminated in the first four rounds, you’ll be able to use a one-time respawn token to rejoin the action. Any unused tokens after the fourth circle closes are converted to Evo, which is used for shields and ability upgrades. The idea behind this, according to the developers, is to encourage players to be more engaged in the early going.
Respawn has also created a mechanic for Solos called Battle Sense. This gives you an audio and visual cue whenever an enemy is within 50 meters. Last but not least, you’ll heal passively when you’re out of combat. It’ll take a moment for the gradual health regeneration to start, but you can skip that initial timer by securing a kill. You’ll still be able to use med kits and such to heal manually. Respawn is making some other tweaks for Solos, including adding fully kitted-out weapons, adjusting circle sizes and reducing the lobby size from 60 to 50 players.
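To tie those rules together, here's a hypothetical sketch of how the described Solos mechanics might be modeled. The class, Evo values and heal timings are invented for illustration and are not Respawn's implementation; only the rules themselves come from the article.

```python
# Toy model of the Solos rules described above. Only the rules (one-time respawn
# through round 4, token-to-Evo conversion, 50 m Battle Sense cue, passive heal
# that a kill can fast-forward) come from the article; all values are made up.
import math

class SoloPlayer:
    def __init__(self):
        self.respawn_token = True
        self.evo = 0
        self.hp = 100

    def on_elimination(self, current_round: int) -> str:
        # The one-time respawn is only available during the first four rounds.
        if self.respawn_token and current_round <= 4:
            self.respawn_token = False
            return "respawn"
        return "eliminated"

    def on_circle_close(self, current_round: int, token_evo_value: int = 100):
        # After the fourth circle closes, an unused token converts to Evo.
        if current_round == 4 and self.respawn_token:
            self.respawn_token = False
            self.evo += token_evo_value  # placeholder value

    def battle_sense(self, my_pos, enemy_pos, radius_m: float = 50.0) -> bool:
        # Audio/visual cue whenever an enemy is within 50 metres.
        return math.dist(my_pos, enemy_pos) <= radius_m

    def passive_heal(self, seconds_out_of_combat: float, got_kill: bool,
                     delay_s: float = 10.0, hp_per_tick: float = 2.0):
        # Regeneration starts after a delay; securing a kill skips the timer.
        if got_kill or seconds_out_of_combat >= delay_s:
            self.hp = min(100, self.hp + hp_per_tick)
```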
Alongside some map, cosmetic, balance and ranked changes, there’ll be a new legend for players to check out. Alter hails from another dimension and that plays into her kit. She can create portals through walls, ceilings and floors.
The Void Passage ability can be fired from some distance away and it has a maximum depth of 20 meters, so it can’t go through mountains. After going through a portal, you’ll have a few seconds of safety to assess your surroundings and prepare for a fight if need be. Allies and enemies can use the portals too, so Void Passage can open up all kinds of opportunities for flanking and rotations.
With her passive ability, Alter is able to see death boxes through walls and snatch an item from one. Alter’s ultimate is called Void Nexus. This drops a device that you and your teammates can interact with remotely, even while knocked down. Doing so will teleport you back to the regroup point. However, enemies have a short window to follow you. Alter’s upgrades include the ability to see enemy health bars while moving through a portal.
You’ll be able to check out the revived Apex Legends Solos mode and play as Alter when the Upheaval season starts on May 7.
If the recent news of two Windows 11 updates breaking various features isn't enough, the reveal that the OS's market share has dipped below 26% certainly should spark some alarm.
According to April 2024 data from Statcounter, Windows 11 plummeted to a 25.69% market share after it reached an all-time high of 28.16% back in February 2024. Meanwhile, Windows 10 has risen to over 70% market share during the same period, and this is after Microsoft announced its intentions to reach End of Support (EOS) for Windows 10 by October 2025.
Microsoft could be looking at a tremendous issue, in which its hopes for Windows 11 being the ultimate AI-supported OS with Copilot are hampered by not having the user base it needs. Normally it's the older OS that bleeds share once a successor launches, so Windows 11 falling roughly two and a half points in just a few months is quite telling.
But is it honestly surprising?
It's no secret that Windows 11 has been plagued with issues and bad updates since launch, not to mention its biggest problem: many users can't upgrade at all because of its much steeper hardware requirements, which lock out plenty of otherwise interested users.
There's also the fact that the OS has been forcing ads as "recommendations" into the Start menu, and has even begun testing full-screen promotional pages that urge users to make Edge the default browser and to install or enable other services. The worst part is that there's no way to fully opt out of these ads, which accomplish nothing but clog up the UI with constant notifications.
As for what features Windows 11 offers over Windows 10? There's simply not enough incentive for users to make the jump, with features like centered taskbar icons and the return of desktop widgets barely worth mentioning. And some features, like the ability to move the taskbar, were actually removed.
On the other hand, Windows 10 followed Windows 8/8.1 and endeared itself to users with its many improvements, including bringing back the Start menu. Not to mention how much more stable the OS is compared to its successor, with far fewer broken updates.
What’s the future for Windows 11?
The biggest reason to make the move to Windows 11 is possibly Microsoft Copilot, but that’s also coming to Windows 10. There are some unique AI tools that Windows 11 will be getting eventually, but that could also serve to further the divide between users with higher-end PCs and less powerful ones.
So then, what should Microsoft do? The tech giant might have to cut its losses and speed up the release of Windows 12, putting all the AI goodies and other new features there instead. The user base would be more willing to move to a new OS, and doing so could even help head off the e-waste problem brewing as Windows 10 nears its end of support. There are also tons of other features and tools that could be added, plenty of which are fan favorites that would easily draw in users from Windows 10.
This move would be the kiss of death for Windows 11, but it would honestly be a net positive for Microsoft, as it could put all the bad press for Windows 11 behind it and fully support a superior OS, while giving Windows 10 users far more incentive to make the switch in the process.
At some point in the last 24 hours, Siri on the HomePod and the HomePod mini seems to have forgotten how to relay the time. When asking Siri “what time is it?” Siri is unable to answer and directs users to the iPhone.
“I found some web results, I can show them if you ask again from your iPhone,” is Siri’s full response to the time question. If you ask what time it is in a specific location, Siri is able to respond, and Siri on iPhone, iPad, and Mac provides the time as usual when asked.
This is a bug that Apple will be able to fix server side, so it will likely be addressed quickly. In the meantime, to get the time from Siri on the HomePod without having to swap to an iPhone, include your location.
Siri has long been ridiculed for failing to understand requests and not providing the expected information, and small bugs like this are a bit embarrassing as Apple prepares for a major AI update.
For the last several months, Siri has also been struggling with HomeKit commands, and there have been many complaints from smart home users. Asking Siri to “turn off the lights in the living room,” for example, often results in the lights being turned on or turned off in another room entirely. Hopefully some of these issues will be solved with a Siri overhaul in iOS 18 and its sister updates.
Previous research has shown that our perception of time is linked to our senses. (Credit: Karol Serewis/SOPA Images/LightRocket/Getty)
When people look at larger, less cluttered scenes — a big, empty warehouse, for example — they think they viewed it for longer than they actually did. Conversely, people experience time constriction when looking at more constrained, cluttered scenes, such as an image of a well-stocked cupboard. The study of 52 participants also showed that people are more likely to remember the images they thought they viewed for longer. “It suggests that we use time to gather information about the world around us, and when we see something that’s more important, we dilate our sense of time to get more information,” says cognitive neuroscientist and study co-author Martin Wiener.
NASA’s interstellar spacecraft has sent updates about its health and operating status after five months of transmitting garbled data. Launched in 1977, Voyager 1 was the first human-made object to leave the solar system and is now 24 billion kilometres from Earth. In November last year, it started sending signals that didn’t make sense. Following modifications to how Voyager 1 stores data, aimed at fixing the glitch, NASA’s flight team confirmed on 22 April that they were able to communicate with the spacecraft once again. They hope to restore its ability to send back science data, too.
Fossil vertebrae of possibly the longest snake to have ever lived have been unearthed in a coal mine in India. Researchers recovered 27 vertebrae of a snake estimated to reach up to 15 metres in length, more than twice that of the longest snakes alive today, reticulated pythons (Malayopython reticulatus), and probably slightly longer than the extinct Titanoboa. The snake, dubbed Vasuki indicus, lived 47 million years ago.
Some of the fossil vertebrae discovered in a mine in Gujarat in western India, the largest of which are about 11 cm wide (Debajit Datta et al/Scientific Reports)
Features & opinion
Tamsin Mather’s book Adventures in Volcanoland takes readers on a journey to some of the world’s most notorious and active volcanoes — and reminds us that the next volcanic catastrophe is inevitable. Yet global preparedness for volcanic eruptions is severely lacking, says fellow volcanologist and reviewer Heather Handley. There is no international treaty organization for volcanic hazards and no global coordination on issuing comprehensive warnings of risks of eruptions, she says. Mather’s book “reminds us that we should all keep careful watch on the world’s volcanoes”.
Researchers urgently need to explore the future carbon footprint of artificial intelligence (AI) technologies, argues a group of sustainability researchers. The direct impacts of AI computing infrastructure — currently about 0.01% of global greenhouse-gas emissions — are likely to remain relatively small, the researchers write. But there could be huge indirect impacts from the way AI tools transform our economies and societies. The group urges researchers to assess whether AI will help or hinder climate progress under different possible scenarios.
“In my flu career, we have not seen a virus that expands its host range quite like this,” says virologist Troy Sutton about H5N1, an avian influenza virus that has rapidly infiltrated species well beyond birds. While most mammal infections were probably caused by contact with an infected bird, there’s evidence that the virus has now evolved to spread directly between some species, such as sea lions. Spreading in more species gives H5N1 opportunities to further adapt to mammals, including humans. So far, the virus doesn’t show signs of being able to cause a pandemic, Sutton says. “If we don’t give it the panic but we give it the respect and due diligence, I believe we can manage it,” adds Rick Bright, chief executive of a public health consultancy.
Lindonne Telesford is a public-health researcher, associate lecturer and assistant dean at St. George’s University in Grenada. (Credit: Micah B. Rubin for Nature)
Public-health researcher Lindonne Telesford explores whether ‘foamed glass’ could help farmers on Grenada to adapt to climate change. The porous material, which is made from recycled glass, is added to soil where it traps and retains water during droughts. In a pilot study, plants grown in soil treated with porous glass had a higher yield than control plants did, Telesford explains. “Agricultural research is a major undertaking for Grenada, because the country has a low research capacity — but every little bit counts if it can bring benefits to farmers and protect our island environment.” (Nature | 3 min read)
QUOTE OF THE DAY
Biologist Kelly Weinersmith, co-author of a book on human settlements in space, explains that the bags of waste left behind on the Moon would make good fertilizer for lunar soil — if NASA didn’t regard them as heritage. (Nature Podcast | 38 min listen)
A little while ago, I asked readers about their favourite dull lab tasks. You didn’t disappoint: counting worm eggs, restocking pipette tips and hand-grinding fish ears all sound incredibly boring yet strangely satisfying.
“The best job was sterility testing, where one injected media tube after media tube,” recalls retired nurse practitioner Danamaya Gorham. “Nobody would disturb you for about two hours. The loud hiss of the laminar flow obscured the music (and my singing) from the rest of the lab.”
With contributions by Flora Graham, Smriti Mallapaty and Sarah Tomlin
An atomic clock that keeps time with the help of iodine molecules is sturdy enough to withstand a sea voyage. (Credit: Will Lunden)
Atomic clocks are usually either ultra-precise or sturdy, but not both. Now, scientists have created a precise clock that, when put through its paces aboard a naval ship, wavered by only 300 trillionths of a second per day.
The clock, which was detailed in a paper in Nature on 24 April, could also provide a “vital fallback solution” if signals from global navigation systems are spoofed or jammed in conflict zones, says Tetsuya Ido, director of the Space-Time Standards Laboratory at the Radio Research Institute in Tokyo.
“I’m impressed,” says Elizabeth Donley, who heads the time and frequency division at the US National Institute of Standards and Technology in Boulder, Colorado. “We’re excited to get our hands on it.”
Atomic tick-tock
The ‘tick’ of the world’s best clocks is pegged to the frequency of the radiation that atoms absorb and emit as they oscillate between energy states. Clocks based on atoms of caesium and other elements that emit radiation at a microwave frequency have been used for decades. Some are portable and are sold commercially.
Scientists have also developed clocks that use other elements, such as strontium, that emit at higher frequencies — visible light — to slice time even more finely. But these ‘optical’ clocks are usually the size of dining tables and operate well only under laboratory-controlled conditions.
Vector Atomic, an engineering firm based in Pleasanton, California, has created an optical clock that weighs only 26 kilograms and, including all its housing, takes up about as much space as three shoe boxes. Although the firm’s clock is inferior to the best lab-based optical timekeepers, its precision is 1,000 times better than that of the similar-sized clocks that ships currently use, says company co-founder Jamil Abo-Shaeer, a co-author of the study.
The team tested its system by placing three of the clocks aboard the Royal New Zealand Navy ship HMNZS Aotearoa during a three-week trip around the Hawaiian Islands. Despite the ship’s vibrations and rolling, the clocks performed almost as well as they had in the laboratory. They were notably stable, keeping time to within 300 trillionths of a second over a day.
Donley says this stability is similar to that of a hydrogen maser — a reliable kind of microwave atomic clock that is the workhorse of international timekeeping. But the iodine clock is much more robust and around one-tenth the volume.
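As a back-of-the-envelope check on what that stability figure means, the short calculation below converts 300 trillionths of a second per day into a fractional frequency stability. It is a rough illustration rather than a number taken from the paper.

```python
# Convert "300 trillionths of a second per day" into a fractional stability.
drift_seconds = 300e-12        # 300 trillionths of a second
seconds_per_day = 86_400

fractional_stability = drift_seconds / seconds_per_day
print(f"{fractional_stability:.1e}")   # ~3.5e-15, i.e. a few parts in 10^15 over a day
```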
Fly me to the Moon
The clock’s robustness comes in part from its use of iodine molecules, which can be made to oscillate using compact and durable lasers of the type commonly used in labs. The molecules are also less sensitive than some atoms to temperature fluctuations, magnetic fields and pressure, says physicist Martin Boyd, a co-founder of Vector Atomic and co-author of the paper.
If the team can shrink the clock further, future models could fly aboard global navigation satellites, improving positioning resolution from metres to centimetres, adds Abo-Shaeer. They could even be the clocks that end up defining lunar time, he says.
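The positioning claim follows from a generic piece of navigation arithmetic rather than anything specific to this clock: a receiver turns timing error into ranging error through the speed of light, so centimetre-level fixes demand timing agreement at the tens-of-picoseconds level. A minimal sketch:

```python
# Ranging error = speed of light x timing error (generic physics, not figures
# from the Vector Atomic paper).
C = 299_792_458.0  # speed of light, m/s

def ranging_error_m(clock_error_s: float) -> float:
    return C * clock_error_s

print(ranging_error_m(1e-9))    # 1 ns  -> ~0.30 m (metre-level positioning)
print(ranging_error_m(33e-12))  # 33 ps -> ~0.01 m (centimetre-level positioning)
```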
Today is Earth Day, and it’s as good a time as any to think about the way we use technology and how we can use it better and more sustainably, given the myriad challenges it poses to our environment.
Major computer manufacturers like Acer, Apple, Dell, HP, Lenovo, and many others have all started moving towards more sustainable products, and not just through better packaging that reduces the use of new materials by increasing the amount of post-consumer content.
Even the computers themselves are starting to use post-consumer materials and manufacturers are expanding opportunities for upgrading the devices to keep them current longer, thereby reducing e-waste around the world.
But one area hasn’t seen nearly enough attention: processors, specifically their power usage.
A computer’s CPU is the brains of the whole operation, so naturally, it needs a good bit of power to operate at higher levels of performance. This is even more true of dedicated GPU chips in laptops or the best graphics cards used in desktop systems.
And while better power efficiency in laptops mostly translates into longer battery life, desktops have seemingly gone in the complete opposite direction, with Intel-, AMD-, and Nvidia-powered systems drawing a lot of power to run hardware that often exceeds what users need, all so it can be called the ‘fastest’ or ‘most powerful’. It’s not a sustainable approach.
We need to start emphasizing efficiency over power
There will be circumstances when a lot of power for a component is necessary to do important work, and I’m not saying that every graphics card needs to have its power consumption halved as a basic rule.
But when consumers are barely tapping into the performance that an Nvidia RTX 4080 Super brings to the table, much less what an Nvidia RTX 4090 or AMD RX 7900 XTX offers, you have to ask whether this kind of performance is worth the cost in terms of carbon emissions.
And of course, this is only in terms of useful work, like video editing or gaming, and not for something like cryptocurrency mining, which has at best a marginal social utility, and whose cost in terms of energy usage in the aggregate far outstrips any practical benefit cryptocurrency has in the real world (unless you’re really into criminal activity or need to launder some money).
Regular old processors aren’t immune either, with the current generation of Intel processors soaking up an extraordinary amount of energy (at least in bursts) relative to competitors like AMD and especially Apple.
What you get for that energy draw is some incredible performance numbers, but for 97-98% of users this kind of performance is absolutely unnecessary, even when they’re running a processor appropriate to their needs.
The performance arms race needs to change
Ultimately, AMD, Intel, and Nvidia are locked in an arms race for making the fastest and most powerful processors and graphics cards, and it shows no signs of stopping. Meanwhile, Apple’s move to its own M-series silicon has been a major win, both for the company and for consumers.
Apple’s chips are built on the Arm architecture and its big.LITTLE approach of pairing performance and efficiency cores, which is incredibly energy efficient from the ground up, having originally been intended for mobile devices. But now that the architecture is sophisticated enough to be used in laptops and even desktops, the energy efficiency remains, while Apple has scored major performance gains over both AMD and Intel and kept energy use down.
If we are going to maintain a livable planet in the future, we must go on an energy diet. Seeing the difference between what Apple has done and what AMD, Intel, and Nvidia aren’t doing puts to bed any excuse those three chipmakers have for not refocusing on efficiency going forward.
As hard as it may be to hear or accept, these three chipmakers must acknowledge that we don’t have a performance problem, we have a sustainability problem. They should turn away from squeezing even more performance out of their hardware that we don’t need, and give us the efficiency that is desperately needed, especially when these marginal increases in power are coming at far too high a cost.
Some power users and enthusiasts might not like seeing power consumption fall while performance holds roughly steady gen-on-gen, or improves by less than in previous generations, but it’s what needs to be done, and the sooner everyone acknowledges this and adapts, the better.
Previous research has shown that our perception of time is linked to our senses. (Credit: Karol Serewis/SOPA Images/LightRocket/Getty)
How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found.
In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer.
The findings, published on 22 April in Nature Human Behaviour, could offer fresh insights into how people experience and keep track of time.
“For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.”
Sense of time
Research has shown that humans’ perception of time is intrinsically linked to the senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France.
Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect.
The researchers first created a set of 252 images, which varied according to the size of the scene and how cluttered it was, then developed tests to determine whether those characteristics affected the sense of time in 52 participants. For example, an image of a well-stocked cupboard would be defined as being smaller but more cluttered than one featuring an empty warehouse. Participants were shown each image for less than a second, and asked to rate the time they were shown a specific image as ‘long’ or ‘short’.
When viewing larger or less-cluttered scenes, participants were more likely to experience time dilation: thinking that they had viewed the picture for longer than they actually had. The opposite effect — time constriction — occurred when viewing smaller-scale, more cluttered images.
The researchers suggest two possible explanations for these distortions. One posits that visual clutter is perceived as harder to navigate and move through, whereas the other says that clutter impairs our ability to recognize objects, making it harder to mentally encode the visual information. These difficulties could both lead to time constriction.
Memorable sights
To investigate whether more-memorable images could have an effect on time perception, the researchers showed 48 participants a set of 196 images rated according to their memorability by a neural network. Participants not only experienced time dilation when looking at more-memorable images, but were also more likely to remember those images the next day.
The images were then applied to a neural-network model of the human visual system, one that could process information over time, unlike other networks that take in data only once. The model processed more-memorable images faster than less-memorable ones. A similar process in the human brain could be responsible for the time-dilation effect when looking at a memorable image, says Wiener. “It suggests that we use time to gather information about the world around us, and when we see something that’s more important, we dilate our sense of time to get more information.” This adds to converging evidence that suggests a link between memorability and increased brain processing, says van Wassenhove.
Questions remain about exactly how people perceive time and how this interacts with memory. “We’re still missing pieces of the puzzle,” says Wiener. The next step would be to validate the findings with a larger sample of participants and to refine the model of the visual system, he adds. Van Wassenhove suggests that future studies could use neuroimaging to study brain activity during perception tests. Eventually, Wiener hopes to test whether the brain could be stimulated artificially to influence the way it processes time and memory.