Google introduced the Memory Saver feature to its Chrome browser in February 2023 and has been enhancing it ever since. Now a new option will give users even more control over Memory Saver by introducing a way to configure its aggressiveness.
While Memory Saver is an excellent tool (it addresses Chrome’s appetite for RAM by identifying tabs that aren’t being used and unloading them from memory), there’s currently no way to control when a tab is flagged as inactive and put on snooze. But a flag recently discovered in Chrome Canary by Windows Report shows that Google is testing a feature that will let you pick between three levels of Memory Saver aggressiveness.
Once the flag is enabled, you’ll have access to three settings:
Moderate Memory Savings: With this setting, your tabs become inactive only after a long period of time, striking a balance between saving memory and keeping recently accessed tabs active.
Balanced Memory Savings: Selecting balanced memory savings means that your tabs become inactive after a moderate period of time.
Maximum Memory Savings: If you choose maximum memory savings, your tabs become inactive after a shorter period of time. This aggressive mode minimizes memory usage but may require more frequent tab reloads.
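To make the difference between the three settings concrete, here’s a minimal, purely illustrative Python sketch of how a tiered discard policy like this could work. The threshold values are invented for the example; Google hasn’t published the actual timings behind the settings.

```python
from enum import Enum
import time

class MemorySaverMode(Enum):
    # Hypothetical inactivity thresholds in minutes; Google has not
    # published the real timings behind the three settings.
    MODERATE = 120   # sleep tabs only after a long idle period
    BALANCED = 30    # middle ground
    MAXIMUM = 5      # aggressive: sleep quickly, reload more often

def should_discard(last_active: float, mode: MemorySaverMode) -> bool:
    """Return True if a tab has been idle longer than the mode's threshold."""
    idle_minutes = (time.time() - last_active) / 60
    return idle_minutes > mode.value
```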
Google is also adding a new visual cue for inactive tabs: a dotted circle will appear on them to indicate that they’ve been put to sleep and are no longer consuming memory.
According to the report, Google has been extensively testing out the tool for quite some time. The tech giant “tested a multi-state option for memory mode with heuristic mode, fixed timer, and discard, and offered options behind flags to select the time when tabs can be discarded.” And while those tests eventually went nowhere in terms of new features, they influenced improvements made to Memory Saver.
There currently isn’t a timeframe for when this Memory Saver update will be rolled out to all Chrome users, but once it arrives you should be able to access it through the performance settings at chrome://settings/performance.
Older versions of LiteSpeed Cache, a popular plugin for the WordPress website builder, are vulnerable to a high-severity flaw that hackers have increasingly been exploiting.
The flaw, tracked as CVE-2023-40000 and carrying a severity score of 8.8, is described as an unauthenticated cross-site scripting vulnerability.
By injecting malicious JavaScript code directly into WordPress files through the plugin, the attackers are able to create new administrator accounts, essentially taking over the website completely. Admin accounts can be used to modify the site’s content, add or remove plugins, or change settings. Victims can be redirected to malicious websites, served malicious advertising, or have their sensitive user data stolen.
Mitigations and fixes
The flaw was uncovered by WPScan, a cybersecurity project that maintains an enterprise vulnerability database for WordPress. Its researchers observed increased activity from different hacking groups scanning the internet for vulnerable WordPress sites, namely those running LiteSpeed Cache version 5.7.0.1 or older. The current version, 6.2.0.1, is considered immune to this flaw.
One threat actor made more than a million probing requests in April 2024 alone, the researchers said.
LiteSpeed Cache reportedly has more than five million active users, of which roughly two million (1,835,000) are running the outdated, vulnerable versions.
LiteSpeed Cache is a plugin promising faster page load times, better user experience, and improved Google Search Results Page positions.
Those fearing they might get targeted are advised to update their plugins to the latest version as soon as possible. Furthermore, they should uninstall all plugins and themes they are not actively using, and delete all suspicious files and folders.
Those suspecting they might have already been targeted should look for suspicious strings in the database: “Search in [the] database for suspicious strings like ‘eval(atob(Strings.fromCharCode,'” WPScan said. “Specifically in the option litespeed.admin_display.messages.”
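For admins comfortable at the command line, here is a minimal sketch of that check in Python. It assumes direct MySQL access to the WordPress database and the default wp_options table name; the connection credentials are placeholders.

```python
# A sketch of WPScan's suggested check: look for the suspicious string in the
# litespeed.admin_display.messages option. Credentials and the default
# 'wp_options' table name are placeholder assumptions.
import pymysql

SUSPICIOUS = "eval(atob(Strings.fromCharCode"  # string flagged by WPScan

conn = pymysql.connect(host="localhost", user="wp_user",
                       password="secret", database="wordpress")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT option_id FROM wp_options "
            "WHERE option_name = %s AND option_value LIKE %s",
            ("litespeed.admin_display.messages", "%" + SUSPICIOUS + "%"),
        )
        if cur.fetchone():
            print("Suspicious payload found - the site may be compromised.")
        else:
            print("No match in litespeed.admin_display.messages.")
finally:
    conn.close()
```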
The developers of Tiny11 (a slimmed-down third-party version of Windows 11) have released a new version of their Tiny11 Builder, a tool that lets you build your own trimmed-down, customized version of Windows 11.
This version allows you to make Windows 11 ISOs (installation media) with telemetry disabled. Telemetry is, in essence, Microsoft’s built-in automated process for collecting, monitoring, analyzing and reporting data about your system. Disabling it can improve your privacy, free up system resources, and give you greater control over your user data.
Tiny11 Builder is essentially an open-source script that you can run on your device to make it possible to slim down your Windows 11 for a smoother user experience. You can get the script for Tiny11 from the developers’ GitHub page by copying and pasting the code into a Windows PowerShell window, or by downloading the script file (which will have a .ps1 extension), right-clicking the file, and selecting ‘Run with PowerShell.’
For the uninitiated, PowerShell is a Microsoft tool that allows you to automate tasks and processes in the Windows operating system. The easiest way to find it is simply by searching for it in Windows Search, but it’ll open automatically if you follow the second method listed above.
How Tiny11 Builder works to unlock Windows 11’s efficiency
Running Tiny11 Builder this way will prompt your system to use Microsoft-made tools to remove items that aren’t essential, but that you wouldn’t be able to remove from Windows 11 in its default state.
This process isn’t as straightforward as downloading an official Microsoft ISO from its dedicated website, but according to Neowin, the resulting ISO image comes out clean and fully functional. It also allows you to bypass hurdles like needing a Microsoft account and certain hardware requirements, as well as permitting you to kill off Microsoft Edge, Get Started, OneDrive, and any other Windows bloatware that you might consider unnecessary.
The updated version of the Tiny11 Builder script allowing for disabled telemetry was put up on GitHub on April 29, 2024, and announced by Tiny11 creator NTDEV on X. If you’re concerned about how much of your user data is collected and shared with Microsoft, this tool has become a popular option among people who share such concerns. It allows you to curb the sharing of data through Windows functionalities like the Application Compatibility Appraiser, the Customer Experience Improvement Program, and others.
“The first update to tiny11 builder is now up! It disables telemetry as well as some of the scheduled tasks associated with it. Go check it out and let me know how it works!” https://t.co/qmtOcmkPdO https://t.co/cvkCllUma3 (April 29, 2024)
The ability to remove telemetry looks like the only change in this iteration of Tiny11 Builder, but NTDEV seems to have plans to give Tiny11 additional capabilities, as per the developers’ GitHub repository. Future plans include enhanced language detection, more flexibility in managing which Windows 11 features to keep and which to purge, and maybe even a unique new user interface.
This sits maybe one notch above ‘beginner’ when it comes to tinkering with the software on your PC, but if you’re interested in it, I’d encourage you to give it a try. With Microsoft’s recent onslaught of ads, I can see tools like this becoming more popular, and for all of our sakes, I hope Tiny11 Builder stays open source.
Remedy Entertainment has provided an update on the development of Control 2, Codename Condor, and the Max Payne 1 and 2 remakes.
In a recent shareholders report published on April 29 (via Eurogamer), Remedy confirmed that its remakes of Max Payne 1 and Max Payne 2, which were announced in 2022, are set to move into “full production” during Q2 of 2024 after completing their production readiness stage.
Elsewhere, the studio’s upcoming co-operative multiplayer game Codename Condor, set in the Control universe, has now entered into full production. That means it has “reached the final development stages before the game is launched.”
The report indicates that, based on a “wide internal playtest,” the studio “can see that the core loop is engaging, and the game brings a unique Remedy angle to the genre.”
As for Control 2, the development team has “focused on finalizing the proof-of-concept stage, in which the game world, game mechanics and visual targets are proven” and the studio expects the project to advance to the “production readiness stage” during Q2 2024.
Finally, Codename Kestrel – Remedy’s multiplayer action game – is still in the concept stage as the team “works to refine the game concept.”
Remedy adds that Alan Wake 2, which launched in October, had sold 1.3 million units as of the beginning of February, and that by the end of the first quarter the game had recouped a “significant part of the development and marketing expenses.”
Alan Wake 2 is also set to receive two paid expansions – Night Springs and The Lake House. The former is expected to release in late spring, while the latter has yet to get a release date.
In addition to game development updates, Remedy reported that Chinese tech conglomerate Tencent has now increased its shareholding in the studio to 14.80% after acquiring a 3.8% stake back in 2021.
The new iMazing 3 app redesign brings a fresh user interface and Vision Pro connectivity support to the software, developer DigiDNA said Wednesday. The widely used software helps you manage iPhone, iPad and iPod content and functionality from a Mac.
“In the eight years since we released iMazing 2, regular updates have improved functionality and expanded device and OS support,” said DigiDNA CEO Jerome Bedat.
“To achieve our vision for iMazing 3, we had to redevelop our approach, with a modern user interface and new codebase that will allow us to deliver features in the future that no one else can offer,” he added.
DigiDNA iMazing 3 app redesign
At the heart of the overhauled iMazing 3 lies its Discover section. It’s a dedicated space that streamlines access to commonly used tools for transferring photos, downloading messages, managing music libraries and creating backups.
The intuitive interface ensures you can easily locate and use the functions you need. DigiDNA also emphasized that all iMazing 3 functionality is local, meaning no data leaves the computer. That’s a boon for privacy and security.
Notably, iMazing 3 extends its compatibility beyond iPhones, iPads and iPods. It now supports Apple’s Vision Pro AR/VR headset. Remote pairing allows connection and management of the device from a distance.
But the software’s main use is still dealing with iPhone and iPad functions and content from your Mac desktop.
Updated features of iMazing 3
A standout addition is the Device Overview section. It provides a slew of details about connected devices, including serial numbers, model numbers, device IDs and the date of the latest backup. The feature simplifies the process of managing multiple devices, ensuring users have a centralized hub for monitoring and maintaining devices.
Battery management also got an overhaul. It now resides in a dedicated section that offers insights into current temperature, design max charge, effective max charge and charge cycles. Furthermore, users can now easily manage storage capacity, ensuring optimal performance and efficient use of their devices’ resources.
Enhancing the overall user experience, iMazing 3 introduces a Dark Mode option, improved Backup and snapshot management tools, plus a redesigned settings interface.
You can download iMazing 3 for Mac or PC from the developer’s website. That particular version does not yet appear on the App Store. Prices start at $40. Existing users who purchased the software after October 20, 2020, can upgrade for free. Folks holding older licenses can get a 50% discount.
The sun continues to set on the iconic Windows Control Panel, as another key part, the Fonts page, makes its way to the Settings app instead. The Control Panel isn’t on the way out just yet, but it’s directing users to the Settings app for an increasing number of functions. And now, reports suggest that later this year, if you try to open the Fonts page from the Control Panel, you’ll be automatically redirected to the Settings app.
The Fonts page can currently be found in the following location:
Control Panel > Appearance and Personalization > Fonts
This is the latest development in an ongoing migration process that Windows Latest has been documenting for several years, which has seen features transition from the Control Panel to the Settings app. Windows Latest reports that Microsoft doesn’t currently seem to have plans to completely remove Control Panel from regular Windows 11 versions.
The next version of font management in Windows 11
Over in the Settings app, there will be a modern font management interface that works similarly to its Control Panel predecessor. At the moment, the legacy version of the Fonts page still exists in the Control Panel, and it can be located using Windows Search.
Here, you can browse the fonts available on your system and use the legacy font management page.
That said, Microsoft wants to guide users to the Settings app for font management and Windows Latest writes that Fonts will be completely removed from the Control Panel in a future Windows update. Instead, users will be redirected to Settings > Personalization > Fonts, which is where the new Fonts page resides.
This will be a noticeable change, but it shouldn’t be too disruptive as it apparently has all of the functionality and features of the legacy page. Also, the future update probably won’t remove the legacy Control Panel Fonts page right away, and users will still be able to find it in C:\Windows\Fonts within File Explorer.
If you’re particularly annoyed by the change and want to stick with the classic interface, you can create a shortcut that opens the above location in File Explorer as well.
Again, Microsoft is pretty insistent that it would like users to get used to performing font management through Settings, and when Windows Latest opened the Fonts page in File Explorer, it got this message:
“This page is being decoupled from Fonts Control Panel. For more font settings, go to the Fonts page in the Settings app.”
A lot of users are used to Control Panel, which has been a part of Windows since the very first version in 1985, so Windows Latest thinks it’s here to stay. What will change is that with every new feature that’s migrated to the Settings app from Control Panel, users will be redirected to the new analogous page in Settings.
I think this is a wise decision from Microsoft, as it makes sense to have a single place where you can manage all of your computer’s settings, especially as new generations of people are introduced to the operating system. It preserves the interface and, it seems, the full functionality of the Control Panel, while attaching it to the new architecture that’s being built, in a way that isn’t especially disruptive or difficult for existing users.
In the conflict between Russia and Ukraine, video footage has shown drones penetrating deep into Russian territory, more than 1,000 kilometres from the border, and destroying oil and gas infrastructure. It’s likely, experts say, that artificial intelligence (AI) is helping to direct the drones to their targets. For such weapons, no person needs to hold the trigger or make the final decision to detonate.
The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.
Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.
The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm.
The issue of weapons equipped with artificial intelligence was discussed by the United Nations Security Council in July 2023. Credit: Bianca Otero/Zuma/eyevine
For years, researchers have been campaigning to control this new threat1. Now the United Nations has taken a crucial step. A resolution in December last year added the topic of LAWs to the agenda of the UN General Assembly meeting this September. And UN secretary-general António Guterres stated in July last year that he wants a ban on weapons that operate without human oversight to be in place by 2026. Bonnie Docherty, a human rights lawyer at Harvard Law School in Cambridge, Massachusetts, says that getting this topic on to the UN agenda is significant after a decade or so of little progress. “Diplomacy moves slowly, but it’s an important step,” she says.
The move, experts say, offers the first realistic route for states to act on AI weapons. But this is easier said than done. These weapons raise difficult questions about human agency, accountability and the extent to which officials should be able to outsource life-and-death decisions to machines.
Under control?
Efforts to control and regulate the use of weapons date back hundreds of years. Medieval knights, for example, agreed not to target each other’s horses with their lances. In 1675, the warring states of France and the Holy Roman Empire agreed to ban the use of poison bullets.
Today, the main international restrictions on weaponry are through the UN Convention on Certain Conventional Weapons (CCW), a 1983 treaty that has been used, for example, to ban blinding laser weapons.
Autonomous weapons of one kind or another have been around for decades at least, including heat-seeking missiles and even (depending on how autonomy is defined) pressure-triggered landmines dating back to the US Civil War. Now, however, the development and use of AI algorithms is expanding their capabilities.
The CCW has been formally investigating AI-boosted weapons since 2013, but because it requires international consensus to pass regulations — and because many countries actively developing the technology oppose any ban — progress has been slow. In March, the United States hosted an inaugural plenary meeting on the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a parallel effort that emphasizes voluntary guidelines for best practice rather than a legally enforceable ban.
Part of the problem has been a lack of consensus about what LAWs actually are. A 2022 analysis found at least a dozen definitions of autonomous weapons systems proposed by countries and organizations such as the North Atlantic Treaty Organization (NATO)2. The definitions span a wide range and show a limited amount of agreement on, or even an understanding of, AI, says Russell.
The United Kingdom, for example, says LAWs are “capable of understanding higher-level intent and direction”, whereas China says such a weapon can “learn autonomously, expand its functions and capabilities in a way exceeding human expectations”. Israel declares: “We should stay away from imaginary visions where machines develop, create or activate themselves — these should be left for science-fiction movies.” Germany includes “self-awareness” as a necessary attribute of autonomous weapons — a quality that most researchers say is far away from what’s possible with AI today, if not altogether impossible.
“That sort of means that the weapon has to wake up in the morning and decide to go and attack Russia by itself,” says Russell.
Although a more comprehensive, specific and realistic definition for LAWs will need to be ironed out, some experts say this can wait. “Traditionally in disarmament law, although it’s counter-intuitive, actually they often do the definition last in negotiation,” Docherty says. A working definition is usually enough to start the process and can help to soften initial objections from countries opposed to action.
The AI advantage
According to a 2023 analysis published by the Center for War Studies at University of Southern Denmark in Odense3, the autonomous weapons guided by AI available to army commanders today are relatively crude — slow-moving and clumsy drones equipped with enough explosive to blow up themselves and their targets.
These ‘loitering munitions’ can be the size of a model aircraft, cost about $50,000, and carry a few kilograms of explosive up to 50 kilometres away, enough to destroy a vehicle or to kill individual soldiers. These munitions use on-board sensors that monitor optical, infrared or radio frequencies to detect potential targets. The AI compares these sensor inputs with predesignated profiles of tanks, armoured vehicles and radar systems — as well as human beings.
Observers say that the most significant advantage offered by these autonomous bombs over remote-controlled drones is that they still work if the other side has equipment to jam electronic communications. And autonomous operation eliminates the risk that remote operators could be traced by an enemy and themselves attacked.
Although there were rumours that autonomous munitions killed fighters in Libya in 2020, reports from the conflict in Ukraine have cemented the idea that AI drones are now being used. “I think it’s pretty well accepted now that in Ukraine, they have already moved to fully autonomous weapons because the electronic jamming is so effective,” says Russell. Military commanders such as Ukraine’s Yaroslav Honchar have said that the country “already conducts fully robotic operations, without human intervention”3.
It’s hard to know how well AI weapons perform on the battlefield, in large part because militaries don’t release such data. Asked directly about AI weapons systems at a UK parliamentary enquiry in September last year, Tom Copinger-Symes, the deputy commander of the UK Strategic Command, didn’t give much away, saying only that the country’s military is doing benchmarking studies to compare autonomous with non-autonomous systems. “Inevitably, you want to check that this is delivering a bang for a buck compared with the old-fashioned system of having ten imagery analysts looking at the same thing,” he said.
Although real-world battlefield data is sparse, researchers note that AI has superior processing and decision-making skills that, in theory, offer a significant advantage. In annual tests of rapid image recognition, for example, algorithms have outperformed expert human performance for almost a decade. A study last year, for example, showed that AI could find duplicated images in scientific papers faster and more comprehensively than a human expert4.
In 2020, an AI model beat an experienced F-16 fighter-aircraft pilot in a series of simulated dogfights thanks to “aggressive and precise manoeuvres the human pilot couldn’t outmatch”. Then, in 2022, Chinese military researchers said that an AI-powered drone had outwitted an aircraft flown remotely by a human operator on the ground. The AI aircraft got onto the tail of its rival and into a position where it could have shot it down.
The US Air Force’s X-62A VISTA aircraft has been used to test the ability of autonomous agents to carry out advanced aerial manoeuvres. Credit: U.S. Air Force photo/Kyle Brasier
A drone AI can make “very complex decisions around how it carries out particular manoeuvres, how close it flies to the adversary and the angle of attack”, says Zak Kallenborn, a security analyst at the Center for Strategic and International Studies in Washington DC.
Still, says Kallenborn, it’s not clear what significant strategic advantage AI weapons offer, especially if both sides have access to them. “A huge part of the issue is not the technology itself, it’s how militaries use that technology,” he says.
AI could also in theory be used in other aspects of warfare, including compiling lists of potential targets; media reports have raised concerns that Israel, for example, used AI to create a database of tens of thousands of names of suspected militants, although the Israeli Defence Forces said in a statement that it does not use an AI system that “identifies terrorist operatives”.
Line in the sand
One key criterion often used to assess the ethics of autonomous weapons is how reliable they are and the extent to which things might go wrong. In 2007, for example, the UK military hastily redesigned its autonomous Brimstone missile for use in Afghanistan when it was feared it might mistake a bus of schoolchildren for a truckload of insurgents.
AI weapons can fairly easily lock on to infrared or powerful radar signals, says Kallenborn, comparing them to a library of data to help decide what is what. “That works fairly well because a little kid walking down the street is not going to have a high-powered radar in his backpack,” says Kallenborn. That means that when an AI weapon detects the source of an incoming radar signal on the battlefield, it can shoot with little risk of harming civilians.
But visual image recognition is more problematic, he says. “Where it’s basically just a sensor like a camera, I think you’re much, much more prone to error,” says Kallenborn. Although AI is good at identifying images, it’s not foolproof. Research has shown that tiny alterations to pictures can change the way they are classified by neural networks, he says — such as causing them to confuse an aircraft with a dog5.
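For the technically curious, the best-known way to produce such tiny alterations is the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch of the idea; it is a generic illustration of the adversarial-example research referred to above, not code from any weapons system, and the model choice is arbitrary.

```python
# A minimal sketch of the fast gradient sign method (FGSM): nudge every pixel
# slightly in the direction that increases the classifier's loss.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Shift each pixel by +/-epsilon in the direction that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is imperceptible to a person, yet it can flip the model's
    # predicted class entirely.
    return (image + epsilon * image.grad.sign()).detach()
```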
Another possible dividing line for ethicists is how a weapon would be used: to attack or defend, for example. Sophisticated autonomous radar-guided systems are already used to defend ships at sea from rapid incoming targets. Lucy Suchman, a sociologist at Lancaster University, UK, who studies the interactions between people and machines, says that ethicists are more comfortable with this type of autonomous weapon because it targets ordnance rather than people, and because the signals are hard to falsely attribute to anything else.
One commonly proposed principle among researchers and the military alike is that there should be a ‘human in the loop’ of autonomous weapons. But where and how people should or must be involved is still up for debate. Many, including Suchman, typically interpret the idea to mean that human agents must visually verify targets before authorizing strikes and must be able to call off a strike if battlefield conditions change (such as if civilians enter the combat zone). But it could also mean that humans simply program in the description of the target before letting the weapon loose — a function known as fire-and-forget.
Some systems allow users to toggle between fully autonomous and human-assisted modes depending on the circumstances. This, say Suchman and others, isn’t good enough. “Requiring a human to disable an autonomous function does not constitute meaningful control,” she says.
The idea of full autonomy also muddies the water about accountability. “We’re very concerned about the use of autonomous weapons systems falling in an accountability gap because, obviously, you can’t hold the weapon system itself accountable,” Docherty says. It would also be legally challenging and arguably unfair to hold the operator responsible for the actions of a system that was functioning autonomously, she adds.
Russell suggests that there be “no communication between the on-board computing and the firing circuit”. That means the firing has to be activated by a remote operator and cannot ever be activated by the AI.
There is at least one point in the LAWs discussions that (almost) everybody seems to agree on: even nations generally opposed to controls, including the United States and China, have indicated that autonomous agents, including those with AI, should play no part in the decision to launch nuclear weapons, says Russell.
However, Russia seems to be more circumspect on this issue. Moscow is widely thought to have resurrected a cold-war programme called Perimetr, which — in theory at least — could launch a first nuclear strike on the West with no human oversight6. The United States and China have raised this issue in various talks about autonomous weapons, which many say could put pressure on Russia to change its strategy.
Policing the system
Unfortunately, says Kallenborn, any ban on the use of LAWs would be hard to enforce through inspections and observations — the classic ‘trust but verify’ approach commonly used for other regulated weaponry.
With nuclear weapons, for example, there’s a well-established system for site inspections and audits of nuclear material. But with AI, things are easier to conceal or alter on the fly. “It could be as simple as just changing a couple lines of code to say, all right, now the machine gets to decide to go blow this up. Or, you know, remove the code, and then stick it back in when the arms-control inspectors are there,” says Kallenborn. “It requires us to rethink how we think about verification in weapons systems and arms control.”
Checks might have to switch from time-of-production to after-the-fact, Kallenborn says. “These things are going to get shot down. They’re going to be captured. Which means that you can then do inspections and look at the code,” he says.
All these issues will feed into the UN discussions, beginning at the General Assembly this September; a precursor conference has also been set up by Austria at the end of April to help to kick-start these conversations. If enough countries vote to act in September, then the UN will probably set up a working group to set out the issues, Docherty says.
A treaty might be possible in three years, adds Docherty, who had a key role in the negotiations of the UN’s 2017 Treaty on the Prohibition of Nuclear Weapons. “In my experience, once negotiations start, they move relatively quickly.”
April 10, 1985: During a fateful meeting, Apple CEO John Sculley threatens to resign unless the company’s board of directors removes Steve Jobs as executive VP and general manager of the Macintosh division.
This triggers a series of events that will ultimately result in Jobs’ exit. The marathon board meeting, which continues for several hours the next day, ends with a plan for Jobs to lose his operating role within the company while staying on as chairman. Things don’t exactly play out like that.
Steve Jobs vs. John Sculley
As noted last week in “Today in Apple history,” Sculley joined Apple after a remarkable run as president of PepsiCo. He had no background in high-tech products, but was considered a marketing genius. Apple’s board figured his advertising savvy would prove invaluable for growing the nascent personal computer industry.
With Jobs considered too young and inexperienced to run Apple, the idea was that he and Sculley would manage the company together in a sort of partnership. However, a number of problems arose that kept this from playing out as planned.
One was that sales of the Macintosh 128K — launched soon after Sculley arrived at Apple — proved disappointing. Unlike previous Apple flops such as the Apple III and Lisa, this caused Apple’s first quarterly loss. The company laid off a large number of employees as a result.
In addition, Jobs remained an incredibly disruptive presence at Apple. A perfectionist who could be incredibly insightful, he hadn’t yet learned the skills that would make him a brilliant CEO and manager later in his career. He also continually bad-mouthed Sculley behind his back, undermining the CEO’s authority.
Forcing Sculley’s hand on Macintosh
Sculley envisioned Jobs taking on a role similar to the one he ultimately occupied years later, during his last years at Apple: focusing on finding the next insanely great product to bring to market.
During the pivotal meeting that took place on this day in 1985, Jobs and Sculley made separate appeals to the Apple board, which ultimately supported Sculley unanimously.
That could have settled things, but Jobs kept pushing. The following month, he confronted Sculley again. Jobs asked for another shot at proving himself by running the Mac division.
A shouting match, a showdown and an incurable rift between Jobs and Sculley
When Sculley refused, Jobs began yelling at him. The two got into a shouting match. Jobs then began planning a coup to kick Sculley out of Apple, although the board once again sided with the CEO.
After a few more failed proposals from Jobs — including the unrealistic suggestion that he could take over as CEO and president, with Sculley relegated to chairman — the Apple co-founder eventually resigned from the company on September 16, 1985. (Ironically, he quit on exactly the same day that he would return to become Apple CEO in 1997.)
Jobs and Sculley, who previously enjoyed a very close relationship, never spoke again.
With macroeconomic headwinds persisting and many UK businesses making cutbacks, it’s clear that the pressure on companies to save money is not going away. But organizations must be wary of the temptation to reduce investment in data technology and analysis, as they risk losing a crucial competitive advantage. With data analysis and artificial intelligence (AI) growing in importance, almost half of businesses (44%) plan to push through data modernization efforts in 2024, according to PwC. Organizations cannot afford to turn their backs on technologies that deliver key business advantages, such as improved customer experiences and enhanced product innovation.
In the year ahead, the organizations that navigate the economic landscape most effectively will be those that focus on managing spend and increasing efficiency to drive better business outcomes. According to IDC, the world is producing more data than ever: as much as 181 zettabytes per year by 2025, roughly the capacity of 45 trillion DVDs. Especially with the boom in generative AI, data will continue to be a key differentiator for those looking to capitalise on AI; the more diverse and comprehensive the data, the better AI can perform. For businesses to remain competitive, harnessing the power of data insights, along with effective cost management and planning, must be front of mind for business leaders.
James Hall
UK Country Manager, Snowflake.
Business value and transparency
Achieving transparency on existing costs is the first step towards becoming data efficient. For data admins, those responsible for processing data into a convenient data model, this means using their analytical skills to scrutinize existing workloads and identify which are actually delivering valuable insights. From this point, they can take a view on whether to re-architect a workload, increase or decrease its usage, or even retire those that are not delivering results. A full understanding of data lineage, including where data comes from and what happens to it, can also be a useful starting point for establishing cost controls and pinpointing costly errors.
Transparency on spend must also come from the SaaS vendor and platform a business selects. This enables businesses to understand what they are investing in each workload and weigh it against the return on investment. Understanding per-query costs can highlight the most expensive queries and allow admins or IT leaders to rethink them, whether by rewriting or refactoring. Increased visibility and control of spend will give businesses the best chance of maximising existing resources.
Predicting future costs
Only when businesses get hold of their data costs can they truly begin to predict future costs, and implement measures to keep spending as efficiently as possible. Many legacy data platforms are highly inflexible, with fixed cost pricing and long-term vendor lock-in contracts, making it harder to implement changes when times are tough, or even when scaling back requirements during quieter periods of data analysis. Such tools often require complex, time-consuming capacity planning in order to keep control of data costs, which can ironically prove expensive in itself.
The costs of data processing, monitoring and control mechanisms cannot be an afterthought. Flexible scaling and consumption-based pricing models are a great way of avoiding unnecessary overprovisioning and paying for processing and storage that does not deliver for the business. A growing number of organizations are also choosing to set up budgets in advance, with spending limits, digital ‘guards’ against overspending, and daily alert notifications and warnings. This allows businesses to pinpoint where money is being spent, how much value it is delivering, and how it can be reined in.
Modern data platforms built in the cloud provide an intuitive UI to examine usage and usage trends, with clear dashboards visualizing which teams, customers and cost centers are responsible for the bulk of spending. Rather than waiting for spending to go over budget, companies can get ahead of the game and see when spending limits are projected to be exceeded. In the long run, this will help technical leaders and CFOs reduce operational costs through more efficient usage.
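As a simple illustration of that kind of projection, the sketch below extrapolates month-to-date spend to a month-end figure and flags a projected overrun. The figures, team names, and budget are invented for the example.

```python
# Illustrative only: extrapolate month-to-date spend per cost centre to a
# month-end figure and flag a projected overrun. All numbers are invented.
DAILY_SPEND = {"analytics": 420.0, "marketing": 310.0, "finance": 95.0}
BUDGET = 20_000.0

def project_month_end(spend_to_date: float, day_of_month: int,
                      days_in_month: int = 30) -> float:
    """Assume the current run rate holds for the rest of the month."""
    return spend_to_date / day_of_month * days_in_month

day = 12
spend_so_far = sum(DAILY_SPEND.values()) * day   # 825/day * 12 days = 9,900
projected = project_month_end(spend_so_far, day) # ~24,750
if projected > BUDGET:
    print(f"Warning: projected spend {projected:,.0f} "
          f"exceeds budget {BUDGET:,.0f}")
```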
Tracking usage at a granular level (think account level, per user, or per task) will be a key differentiator. However, larger companies should also contemplate taking control at an organizational level. This can mean restricting the ability of teams or individuals to create credit-consuming resources, such as warehouses. Such capabilities also offer in-depth control over factors such as the size and number of clusters, along with granular control over when clusters are spun up, helping to control costs now and in the future. Per-job cost attribution helps organizations manage department costs and maximize resources as they scale to more teams and jobs. Furthermore, auto-suspend and auto-resume capabilities can be enabled by default, turning platforms off when they aren’t required and preventing customers from paying for unnecessary usage.
Harnessing data, controlling costs
Even in tough economic times, organizations should not abandon ambitions to harness the power of data. For businesses in any sector, analyzing and understanding data has never been more important. The focus must instead shift towards changes that actually deliver results, such as moving from legacy on-premises platforms to modern SaaS data platforms that enable better transparency and planning on costs.
Doing so will have a massive impact and empower businesses to take control of their tech investments, which can be a key differentiator in today’s challenging macroeconomic landscape. Businesses should avoid taking the self-defeating, retrograde path of cutting back on their data usage, and should embrace the potential of modern data platforms to maximize cost efficiencies and control, while still forging a path into a data-driven future.
ChromeOS is slated to receive some new privacy tools in a future update, chief among them the ability to control your Chromebook’s location privacy setting. According to a post on the Google Cloud blog, the feature is an expansion of the privacy controls the company added last year, namely the microphone and camera toggles from last April. Google didn’t provide many details in its post, but 9to5Google helped with a recent deep dive.
The site states you can determine which apps and system services on your laptop have “access [to] your geolocation”, giving you almost total anonymity. It’s not perfect, though. The publication explains that the tool “specifically disables Google Location Services”; however, it is still possible for an app or website to get a rough idea of where you are from your IP address.
Geolocation controls do exist on ChromeOS, but they are limited to the Chrome browser itself. On-device software is still free to collect your location information unless you go into an app and manually disable the respective setting. This update will make the process easier. No more micromanaging.
Controls for camera, microphone, and location privacy
Alongside the privacy upgrade, ChromeOS will also introduce more granular camera, microphone, and geolocation controls. For certain apps, like Instagram, you can decide how you want them to interact with your hardware. Access to a Chromebook’s microphone or camera can be denied outright, allowed freely, or something in between. For example, Instagram could connect to the webcam, but only while you, the user, are actively using the social network; otherwise, the connection is blocked.
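To illustrate that in-between state, here is a tiny, purely hypothetical Python sketch of a three-state permission model; the names are invented and this is not ChromeOS’s actual API.

```python
# A hypothetical model of the three permission states described above;
# names are illustrative, not ChromeOS's real interface.
from enum import Enum

class DevicePermission(Enum):
    DENIED = "denied"              # app can never use the device
    WHILE_IN_USE = "while_in_use"  # app can use it only in the foreground
    ALWAYS = "always"              # app can use it at any time

def access_allowed(permission: DevicePermission,
                   app_in_foreground: bool) -> bool:
    """Decide whether an app may use the camera or microphone right now."""
    if permission is DevicePermission.ALWAYS:
        return True
    if permission is DevicePermission.WHILE_IN_USE:
        return app_in_foreground
    return False
```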
The Google Cloud blog does mention other features coming down the pipeline, but they pertain more to enterprise customers than everyday users. It talks about local data recovery as well as an expansion of Google’s data loss prevention policy.
A company representative told us the geolocation patch will roll out to all Chromebooks within the first half of 2024 – so hopefully before the end of June.
To find the new tools, you’ll need to launch the Settings menu and then go to the Security and Privacy tab; they’ll be under Privacy controls. Alternatively, you can go to a specific app in Settings and expand the Permissions tab, where the controls can also be found.
If you’re in the market for a new laptop, check out TechRadar’s list of the best Chromebooks for 2024.