Some Apple employees who worked on Apple Car and micro LED projects are being laid off. Photo: Laurenz Heymann/Unsplash
Following the shutdown of the Apple Car project, the Cupertino giant is laying off more than 600 employees. Many affected employees worked on developing micro LED displays for the Apple Watch and future Apple products.
The laid-off employees worked in and around Santa Clara, near Apple’s California headquarters.
Layoffs affect Apple Car and micro LED project teams
To comply with California Employment Development Department regulations, Apple filed a Worker Adjustment and Retraining Notification (WARN) report disclosing the layoffs.
Based on Apple’s filings, 87 people worked at a secret facility reportedly dedicated to micro LED display technology, and 371 employees worked at an Apple Car-related office in Santa Clara, California. Bloomberg‘s report claims Apple laid off dozens more employees working on the two projects at various satellite offices.
These layoffs are not new; the company had already let go of these employees. This is simply the first time an approximate count has been disclosed. Bloomberg’s Mark Gurman notes that the true number of laid-off employees is likely much higher, as Apple had many people working on the car project in Arizona. Similarly, the company had many display engineers working on micro LED technology at its offices in Asia.
Apple moved many employees working on the Apple Car to its artificial intelligence division, led by John Giannandrea. They will focus on the company’s various generative AI projects, an area where Apple is significantly trailing its rivals.
Similarly, Apple has abandoned its plans to transition to micro LED screens on future products after concluding that production costs were too high to make economic sense. With the project canceled, the company is letting go of many of the people who worked on it.
Apple this week filed a required notice with the state of California, confirming plans to permanently lay off more than 600 employees. Under California law, employers must give employees and state representatives a 60-day notice before a mass layoff event.
The employees listed are located in several Apple-occupied buildings around Santa Clara, California, which is close to Apple’s Cupertino headquarters. Several of these locations were rumored to be associated with Apple Car development in the past, so it is likely that these layoffs are related to Apple’s decision to stop work on the car project.
Apple officially ended development on the Apple Car in March. Approximately 2,000 employees working on the Apple Car were told that the project was winding down at that time, and Apple began the process of moving some of them to work on artificial intelligence under John Giannandrea and in other relevant departments.
Other employees were given 90 days to apply for open positions within the company, but Apple hired hardware engineers and car designers while working on the Apple Car, and these employees may not have had skills applicable to other projects.
Apple also recently ended development on in-house microLED displays, so some of the layoffs might also be related to the decision to discontinue that work.
It’s been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government’s use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.
“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” Vice President Kamala Harris told reporters on a press call.
Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put “concrete safeguards” in place so that the AI systems they employ don’t impact Americans’ safety or rights. Otherwise, an agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an “unacceptable” impact on critical operations.
Impact on Americans’ rights and safety
Per the policy, an AI system is deemed to impact safety if it “is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of” certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in “a workplace, school, housing, transportation, medical or law enforcement setting.”
Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and “replicating a person’s likeness or voice without express consent.”
When it comes to generative AI, the policy stipulates that agencies should assess potential benefits. They also need to “establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk.”
Transparency requirements
The second requirement will force agencies to be transparent about the AI systems they’re using. “Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.
As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won’t harm the public or government operations. If an agency can’t disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.
Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency’s use of AI. “This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris noted. Many agencies will also need to have AI governance boards in place by May 27.
The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.
The OMB suggests that, by adopting the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostics decisions in the federal healthcare system.
As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.
“AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity,” OMB Director Shalanda Young told reporters. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services.”
This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act, seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.
Many companies and platforms are wrangling with how to handle AI-generated content as it becomes more prevalent. One key concern for many is the need to make it clear when an AI model whipped up a photo, video or piece of audio. To that end, YouTube has introduced new rules for labeling videos made with artificial intelligence.
Starting today, the platform will require anyone uploading a realistic-looking video that “is made with altered or synthetic media, including generative AI” to label it for the sake of transparency. YouTube defines realistic content as anything that a viewer could “easily mistake” for an actual person, event or place.
If a creator uses a synthetic version of a real person’s voice to narrate a video or replaces someone’s face with another person’s, they’ll need to include a label. They’ll also need to include the disclosure if they alter footage of a real event or place (such as by modifying an existing cityscape or making it look like a real building is on fire).
YouTube says that it might apply one of these labels to a video if a creator hasn’t done so, “especially if the altered or synthetic content has the potential to confuse or mislead people.” The team notes that while it wants to give creators some time to get used to the new rules, YouTube will likely penalize those who persistently flout the policy by not including a label when they should be.
These labels will start to appear across YouTube in the coming weeks, first in the mobile app and then on desktop and TVs. They’ll mostly appear in the expanded description, noting that the video includes “altered or synthetic content” and that “sound or visuals were significantly edited or digitally generated.”
However, when it comes to more sensitive topics (such as news, elections, finance and health), YouTube will place a label directly on the video player to make it more prominent.
Creators won’t need to include the label if they only used generative AI to help with things like script creation, coming up with ideas for videos or to automatically generate captions. Labels won’t be necessary for “clearly unrealistic content” or if changes are inconsequential. Adjusting colors or using special effects like adding background blur alone won’t require creators to use the altered content label. Nor will applying lighting filters, beauty filters or other enhancements.
In addition, YouTube says it’s still working on a revamped takedown request process for synthetic or altered content that depicts a real, identifiable person’s face or voice. It plans to share more details about that updated procedure soon.