Categories
Business & Industry

Power Mac 7100 spurs Carl Sagan lawsuit: Today in Apple history


March 14, 1994: Apple introduces the Power Macintosh 7100, a midrange Mac that will become memorable for two reasons.

The first is that it is among the first Macs to use new PowerPC processors. The second is that it results in Apple getting taken to court by astronomer Carl Sagan — not once but twice.

Power Macintosh 7100: A solid Mac

The Power Macintosh 7100 was one of three Macs introduced in March 1994, alongside the lower-end Power Macintosh 6100 and the high-end Power Macintosh 8100.

The Power Mac 7100’s PowerPC processor ran at 66 MHz (a spec that Apple upgraded to 80 MHz in January 1995). The computer’s hard drive ranged between 250MB and 700MB in size. The Mac also sported Apple’s then-standard NuBus card slots and 72-pin paired RAM slots.

The Mac 7100 came in a slightly modified Macintosh IIvx case. (The IIvx was the first Mac to come in a metal case and feature an internal CD-ROM drive.)

Costing between $2,900 and $3,500, the Mac 7100 was a solid piece of hardware that bridged the gap nicely between the low-end consumer 6100 and its higher-end 8100 sibling. It was, for example, perfectly capable of running two monitors. However, it could overheat when performing particularly strenuous tasks such as complex rendering of images or videos.

The Mac 7100 compared with the other Power Macintoshes of its day.
Photo: Apple

Carl Sagan sues Apple over Power Mac 7100 code name

As many Apple fans will know, the company’s engineers frequently give code names to projects they’re working on, either to maintain secrecy or just for fun. They gave the Power Mac 7100 the code name “Carl Sagan” as a tribute to the famous astronomer.

Unfortunately, the supposedly secret in-joke leaked in a 1993 issue of MacWeek that eventually found its way into Sagan’s hands. In a letter to MacWeek, Sagan wrote:

“I have been approached many times over the past two decades by individuals and corporations seeking to use my name and/or likeness for commercial purposes. I have always declined, no matter how lucrative the offer or how important the corporation. My endorsement is not for sale.

For this reason, I was profoundly distressed to see your lead front-page story ‘Trio of Power PC Macs spring toward March release date’ proclaiming Apple’s announcement of a new Mac bearing my name. That this was done without my authorization or knowledge is especially disturbing. Through my attorneys, I have repeatedly requested Apple to make a public clarification that I knew nothing of its intention to capitalize on my reputation in introducing this product, that I derived no benefit, financial or otherwise, from its doing so. Apple has refused. I would appreciate it if you so apprise your readership.”

A new code name: ‘Butt-Head Astronomer’

Carl Sagan wasn’t on the best terms with Apple in 1994.
Photo: Carl Sagan Planetary Society CC

Forced to change the code name, Apple engineers began calling the project “BHA,” which stood for “Butt-Head Astronomer.”

Sagan then sued Apple over the implication that he was a “butt-head.” The judge overseeing the matter made the following statement:

“There can be no question that the use of the figurative term ‘butt-head’ negates the impression that Defendant was seriously implying an assertion of fact. It strains reason to conclude the Defendant was attempting to criticize Plaintiff’s reputation of competency as an astronomer. One does not seriously attack the expertise of a scientist using the undefined phrase ‘butt-head.’”

Still, Apple’s legal team asked the engineers to change the code name once more. They picked “LAW” — standing for “Lawyers Are Wimps.”

Sagan appealed the judge’s decision. Eventually, in late 1995, the famous astronomer reached a settlement with Apple. From that point on, Cupertino appears to have used only benign code names related to activities like skiing.

Do you remember the Power Macintosh 7100? Leave your comments below.




Categories
News

More details on Elon Musk’s lawsuit against OpenAI


Elon Musk is concerned that OpenAI has strayed from its original path, prioritizing profits over the open-source and non-profit ideals Musk envisioned at the beginning. This is a lawsuit about principles, about the direction of artificial intelligence, and about what happens when a vision for the future of technology is at risk of being compromised.

Back in 2015, Elon Musk, along with other tech visionaries, set up OpenAI with a clear mission: to advance digital intelligence in a way that could benefit humanity as a whole. Elon Musk even put in $50 million to kickstart this venture. The goal was to develop artificial intelligence that was open to all—not locked away behind corporate doors. But now, it seems things have taken a different turn.

On February 29, 2024, Elon Musk filed a lawsuit against OpenAI. His claim? That the organization has veered off course, becoming more focused on making money than on the greater good. One of Musk’s main gripes is OpenAI’s exclusive partnership with Microsoft regarding the GPT-3 technology. Musk sees this as a betrayal, a move away from the open sharing of AI advancements that he had intended.


Elon Musk’s worries don’t stop there. He is also uneasy about the makeup of OpenAI’s board, which appears to him to have been reshaped to prioritize commercial success over the public interest. Musk questions the motives of co-founder Sam Altman, wondering whether his investment decisions are truly aligned with the organization’s philanthropic roots.


The stakes of this lawsuit are incredibly high. If OpenAI has indeed developed Artificial General Intelligence, or AGI, the ramifications could be enormous. AGI has the potential to transform industries, economies, and societies. But who governs this technology? Who ensures it’s used responsibly? These are the questions at the heart of his legal challenge.

The world of AI has been buzzing with excitement over recent breakthroughs, particularly in deep learning and Transformer algorithms. These advancements have opened up new possibilities, but they’ve also sparked a debate about whether AI technology should be open-source or proprietary. Elon Musk’s lawsuit brings this debate into the spotlight, forcing the tech community to confront the ethical implications of AI development.

Elon Musk is not just fighting for the soul of OpenAI; he is also fighting for the future of AI itself. Musk believes that developers have a responsibility to balance innovation with the public’s welfare. The direction that OpenAI takes could set a precedent for how AI is managed and deployed globally.

This legal battle isn’t just about one company or one technology. It’s about setting the course for how AI will evolve and how it will affect humanity. It’s a reminder that with great power comes great responsibility, and that those who lead the charge in technological advancement must always consider the broader impact of their actions.

As this case unfolds, it will undoubtedly shape the conversation around ethical AI. It will force us to ask tough questions about the role of AI in society and how we can harness its potential for the common good. The outcome of this lawsuit will have far-reaching implications, influencing not just OpenAI’s trajectory but the future of AI development worldwide.

So, as you watch this legal drama play out, remember that it’s not just about corporate disputes or technological secrets. It’s about the vision we have for our future and the kind of world we want to live in. It’s about ensuring that as we step boldly into the age of artificial intelligence, we do so with our eyes wide open, guided by principles that put humanity first.

Filed Under: Technology News, Top News






Categories
News

OpenAI responds to New York Times ChatGPT Lawsuit


OpenAI has responded to The New York Times’ lawsuit over its ChatGPT language model, saying the suit is “without merit.” The company maintains that training its AI models is fair use, and notes that it provides an opt-out option.

The AI company describes “regurgitation” as a rare bug that it is working to iron out of its platform, and it also says The New York Times is not telling the full story. More details are below.

Our discussions with The New York Times had appeared to be progressing constructively through our last communication on December 19. The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting. We had explained to The New York Times that, like any single source, their content didn’t meaningfully contribute to the training of our existing models and also wouldn’t be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us.

You can find more information about the lawsuit between The New York Times and OpenAI over ChatGPT on the OpenAI website at the link below.

Source: OpenAI

Image Credit: Jonathan Kemper

Filed Under: Technology News, Top News






Categories
News

New York Times and OpenAI GPT lawsuit explained


The esteemed publication, The New York Times, has taken a bold step by filing a lawsuit against OpenAI, the creators of the sophisticated AI model known as GPT-4. This legal challenge, which also involves tech giant Microsoft due to its association with OpenAI, is centered around claims of copyright infringement. The New York Times is seeking significant financial compensation, alleging that its copyrighted content was used without permission to train the AI system.

At the heart of this dispute is the demand for the complete removal of GPT-4 and any other models that may have utilized The New York Times’ copyrighted material during their training. This case is critical as it could set a new legal standard that might affect the future of AI development and the use of copyrighted materials in machine learning.

The New York Times argues that OpenAI’s models, which have consumed its content, now pose a threat to its business by offering similar journalistic services. The publication claims that GPT-4 can generate summaries and even reproduce exact excerpts from its articles, essentially redistributing its content without authorization.

A key point in the lawsuit is whether AI systems like GPT-4 retain exact copies of copyrighted texts or whether they simply learn patterns and generate similar content independently. This distinction is crucial and could determine the outcome of the case.
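To make that distinction concrete, here is a minimal, illustrative sketch in Python (not the methodology used by either party in the case) of one naive way to check whether generated text reproduces a verbatim passage from a source article: look for the longest run of words shared by both texts. The article and model_output strings are placeholder examples, not real New York Times content or real model output.

from difflib import SequenceMatcher

def longest_shared_passage(source: str, generated: str) -> str:
    """Return the longest run of words appearing verbatim in both texts (case-insensitive)."""
    src_words = source.lower().split()
    gen_words = generated.lower().split()
    match = SequenceMatcher(None, src_words, gen_words).find_longest_match(
        0, len(src_words), 0, len(gen_words)
    )
    return " ".join(src_words[match.a : match.a + match.size])

# Placeholder texts for illustration only.
article = "The quick brown fox jumps over the lazy dog near the riverbank."
model_output = "A witness said the quick brown fox jumps over the lazy dog every morning."

overlap = longest_shared_passage(article, model_output)
print(f"Longest shared passage ({len(overlap.split())} words): {overlap!r}")

A long shared run would point toward memorized text, while consistently short overlaps would be more consistent with independently generated phrasing. Real forensic analyses are far more involved, but this is the kind of evidence the question of “exact copies” ultimately turns on.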


In the past, U.S. courts have been reluctant to hold AI systems accountable for the data on which they are trained, often dismissing lawsuits related to such issues. However, this case could break that pattern, particularly if it is proven that GPT-4 can recall and reproduce copyrighted material.

The implications of this legal battle are far-reaching. Should The New York Times emerge victorious, it could reshape the AI industry, especially regarding how AI models are trained and the necessity of securing permissions for copyrighted content. Such a shift could fundamentally change how AI companies acquire and use training data.

Quick summary of the New York Times and OpenAI GPT lawsuit:

  • The New York Times is suing OpenAI and its affiliates for copyright infringement.
  • The lawsuit seeks significant financial damages and the removal of GPT-4 and related models.
  • Previous lawsuits against AI models for similar reasons have generally not succeeded.
  • U.S. courts have typically rejected claims against the training data used by AI models.
  • The lawsuit argues that OpenAI’s models, trained on data from The New York Times, now compete with the newspaper in delivering news.
  • The New York Times alleges that OpenAI’s models can generate detailed summaries and verbatim excerpts from its articles without authorization.
  • The case may hinge on whether OpenAI’s models store actual copies of copyrighted material.
  • The outcome of the lawsuit could have implications for the future of AI models and their interaction with copyrighted content.

As the situation unfolds, it is crucial to consider the balance between encouraging AI innovation and protecting intellectual property rights. The outcome of this lawsuit will likely have significant consequences not only for OpenAI and its affiliates but also for the wider AI community and its interaction with copyrighted materials.

The confrontation between The New York Times and OpenAI is not just a legal matter; it is a pivotal moment that could influence the direction of technological advancement and the protection of creative works. The resolution of this case is eagerly anticipated, as it will set a precedent for how AI entities and content creators coexist and collaborate in the rapidly evolving digital landscape.

Filed Under: Technology News, Top News




