Categories
Life Style

The neuroscientist formerly known as Prince’s audio engineer

[ad_1]

Prince performs onstage during the 1984 Purple Rain Tour

Musician Prince on stage in Detroit, Michigan, during his 1984 Purple Rain tour.Credit: Ross Marino/Getty

Working scientist profiles

This article is part of an occasional Nature series in which we profile scientists with unusual career histories or outside interests.

In 1983, Susan Rogers got a call that would change her life. She was working as an audio technician in the music industry in Los Angeles, California, when an ex-boyfriend got in touch to tell her that the musician Prince was looking for a technician.

Rogers, who at the time was one of the few female audio technicians in the United States — and maybe even the world — was already a Prince fan. His work reminded her of the soul music she had grown up listening to in the 1960s and 1970s in southern California — artists such as Sly and the Family Stone and Al Green, but with a contemporary, punk edge.

By this point, Prince had just released his album 1999. Rogers, who was 27 at the time, would begin working with him on Purple Rain, the record that would launch him into global superstardom.

She spent four years working with Prince in his home recording studio in Minneapolis, Minnesota, leaving a year before the opening of Paisley Park, Prince’s now-legendary creative and performing space. By this point, she had graduated from being an audio technician — maintaining and repairing equipment — to recording engineer, a role that has much more influence over the whole sound of a record.

“I was talking to some Prince alumni recently and they were saying ‘poor Susan, she never even got Christmas Day off’. There’s no ‘poor Susan’ about it — I was working with my favourite artist and there was nowhere I would rather be,” she says.

After Prince, she went on to work with other musicians, such as the Canadian rock group Barenaked Ladies and David Byrne, former lead singer of the new-wave band Talking Heads. At the age of 44, and with the help of the royalties she earned on the Barenaked Ladies album Stunt, she quit the music industry (see ‘Quick-fire questions’).

Higher education had not been an option growing up — her mother died when she was 14 and Rogers was married aged 17. She escaped that unhappy relationship after three years and headed to Hollywood, where she got a job as a trainee audio technician.

Susan Rogers works at FAME Studios

Susan Rogers trained as a recording engineer before pivoting to neuroscience. She continues to produce music, such as for US singer-songwriter Jeff Black.Credit: Madison Thorne

Over the years, she increasingly felt the pull of academia and a calling to study the natural world. So, in 2000, she began her undergraduate degree in neuroscience and psychology at the University of Minnesota. Initially, she wanted to study consciousness in non-human animals, but was advised that a more meaningful contribution would be a neuroscience degree that would also enable her to study music perception and cognition. She then did her doctoral work at McGill University in Montreal, Canada. Returning to education after so many years was not as difficult as she had feared — and years spent learning the intricacies of a recording console helped her to understand the complexity of the human brain.

Her PhD research focused on auditory memory. She designed experiments to test short-term memory for musical intervals, in which musicians and non-musicians listened to a piece of music containing consonance (harmonious sounds) and dissonance (clashing or unexpected sounds). The most interesting observation was that, for both groups, short-term auditory memory lasted longer than was previously thought, she says. At the time of her doctoral work, psychologist István Winkler and his colleagues had reported that auditory short-term memory persisted for roughly 30 seconds1, but Rogers’s work demonstrated it lasting for 48 seconds.

A good ear and a sound work ethic

One of Rogers’s PhD supervisors was Daniel Levitin, a cognitive psychologist, musician and record producer whose research focuses on music perception. He knew of Rogers from her work with Prince and Barenaked Ladies, and took her on “in a heartbeat”. “She was Prince’s engineer — that’s one of the top engineering jobs in the world,” he says.

Her years in the music industry greatly enhanced her academic work, he says. It gave her an astonishing work ethic and helped her to hone her all-important listening skills.

“What auditory neuroscience requires is a good ear. You’re designing experiments and you need to be able to hear subtle details that others might not hear so that you know you’ve prepared your experiments correctly. Susan has a great ear.”

Levitin describes her as very musical, “even though she doesn’t play an instrument”. As a producer, he explains, her job was to coax out of the musician “the most authentically emotional performance you could get”. “Miles Davis told her she was a musician. He didn’t throw around that term lightly,” he says of the renowned jazz bandleader and composer.

In 2008, Rogers joined Berklee College of Music in Boston, Massachusetts, where she teaches music production and engineering. She is also writing a course on music and neuroscience for the college’s online programme.

She has investigated what people visualize when they listen to music, and plans to publish the results. Some people, including Rogers, imagine the musicians playing; others make up stories based on the lyrics; and for some — particularly older people — music triggers memories. Interestingly, musicians and non-musicians do not differ greatly in their visualizations.

“One of the least musical people that I know — somebody who would almost be called tone deaf — reports that he sees abstract shapes and colours when he listens to music. And two of the finest musicians I know also visualize abstract shapes and colours. I can’t even imagine having that visualization to music,” she says.

Throughout her successful music career, Rogers admits that there were times when she felt like a bystander in the studio — because she does not play an instrument or compose, her views felt secondary to those of the professional musicians. But in her career as an academic and teacher, she is very much at home.

“Nothing in my life has brought me more joy than scientific pursuit. It is as creative as anything I ever did while making records. Had I realized in my youth that a career in science was possible for me, my hunch is that I could have made a more notable contribution. Earning a PhD at age 52 doesn’t permit that,” she says.

Common cause

Rogers also thinks that musicians and scientists have more in common than one might guess — both need to be open-minded and be able to separate relevant and irrelevant information. “The fashion and the hairstyles are different — musicians have the edge there — but there are more similarities than differences,” she says.

How else are the two professions similar? “It takes guts to commit to a music career because there is no comfortable path and absolutely no light to guide you, other than your own internal one,” says Rogers. “I’ve had the privilege of knowing some outstanding scientists and my perception is that they, too, are driven more by scratching an intellectual itch than by winning a prize or being famous.”

That feeling of being a bystander in the music industry receded when she realized that listening is an “indispensable component of what music is”, as she explains in her 2022 book, co-authored with neuroscientist Ogi Ogas, This is What it Sounds Like: What the Music You Love Says About You.

“Practically speaking, without a listener, music does not exist. By perceiving, feeling and reacting to the many dimensions of a song, a listener closes the creative circle and completes the musical experience,” she writes.

Levitin thinks that one of Rogers’s main contributions through her writing and public speaking has been to elevate the importance of the listener.

“She’s also adding the social context by which we listen, and by which we decide what we like, and the developmental stages we go through as listeners, from listening to children’s nursery rhymes to more sophisticated things,” he says. Her book, he adds, is a perfect example of what a popular-science work and science communication should be — it does not dumb down the science or patronize its audience, but neither does it aim so high that it’s impenetrable.

Rogers hopes that, one day, all music courses will include a unit on music cognition to help creators to understand how listeners receive their craft.

“It won’t help you in the studio and it won’t help you while you’re composing. And I don’t think it should — when we’re creating works of art, we shouldn’t be thinking too deeply about the nuts and bolts,” she says. That said, a music-cognition course can help music creators to understand their audiences, “just like a chef needs to understand what food tastes like”, she adds.

When she finally left Prince and began working with other musicians, she felt she had to unlearn some elements of Prince’s intense working habits.

“Prince was doing a song a day when I was with him. That was every day. That’s how we worked,” she says.“He also had an exceptional ear for arrangement. He could foresee how the end product was going to turn out in such a way that each part — drums, bass, guitars, keyboards, backing vocals — was recorded with an ear for the subsequent parts. He had a watchmaker’s skill of putting the individual parts together to create a whole.”

She still loves listening to music and discovering new artists, particularly with the help of her students, but she remains true to soul, her first musical love.

“As Prince used to say, soul is the street I live on,” she says.

Quick-fire questions

What music do you listen to when working?

I can’t have music on in the background because it’s such a powerful attractor. If something comes on the radio while I’m driving, I have to turn it down and remind myself to pay attention to the road.

What has been your career highlight?

Working with Prince was obviously a great star in the firmament. But being the producer on the Barenaked Ladies album Stunt was amazing — it went multi-platinum. I’ve had a short science and teaching career but receiving a distinguished teaching award at Berklee was also gratifying.

Did you ever speak to Prince about your research?

Sadly, no. The last conversation I had with Prince was around 1997, before my university education. If we’d had a chance to talk about my research, he would have argued with me on every point, which would have been welcome. I heard him say that if he’d gone into something other than music, he would have liked teaching. With his creativity, intelligence and self-discipline, he would have been an outstanding researcher.

Do you have a memorable mentor?

Musically, the producer Tony Berg taught me a lot. He hasn’t sold as many records as others, but he has influenced so many people. Stephen McAdams at McGill University would be my scientific mentor — he took over supervising my PhD because Daniel Levitin was on a book tour. He is a world expert on timbre perception and is everything a scientist should be — kind, generous of spirit, funny.

Is there any music you don’t enjoy listening to?

I used to have zero interest in heavy-metal music, but two of my students shared their love of it with me, and, as good listeners, they explained why it was so great. I picked up on their love for it. Sometimes we don’t like something because we don’t know it well enough.

If you could save only one record from your collection, what would it be?

It’s so hard to choose when you love so many things, but just off the top of my head I’d probably choose Al Green’s Greatest Hits album.

This interview has been edited for length and clarity.

[ad_2]

Source Article Link

Categories
Entertainment

Google fires engineer who protested at a company-sponsored Israeli tech conference

[ad_1]

Google has fired a Cloud engineer who interrupted Barak Regev, the managing director of its business in Israel, during a speech at an Israeli tech event in New York, according to CNBC. “I’m a Google software engineer and I refuse to build technology that powers genocide or surveillance!” the engineer was seen and heard shouting in a video captured by freelance journalist Caroline Haskins that went viral online. While being dragged away by security — and amidst jeers from the audience — he continued talking and referenced Project Nimbus. That’s the $1.2 billion contract Google and Amazon had won to supply AI and other advanced technologies to the Israeli military.

Last year, a group of Google employees published an open letter urging the company to cancel Project Nimbus, in addition to calling out the “hate, abuse and retaliation” Arab, Muslim and Palestinian workers are getting within the company. “Project Nimbus puts Palestinian community members in danger! I refuse to build technology that is gonna be used for cloud apartheid,” the engineer said. After he was removed from the venue, Regev told the audience that “[p]art of the privilege of working in a company, which represents democratic values is giving the stage for different opinions.” He ended his speech after a second protester interrupted and accused Google of being complicit in genocide.

The incident took place during the MindTheTech conference in New York. Its theme for the year was apparently “Stand With Israeli Tech,” because investments in Israel slowed down after the October 7 Hamas attacks. Haskins wrote a detailed account of what she witnessed at the event, but she wasn’t able to stay until it wrapped up, because she was also thrown out by security.

The Google engineer who interrupted the event told Haskins that he wanted “other Google Cloud engineers to know that this is what engineering looks like — is standing in solidarity with the communities affected by your work.” He spoke to the journalist anonymously to avoid professional repercussions, but Google clearly found out who he was. A Google spokesperson told CNBC that he was fired for “interfering with an official company-sponsored event.” They also told the news organization that his “behavior is not okay, regardless of the issue” and that the “employee was terminated for violating [Google’s] policies.”

This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.



[ad_2]

Source Article Link

Categories
News

How to Reverse Engineer GPTs on ChatGPT

Reverse Engineer GPTs

This guide is designed to show you how to reverse engineer GPTS with ChatGPT. Have you ever wondered about the intricate workings of Generative Pre-trained Transformers (GPTs) and how they can be manipulated or reverse-engineered? If so, you’re part of a growing community fascinated by these developments. In a recent, in-depth video, experts took a deep dive into this cutting-edge topic, uncovering a range of techniques that lay bare the hidden instructions embedded in GPTs.

This exploration goes beyond mere curiosity; it ventures into revealing how one might coax these advanced systems into executing actions they weren’t originally intended to perform. The revelations from this video not only shed light on the underlying processes of these AI giants but also open up discussions about the potential and limitations inherent in such powerful technology.

Discovering the Inner Workings of GPTs

The video kicks off with a captivating demonstration of how one can extract the exact prompts and instructions used in custom GPT models. It’s a process akin to peeling back the layers of an onion, revealing the core of these complex systems. This exploration is not just academic; it provides invaluable insights into the capabilities and potential vulnerabilities of large language models (LLMs).

Techniques to Extract GPT Instructions

  1. Extracting GPT Instructions: Here, you’ll learn to use specific prompts to coax GPTs into revealing their instructions word-for-word. It’s a bit like asking the right question to get the most direct answer. The technique takes advantage of the way files are stored in GPT’s backend, turning the AI into a veritable open book.
  2. Prompt Injection Techniques: The video then takes a deep dive into various prompt injection methods. These are ingenious ways to test, and sometimes exploit, the boundaries of LLMs. They include:
    • Direct Prompt Injection: Directly manipulating the prompt sent to the AI to achieve a specific outcome.
    • Indirect Prompt Injection: Involving third parties to alter the LLM’s behavior and generate unexpected responses.
    • Context Length Attacks: Filling the LLM’s context with irrelevant data to make it forget earlier instructions.
    • Multi-Language Attacks: Exploiting the LLM’s uneven training across different languages.
    • Role Playing Attacks: Tricking the LLM into role-playing scenarios to bypass restrictions.
    • Token Smuggling: Altering the LLM’s output in ways that pass automated checks but can be reassembled by humans.
    • Code Injection: Effective in GPTs that have code interpreters enabled.
    • Prompt Extraction: Extracting instructions or other data from GPTs.

Security Measures Against Exploits

Given these potential vulnerabilities, the video emphasizes the importance of security and protection measures. It’s not just about building stronger walls; it’s about understanding the various ways these walls can be scaled or bypassed. The presenter discusses adding guards to instructions and utilizing specialized software like Lera, which identifies prompt leakage and protects against personal identifiable information (PII) exposure.

Interactive Challenges for the Curious Minds

If this all sounds a bit abstract, don’t worry. The presenter points to an interactive website with challenges (Gandalf page) where users can apply these prompt injection techniques to uncover a secret phrase. It’s not just a practical demonstration of the concepts; it’s a testament to the complexity and sophistication of these attacks.

Embracing the Complexity

As we navigate through the labyrinthine world of GPTs, it’s clear that the journey is as important as the destination. Understanding these techniques opens up new vistas in our comprehension of AI and its myriad possibilities. Whether you’re a tech enthusiast or a seasoned professional, this insight into the world of GPTs is sure to be an enlightening experience.

Remember, knowledge is power, especially when it comes to the rapidly evolving world of technology. By understanding the inner workings of GPTs, you are not only staying informed but also contributing to a more secure and ethical AI future. We hope that you find this video and guide on how to reverse engineer GPTS useful, if you have any comments or questions, please leave a comment below and let us know.

Source: Show Me The Data

Filed Under: Guides





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.

Categories
News

How to become an AI engineer and 4 beginner projects to build

How to become an AI engineer and 4 projects you can build now

Interested in becoming an AI engineer? This guide will provide you with more information on how you can start harnessing the power of artificial intelligence and becoming an AI engineer. Including four AI-based projects which you can build as a beginner to start your journey as an AI engineer. Understanding the technology stack is crucial for an AI engineer as it provides the foundational tools to build, deploy, and maintain AI solutions.

The stack generally includes programming languages, data manipulation libraries, machine learning frameworks, and cloud services. Mastering these technologies enables engineers to build robust, scalable, and efficient systems. Moreover, being proficient in the tech stack allows for seamless collaboration with data scientists, DevOps engineers, and other stakeholders in a project.

What is an AI engineer?

An AI engineer is a specialized role within the software engineering discipline, focused on developing and maintaining AI and machine learning systems. They typically work alongside data scientists to bring AI models from the research stage to production, ensuring that the models are scalable, maintainable, and aligned with business objectives. Their responsibilities range from data gathering and preprocessing to model deployment and monitoring.

Learning from doing

A practical approach to AI engineering often involves a problem-first methodology, where the focus is on understanding the business or scientific problem at hand before diving into data and algorithms. This requires a strong collaboration with domain experts and stakeholders. The engineering process typically follows stages of data collection, data preprocessing, model building, validation, and deployment, all while adhering to best practices for software development and data governance.

The skills of an AI engineer

Key skills include proficiency in programming languages like Python or Java, familiarity with machine learning frameworks such as TensorFlow or PyTorch, and understanding of cloud computing platforms like AWS or Azure. Other important skills include data engineering, feature engineering, and understanding DevOps practices such as continuous integration and deployment (CI/CD).

4 AI engineer beginner friendly projects

The OpenAI API allows engineers to access pre-trained models like GPT-3 for various natural language processing tasks. Python is often the language of choice for interacting with this API, due to its extensive libraries and ease of use. Integrating the OpenAI API into projects can drastically reduce the development time required for building language models from scratch.

Other articles we have written that you may find of interest on the subject of coding :

1. Building a simple AI chatbot

Creating a simple chatbot can be achieved through various methods, but one common approach is to use pre-trained language models accessed via APIs like OpenAI’s GPT-3. Basic chatbots can be built with just a few lines of Python code to send prompts to the API and receive generated text as responses, which can then be parsed and presented to the user.

2. Chaining AI prompts together for more complicated processes

Chaining prompts refers to the practice of sending a series of questions or commands to a language model API to perform multi-step tasks. For instance, you can first ask the model to draft an email and then follow up with a command to summarize the drafted content. This allows for a more interactive and dynamic use of language models in automating tasks.

3. Transcribing audio using the OpenAI Whisper API

OpenAI’s Whisper is an automatic speech recognition (ASR) API that can convert spoken language into written text. It can be particularly useful in applications like transcription services, voice assistants, and more. With the API, AI engineers can add a layer of voice interaction to their applications.

4. Using OpenAI  DallE API to create AI images

DALL-E is another API by OpenAI that generates creative and coherent images from text descriptions. This technology opens up a range of possibilities in fields like design, advertising, and content creation. By integrating the DALL-E API, an AI engineer can enable an application to generate custom images based on user input or other data.

The road to becoming in AI engineer

AI engineering is a specialized field within software development that focuses on creating and maintaining AI and machine learning systems. Mastering the technology stack is essential for success, as it includes the tools needed to build, deploy, and monitor AI solutions. A practical approach in this role often starts with understanding the problem at hand, followed by data collection, model building, and deployment.

AI engineers need a diverse skill set that includes programming, data engineering, and a familiarity with machine learning frameworks and cloud services. Several APIs from OpenAI, such as GPT-3, Whisper, and DALL-E, offer powerful capabilities for tasks ranging from natural language processing to speech recognition and image generation. These APIs can be integrated into projects to expedite development and introduce advanced functionalities like chatbots, automated task sequences, and more.

The role of an AI engineer is pivotal in bridging the gap between data science research and real-world applications. By understanding both the technical and practical aspects, AI engineers can contribute to building robust, scalable, and impactful AI systems.

Filed Under: Guides, Top News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.