
The ten research papers that policy documents cite most


G7 leaders gather for a photo at the Itsukushima Shrine during the G7 Summit in Hiroshima, Japan in 2023

Policymakers often work behind closed doors — but the documents they produce offer clues about the research that influences them. Credit: Stefan Rousseau/Getty

When David Autor co-wrote a paper on how computerization affects job skill demands more than 20 years ago, a journal took 18 months to consider it — only to reject it after review. He went on to submit it to The Quarterly Journal of Economics, which eventually published the work1 in November 2003.

Autor’s paper is now the third most cited in policy documents worldwide, according to an analysis of data provided exclusively to Nature. It has accumulated around 1,100 citations in policy documents, show figures from the London-based firm Overton (see ‘The most-cited papers in policy’), which maintains a database of more than 12 million policy documents, think-tank papers, white papers and guidelines.

“I thought it was destined to be quite an obscure paper,” recalls Autor, a public-policy scholar and economist at the Massachusetts Institute of Technology in Cambridge. “I’m excited that a lot of people are citing it.”

The top ten most cited papers in policy documents are dominated by economics research. When economics studies are excluded, a 1997 Nature paper2 about Earth’s ecosystem services and natural capital is second on the list, with more than 900 policy citations. The paper has also garnered more than 32,000 references from other studies, according to Google Scholar. Other highly cited non-economics studies include works on planetary boundaries, sustainable foods and the future of employment (see ‘Most-cited papers — excluding economics research’).

These lists provide insight into the types of research that politicians pay attention to, but policy citations don’t necessarily imply impact or influence, and Overton’s database has a bias towards documents published in English.

Interdisciplinary impact

Overton usually charges a licence fee to access its citation data. But last year, the firm worked with the London-based publisher Sage to release a free web-based tool that allows any researcher to find out how many times policy documents have cited their papers or mentioned their names. Overton and Sage said they created the tool, called Sage Policy Profiles, to help researchers to demonstrate the impact or influence their work might be having on policy. This can be useful for researchers during promotion or tenure interviews and in grant applications.

Autor thinks his study stands out because his paper was different from what other economists were writing at the time. It suggested that ‘middle-skill’ work, typically done in offices or factories by people who haven’t attended university, was going to be largely automated, leaving workers with either highly skilled jobs or manual work. “It has stood the test of time,” he says, “and it got people to focus on what I think is the right problem.” That topic is just as relevant today, Autor says, especially with the rise of artificial intelligence.

Walter Willett, an epidemiologist and food scientist at the Harvard T.H. Chan School of Public Health in Boston, Massachusetts, thinks that interdisciplinary teams are most likely to gain a lot of policy citations. He co-authored a paper on the list of most cited non-economics studies: a 2019 work3 that was part of a Lancet commission to investigate how to feed the global population a healthy and environmentally sustainable diet by 2050 and has accumulated more than 600 policy citations.

“I think it had an impact because it was clearly a multidisciplinary effort,” says Willett. The work was co-authored by 37 scientists from 17 countries. The team included researchers from disciplines including food science, health metrics, climate change, ecology and evolution and bioethics. “None of us could have done this on our own. It really did require working with people outside our fields.”

Sverker Sörlin, an environmental historian at the KTH Royal Institute of Technology in Stockholm, agrees that papers with a diverse set of authors often attract more policy citations. “It’s the combined effect that is often the key to getting more influence,” he says.

Sörlin co-authored two papers in the list of top ten non-economics papers. One of those is a 2015 Science paper4 on planetary boundaries — a concept defining the environmental limits in which humanity can develop and thrive — which has attracted more than 750 policy citations. Sörlin thinks one reason it has been popular is that it’s a sequel to a 2009 Nature paper5 he co-authored on the same topic, which has been cited by policy documents 575 times.

Although policy citations don’t necessarily imply influence, Willett has seen evidence that his paper is prompting changes in policy. He points to Denmark as an example, noting that the nation is reformatting its dietary guidelines in line with the study’s recommendations. “I certainly can’t say that this document is the only thing that’s changing their guidelines,” he says. But “this gave it the support and credibility that allowed them to go forward”.

Broad brush

Peter Gluckman, who was the chief science adviser to the prime minister of New Zealand between 2009 and 2018, is not surprised by the lists. He expects policymakers to refer to broad-brush papers rather than those reporting on incremental advances in a field.

Gluckman, a paediatrician and biomedical scientist at the University of Auckland in New Zealand, notes that it’s important to consider the context in which papers are being cited, because studies reporting controversial findings sometimes attract many citations. He also warns that the list is probably not comprehensive: many policy papers are not easily accessible to tools such as Overton, which uses text mining to compile data, and so will not be included in the database.

“The thing that worries me most is the age of the papers that are involved,” Gluckman says. “Does that tell us something about just the way the analysis is done or that relatively few papers get heavily used in policymaking?”

Gluckman says it’s strange that some recent work on climate change, food security, social cohesion and similar areas hasn’t made it to the non-economics list. “Maybe it’s just because they’re not being referred to,” he says, or perhaps that work is cited, in turn, in the broad-scope papers that are most heavily referenced in policy documents.

As for Sage Policy Profiles, Gluckman says it’s always useful to get an idea of which studies are attracting attention from policymakers, but he notes that studies often take years to influence policy. “Yet the average academic is trying to make a claim here and now that their current work is having an impact,” he adds. “So there’s a disconnect there.”

Willett thinks policy citations are probably more important than scholarly citations in other papers. “In the end, we don’t want this to just sit on an academic shelf.”


‘Hollywood Con Queen’ on Apple TV+ documents major scam


A new three-part documentary series explores the story of an international con artist impersonating Hollywood’s most powerful women, Apple TV+ said Monday. The streamer also shared a “first look” image from the series. You can watch Hollywood Con Queen on Apple TV+ starting May 8.

Emmy Award-winning filmmaker Chris Smith created the three-part series. It’s based on investigative journalism by Scott Johnson in The Hollywood Reporter.

Hollywood Con Queen documentary series airs on Apple TV+ May 8

Apple TV+ called the new documentary series from Smith (Tiger King, Fyre, 100 Foot Wave) “incredible” and “riveting.” The story it tells comes from the work of investigative journalist Scott Johnson, who writes for The Hollywood Reporter and published a book through Harper Collins titled Hollywood Con Queen: The Hunt for an Evil Genius.

So here’s how Apple TV+ describes the documentary series:

A mysterious figure dubbed the ‘Con Queen’ impersonates the industry’s most powerful women, luring unsuspecting victims to Indonesia with the promise of a life-changing career opportunity. The Con Queen’s marks exhaust their personal finances in pursuit of a big break, while being exploited in a perverse psychological game spanning the globe.

The scam eventually draws the attention of veteran investigative journalist Scott Johnson of The Hollywood Reporter and dedicated private investigator Nicole Kotsianas, formerly of K2 Integrity, who set out to find the truth, only to discover a story more strange than they could have imagined.

Library Films produced Hollywood Con Queen for Apple TV+. Smith serves as director and executive producer alongside executive producer Ben Anderson. Johnson is a consulting producer.

Watch documentaries on Apple TV+

So does that sound tempting? You can see Hollywood Con Queen on Apple TV+ starting May 8. It joins numerous other documentary films and shows on Apple TV+.

The service is available by subscription for $9.99 with a seven-day free trial. You can also get it via any tier of the Apple One subscription bundle. For a limited time, customers who purchase and activate a new iPhone, iPad, Apple TV, Mac or iPod touch can enjoy three months of Apple TV+ for free.

After launching in November 2019, “Apple TV+ became the first all-original streaming service to launch around the world, and has premiered more original hits and received more award recognitions faster than any other streaming service. To date, Apple Original films, documentaries and series have been honored with 471 wins and 2,090 award nominations and counting,” the service said.

In addition to award-winning movies and TV shows (including breakout soccer comedy Ted Lasso), Apple TV+ offers a variety of documentaries, dramas, comedies, kids shows and more.


Source: Apple TV+




Analyse large documents locally using AI securely and privately

Analyse large documents locally securely and privately using PrivateGPT and LocalGPT

If you have large business documents that you would like to analyze quickly and efficiently without reading every word, you can harness the power of artificial intelligence to answer questions about them locally on your personal laptop. Using PrivateGPT and LocalGPT you can securely and privately summarize, analyze and research large documents, from asking simple questions to extracting specific data you might need for other uses, thanks to the power of GPT AI models.

Dealing with large volumes of digital documents is a common yet daunting task for most of us in business. But what if you could streamline this process, making it quicker, more efficient, secure and private? AI tools such as PrivateGPT and LocalGPT now make this possible, transforming the way we interact with our documents locally and ensuring that no personal or private data ever reaches third-party servers such as those run by OpenAI, Bing, Google or others.

Using PrivateGPT and LocalGPT, you can now tap into the power of artificial intelligence right from your personal laptop. These tools allow you to summarize, analyze, and research extensive documents with ease. They are not just time-savers; they are smart, intuitive assistants ready to sift through pages of data to find exactly what you need.

  • Efficiency at Your Fingertips: Imagine having the ability to quickly scan through lengthy business reports or research papers and extract the essential information. With PrivateGPT and LocalGPT, this becomes a reality. They can summarize key points, highlight crucial data, and even provide analysis – all in a fraction of the time it would take to do manually.
  • Local and Private: One of the defining features of these tools is their focus on privacy. Since they operate locally on your device, you don’t have to worry about sensitive information being transmitted over the internet. This local functionality ensures that your data remains secure and private, giving you peace of mind.
  • User-Friendly Interaction: These tools are designed with the user in mind. They are intuitive and easy to use, making them accessible to anyone, regardless of their technical expertise. Whether you’re a seasoned tech professional or a business person with minimal tech knowledge, you’ll find these tools straightforward and practical.
  • Versatility in Application: Whether you’re looking to extract specific data for a presentation, find answers to complex questions within a document, or simply get a quick overview of a lengthy report, PrivateGPT and LocalGPT are up to the task. Their versatility makes them valuable across various industries and applications.
  • Simplified Document Handling: Gone are the days of poring over pages of text. These tools help you navigate through extensive content, making document handling a breeze. They are especially useful in scenarios where time is of the essence, and accuracy cannot be compromised.

How to analyze large documents securely & privately using AI

If you are wondering how these tools could fit into your workflow, you will be pleased to know that they are adaptable and can be tailored to meet your specific needs. Whether you are a legal professional dealing with case files, a researcher analyzing scientific papers, or a business analyst sifting through market reports, PrivateGPT and LocalGPT can be your allies in managing and understanding complex documents.


PrivateGPT vs LocalGPT

For more information on how to use PrivateGPT and to download the open source AI model jump over to its official GitHub repository.

PrivateGPT

“PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an Internet connection. 100% private, no data leaves your execution environment at any point.”

  • Concept and Architecture:
    • PrivateGPT is an API that encapsulates a Retrieval-Augmented Generation (RAG) pipeline.
    • It is built using FastAPI and follows OpenAI’s API scheme.
    • The RAG pipeline is based on LlamaIndex, which provides abstractions such as LLM, BaseEmbedding, or VectorStore.
  • Key Features:
    • It offers the ability to interact with documents using GPT’s capabilities, ensuring privacy and avoiding data leaks.
    • The design allows for easy extension and adaptation of both the API and the RAG implementation.
    • Key architectural decisions include dependency injection, usage of LlamaIndex abstractions, simplicity, and providing a full implementation of the API and RAG pipeline (see the request sketch below).
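To make the API shape concrete, here is a minimal request sketch in Python. It assumes a PrivateGPT instance already running on the local machine (recent versions default to port 8001) and an OpenAI-style /v1/chat/completions endpoint with use_context and include_sources options for grounding answers in your ingested documents; ports, endpoint names and request fields vary between versions, so treat this as a sketch and check the project’s API reference before relying on it.

    import requests

    # Assumed local PrivateGPT endpoint; adjust host and port to your own setup.
    API_URL = "http://localhost:8001/v1/chat/completions"

    payload = {
        "messages": [
            {"role": "user", "content": "Summarize the key findings of the ingested report."}
        ],
        # Assumed options: use_context grounds the answer in your ingested
        # documents, and include_sources returns the supporting chunks as well.
        "use_context": True,
        "include_sources": True,
    }

    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()

    # The response follows the OpenAI chat-completions shape, so the answer
    # text sits under choices[0].message.content.
    print(response.json()["choices"][0]["message"]["content"])

Because the API follows OpenAI’s scheme, existing OpenAI client code can usually be pointed at the local server with little more than a base-URL change, and no document content leaves the machine.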

LocalGPT

For more information on how to use LocalGPT and to download the open source AI model jump over to its official GitHub repository.

“LocalGPT is an open-source initiative that allows you to converse with your documents without compromising your privacy. With everything running locally, you can be assured that no data ever leaves your computer. Dive into the world of secure, local document interactions with LocalGPT.”

  • Utmost Privacy: Your data remains on your computer, ensuring 100% security.
  • Versatile Model Support: Seamlessly integrate a variety of open-source models, including HF, GPTQ, GGML, and GGUF.
  • Diverse Embeddings: Choose from a range of open-source embeddings.
  • Reuse Your LLM: Once downloaded, reuse your LLM without the need for repeated downloads.
  • Chat History: Remembers your previous conversations (in a session).
  • API: LocalGPT has an API that you can use for building RAG Applications.
  • Graphical Interface: LocalGPT comes with two GUIs, one uses the API and the other is standalone (based on streamlit).
  • GPU, CPU & MPS Support: Supports multiple platforms out of the box; chat with your data using CUDA, CPU or MPS and more!
  • Concept and Features:
    • LocalGPT is an open-source initiative for conversing with documents on a local device using GPT models.
    • It ensures privacy as no data ever leaves the device.
    • Features include utmost privacy, versatile model support, diverse embeddings, and the ability to reuse LLMs.
    • LocalGPT includes chat history, an API for building RAG applications, two GUIs, and supports GPU, CPU, and MPS.
  • Technical Details:
    • LocalGPT runs the entire RAG pipeline locally using LangChain, ensuring reasonable performance without data leaving the environment.
    • ingest.py uses LangChain tools to parse documents and create embeddings locally, storing the results in a local vector database.
    • run_localGPT.py uses a local LLM to process questions and generate answers, with the ability to replace this LLM with any other LLM from HuggingFace, as long as it’s in the HF format (see the example commands below).
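As a rough guide, a typical LocalGPT session looks like the command sketch below. It assumes the PromtEngineer/localGPT repository layout, documents placed in its SOURCE_DOCUMENTS folder, and a --device_type flag for choosing cuda, cpu or mps; folder names and flags can change between releases, so confirm them against the README.

    git clone https://github.com/PromtEngineer/localGPT.git
    cd localGPT
    pip install -r requirements.txt

    # Copy your PDFs, text files and other documents into SOURCE_DOCUMENTS,
    # then build the local vector store from them.
    python ingest.py --device_type cpu

    # Start an interactive question-and-answer session over the ingested documents.
    python run_localGPT.py --device_type cpu

Everything here runs on the local machine; switching --device_type to cuda or mps simply moves the embedding and generation work onto a GPU or Apple silicon.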

PrivateGPT and LocalGPT both emphasize the importance of privacy and local data processing, catering to users who need to leverage the capabilities of GPT models without compromising data security. This aspect is crucial, as it ensures that sensitive data remains within the user’s own environment, with no transmission over the internet. This local processing approach is a key feature for anyone concerned about maintaining the confidentiality of their documents.

In terms of their architecture, PrivateGPT is designed for easy extension and adaptability. It incorporates techniques like dependency injection and uses specific LlamaIndex abstractions, making it a flexible tool for those looking to customize their GPT experience. On the other hand, LocalGPT offers a user-friendly approach with diverse embeddings, support for a variety of models, and a graphical user interface. This range of features broadens LocalGPT’s appeal, making it suitable for various applications and accessible to users who prioritize ease of use along with flexibility.

The technical approaches of PrivateGPT and LocalGPT also differ. PrivateGPT focuses on providing an API that wraps a Retrieval-Augmented Generation (RAG) pipeline, emphasizing simplicity and the capacity for immediate implementation modifications. Conversely, LocalGPT provides a more extensive range of features, including chat history, an API for RAG applications, and compatibility with multiple platforms. This makes LocalGPT a more comprehensive option for those with a broader spectrum of technical requirements.

Both tools are designed for users who interact with large documents and seek a secure, private environment. However, LocalGPT’s additional features, such as its user interface and model versatility, may make it more appealing to a wider range of users, especially those with varied technical needs. It offers a more complete solution for individuals seeking not just privacy and security in document processing, but also convenience and extensive functionality.

While both PrivateGPT and LocalGPT share the core concept of private, local document interaction using GPT models, they differ in their architectural approach, range of features, and technical details, catering to slightly different user needs and preferences in document handling and AI interaction.


Create an AI second brain and chat to your documents


If you are interested in giving your large language model or preferred AI model a memory, you might be interested in Quivr, a unique LLM data storage and query interface designed to function as a second brain. Read this guide to learn how to set up and use Quivr on a local machine.

Quivr has been specifically designed to use the power of generative AI, such as that provided by OpenAI’s ChatGPT, Anthropic and other large language models, to store and retrieve unstructured information. At the current time Quivr only supports a connection to ChatGPT, but it aims to expand its capabilities to connect to other LLMs as the project progresses.

Quivr is open-source software that accepts almost every type of document or media, including pictures, videos, PDFs, CSVs and PowerPoint documents. It can be used with ChatGPT and with local models on a user’s computer, allowing users to upload files and ask questions about their content.

Adding memory to your AI model

One of the features that makes Quivr unique is the ability to create different “brains” for different sets of documents. This allows users to separate personal and work documents, or to categorize documents based on projects, topics or any other criteria. The software is production-ready and can be used to chat with documents at any time. Users can select different models, set the temperature, set the number of tokens, and even create API keys to build on top of Quivr.
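For readers who want to try this, a typical Docker-based setup looks roughly like the sketch below. The repository location, environment file names and required keys (an OpenAI key, plus Supabase credentials in versions current at the time of writing) differ between releases, so treat these commands as an outline and follow the official README.

    # Clone the project (the repository owner and name may have changed;
    # use the link on the official GitHub page referenced in this article).
    git clone https://github.com/StanGirard/quivr.git
    cd quivr

    # Copy the example environment file and add your API keys and credentials.
    cp .env.example .env

    # Build and start the frontend and backend containers locally.
    docker compose up --build

Once the containers are running, the web interface is served on localhost, where you can create separate brains, upload documents and start asking questions about them.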


What is a second brain and how is it used to improve productivity?

The term “second brain” in the context of activity and document handling refers to an external system for storing, organizing, and managing information, tasks, and documents that you encounter in your personal and professional life. The idea is to offload the cognitive work of remembering, sorting, and synthesizing information to this external system, thereby freeing up mental space and improving productivity and creativity.

The concept is rooted in productivity methods like Getting Things Done (GTD) by David Allen and takes advantage of modern digital tools to make the process more efficient. Note-taking apps such as Notion, Obsidian and DevonThink 3, task managers such as Asana and OmniFocus, and even more specialized software for document handling can all serve as components of one’s second brain.

However, with the explosion of artificial intelligence over the last two years, it has now become even easier to build your very own AI second brain. Using large language models, or connections to existing GPT-style chatbots, you can quickly create a personal assistant and organise your documents so that they can be accessed using AI queries.

Here’s how a second brain might handle activities and documents:

  • Capture: The first step involves capturing all relevant information, tasks, and documents as they come. This could be meeting notes, ideas, to-do items, or important documents like invoices and contracts.
  • Organize: Once captured, these pieces of information are sorted into appropriate folders, tags, or databases. The system should be intuitive enough that you can find items easily.
  • Review: Periodically, the stored information is reviewed to update tasks, delete or archive obsolete information, and integrate new data.
  • Execute: With a well-organized second brain, you can focus on executing tasks more efficiently, knowing that all the contextual information you might need is readily available.
  • Synthesize: Finally, the second brain can help in synthesizing new ideas by connecting disparate pieces of information stored within it.
  • Search and Retrieve: Because the second brain is well-organized, searching for and retrieving documents or information becomes much easier than if they were scattered in various places.

The second brain essentially serves as an extension of your cognitive faculties, allowing you to manage activities and documents more efficiently than relying solely on your own memory and intuition.

Building an AI second brain with Quivr

Features of Quivr

  • Universal Data Acceptance: Quivr is capable of managing a wide range of data types, including text, images, and code snippets.
  • Generative AI: The platform utilizes advanced AI algorithms to aid in the creation and retrieval of information.
  • Speed and Efficiency: Quivr is engineered for quick and efficient data access.
  • Security: The platform offers robust security features, ensuring that you have full control over your data.
  • OS Compatibility: The software is compatible with Ubuntu 22 and newer versions.
  • File Compatibility: Quivr supports a variety of file types: Text, Markdown, PDF, PowerPoint, Excel (Upcoming), CSV, Word, Audio and Video.
  • Open Source: Quivr is an open-source platform, making it free to use and modify.

One of the key advantages of the Quivr AI second brain is its flexibility. The project can be hosted on the internet behind authorization, allowing users to give friends or colleagues access to chat with the documents they create. This feature is particularly useful for collaborative projects where multiple users need to interact with the same set of documents.

Quivr is a powerful tool that takes document interaction to a new level. Its unique approach of creating an ‘AI second brain’ allows users to interact with their documents in a conversational manner, making it easier to extract information and insights. Whether you are a student, a professional, or just someone who deals with a lot of documents, building an AI second brain using Quivr can be a valuable addition to your AI toolkit. For more information, and to download the project to install locally, jump over to the official GitHub repository.
