
Netflix's "Rebel Ridge" Star Aaron Pierre Is Playing Green Lantern John Stewart

From Mid-Sized Sedan to the superhero role of a lifetime. Fast-rising actor Aaron Pierre recently made waves with his lead performance in Jeremy Saulnier's latest action film, "Rebel Ridge," which instantly distinguished itself from everything else currently streaming on Netflix. To the rest of us, he'll always be known as the rapper with the brilliantly goofy stage name Mid-Sized Sedan from M. Night Shyamalan's 2021 film "Old." However you were first introduced to Pierre, though, we can all agree that he has quickly become one of the most in-demand performers in the industry. Now, weeks after fan speculation and rumors that he would be the perfect choice for the new Green Lantern, DC Studios co-heads James Gunn and Peter Safran appear to wholeheartedly agree.

This afternoon, the exciting news finally broke that DC Studios has cast Aaron Pierre as fan-favorite character John Stewart in the upcoming HBO series currently titled "Lanterns" (via The Hollywood Reporter). If he officially signs on the dotted line, it will put an end to months of reports about which stars would take on the franchise's two biggest heroes, with several of the most popular names reportedly in contention to play the next John Stewart and Hal Jordan. DC once set its sights on "Dune" star Josh Brolin for the latter before he passed on the role, which ultimately went to Kyle Chandler, who takes on an older, more mentor-like part opposite a younger Green Lantern. Now, it's hard to imagine any fan being disappointed if Pierre steps up and dons the legendary green suit.

More details below!

Aaron Pierre is the new John Stewart in HBO's Green Lantern series

More than a decade after the unloved Ryan Reynolds-starring "Green Lantern" movie flopped and put an end to any immediate big-screen plans for the space cops, the Corps now welcomes its newest and most exciting member yet. Aaron Pierre as John Stewart might seem like a fairly obvious choice, as he would inevitably sit near the top of any list of up-and-coming Black actors who could fit the job. But you simply can't argue with the charisma or sheer physicality he brings to the screen. His work in "Rebel Ridge" speaks for itself, but you should definitely check out his impressive performance in the 2022 indie film "Brother," his leading role in Barry Jenkins' "The Underground Railroad," and, of course, his surprisingly nuanced turn as Mid-Sized Sedan in "Old." In fact, this technically wouldn't be Pierre's first DC role: that honor goes to his supporting appearance in Syfy's 2018 Superman prequel series "Krypton."

Either way, "Lanterns" is now taking real shape. With Pierre on board, he joins co-star Kyle Chandler in a cast that will inevitably fill out with many more names in the weeks to come. "Lanterns" itself hails from Chris Mundy ("Ozark"), Damon Lindelof ("Watchmen," "Lost") and DC Comics writer Tom King. Although exact plot details are being kept under wraps, the series has been described as a "True Detective"-inspired, Earth-bound story involving some kind of murder mystery.

A premiere date has yet to be announced, but stay tuned to /Film for all the "Lanterns" updates as they arrive!



Netflix's "Rebel Ridge" Almost Starred a "Star Wars" Actor Before Aaron Pierre

Actors walk away from projects all the time, for all sorts of reasons. "Scheduling conflicts" and "creative differences" are among the most common, but usually, when word gets out that an actor is parting ways with a movie or TV show, they leave before filming begins. Sometimes they exit dangerously close to the start of production, as was the case when Joaquin Phoenix dropped out of Todd Haynes' film last month, less than a week before shooting was supposed to begin (and he potentially faces legal action for cutting it so close).

But in 2021, something even stranger happened: actor John Boyega left the production of writer-director Jeremy Saulnier's Netflix action thriller "Rebel Ridge" a month after filming on the movie had begun. A departure like this is, to put it mildly, a rarity in modern Hollywood (a publicized one, at least), and it brought the film to a halt until Saulnier could find a new leading man.

Boyega, who played ex-stormtrooper Finn in the "Star Wars" sequel trilogy, reportedly left "Rebel Ridge" for unspecified family reasons; roughly a month later, Boyega was on the set of the bank-heist thriller "Breaking," having replaced that film's original lead, Jonathan Majors. As far as I know, he has never spoken publicly about the family matter that led him to leave the project.

But for "Rebel Ridge" director Jeremy Saulnier, Boyega's departure was "necessary," and it led to one of the great recasting decisions in recent memory.

The Rebel Ridge director says the recasting worked in the film's favor

I spoke with Jeremy Saulnier on today's episode of the /Film Daily podcast. You can listen to our full conversation below, but I started by asking him about that period of uncertainty after Boyega left the project but before Aaron Pierre ("The Underground Railroad") was cast in the lead role:

"We definitely faced headwinds on that version of the film, but honestly, it ended up being for the best. Luckily, within a few weeks I had cast Aaron, so we connected, talked over Zoom, and he fell in love with the project and was willing to take the leap with me. I spoke with him over Zoom and within two minutes I knew: 'Oh my God, this is the guy. This is our man.' Out of respect for everyone involved, I'm not going to get into the whole departure, but it was necessary, and there really was no ill will. It was more like, oh, after the COVID shutdown, we had to shut down again, but it was really the only version of this film I could recognize. I saw Aaron on Zoom, and then a year later I saw him on set and knew we had a winner."

Boyega is a very good actor (check out his performance in Steve McQueen's "Small Axe" if you haven't already, and of course he was great in his debut film, "Attack the Block"), but Pierre delivers a hugely physical performance that adds a more interesting dynamic between his character, Terry, and the corrupt cops led by Don Johnson's character. Pierre gives a tremendous performance, and since Boyega was already a household name while Pierre wasn't, I'm hoping that "Rebel Ridge" currently topping the Netflix charts means enough people are seeing his work to give his pre-streaming profile the kind of boost that elevates him to a much higher level of stardom.

Check out today's podcast episode below, which includes a discussion of the movie ahead of the Jeremy Saulnier interview:

You can subscribe to /Film Daily on Apple Podcasts, Overcast, Spotify, or wherever you get your podcasts, and send your feedback, questions, concerns and topic suggestions to [email protected]. Please leave your name and general geographic location in case we mention your email on the air.



Structure peer review to make it more robust


In February, I received two peer-review reports for a manuscript I’d submitted to a journal. One report contained 3 comments, the other 11. Apart from one point, all the feedback was different. It focused on expanding the discussion and some methodological details — there were no remarks about the study’s objectives, analyses or limitations.

My co-authors and I duly replied, working under two assumptions that are common in scholarly publishing: first, that anything the reviewers didn’t comment on they had found acceptable for publication; second, that they had the expertise to assess all aspects of our manuscript. But, as history has shown, those assumptions are not always accurate (see Lancet 396, 1056; 2020). And through the cracks, inaccurate, sloppy and falsified research can slip.

As co-editor-in-chief of the journal Research Integrity and Peer Review (an open-access journal published by BMC, which is part of Springer Nature), I’m invested in ensuring that the scholarly peer-review system is as trustworthy as possible. And I think that to be robust, peer review needs to be more structured. By that, I mean that journals should provide reviewers with a transparent set of questions to answer that focus on methodological, analytical and interpretative aspects of a paper.

For example, editors might ask peer reviewers to consider whether the methods are described in sufficient detail to allow another researcher to reproduce the work, whether extra statistical analyses are needed, and whether the authors’ interpretation of the results is supported by the data and the study methods. Should a reviewer find anything unsatisfactory, they should provide constructive criticism to the authors. And if reviewers lack the expertise to assess any part of the manuscript, they should be asked to declare this.
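As a rough illustration, a structured review form like the one described above can be represented as data, with "cannot assess" recorded as an explicit answer rather than as silence. The question wording, function and data below are hypothetical, for illustration only, not an official template:

```python
# Sketch of a structured peer-review form. The key design point: a reviewer's
# silence is distinguished from a declared expertise gap.
STRUCTURED_QUESTIONS = [
    "Are the methods described in sufficient detail for another researcher to reproduce the work?",
    "Are additional statistical analyses needed?",
    "Is the authors' interpretation of the results supported by the data and the study methods?",
]

NO_EXPERTISE = "cannot assess (outside my expertise)"

def summarize_review(answers: dict) -> dict:
    """Split a review into declared expertise gaps and unanswered questions."""
    gaps = [q for q, a in answers.items() if a == NO_EXPERTISE]
    unanswered = [q for q in STRUCTURED_QUESTIONS if q not in answers]
    return {"expertise_gaps": gaps, "unanswered": unanswered}

# A reviewer answers the first question, declares a gap on the second,
# and skips the third entirely.
review = {
    STRUCTURED_QUESTIONS[0]: "Yes, though the sampling procedure needs more detail.",
    STRUCTURED_QUESTIONS[1]: NO_EXPERTISE,
}
summary = summarize_review(review)
```

Under this scheme an editor can see at a glance which checks were actually performed, which were outside the reviewer's expertise, and which simply went unaddressed.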

Other aspects of a study, such as novelty, potential impact, language and formatting, should be handled by editors, journal staff or even machines, reducing the workload for reviewers.

The list of questions reviewers will be asked should be published on the journal’s website, allowing authors to prepare their manuscripts with this process in mind. And, as others have argued before, review reports should be published in full. This would allow readers to judge for themselves how a paper was assessed, and would enable researchers to study peer-review practices.

To see how this works in practice, since 2022 I’ve been working with the publisher Elsevier on a pilot study of structured peer review in 23 of its journals, covering the health, life, physical and social sciences. The preliminary results indicate that, when guided by the same questions, reviewers made the same initial recommendation about whether to accept, revise or reject a paper 41% of the time, compared with 31% before these journals implemented structured peer review. Moreover, reviewers’ comments were in agreement about specific parts of a manuscript up to 72% of the time (M. Malički and B. Mehmani Preprint at bioRxiv https://doi.org/mrdv; 2024). In my opinion, reaching such agreement is important for science, which proceeds mainly through consensus.
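The 41%-versus-31% figure above is a simple pairwise agreement rate. A minimal sketch (the function name and data are illustrative, not taken from the study):

```python
def agreement_rate(recommendations):
    """Fraction of manuscripts whose two reviewers made the same initial
    recommendation (accept / revise / reject)."""
    agree = sum(1 for a, b in recommendations if a == b)
    return agree / len(recommendations)

# Toy data: one (reviewer 1, reviewer 2) pair per manuscript.
pairs = [
    ("accept", "accept"),
    ("revise", "reject"),
    ("revise", "revise"),
    ("reject", "accept"),
    ("accept", "accept"),
]
rate = agreement_rate(pairs)  # 3 of 5 pairs agree -> 0.6
```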

I invite editors and publishers to follow in our footsteps and experiment with structured peer reviews. Anyone can trial our template questions (see go.nature.com/4ab2ppc), or tailor them to suit specific fields or study types. For instance, mathematics journals might also ask whether referees agree with the logic or completeness of a proof. Some journals might ask reviewers if they have checked the raw data or the study code. Publications that employ editors who are less embedded in the research they handle than are academics might need to include questions about a paper’s novelty or impact.

Scientists can also use these questions, either as a checklist when writing papers or when they are reviewing for journals that don’t apply structured peer review.

Some journals — including Proceedings of the National Academy of Sciences, the PLOS family of journals, F1000 journals and some Springer Nature journals — already have their own sets of structured questions for peer reviewers. But, in general, these journals do not disclose the questions they ask, and do not make their questions consistent. This means that core peer-review checks are still not standardized, and reviewers are tasked with different questions when working for different journals.

Some might argue that, because different journals have different thresholds for publication, they should adhere to different standards of quality control. I disagree. Not every study is groundbreaking, but scientists should view quality control of the scientific literature in the same way as quality control in other sectors: as a way to ensure that a product is safe for use by the public. People should be able to see what types of check were done, and when, before an aeroplane was approved as safe for flying. We should apply the same rigour to scientific research.

Ultimately, I hope for a future in which all journals use the same core set of questions for specific study types and make all of their review reports public. I fear that a lack of standard practice in this area is delaying the progress of science.

Competing Interests

M.M. is co-editor-in-chief of the Research Integrity and Peer Review journal that publishes signed peer review reports alongside published articles. He is also the chair of the European Association of Science Editors Peer Review Committee.


Signs that ChatGPT is polluting peer review


Hello Nature readers, would you like to get this Briefing in your inbox free every day? Sign up here.


Coloured functional magnetic resonance imaging of a healthy brain at rest. Credit: Mark & Mary Stevens Neuroimaging and Informatics Institute/Science Photo Library

A new technique for measuring brain activity in anaesthetized animals, known as direct imaging of neuronal activity (DIANA), has been difficult for neuroscientists to reproduce. The DIANA technique offered the exciting prospect of tracking neuronal firing on millisecond timescales. But two new studies suggest that the original results might have arisen from experimental error or subjective data selection. The lead researcher on the original paper stands by the results: “I’m also very curious as to why other groups fail in reproducing DIANA,” says physicist Jang-Yeon Park.

Nature | 6 min read

On 19 April, 970 million people in India will head to the ballot box to vote in a general election that polls predict will see Prime Minister Narendra Modi and his party win a third five-year term. Many scientists in India are hopeful that the period could bring greater spending on applied science. Some have also expressed concerns that funding is not increasing in line with India’s booming economy, and that the government’s top-down control of science, as some researchers see it, allows them little say in how money is allocated.

Nature | 6 min read

Bioengineered immune cells have been shown to attack and even cure cancer, but they tend to get exhausted if the fight goes on for a long time. Now, two separate research teams have found a way to rejuvenate these cells: make them more like stem cells. Both groups found that the bespoke immune cells called CAR T cells gain new vigour if engineered to have high levels of a particular protein. These boosted CAR T cells have gene activity similar to that of stem cells and a renewed ability to fend off cancer. The papers “open a new avenue for engineering therapeutic T cells for cancer patients”, says immunologist Tuoqi Wu.

Nature | 4 min read

Alongside using AI tools for writing research papers, academics might now be using ChatGPT to assist in peer review, according to a preprint (itself not peer reviewed). The study looked at conference proceedings submitted to four computer-science meetings and identified buzzwords typical of AI-generated text in 17% of peer review reports. The buzzwords included positive adjectives, such as ‘commendable’, ‘meticulous’ and ‘versatile’. It’s unclear whether researchers used the tools to construct their reviews from scratch or just to edit and improve written drafts. “It seems like when people have a lack of time, they tend to use ChatGPT,” says computer scientist and study co-author Weixin Liang.

Nature | 5 min read

Reference: arXiv preprint (not peer reviewed)

Features & opinion

There is growing evidence that climate change worsens mental health in multiple ways. These include the trauma and distress caused directly by extreme weather and a more general ‘eco-anxiety’: a chronic fear of environmental doom. Negative news about the climate crisis — along with inaction by world leaders — is itself a source of eco-anxiety and frustration. And it’s not just a problem in rich nations. More than 55% of young people in a global 2021 survey said that climate change made them feel powerless, and 58% felt betrayed by their government. On the flip side, studies suggest that individuals who take action to combat climate change can also help to curb their eco-anxiety: a double win.

Nature feature | 11 min read & Nature editorial | 4 min read

Climate anxiety around the world: chart showing the results of a 2021 global survey of 10,000 people aged 16–25 years old.

Source: Ref. 1

Malicious deepfakes aren’t the only thing we should be concerned about when it comes to content that can affect the integrity of elections, says US Science Envoy for AI Rumman Chowdhury. Political candidates are increasingly using ‘softfakes’ to boost their campaigns — obviously AI-generated video, audio, images or articles that aim to whitewash a candidate’s reputation and make them more likeable. Social media companies and media outlets need to have clear policies on softfakes, Chowdhury says, and election regulators should take a close look.

Nature | 5 min read

A lack of evidence is hindering health care for young people with gender dysphoria or incongruence, finds a much-anticipated report in England. Clinical guidelines used around the world “are built on shaky foundations”, writes the chair of the report, Hilary Cass, who was president of the Royal College of Paediatrics and Child Health. In particular, the rationale for early puberty suppression is weak and there is next-to-no research specifically relevant to non-binary people. To add to the challenge, intense politicization makes recruiting clinicians difficult. The medical pathway might not be right for everyone, says the report, and young people need holistic mental-health and social support. “The problem arises when the right thing is too medicalized,” says Cass in an interview with the BMJ. “Medication is binary, but gender expressions are often not.”

BMJ | 8 min read & Hilary Cass summarizes the report’s findings in the BMJ | 7 min read & Interview with Cass in the BMJ | 32 min watch

Reference: The Cass review: Independent review of gender identity services for children and young people

“I’ve brought apes a little closer to humans but I’ve also brought humans down a bit,” said primatologist Frans de Waal in 2014. Building on careful observations of primates’ unfettered behaviour, de Waal’s research suggested that the biggest intellectual challenge for chimpanzees lay in their complex social lives, leading to the study of social intelligence in apes and other species. His and others’ studies of aggression, reconciliation, imitation and learning have progressively narrowed the perceived gap between humans and other animals, writes psychologist Andrew Whiten. de Waal was equally comfortable with peer-reviewed research, popular science books and TED talks, and he was unafraid to tackle thorny topics like sex and gender. de Waal has died, aged 75.

Nature | 5 min read

QUOTE OF THE DAY

Marine biologist Selina Ward shares her distress over the severe and widespread mass bleaching of corals in the Great Barrier Reef — the worst on record — during the Australian summer of 2024. (The Guardian | 5 min read or 2 min watch)

Today I’ll be strolling home — backwards. Walking backwards can be helpful for knee pain and working important muscles in your lower reaches, say experts. Just a minute or two per day is useful, but do be careful when ‘retroambulating’.

As I practise making the ‘beep-beep’ sound of a reversing vehicle, why not send me your feedback on this newsletter? Your e-mails are always welcome at [email protected].

Thanks for reading,

Flora Graham, senior editor, Nature Briefing

With contributions by Gemma Conroy, Katrina Krämer and Sarah Tomlin

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering

Nature Briefing: AI & Robotics — 100% written by humans, of course

Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research covers biotechnology, drug discovery and pharma


Is ChatGPT corrupting peer review? Telltale words hint at AI use



A study suggests that researchers are using chatbots to assist with peer review. Credit: Rmedia7/Shutterstock

A study that identified buzzword adjectives that could be hallmarks of AI-written text in peer-review reports suggests that researchers are turning to ChatGPT and other artificial intelligence (AI) tools to evaluate others’ work.

The authors of the study1, posted on the arXiv preprint server on 11 March, examined the extent to which AI chatbots could have modified the peer reviews of conference proceedings submitted to four major computer-science meetings since the release of ChatGPT.

Their analysis suggests that up to 17% of the peer-review reports have been substantially modified by chatbots — although it’s unclear whether researchers used the tools to construct reviews from scratch or just to edit and improve written drafts.

The idea of chatbots writing referee reports for unpublished work is “very shocking” given that the tools often generate misleading or fabricated information, says Debora Weber-Wulff, a computer scientist at the HTW Berlin–University of Applied Sciences in Germany. “It’s the expectation that a human researcher looks at it,” she adds. “AI systems ‘hallucinate’, and we can’t know when they’re hallucinating and when they’re not.”

The meetings included in the study are the Twelfth International Conference on Learning Representations, due to be held in Vienna next month, 2023’s Annual Conference on Neural Information Processing Systems, held in New Orleans, Louisiana, the 2023 Conference on Robot Learning in Atlanta, Georgia, and the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore.

Nature reached out to the organizers of all four conferences for comment, but none responded.

Buzzword search

Since its release in November 2022, ChatGPT has been used to write a number of scientific papers, in some cases even being listed as an author. Out of more than 1,600 scientists who responded to a 2023 Nature survey, nearly 30% said they had used generative AI to write papers and around 15% said they had used it for their own literature reviews and to write grant applications.

In the arXiv study, a team led by Weixin Liang, a computer scientist at Stanford University in California, developed a technique to search for AI-written text by identifying adjectives that are used more often by AI than by humans.

By comparing the use of adjectives in a total of more than 146,000 peer reviews submitted to the same conferences before and after the release of ChatGPT, the analysis found that the frequency of certain positive adjectives, such as ‘commendable’, ‘innovative’, ‘meticulous’, ‘intricate’, ‘notable’ and ‘versatile’, had increased significantly since the chatbot’s use became mainstream. The study flagged the 100 most disproportionately used adjectives.
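The core of that comparison can be sketched in a few lines: count how often each adjective appears per token in reviews written before and after ChatGPT's release, then rank words by the ratio. This is an illustrative reconstruction, not the study's actual method or code; the add-one smoothing and toy data are assumptions:

```python
from collections import Counter

def disproportionate_words(pre, post, vocab, top=3):
    """Rank vocabulary words by how much more frequent (per token) they are
    in the post-ChatGPT corpus than in the pre-ChatGPT one."""
    pre_counts, post_counts = Counter(pre), Counter(post)
    n_pre, n_post = len(pre), len(post)

    def ratio(word):
        # Add-one smoothing keeps words unseen in one corpus from
        # producing a zero numerator or denominator.
        return ((post_counts[word] + 1) / n_post) / ((pre_counts[word] + 1) / n_pre)

    return sorted(vocab, key=ratio, reverse=True)[:top]

# Toy corpora of adjectives pulled from reviews (illustrative data only).
pre_tokens = ["good", "solid", "clear", "good"]
post_tokens = ["commendable", "meticulous", "commendable", "versatile", "clear"]
vocab = ["commendable", "meticulous", "clear", "good"]
flagged = disproportionate_words(pre_tokens, post_tokens, vocab, top=2)
# flagged -> ["commendable", "meticulous"]
```

The study's published list of 100 disproportionately used adjectives corresponds to the top of such a ranking computed over the full review corpora.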

Reviews that gave a lower rating to conference submissions or were filed close to the deadline, and those whose writers were least likely to respond to rebuttals from the papers' authors, were the most likely to contain these adjectives, and therefore the most likely to have been written by chatbots to at least some extent, the study found.

“It seems like when people have a lack of time, they tend to use ChatGPT,” says Liang.

The study also examined more than 25,000 peer reviews associated with around 10,000 manuscripts that had been accepted for publication across 15 Nature portfolio journals between 2019 and 2023, but didn’t find a spike in usage of the same adjectives since the release of ChatGPT.

A spokesperson for Springer Nature said the publisher asks peer reviewers not to upload manuscripts into generative AI tools, noting that these still have “considerable limitations” and that reviews might include sensitive or proprietary information. (Nature’s news team is independent of its publisher.)

Springer Nature is exploring the idea of providing peer reviewers with safe AI tools to guide their evaluation, the spokesperson said.

Transparency issue

The increased prevalence of the buzzwords Liang’s study identified in post-ChatGPT reviews is “really striking”, says Andrew Gray, a bibliometrics support officer at University College London. The work inspired him to analyse the extent to which some of the same adjectives, as well as a selection of adverbs, crop up in peer-reviewed studies published between 2015 and 2023. His findings, described in an arXiv preprint published on 25 March, show a significant increase in the use of certain terms, including ‘commendable’, ‘meticulous’ and ‘intricate’, since ChatGPT surfaced2. The study estimates that the authors of at least 60,000 papers published in 2023 — just over 1% of all scholarly studies published that year — used chatbots to some extent.

Gray says it’s possible peer reviewers are using chatbots only for copyediting or translation, but that a lack of transparency from authors makes it difficult to tell. “We have the signs that these things are being used,” he says, “but we don’t really understand how they’re being used.”

“We do not wish to pass a value judgement or claim that the use of AI tools for reviewing papers is necessarily bad or good,” Liang says. “But we do think that for transparency and accountability, it’s important to estimate how much of that final text might be generated or modified by AI.”

Weber-Wulff doesn’t think tools such as ChatGPT should be used to any extent during peer review, and worries that the use of chatbots might be even higher in cases in which referee reports are not published. (The reviews of papers published by Nature portfolio journals used in Liang’s study were available online as part of a transparent peer-review scheme.) “Peer review has been corrupted by AI systems,” she says.

Using chatbots for peer review could also have copyright implications, Weber-Wulff adds, because it could involve giving the tools access to confidential, unpublished material. She notes that the approach of using telltale adjectives to detect potential AI activity might work well in English, but could be less effective for other languages.
