
NASA's Perseverance Rover Observes a "Googly Eye" Eclipse on Mars


NASA's Perseverance rover, located in Jezero Crater on Mars, recently witnessed a spectacular celestial event as the moon Phobos crossed the Sun. Captured on September 30, the moment offered a rare view of the Martian sky, with the eclipse's distinctive "googly eye" effect appearing in front of the rover's Mastcam-Z camera. The video, released by NASA, illustrates the Martian moon's orbital motion and provides valuable insight into Phobos's trajectory and its gradual drift toward Mars.

An Unexpected Eclipse Creates a "Googly Eye" View on Mars

Perseverance, which has been observing Mars's surface and sky since 2021, recorded the silhouette of Phobos moving quickly across the face of the Sun from western Jezero Crater. Phobos, the larger of Mars's two moons, creates a distinctive "googly eye" effect because it only partially blocks the sunlight, a sight not visible from Earth. The eclipse, captured on the mission's 1,285th Martian day (sol), highlights Phobos's rapid orbit: the moon takes only 7.6 hours to complete a full circuit of Mars. Because of its close orbit, Phobos regularly crosses the Martian sky, producing these short transits, which last only about 30 seconds each.

The Strange Path and Future of Phobos

Phobos, discovered by astronomer Asaph Hall in 1877 and named for the Greek god associated with fear, is about 27 kilometres across. Unlike Earth's much larger Moon, Phobos appears far smaller in the Martian sky. Its orbit brings it closer to Mars over time, and scientists expect it will eventually collide with the Martian surface within the next 50 million years. Earlier Phobos eclipses, also recorded by other Mars rovers such as Curiosity and Opportunity, continue to provide essential data for understanding Mars's moons and their shifting orbits.

The Perseverance Mission and Future Mars Exploration

As part of NASA's Mars 2020 mission, Perseverance focuses on exploring Martian geology and astrobiology. The mission, managed by NASA's Jet Propulsion Laboratory (JPL), is the first to collect samples of material from the Martian surface, which are planned for retrieval by future joint missions with the European Space Agency (ESA). Perseverance's Mastcam-Z, developed with support from Arizona State University, Malin Space Science Systems and the Niels Bohr Institute, plays a key role in gathering the high-resolution images that underpin the mission's geological studies. The mission aligns with NASA's broader goal of preparing for human exploration of Mars, beginning with the Artemis missions to the Moon.




The Battle Above the Gods Eye Teased in the House of the Dragon Season 2 Finale, Explained

Warning: this article contains major spoilers for the "House of the Dragon" season 2 finale, the book "Fire & Blood" and potentially future episodes of the series. Seriously. Consider yourself warned.

"House of the Dragon" has been a slow tug-of-war toward violence. Just as "Game of Thrones" was about the return of dragons and magic to the world, "House of the Dragon" is about the return of war and bloodshed to Westeros. We spent most of the first season in council chambers, where backstabbing and political intrigue sowed the seeds of conflict and war, but there wasn't much actual fighting. Except, of course, when the Crabfeeder and his forces mounted their mighty resistance against the corruption of Westeros and nearly crushed the whole continent. Rest in peace, king.

The current season, though, has been eventful. Season 2 showed us some characters' eagerness to throw themselves into a bloody war, and others' reluctance to let things go that far. Daemon committed numerous war crimes, while Rhaenyra flew to King's Landing to try to head off the war in a wonderful scene invented for the show. Still, season 2 of "House of the Dragon" wasn't entirely without action: we got the thrilling and rather bloody Battle of Rook's Rest, which delivered plenty of fire-breathing dragon excitement.

Throughout this season, "House of the Dragon" has been building toward another titanic clash: a battle at Harrenhal where Team Black and Team Green would meet in full force. So far, Ser Criston Cole has avoided that confrontation, diverting his forces to Rook's Rest and engaging a much smaller host. The season 2 finale, however, sets the stage for the most epic battle of the Dance of the Dragons, and of all of "Fire & Blood": the Battle Above the Gods Eye.

Prophecies in House of the Dragon Season 2

In the "House of the Dragon" season 2 finale, Helaena dispels any doubt that she is a greenseer, as we see her appear in Daemon's vision, fully aware of her surroundings inside someone else's dream. Later, when her brother Aemond tries to force her to help him in battle, she ominously declares that Aemond will die, "devoured by the Gods Eye." Of course, this could simply be a creepy attempt to intimidate the brother she has already confronted for trying to kill her husband, the king, but perhaps it isn't. Either way, the episode all but confirms that Helaena really can see future events, or at the very least has terrifying visions.

There's plenty of other spooky business going on there, too. Daemon's "Luigi's Mansion" storyline at Harrenhal, besides being an entertaining, unsettling, strange and sometimes funny subplot, carries hints of prophecy in Daemon's dreams and in his dealings with Alys Rivers, who has made much noise about Daemon's fate. In the season 2 finale, Daemon has yet another vision after touching the weirwood at Harrenhal.

Taken separately, these things don't necessarily mean anything, and the Gods Eye could refer to many things. But readers of the book know that Helaena has just spoiled another major part of "Fire & Blood."

The Battle Above the Gods Eye Explained

More than anything else, season 2 of "House of the Dragon" wanted the audience to know that Harrenhal is a very special and important place, and it is. In the books, it was the site of Aegon the Conqueror's first crushing victory, when Harren Hoare (known as Harren the Black) and his entire house were roasted alive in the tallest tower of Harrenhal, a castle originally meant to be the largest and most magnificent in Westeros. But as we've seen in "House of the Dragon," the ruins of Harrenhal seem to bring nothing but trouble to those who reside there, like a curse of Harren the Black.

In "Fire & Blood," this all culminates in the Battle Above the Gods Eye, fought over Harrenhal and the nearby Gods Eye, the largest lake in the Seven Kingdoms. Daemon challenged Aemond to a duel and waited at Harrenhal for the sitting prince regent to stop burning the Riverlands and come face him. When the two finally clashed, it was a battle that lit up the sky, and both riders are believed to have died fighting, with their dragons, Caraxes and Vhagar, crashing into the lake.

Beyond the prophecies and omens, "House of the Dragon" has done something interesting in teasing the two dragonriders as evenly matched. We saw that Aemond was fairly poor at commanding the old Vhagar, while Daemon is an excellent fighter and dragonrider but rides Caraxes, a dragon only about half Vhagar's size. That suggests the two would be roughly equal in strength when they finally meet, resulting in both of their deaths.

"House of the Dragon" will return for its third season on a date yet to be announced.





Brad Pitt Is Responsible for the "Googly Eye" Character in 12 Monkeys


For those in need of a refresher, "12 Monkeys" centres on Cole (Willis), who is sent into the past to save humanity from a deadly virus. Along the way, he meets a psychiatrist (Madeleine Stowe) and a psychiatric patient (Pitt) who hold the key to a mysterious group known as the Army of the Twelve Monkeys, believed to be responsible for releasing the virus.

Context is key here. Pitt was coming off 1994's "Interview with the Vampire" and "Legends of the Fall," the films that propelled him to stardom. "People didn't necessarily recognize him," Gilliam explained in the same interview. "'Legends of the Fall' came out on the first weekend of shooting. Suddenly, the world changed. We had to hire a lot of security because he became the most famous person on the planet." Stowe also compared the experience to being around the Beatles at the height of their fame in the 1960s:

"I was picturing the Beatles arriving on the set in Philadelphia. It was hysterical. I thought, 'Oh my God, poor guy. Look what's happening.' There were radio reports, and people were trying to track him down. Then he came in and gave this incredible performance, which amazed Terry. He had no idea what was going to happen."

Despite the difficulties of fame, Pitt helped make the film a major success: "12 Monkeys" was a box-office hit and later spawned a beloved TV series. The actor took creative risks rather than playing it safe, and this film is a prime example. With later work such as "Se7en" and "Fight Club," he avoided being typecast and became not just a heartthrob but one of the finest actors of his generation.




AI’s keen diagnostic eye


When China locked down the city of Shanghai in April 2022 during the COVID-19 pandemic, the ripples from that decision quickly reached people receiving treatment for cardiac conditions in the United States. The lockdown shut a facility belonging to General Electric (GE) Healthcare, an important producer of ‘iodinated contrast dyes’, used to make blood visible in angiograms.

Soon US hospitals were asking people with mild chest pain to wait, so that the suddenly precious dyes could be reserved for use in those thought to be experiencing acute heart attacks. GE Healthcare scrambled to shift some of its production to Ireland to increase supply. A study in the American Journal of Neuroradiology later revealed that during the shortage, which lasted from mid-April to early June, the number of daily computed tomography (CT) angiograms dropped by 10% and CT perfusion tests were down almost 30%1.

Such disruption caused by supply-chain problems might, in future, be avoided through the use of virtual contrast agents. Techniques powered by artificial intelligence (AI) could highlight the same hidden features that the dyes reveal, without having to inject a foreign substance into the body. “With AI tools, all this hassle can be removed,” says Shuo Li, an AI researcher at Case Western Reserve University in Cleveland, Ohio.

AI has already made its way into conventional medical imaging, with deep-learning algorithms able to match and sometimes exceed the performance of radiologists in spotting anomalies in X-ray or magnetic resonance imaging (MRI) scans. Now the technology is starting to go even further. In addition to the computer-generated contrast agents that several groups around the world are working on, some researchers are exploring what features AI can detect that radiologists don’t normally even look for in scans. Other scientists are studying whether AI might enable brain scans to be used to diagnose neuro-developmental issues, such as attention deficit hyperactivity disorder (ADHD).

Li has been pursuing virtual contrast agents since 2017, and now he’s seeing a global wave of interest in the area. The potential benefits are many. All imaging methods can be enhanced by contrast agents — iodinated dyes in the case of computed tomography (CT) scans, microbubbles in ultrasound, or gadolinium in MRI. And all of those contrast agents, although generally safe, carry some risks, including allergic reactions. Gadolinium, for instance, often can’t be given to people with kidney problems, pregnant people or those who take certain diabetes or blood-pressure medications.

There’s also the issue of cost. The global market for gadolinium as a contrast agent was estimated to be worth US$1.6–2 billion in 2023, and the market for contrast agents in general is worth at least $6.3 billion. The use of contrast agents also requires extra time: many scans involve taking an image, then injecting the agent and repeating part of the scan.

Although it drags out the imaging process, that repetition helps to provide training data for an AI model. The computer studies the initial image to learn subtle variations in the pixels, then compares those with the corresponding pixels on the image taken after the contrast agent was injected. After training, the AI can look at a fresh image and show what it would look like if the contrast agent had been applied.
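The train-on-pairs idea can be sketched in a few lines. The toy below stands in for the real system: synthetic arrays replace the scans, and a simple per-pixel regression replaces the deep network, so none of this is the researchers' actual code, only an illustration of learning a pre-contrast-to-post-contrast mapping from paired images.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pair(n=64):
    """Synthetic stand-in for a pre/post-contrast image pair:
    bright pixels play the role of vessels that the dye enhances."""
    pre = rng.random((n, n))
    vessels = pre > 0.8              # pretend the brightest pixels are vessels
    post = pre + 0.5 * vessels       # the contrast agent boosts vessel signal
    return pre, post

# "Training": paired scans taken before and after injecting the agent
pre, post = make_pair()
X = np.column_stack([pre.ravel(), pre.ravel() ** 2, np.ones(pre.size)])
w, *_ = np.linalg.lstsq(X, post.ravel(), rcond=None)

# "Virtual contrast": apply the learned mapping to a fresh, unseen scan
new_pre, new_post = make_pair()
Xn = np.column_stack([new_pre.ravel(), new_pre.ravel() ** 2,
                      np.ones(new_pre.size)])
virtual = (Xn @ w).reshape(new_pre.shape)

err = np.abs(virtual - new_post).mean()
print(f"mean absolute error vs. real contrast image: {err:.3f}")
```

In practice a deep network learns a far richer, spatially aware mapping than this per-pixel fit, but the supervision signal is the same: the post-injection image serves as the target for the pre-injection input.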

At the start of this year, Li and his colleagues at the Center for Imaging Research in Case Western’s School of Medicine received a $1.1-million grant from the US National Science Foundation to pursue this idea. They’d already done some preliminary work, training an AI on a few hundred images. Because of the small data set, the results were not as accurate as they would like, Li says. But with funding to study 10,000 or even 100,000 images, performance should improve. The researchers are also working on a similar project to detect liver cancer from scans.

Filling in the picture

If a computer can identify health issues in images, the next step will be to show radiologists a set of images produced with actual and virtual contrast agents, to see whether the specialists, who don’t know which is which, get different results from stained as opposed to AI-enhanced images. After that, says Li, it will take a clinical trial to win approval from the US Food and Drug Administration.

A similar approach could work for slides of tissue samples that pathologists stain and view under a microscope. By treating thin slices of tissue taken during a biopsy, pathologists can make certain features stand out and thereby see cellular abnormalities that aid in the identification of cancer or other diseases.

With AI-assisted virtual staining, Aydogan Ozcan, an optical engineer at the University of California, Los Angeles, says he can take an image using a mobile phone attached to a microscope and then, despite the image’s limited resolution and distortion, teach a neural network to make it look as if it was created by a laboratory-grade instrument2. The technology’s ability to transform one type of image into another doesn’t stop there. Ozcan starts with standard tissue samples, but rather than staining them, he places them under a fluorescence microscope and shines light through them, prompting the tissue to autofluoresce. The resulting images come out in shades of grey, very different from the coloured ones pathologists are used to. “Microscopically it’s very rich, but nobody cares to look at those black-and-white images,” Ozcan says.

To incorporate colour, he passes the samples to a histopathology lab for conventional staining, and captures images of the samples with a standard microscope. Ozcan then shows both types of image to a neural network, which learns how the details in the fluorescence images match up with the effects of the chemical stains. Having learnt this correspondence, the AI can then take new fluorescence images and present them as if they had been stained3.


A fluorescent microscope captures a black-and-white image of a tissue sample (left). The AI generates a version of that image with a virtual stain (centre), which closely resembles the chemically stained sample (right). Credit: Ref. 2

Although one particular stain, H&E, made up of the compounds haematoxylin and eosin, is by far the most common, pathologists use plenty of others, some of which are preferable for highlighting certain features. Trained on the other stains, the AI can transform the original image to incorporate any stain the pathologist wants. This technique allows researchers to simulate hundreds of different stains for the same small tissue sample. That means pathologists will never run out of tissue for a particular biopsy and ensures that they’re looking at the same area in each stain.

AI’s ability to manipulate medical images is not limited to transforming them. It can also extrapolate missing image data in such a way as to give radiologists access to clinically important information that they would otherwise have missed. Kim Sandler, a radiologist at Vanderbilt University Medical Center in Nashville, Tennessee, was interested in whether measures of body fat could help to predict clinical outcomes in people receiving CT scans to screen for lung cancer.

Often, radiologists will crop out areas of a chest CT scan that they’re not interested in, such as the abdomen and organs such as the spleen or liver. This selectivity improves the quality of the rest of the image and aids the identification of shadows or nodules that might indicate lung disease. But, Sandler thought, an AI could perhaps learn more by taking the opposite tack and expanding the field of view4. She worked with computer engineers who taught a neural network to look at the image differently by either adding back the cropped-out parts from the raw data, or combining what it saw with knowledge from the medical literature to decide what should be in the missing areas.

Having done that, the AI then made quantitative estimates of the amount of fat in the skeletal muscles — the lower the muscle density, the more fat present. There is a known association between body composition and health outcomes. In people with a lower muscle density as determined by AI, “we found that there was a higher risk of cardiovascular-disease-related death, a higher incidence of lung-cancer-related death,” as well as higher death rates from any cause over the 12.3 years the study looked at5, Sandler says. The AI did not, however, improve cancer diagnosis. “This was not helpful in terms of who would develop lung cancer, but it was helpful in predicting mortality,” she says.

The results are nonetheless diagnostically useful, Sandler says. People whose risk of mortality is elevated can be offered more aggressive therapies or more frequent screening if no lung cancer is yet apparent in the scans.

Invisible signs

AI might even be able to spot types of diagnostic information that physicians had never thought to look for, in part because it’s not something they’ve been able to see themselves. ADHD, for instance, is diagnosed on the basis of self-reported and observed behaviour rather than a biomarker. “There are behaviours that are relatively specific for ADHD, but we don’t have a good understanding of how those manifest in the neural circuitry of the brain,” says Andreas Rauschecker, a neuroradiologist at the University of California, San Francisco. As someone who spends a lot of time looking at brain images, he wanted to see whether he could find such an indicator.

He and his team trained an AI on MRI scans of 1,704 participants in the Adolescent Brain Cognitive Development Study, a long-term investigation of brain development in US adolescents. The system learnt to look at water molecules moving along certain white-matter tracts that connect different areas of the brain, and tried to link any variations with ADHD. It turned out that certain measurements in the tracts were significantly higher in children identified as possibly having ADHD.


Andreas Rauschecker and his colleagues have been studying the movement of water molecules along tracts of white matter in the brain. Credit: Pierre Nedelec

Rauschecker emphasizes that this is a preliminary study; it was presented at a Radiological Society of North America meeting in November 2023 and has not yet been published. In fact, he says, no type of brain imaging currently in use can diagnose any neuropsychiatric condition. Still, he adds, it would make sense if some of those conditions were linked to structural changes in the brain, and he holds out hope that scans could prove useful in the future. Within a decade, he says, it’s likely that there will be “a lot more imaging related to neuropsychiatric disease” than there is now.

Even with help from AI, physicians don’t make diagnoses on the basis of images alone. They also have their own observations: clinical indicators such as blood pressure, heart rate or blood glucose levels; patient and family histories; and perhaps the results of genetic testing. If AI could be trained to take in all these different sorts of data and look at them as a whole, perhaps they could become even better diagnosticians. “And that is exactly what we found,” says Daniel Truhn, a physicist and clinical radiologist at RWTH Aachen University in Germany. “Using the combined information is much more useful” than using either clinical or imaging data alone.

What makes combining the different types of data possible is the deep-learning architecture underlying the large language models behind applications such as ChatGPT6. Those systems rely on a form of deep learning called a transformer to break data into tokens, which can be words or word fragments, or even portions of images. Transformers assign numerical weights to individual tokens on the basis of how much their presence should affect tokens further down the line — a metric known as attention. For instance, based on attention, a transformer that sees a mention of music is more likely to interpret ‘hit’ to mean a popular song than a striking action when it comes up a few sentences later. The attention mechanism, Truhn says, makes it possible to join imaging data with numerical data from clinical tests and verbal data from physicians’ notes. He and his colleagues trained their AI to diagnose 25 different conditions, ruling ‘yes’ or ‘no’ for each7. That’s obviously not how humans work, he says, but it did help to demonstrate the power of combining modalities.
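The attention computation described above reduces to a short calculation. This sketch is illustrative only (it is not Truhn's model): three toy "tokens", which could stand for an image patch, a lab value and a note fragment, are embedded in the same vector space, and each token's output becomes a weighted mix of all the others.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value vector by how
    similar its key is to the query, with a softmax over the tokens."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # rows sum to 1
    return weights @ V, weights

# Three toy tokens in a shared 4-dimensional embedding space
rng = np.random.default_rng(1)
tokens = rng.normal(size=(3, 4))

out, w = attention(tokens, tokens, tokens)   # self-attention
print(np.round(w, 2))   # row i: how much token i attends to each token
```

Because every token, whatever its source modality, is just a vector in the same space, nothing in this computation cares whether a token came from a scan, a blood test or a sentence, which is precisely what makes the multimodal combination possible.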

In the long run, Sandler expects AI to show physicians clues they couldn’t glean before, and to become an important tool for improving diagnoses. But she does not see them replacing specialists. “I often use the analogy of flying a plane,” she says. “We have a lot of technology that helps planes fly, but we still have pilots.” She expects that radiologists will spend less time writing reports about what they see in images, and more time vetting AI-generated reports, agreeing or disagreeing with certain details. “My hope is that it will make us better and more efficient, and that it’ll make patient care better,” Sandler says. “I think that is the direction that we’re going.”




Children with ‘lazy eye’ are at increased risk of serious disease in adulthood


Adults who had amblyopia (‘lazy eye’) in childhood are more likely to experience hypertension, obesity, and metabolic syndrome in adulthood, as well as an increased risk of heart attack, finds a new study led by UCL researchers.

In publishing the study in eClinicalMedicine, the authors stress that while they have identified a correlation, their research does not show a causal relationship between amblyopia and ill health in adulthood.

The researchers analysed data from more than 126,000 participants aged 40 to 69 years old from the UK Biobank cohort, who had undergone ocular examination.

Participants had been asked during recruitment whether they were treated for amblyopia in childhood and whether they still had the condition in adulthood. They were also asked if they had a medical diagnosis of diabetes, high blood pressure, or cardio/cerebrovascular disease (i.e. angina, heart attack, stroke).

Meanwhile, their BMI (body mass index), blood glucose, and cholesterol levels were also measured and mortality was tracked.

The researchers confirmed that of the 3,238 participants who reported having a ‘lazy eye’ as a child, 82.2% had persistent reduced vision in one eye as an adult.

The findings showed that participants with amblyopia as a child had 29% higher odds of developing diabetes, 25% higher odds of having hypertension and 16% higher odds of having obesity. They were also at increased risk of heart attack — even when other risk factors for these conditions (e.g. other disease, ethnicity and social class) were taken into account.
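For readers unfamiliar with odds-ratio language, a quick worked example (with made-up counts, not the study's data) shows what "29% higher odds" means:

```python
# An odds ratio of 1.29 means the odds of the outcome in the exposed
# group are 1.29 times the odds in the comparison group.
def odds(cases, non_cases):
    return cases / non_cases

# Hypothetical counts, for illustration only
amblyopia_diabetes, amblyopia_no_diabetes = 129, 1000
control_diabetes, control_no_diabetes = 100, 1000

odds_ratio = (odds(amblyopia_diabetes, amblyopia_no_diabetes)
              / odds(control_diabetes, control_no_diabetes))
print(f"odds ratio: {odds_ratio:.2f}")   # 1.29 -> "29% higher odds"
```

Note that odds are not the same as risk: for common outcomes, an odds ratio of 1.29 corresponds to a somewhat smaller increase in absolute risk.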

This increased risk of health problems was found not only among those whose vision problems persisted, but also to some extent in participants who had had amblyopia as a child and 20/20 vision as an adult, although the correlation was not as strong.

Corresponding author, Professor Jugnoo Rahi (UCL Great Ormond Street Institute for Child Health, UCL Institute of Ophthalmology and Great Ormond Street Hospital), said: “Amblyopia is an eye condition affecting up to four in 100 children. In the UK, all children are supposed to have vision screening before the age of five, to ensure a prompt diagnosis and relevant ophthalmic treatment.

“It is rare to have a ‘marker’ in childhood that is associated with increased risk of serious disease in adult life, and also one that is measured and known for every child — because of population screening.

“The large number of affected children, and their families, may want to think of our findings as an extra incentive for trying to achieve healthy lifestyles from childhood.”

Amblyopia is when the vision in one eye does not develop properly and can be triggered by a squint or being long-sighted.

It is a neurodevelopmental condition that develops when there’s a breakdown in how the brain and the eye work together and the brain can’t process properly the visual signal from the affected eye. As it usually causes reduced vision in one eye only, many children don’t notice anything wrong with their sight and are only diagnosed through the vision test done at four to five years of age.

A recent report from the Academy of Medical Sciences, involving some researchers from the UCL Great Ormond Street Institute for Child Health, called on policymakers to address the declining physical and mental health of children under five in the UK and prioritise child health.

The team hope that their new research will help reinforce this message and highlight how child health lays the foundations for adult health.

First author, Dr Siegfried Wagner (UCL Institute of Ophthalmology and Moorfields Eye Hospital), said: “Vision and the eyes are sentinels for overall health — whether heart disease or metabolic dysfunction, they are intimately linked with other organ systems. This is one of the reasons why we screen for good vision in both eyes.

“We emphasise that our research does not show a causal relationship between amblyopia and ill health in adulthood. Our research means that the ‘average’ adult who had amblyopia as a child is more likely to develop these disorders than the ‘average’ adult who did not have amblyopia. The findings don’t mean that every child with amblyopia will inevitably develop cardiometabolic disorders in adult life.”

The research was carried out in collaboration with the University of the Aegean, University of Leicester, King’s College London, the National Institute for Health and Care Research (NIHR) Biomedical Research Centre (BRC) at Moorfields Eye Hospital and UCL Institute of Ophthalmology and the NIHR BRC at UCL Great Ormond Street Institute of Child Health and Great Ormond Street Hospital.

The work was funded by the Medical Research Council, the NIHR and the Ulverscroft Foundation.

