
What is ‘sleep banking’ and can it really help you prepare for lost sleep?


‘Sleep banking’ is the process of sleeping more in the days leading up to a period when you know you’ll be sleeping less. By accumulating this extra rest, you can partly counteract the effects of sleep debt, helping you feel better and more alert even after a bad night. Although it isn’t a quick fix for consistently poor sleep, it can help you cope better with expected sleep loss, such as after a clock change or when traveling to a different time zone.

For sleep banking to be effective, you need to be able to get good, extended sleep when you need it. To do this, it’s essential to have a sleep setup that supports your needs. Our best mattress and best pillow guides can help you optimize your bedroom for rest, so you can grab those extra few hours. Want to give sleep banking a go? We asked an expert how it works, and how you can get started saving your sleep for a rainy day.

What is sleep banking?

Sleep banking is a method that involves accumulating extra sleep ahead of a period of reduced sleep. This ‘banked’ rest can then counterbalance the sleep you’ve lost, helping you feel more alert and awake despite a disrupted night. “Think of it as having a sleep savings account,” says Dr Jake Deutsch, board-certified emergency physician and medical advisory board member for Oura.


The term ‘sleep banking’ was coined by a research team from the Walter Reed National Military Medical Center, after conducting a study to see whether excess sleep could improve performance and alertness during a later period of reduced sleep.


CAR T cells can shrink deadly brain tumours — though for how long is unclear


A glioblastoma (green and blue, artificially coloured) grows in the frontal lobe of a person’s brain, shown in a coloured FLAIR (fluid-attenuated inversion recovery) MRI scan of an axial section. Credit: Pr Michel Brauner, ISM/Science Photo Library

Two preliminary studies suggest that next-generation engineered immune cells show promise against one of the most feared forms of cancer.

A pair of papers published on 13 March, one in Nature Medicine1 and the other in the New England Journal of Medicine2, describe the design and deployment of immune cells called chimeric antigen receptor T (CAR T) cells against glioblastoma, an aggressive and difficult-to-treat form of brain cancer. The average length of survival for people with this tumour is eight months.

Both teams found early hints of progress using CAR T cells that target two proteins made by glioblastoma cells, thereby marking those cells for destruction. CAR T cells are currently approved only for treating blood cancers such as leukaemia and are typically engineered to home in on only one target. But the new results add to mounting evidence that CAR T cells could be modified to treat a wider range of cancers.

“It lends credence to the potential power of CAR-T cells to make a difference in solid tumours, especially the brain,” says Bryan Choi, a neurosurgeon at Massachusetts General Hospital in Boston, and a lead author of the New England Journal of Medicine study. “It adds to the excitement that we might be able to move the needle.”

A highly lethal tumour

Glioblastomas pose a formidable challenge. Fast-growing glioblastomas can mix with healthy brain cells, forming diffuse tumours that are difficult to remove surgically. Surgery, chemotherapy and radiation therapy are typically the only treatment options and tend to produce short-lived, partial responses.

In CAR-T therapy, a person’s own T cells are removed from the body and kitted out with proteins that help the cells home in on tumours. The souped-up cells are then reinfused into the body.

In the past few years, researchers have been developing CAR T cells that target specific molecules made by some glioblastomas. The two new papers take this a step further by designing CAR T cells that target not one type of molecule but two.

In one approach, Choi and his colleagues designed CAR T cells to latch onto a mutated form of a protein called EGFR that is produced by some glioblastomas. The CAR T cells also secreted antibodies that bind to both T cells and the unmutated form of EGFR, which is not typically made by brain cells but is often made by glioblastoma cells. The result is a CAR-T therapy that unleashes the immune system against cells that express either the mutated or the unmutated form of EGFR.

Choi and his team administered these cells to three adults with glioblastoma. Tumours appeared to shrink in all three, but later recurred. One man who received the treatment, however, had a response that lasted for more than six months.

Seven months and counting

The other team, led by Stephen Bagley, a neuro-oncologist at the University of Pennsylvania Perelman School of Medicine in Philadelphia, used CAR T cells that target both EGFR and another protein found in glioblastomas called interleukin-13 receptor alpha 2. Tumours appeared to shrink in all six of the people they treated. One participant’s glioblastoma began to grow again within a month, but another has not shown signs of tumour progression for seven months so far, says Bagley. Of the remaining four participants, one left the trial; tumours have not rebounded in the other three, but they are within six months of treatment.

The results are promising, but the goal is to generate longer-lasting responses, says Bagley. It was exciting, he says, to watch tumours shrink in the first day after CAR-T therapy. “We hadn’t seen that before,” he says. “We were thrilled.”

But the excitement faded as participants relapsed after treatment: “It’s very humbling to go on that roller coaster ride,” he says. “One week you feel like you’ve made a real difference in their lives, and the next week the tumour is back again.”

Versatile T cells

The field will eagerly await additional results, says Sneha Ramakrishna, a paediatric oncologist at Stanford Medicine in California. The size of glioblastomas is notoriously difficult to measure because of their diffuse shape, and apparent changes in tumour size could be affected by inflammation following surgery to administer the CAR T cells directly into the brain.

But the images are impressive, and measures of tumour RNA in Choi’s study suggest that the tumours might indeed have shrunk, says Ramakrishna. And constructing CAR T cells with multiple targets could ultimately yield longer-lasting therapies, she says, by making it more difficult for cancer cells to develop ways to resist the treatment.

“I’m looking forward to seeing what they do over time,” she says. “I hope that as we get more experience, we can learn how to make the right CAR for our patients.”


New email standards: what you need to know


In a significant move towards enhancing email security, Google and Yahoo will enforce new email authentication requirements for high-volume senders starting in February 2024. This initiative aims to bolster cybersecurity by requiring bulk senders who distribute over 5,000 messages daily to adhere to strict validation standards. The required protocols, including Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM), focus on preventing list abuse, enhancing sender verification and reducing phishing risks.

DMARC is particularly crucial in the fight against cyberattacks, as it authenticates sender addresses to block phishing and domain impersonation. In an era where AI-driven phishing attempts are increasingly sophisticated, tools like DMARC, SPF, and DKIM are essential for protecting email recipients. SPF safeguards domain names by verifying the sender’s IP address, while DKIM adds a layer of cryptographic authentication to validate message ownership.
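
In practice, all three policies are published as DNS TXT records on the sending domain, so a recipient (or the sender) can verify them with a simple lookup. The sketch below assumes the dnspython package; the record contents shown in the comments are generic examples rather than any particular provider's policy.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC policies.
# Assumes the dnspython package (pip install dnspython). DKIM lookups work the
# same way but also need the sender's selector, e.g. selector1._domainkey.<domain>.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder domain

# SPF lives at the domain itself, e.g. "v=spf1 include:_spf.example.com -all"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]

# DMARC lives at _dmarc.<domain>, e.g.
# "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
```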


Roll-to-roll, high-resolution 3D printing of shape-specific particles


Particles on the scale of hundreds of micrometres to nanometres are ubiquitous key components in many advanced applications including biomedical devices1,2, drug-delivery systems3,4,5,15, microelectronics12 and energy storage systems16,17, and exhibit inherent material applicability in microfluidics6,7, granular systems8,9 and abrasives14. Approaches to particle fabrication inherently have trade-offs among speed, scalability, geometric control, uniformity and material properties.

Traditional particle fabrication methods range from milling and emulsification techniques to advanced moulding and flow lithography, and approaches can be classified as either bottom-up or top-down. Bottom-up particle fabrication approaches, best exemplified by grinding and milling18, emulsification19, precipitation20, nucleation-and-growth21 and self-assembly5,10,11 techniques, can have high throughput but lead to heterogeneous populations of granular particles with limited control over shape and uniformity. To address the geometric shortcomings of bottom-up approaches, top-down particle fabrication methods such as direct lithography10,22, single-step roll-to-roll soft lithography23,24 and multistep moulding4 have been employed.

Scalable particle moulding approaches, such as particle replication in non-wetting templates (PRINT) and stamped assembly of polymer layers (SEAL), incorporate lithographic approaches to attain two-dimensional (2D) geometric control4,24. PRINT utilizes a non-wetting fluoropolymer layer to facilitate rapid fabrication of isolated micro- and nanoparticles with demonstrably precise control over shape, size, surface functionalization and fillers such as drugs, proteins or DNA/RNA24,25. Detailed in vitro studies of these particles have elucidated shape-dependent tendencies of cellular uptake and enhanced localized cargo release24,25,26. Moreover, in vivo studies have shown the significant role played by particle size, shape, charge, surface chemistry and deformability in biodistribution across multiple dosage forms (injection and inhalation)27,28,29. Extending the PRINT technology, the stacking of moulded particles enables more complex particle geometries, as exemplified by SEAL4. Harvested moulded sections are welded together to gain three-dimensional (3D) fabrication control, yielding drug-delivery vehicles with demonstrated pulsatile release. The trajectory and demonstrated application potential of these technologies lay the groundwork for future methods of fabricating advanced particles.

For example, continuous-flow lithography (or optofluidic fabrication) produces particles as a photopolymerizable resin flows through a fluidic channel, curing in 2D to 3D geometries30,31. The stop-polymerize-flow technique has been demonstrated to achieve quasi-continuous fabrication of 2D to 2.5D geometries (anisotropic properties on a 2D-defined shape)32. Deterministic deformation based on microfluidic flow can further enable the fabrication of concave-surface geometries, previously demonstrated at the rate of 86,400 particles per day31. Furthermore, additional dimensional control processes may be introduced to create Janus particles (particles whose surfaces have two or more distinct physical properties), nanoporous meshes using sacrificial additives or porogens or micropatterning via secondary chemical coating or formation control steps2,33,34.

One remaining major engineering challenge is to develop a particle fabrication technique that simultaneously enables all dimensions of micron-scale 3D geometric control, complexity, speed, material selection and permutability. Herein we introduce a scalable, high-resolution 3D printing technique for particle fabrication based on a roll-to-roll form of continuous liquid interface production (r2rCLIP). We demonstrate r2rCLIP using single-digit, micron-resolution optics in combination with a continuous roll of film in lieu of a static platform, enabling fast, rapidly permutable fabrication and harvesting of particles with a variety of materials and complex geometries (Fig. 1).

Fig. 1: r2rCLIP is a rapid fabrication process for particles with complex geometries.

a, r2rCLIP is a quasi-continuous technique wherein a 3D geometry of simple to complex nature is designed and subsequently sliced into 2D images. These images are then used to fabricate 3D geometries from a photopolymerizable resin in a roll-to-roll process. b, Diagram of experimental r2rCLIP setup wherein an aluminium-coated PET film is unrolled from a feed roll (I) and mechanically braked (II) to provide tension before passing over a high-precision z stage and CLIP assembly (III). A designed geometry is projected through a Teflon AF window into a vat of photopolymerizable resin. The geometry materializes onto the film and the stage pulls in the z direction to direct vertical part formation. Once materialized, the particles on film are passed under a spring-tensioning system to maintain relative substrate positioning during stage movement (IV). The film is then passed through a cleaning step (V) before secondary curing (VI) and immersion in a non-ionic surfactant solution within a heated sonication bath and a razor blade to induce delamination (VII). The film is finally collected on a second roller with a stepper motor that provides translational movement throughout the process (VIII; Extended Data Fig. 1). Insets show a graphic of particle clearance over a guide roller (IX) and an image of particles on the film post cleaning (X). c, This scalable process is demonstrated by the production of around 30,000 hollow cube particles observed in a set of computer-stitched scanning electron microscopy (SEM) images. d, Octahedrons, icosahedrons and dodecahedrons with unit cell size ranging from 200 to 400 µm printed within a singular printed array. c,d, Samples printed from the HDDA–HDDMA system and coated with Au/Pd (60:40) before SEM imaging. Scale bars, 3 mm (b,c), 500 µm (d).

Continuous liquid interface production is an additive manufacturing technique that uses digital light processing (DLP) to project videos of 2D images describing 3D models into a vat of photopolymerizable resin. The resolution of this technique has improved from 50 µm to 4.5 μm, while providing print speeds of up to 3,000 mm h−1 (refs. 35,36,37,38). CLIP utilizes a 385 nm ultraviolet light-emitting diode (LED) and a digital micromirror device to simultaneously pattern an array of actinic photons, activating photo-initiators dissolved in the liquid resin and inducing radical polymerization in each printed voxel. The CLIP technique is distinguished by the introduction of an oxygen-induced, photopolymerization-inhibited ‘dead zone’ between the photocurable resin and an optically clear vat window (Teflon amorphous fluoropolymer (AF) 1600 or 2400), effectively obviating any delamination step (Extended Data Fig. 2 and Supplementary Note 1). Lack of adherence, or glueing, of the growing particle onto the window facilitates fabrication of fragile green parts, such as thin struts on hollowed particle geometries, while maintaining high throughput speeds35,36. This technique is demonstrably versatile for a broad range of polymer chemistries, functionalization, fillers and multimaterial platforms35,38. High-resolution CLIP is used herein to obtain geometric control for the scalable fabrication of particles in the sub-200-µm regime, with resin-dependent, layer-wise control down to the single-digit-micron range and 2.00 × 2.00 µm² xy resolution.

To achieve a rapid and fully automated particle-printing process, we substituted the conventional static build plate of a high-resolution CLIP printer with a continuous-film, modular, roll-to-roll system. This enables semicontinuous printing and automated in-line postprocessing, including cleaning, postcuring and harvesting (particle liftoff). An aluminium-coated polyethylene terephthalate (PET) film was chosen as the primary film substrate to maintain particle adhesion during printing at a level above in situ orthogonal resin reflow forces and normal suction forces, while still allowing delamination from the film without fracture during harvesting (for additional substrates tested see Supplementary Note 2).

Complementary to film integration for particle printing, we constructed a high-resolution CLIP setup to fabricate fine particle features that achieves single-digit-micron optical resolution (2.00 × 2.00 or 6.00 × 6.00 µm², depending on desired build area) in the xy plane. Voxel definition further depends on vertical resolution, which is determined by stage-movement repeatability (±0.12 μm), the depth of focus of the optical setup (for example, 30 μm for the 2.00 × 2.00 µm² setup) and the resin’s physical properties (refraction and diffraction of light, penetration depth and critical exposure dose for gelation; Table 1, Fig. 2 and Supplementary Note 3).

Table 1 Experimental curing parameters for high-resolution resins utilized in particle fabrication
Fig. 2: r2rCLIP is amenable to a range of high-resolution in-house and commercial materials with high-precision optimization.

a, The bridging method enables working curve determination of resin-curing properties, as demonstrated for several bridge series from resins of increasing penetration depth at constant dosage and corresponding measured cure depth. Ridging artefacts coincide with pixel pitch at 6 µm spacing. Exposure measurement bridges coated with Au/Pd (60:40) before SEM imaging. b, Determination of intrinsic penetration depth and critical cure dosage. A lower slope correlates with greater analytical cure depth control at a given dosage (Emax), as well as with a lower propensity for fluctuations in exposure to result in major changes in cure depth (Cd). Scale bars, 15 µm.


Previous work has studied surface and resolution optimization in photopolymerization-based 3D printing systems39; achieving z resolution below 25 µm remains a challenge owing to intrinsic resin penetration depth and overcuring from accumulated dosages40,41,42. To fabricate optimal, complex particle geometries, a resin system must be designed to achieve high z resolution; a 1,6-hexanediol diacrylate–1,6-hexanediol dimethacrylate (HDDA–HDDMA)-based system was previously described as achieving up to 4 µm vertical resolution39. We utilize this resin system herein and adopt an analytical bridging technique to measure intrinsic resin properties, as opposed to the common glass-slide method40,42,43, which does not describe in situ high-resolution CLIP as accurately. Our HDDA–HDDMA resin has a characteristic penetration depth of 8.0 ± 0.4 µm and experimentally resolved a minimum unsupported bridge thickness of 1.1 ± 0.3 µm. We characterized several additional high-resolution custom and commercial resin compositions, which are also compatible with r2rCLIP and may be substituted depending on materials requirements, desired vertical resolution and application (Table 1 and Fig. 2). Notably, the unsupported film bridges characterized in the curing assay are thin (under 100 µm, relevant to particle fabrication) and resolve proximal to the dead zone, introducing periodic artefacts ascribed to fluctuations in light intensity between pixels. Surface irregularities may further be attributed to either resin reflow (elongated lines) or cavitation (bubbles) and may be addressed with optimization. Resin parameterization and optimization are essential for determining the vertical-resolution limits of fabrication; resins with greater characteristic penetration depth are not as amenable to thin vertical geometric features.
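
For context, the working-curve analysis that underlies these penetration-depth and critical-dose measurements is conventionally written in the standard Jacobs form (a textbook relation for photopolymer curing, not an equation quoted from this paper):

$$C_d = D_p \ln\!\left(\frac{E_{\max}}{E_c}\right)$$

where $C_d$ is the cure depth, $D_p$ the characteristic penetration depth of the resin, $E_{\max}$ the applied exposure dose and $E_c$ the critical dose for gelation. A resin with a smaller $D_p$ has a shallower working-curve slope, so fluctuations in exposure produce smaller changes in cure depth, consistent with the behaviour described for Fig. 2b.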

To demonstrate the potential of r2rCLIP in the fabrication of dimensionally complex structures we designed a range of shapes with increasing geometric complexity using computer-aided design. These designs not only mirror those created by previous 2D fabrication and multistep moulding techniques4,24 but also include several geometries that cannot be moulded, exemplifying the unique capabilities of our approach (Fig. 3). Herein we categorize geometric complexity on a spectrum ranging from shapes that can be moulded at scale to those that cannot. Mouldable geometries are defined as those that can plausibly be fabricated at scale in a single step using a uniaxial die draw, core and cavity. Geometries increase in moulding complexity (and correspondingly decrease in mouldability at scale) if a theoretical moulding approach requires an increasing number of parting lines, ejector pins and angles and extensive alignment, or contains non-mouldable negative internal spaces. In addition, thin or sharp geometric features may introduce moulding complications and part anisotropy due to, for example, flash, short shot, shrinkage or air pockets exacerbated at the micron scale (Supplementary Note 4)44. It should be noted that it is plausible to couple a multistep moulding process with a sacrificial etching step to achieve some geometries deemed non-mouldable in this work, although without a high degree of reproducibility given mould alignment requirements.

Fig. 3: SEM images of mouldable to non-mouldable geometries fabricated by r2rCLIP.

Particles were fabricated using the HDDA–HDDMA system and informed exposure intensities obtained from bridge-fitting data (Fig. 2 and Table 1), washed as described and coated with Au/Pd (60:40) before SEM observation. Insets show a rendering of each respective geometry for reference. Capped hollow cone inset shown as quarter cut-through for clarity. Scale bars, 250 µm.

One significant benefit of using the r2rCLIP method for particle fabrication is its inherent mouldless process, which enables changing of fabricated geometries within or between arrays based solely on optimized printing parameters. This means that a wide variety of particle geometries can be produced without needing to alter the setup, as would be necessary with previous particle fabrication methods (for example, mould interchange). This flexibility is particularly beneficial when needing to adjust geometric requirements, such as when fabricating precise ratios of heterogeneous mixtures of polydisperse particles (Fig. 1d).

To demonstrate the scalability afforded by r2rCLIP, we fabricated approximately 30,000 hollow cube-shaped particles of 200 µm width with high reproducibility (Fig. 1c; 96 ± 1% fabrication success rate, n = 300; −10 ± 20% average relative error from nominal strut feature size, n = 300). Whereas fabrication of an optimized particle array (up to 16.4 mm² at 2 µm resolution or 147.5 mm² at 6 µm resolution) takes less than a minute, gram-scale production (thousands to millions of particles) necessitates the removal of time-consuming manual manipulation steps. Previously, the slow step of particle production was the manual replacement of the build substrate (requiring 4 ± 2 min of manual manipulation between high-resolution CLIP print jobs, n = 6,436; Supplementary Note 5). Replacing this manual step with mechanical substrate translation shifts the rate-limiting step to particle fabrication time—an inherent advantage of the r2rCLIP technique. For instance, fabrication of 1 million 200-µm-unit octahedrons (equal to approximately 1.4 g) would require just over 1 day at the demonstrated array print duration of 38 s with a 26 s interprint delay (Supplementary Note 6). The r2rCLIP platform thus opens up new design applications for particle fabrication across a wide range of accessible geometries, materials and batch sizes. r2rCLIP is a modular process that can be adapted to include additional steps in series, such as coating, filling or sterilization, as well as additional postharvesting treatments such as devolatilization, electroless deposition or functionalization. The high throughput of r2rCLIP has direct implications for industrial-scale production of microdevices such as microrobots and cargo-delivery systems.
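
As a rough sanity check on that throughput estimate, the arithmetic can be sketched as follows; the particles-per-array value is an assumption back-calculated from the quoted figure of just over one day, not a number reported here.

```python
# Back-of-the-envelope r2rCLIP throughput estimate. The per-array print time and
# inter-print delay are quoted in the text; the particles-per-array count is an
# assumed value chosen to reproduce the stated ~1-day figure.
PRINT_TIME_S = 38          # per-array print duration (from the text)
INTERPRINT_DELAY_S = 26    # delay between consecutive arrays (from the text)
PARTICLES_PER_ARRAY = 700  # assumption for 200-µm octahedrons on the build area

target = 1_000_000
arrays = -(-target // PARTICLES_PER_ARRAY)  # ceiling division
seconds = arrays * (PRINT_TIME_S + INTERPRINT_DELAY_S)

print(f"{arrays} arrays, ~{seconds / 3600:.1f} h (~{seconds / 86400:.2f} days)")
# With these assumptions: 1429 arrays, ~25.4 h (~1.06 days).
```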

As an example, this system is amenable to the production of ceramic materials. Preceramic resins can be used to mass-produce technical ceramic particles, with potential applications as slurry components in chemical mechanical planarization, as conductive particles, and in microtools, microelectromechanical systems or waveguides, enabling industrial applications such as electronics, telecommunications and healthcare13. As a demonstration, we created 200 µm particles from an HDDA–preceramic mix and pyrolysed them in nitrogen at 800 °C to produce 103 µm hollow ceramic particles with a feature size of 25 µm (Fig. 4a). Energy-dispersive X-ray spectroscopy (EDS) analysis of these particles showed a uniform composition distribution of O, Si and C (Fig. 4b). With subsequent annealing up to 1,400 °C in nitrogen, phases including Si3N4 and SiO2 can be achieved, depending on the precursor material and processing conditions (Extended Data Fig. 3 and Supplementary Note 7). Future research can investigate the effectiveness of this process with different preceramic formulations and explore their potential applications.

Fig. 4: Particles fabricated via r2rCLIP enable a range of applications including ceramic particles and drug delivery.

a, Hollow ceramic cubes formed from pyrolysis of HDDA–ceramic mix resin. b, EDS analysis of the surface of a hollow ceramic cube (top left) showing uniform distribution of silicon and oxygen, quantified as 30 ± 1% silicon, 35 ± 1% oxygen and 35 ± 2% carbon by normalized mass. Elemental distribution of O, Si and C (top right, bottom left and bottom right, respectively) overlaid on secondary electron image of the hollow cubes. c,d, Drug-delivery cubes may be designed to meet the goals of payload volume, release profile, material and so on (c) and fabricated via r2rCLIP (d) (PEGDMA550 material, for example). e,f, Devices may be then filled, as demonstrated with trypan blue dye for visualization (e), and subsequently capped (f). Scale bars, 100 µm (a), 5 µm (b, top left), 100 µm (b, other three images), 3 mm (d,e), 200 µm (f).

One further application enabled by r2rCLIP is the creation of hydrogel particles, which can be used as drug-delivery vessels. These particles can be filled to achieve adjustable, gradient or pulsatile-release profiles in a single injection, as previously demonstrated for the SEAL process4,45,46. Previous studies have explored the development of suitable photopolymer resin systems and the impact of materials biocompatibility, cytotoxicity, shape and size on localization and delivery, enabling the creation of bioscaffolds and delivery manifolds5,15,23,25,28,45,46,47,48,49. This opens new possibilities for the fabrication of hydrogel particles for drug delivery, but such work has lacked a permutable, scalable fabrication process. As a proof of concept, we fabricated hydrogel cubes of 400 µm unit size, manually filled them with around 8 nl of representative cargo postprinting and subsequently topped them with a hydrogel cap (Fig. 4c). Future research can build on previous studies of drug-delivery vehicle kinetics, leveraging the adjustable properties of molecular weight and wall thickness to achieve a programmable palette of cargo release.

Furthermore, amine-functionalized polymer end groups could be added to facilitate postfunctionalization with fluorophores, enabling the potential to integrate single-particle, one-pot analytical techniques to localize signal for better detection. Smaller unit scale geometries and additional materials such as metals may even be achieved through thermal conversion postprocessing that could lead to roughly 70% reduction in feature size50, which would bring our current xy resolution onto the nanometre scale. Future system improvement work can explore print and speed optimization, soluble film coatings, cleaning and particle-harvesting methods.

The mechanical and material versatility, ranging from hard ceramics to soft hydrogels, could support the creation of Janus particle properties and smart materials and aid in fundamental studies in materials and granular physics. Although the system requires a photopolymerizable component, it can accommodate weak, green-state particles enabling mixed, dual-curing systems containing a non-photopolymerizable component addressed in postprocessing. This flexibility allows for tunable particle materials properties dependent on the resin system, enabling a variety of particles with different mechanical properties to meet application requirements.

Herein we present a new roll-to-roll, high-resolution, continuous liquid interface production technique capable of mass production of particles up to 200 µm in size at feature resolution down to 2.0 µm. Optimization of both the printer optics and the resin enables printing of objects with unsupported z resolution in the single-digit-micron range. Rapid permutability, complex 3D fabrication capabilities and inherent amenability to a wide variety of resin chemistries are demonstrated in the fabrication of mouldable, multistep-mouldable and non-mouldable particle geometries. Moreover, rapid particle production enables gram-scale potential yield within around 24 h for sub-200-µm units. This scalable particle production technique has demonstrated fabrication potential over a wide range of materials, from ceramics to hydrogel manifolds, with subsequent potential applications in microtools, electronics and drug delivery.


The best docking stations for laptops in 2024


Depending on how much stuff you need to plug in, your laptop may not have enough ports to support it all — particularly if you have more wired accessories than Bluetooth ones. Docking stations add different combinations of Ethernet, HDMI, DisplayPort, 3.5mm, memory card and USB connections and, unlike simple hubs, are often DC-powered. For those who switch up their working location regularly, a docking station can make it easier to swap between a fully-connected desk setup and a simple laptop, since just one port links your computer to the dock. Which docking station you should get depends in part on what you want to plug in, but sifting through the hundreds of models out there can be tough. We tried out a dozen different options to help you narrow down the best docking station for your needs.

What to look for in a docking station

First and foremost, consider what you need to plug in. This will likely be the deciding factor when you go to actually buy a docking station. Do you need three screens for an expanded work view? A quick way to upload photos from an SD card? Are you looking to plug in a webcam, mic and streaming light, while simultaneously taking advantage of faster Ethernet connections? Once you’ve settled on the type of ports you need, you may also want to consider the generation of those ports as well; even ports with the same shape can have different capabilities. Here’s a brief overview of the connectivity different docking stations offer.

Monitor ports

External monitors typically need one of three ports to connect to a PC: HDMI, DisplayPort or USB-C. HDMI connections are more common than DisplayPort and the cables and devices that use them are sometimes more affordable. The most popular version of the DisplayPort interface (v1.4) can handle higher resolutions and refresh rates than the most common HDMI version (2.0). All of the docking stations with HDMI sockets that we recommend here use version 2.0, which can handle 4K resolution at 60Hz or 1080p up to 240Hz. The DisplayPort-enabled docks support either version 1.2, which allows for 4K resolution at 60Hz, or version 1.4, which can handle 8K at 60Hz or 4K at 120Hz.

You can also use your dock’s downstream (non-host) Thunderbolt ports to hook up your monitors. If your external display has a USB-C socket, you can connect directly. If you have an HDMI or DisplayPort-only monitor, you can use an adapter or a conversion cable.

Of course, the number of monitors you can connect and the resolutions/rates they’ll achieve depend on both your computer’s GPU and your monitors — and plugging in more monitors can bring those numbers down as well. Be sure to also use cables that support the bandwidth you’re hoping for. macOS users should keep in mind that MacBooks with the standard M1 or M2 chips support just one external monitor natively and require DisplayLink hardware and software to support two external displays. MacBooks with M1 Pro, M2 Pro or M2 Max chips can run multiple monitors from a single port.

USB ports

Most docking stations offer a few USB Type-A ports, which are great for peripherals like wired mice and keyboards, bus-powered ring lights and flash drives. For faster data transfer speeds to your flash drive, go for USB-A sockets labeled 3.1 or 3.2 — or better yet, use a USB-C Thunderbolt port.

Type-C USB ports come in many different flavors. The Thunderbolt 3, 4 and USB4 protocols are newer, more capable specifications that support power delivery of up to 100W, multiple 4K displays and data transfer speeds of up to 40Gbps. Other USB-C ports come in a range of versions, with some supporting video, data and power and some only able to manage data and power. Transfer rates and wattages can vary from port to port, but most docks list the wattage or GB/s on either the dock itself or on the product page. And again, achieving the fastest speeds will depend on factors like the cables you use and the devices you’re transferring data to.

Nearly every dock available today connects to a computer via USB-C, often Thunderbolt, and those host ports are nearly always labeled with a laptop icon. They also allow power delivery to your laptop: available wattage varies, but most docks are rated between 85 and 100 watts. That should be enough to keep most computers powered — and it also means you won’t have to take up an extra laptop connector for charging.

Other ports

None of our currently recommended laptops include an Ethernet jack; a docking station is a great way to get that connection back. We all know objectively that wired internet is faster than Wi-Fi, but it might take running a basic speed comparison test to really get it on a gut level. For reference, on Wi-Fi I get about a 45 megabit-per-second download speed. Over Ethernet, it’s 925 Mbps. If you pay for a high-speed plan, but only ever connect wirelessly, you’re probably leaving a lot of bandwidth on the table. Every docking station I tested includes an Ethernet port, and it could be the connector you end up getting the most use out of.

Just two of our favorite laptops have SD card readers, and if you need a quick way to upload files from cameras or audio recorders, you may want to get a dock with one of those slots. Of the docks we tested, about half had SD readers. For now, most (but not all) laptops still include a 3.5mm audio jack, but if you prefer wired headphones and want a more accessible place to plug them in, many docking stations will provide.

When you’re counting up the ports for your new dock, remember that most companies include the host port (the one that connects to your computer) in the total number. So if you’re looking for a dock with three Thunderbolt connections, be sure to check whether one of them will be used to plug in your laptop.

The CalDigit TS4 stands upright on a desk, ports clearly visible. Photo by Amy Skorheim / Engadget

Design

Most docking stations have either a lay-flat or upright design. Most docks put the more “permanent” connections in back — such as Ethernet, DC power, monitor connections and a few USBs. Up-front USB ports can be used for flash drive transfers, or even plugging in your phone for a charge (just make sure the port can deliver the power you need). USBs in the rear are best for keyboards, mice, webcams and other things you’re likely to always use. Some docks position the host port up front, which might make it easier to plug in your laptop when you return to your desk, but a host port in back may look neater overall.

How we tested

We started out by looking at online reviews, spec sheets from various brands and docking stations that our fellow tech sites have covered. We considered brands we’ve tested before and have liked, and weeded out anything that didn’t have what we consider a modern suite of connections (such as a dock with no downstream USB-C ports). We narrowed it down to 12 contenders and I tested each dock on an M1 MacBook Pro, a Dell XPS 13 Plus and an Acer Chromebook Spin 514. I plugged in and evaluated the quality of the connections for 12 different peripherals including a 4K and an HD monitor, a 4K and an HD webcam, plus USB devices like a mouse, keyboard, streaming light and mic. I plugged in wired earbuds, and transferred data to a USB-C flash drive and an external SSD. I ran basic speed tests on the Ethernet connections as well as the file transfers. I judged how easy the docks were to use as well as the various design factors I described earlier. I made spreadsheets and had enough wires snaking around my work area that my cat stayed off my desk for three weeks (a new record).

Satechi Dual Dock Stand (Photo by Amy Skorheim / Engadget)

Host connection: 2 x USB-C | Power delivery to host: 75W (USB-C) | USB-C: 1 x USB 3.0, 1 x 3.1 | USB-A: 2 | Monitor: 2 x HDMI 2.0, 1 x DisplayPort 1.4 | Aux 3.5mm: No | SD Card: No

The Satechi Dual Dock Stand is different from all the other docks we tested in two respects: it doesn’t require a power source and it goes beneath your MacBook instead of beside it. You could almost classify it as a hub, but I think the high number of ports earns it docking-station status. It plugs into the two USB-C ports at the side of a Mac, which allows MacBooks with M1, M2 or M3 Pro or Max chips to operate two external monitors in extended mode. Unfortunately, MacBooks with standard M1 or M2 chips can natively only power a second external display in mirrored mode. The new MacBook Air with the M3 chip can only power two displays in extended mode with the laptop lid closed. If you have a Mac with a standard chip and need two monitors, you’ll need a docking station that supports DisplayLink hardware and software, such as the Kensington SD4780P, which is our top pick for Chromebooks.

Since the Dual Dock works without power, it’s a lot easier to set up than other docks with transformer boxes and DC cables. I found it made the most sense to just use the MagSafe connector on the laptop, but you can also supply power to the dock using the non-data USB-C port and it will pass 75 watts to your machine.

Both the 4K and HD monitors I tested looked great and worked well in extended mode. There are two USB-Cs for a webcam and mic, plus two USB-As which could be used for a dongle mouse and a streaming light — that’s likely enough ports for conferencing or even a basic video creator setup. The dock is ultimately limited by the fact that none of the USB-C connections are Thunderbolt and there are only two USB-A sockets to work with. But it’s a great choice for extending productivity in a way that tucks beneath a MacBook, neatly moving the cords to the back of the machine and out of the way.

Pros

  • Unique design complements MacBooks
  • Can power two monitors on Macs with M1 Pro or M2 Pro chips
  • Good variety of ports
Cons

  • Just two USB-A ports
  • No Thunderbolt ports

$130 at Adorama

Kensington AD2010T4 Thunderbolt 4 Dual 4K Docking Station (Photo by Amy Skorheim / Engadget)

Host connection: Thunderbolt 4 | Max power delivery to host: 96W (DC) | USB-C: 1 x TB4, 1 x 3.2 | USB-A: 4 | Monitor: 2 x HDMI 2.0 | Aux 3.5mm: Yes | SD Card: SD and microSD

For those who want the extra speed and connectivity of the latest Thunderbolt interface, I recommend Kensington’s AD2010T4 Thunderbolt 4 Dual 4K Docking Station. Of all the TB4 docking stations tested, the AD2010 is the only one under $300, yet it performed on par with the others and even offered a better selection of ports than some of them. It gives you two Thunderbolt 4 connections, one for the host and one for accessories, plus an additional USB-C 3.2 port. Dual HDMI 2.0 sockets can handle two external screens with up to 4K resolution (at 60Hz). But if you need a third monitor or have an 8K screen, you can tap into the Thunderbolt port.

There’s a total of four USB-As, which is enough for a wired mouse or keyboard and a couple other peripherals. It has an SD and a microSD card slot, a 3.5mm audio combo jack and an Ethernet jack. There are even two Kensington lock slots that let you physically secure your dock with a cable.

The device itself has a solid feel and an attractive metal design. My only gripe is with the lay-flat orientation and that nearly half of the ports are on the front edge — I think upright docks that keep most connections around back have an overall neater look on a desk. However, I should point out that Kensington sells mounts for its docks, which could help with aesthetics. 

Pros

  • Competitively priced
  • Powerful downstream TB4 port
  • Plenty of USB-A connections
Cons

  • Most ports are up front
  • Lay-flat design can be a space hog without a mount

$200 at Amazon

CalDigit TS4 (Photo by Amy Skorheim / Engadget)

Host connection: Thunderbolt 4 | Power delivery to host: 98W (DC) | USB-C: 2 x TB4, 3 x 3.2 | USB-A: 5 | Monitor: 1 x DisplayPort 1.4 | Aux 3.5mm: 1 x audio combo, 1 x audio in, 1 x audio out | SD Card: SD and microSD

There’s a lot to appreciate about CalDigit’s TS4 docking station: It has a sturdy, upright design with a host connection at the rear and a whopping five downstream USB-C ports, two of which are Thunderbolt 4. Up front, you get an SD and a microSD card slot along with a headphone jack, two USB-C and a USB-A connector. In back, there’s room for four more USB-A devices and two 3.5mm jacks, one for audio in and one for audio out. One area where the dock may feel lacking is in display inputs. It only has one DisplayPort 1.4, but it has plenty of TB4 ports, which you can easily use to outfit a full command center (if you don’t have a USB-C monitor, there are plenty of adapters).

The multi-gig Ethernet jack can handle up to 2.5Gbps, so if you’re paying for a screaming-fast internet plan, this dock can help you take advantage of it. The TS4 can deliver up to 98W of power to your laptop, though like any docking station, the wattage goes down when other items are also drawing power.

The TS4 worked equally well with my MacBook Pro and the Dell XPS 13 Plus, and was even compatible with a Chromebook. I tested read/write speeds on a Samsung T7 SSD via a Thunderbolt port and got 734 MB/s read and 655 MB/s write speeds on the Mac, and 1,048/994 MB/s on the Dell. Compared to the other docks, that was in the lower-middle range for the Mac and the fastest overall for the PC. On the PC, it also handled a 1GB folder transfer to a flash drive faster than any other dock and delivered the fastest connection speeds over Ethernet. It’s the only unit that let me plug in every single peripheral I had on hand at once. If you’ve got lots of tech you want to use simultaneously (and money isn’t a concern), this is the one to get.

Pros

  • An abundance of ports
  • Compact, upright design
  • 2.5Gbps Ethernet port

$400 at Amazon

Kensington SD4780P Dual 4K Docking Station (Photo by Amy Skorheim / Engadget)

Host connection: USB-C | Power delivery to host: 100W (DC) | USB-C: 1 x 3.1 | USB-A: 5 | Monitor: 2 x HDMI 2.0, 2 x DisplayPort 1.2 | Aux 3.5mm: 1 x audio combo | SD Card: No

The Kensington SD4780P Dual 4K typically requires a DisplayLink driver, but any Chromebook made after 2017 supports the connection from the jump. Finding a docking station that works with ChromeOS is tough; of the 12 units I tested, only four connected at all with the Acer Chromebook Spin 514, and one of those four couldn’t run two monitors. The SD4780P uses a USB-C host connection, through which it offers a maximum power delivery of 100W and was able to run both the 4K and HD screens cleanly.

It allows for a wide range of USB-A peripherals through five such ports, but there’s only a single downstream USB-C, so I wasn’t able to use both a webcam and mic at the same time. That means you’ll need to use your Chromebook’s built-in ports if you want more than one of those types of devices set up. The plastic build makes it look a little cheap and I’m not crazy about the lay-flat design, but the host port is in the back, which will make your setup neater. If all you’re looking for is a way to get a few extra monitors and use your wired USB accessories, this is a good pick for Chromebooks. 

Pros

  • Works well with Chromebooks
  • Five USB-A ports
Cons

  • Requires a driver for non-Chromebook laptops
  • Just one downstream USB-C

$199 at Amazon

Other docking stations we tested

Plugable TBT4-UDZ

When I pulled the Plugable TBT4-UDZ Thunderbolt 4 out of the box, I was convinced it would make the cut: It has a practical upright design, an attractive metal finish, and the host connection is TB4. While there are plenty of USB-A and monitor ports, there’s just one downstream USB-C. A modern dock, particularly one that costs $300, should let you run, say, a USB-C cam and mic at the same time. Otherwise, it’s pretty limiting.

Anker 575 USB-C

At $250 (and more often $235), the Anker 575 USB-C could make for a good budget pick for Windows. It performed well with the Dell XPS 13 Plus, but had trouble with the third screen, the 4K webcam and headphone jack when connected to the MacBook Pro. It’s quite compact, which means it can get wobbly when a bunch of cables are plugged in, but it has a good selection of ports and was able to handle my basic setup well.

Belkin Connect Pro Thunderbolt 4

Belkin’s Connect Pro Thunderbolt 4 Dock is a contender for a Thunderbolt 4 alternative. It has nearly the same ports as the AD2010 (minus the microSD slot) and an attractive rounded design — but it’s $90 more, so I’d only recommend getting it if you find it on sale.

Acer USB Type-C Dock

Acer’s USB Type-C Dock D501 costs $10 more than our Kensington pick for Chromebooks, but it performs similarly and is worth a mention. It has nearly the same ports (including the rather limiting single downstream USB-C) but both the Ethernet and data transfer speeds were faster.

FAQs

Are docking stations worth it?

Docking stations are worth it if you have more accessories to plug in than your laptop permits. Say you have a USB-C camera and mic, plus a USB-A mouse, keyboard and streaming light; very few modern laptops have enough connections to support all of that at once. A docking station can make that setup feasible while also giving you extra ports like an Ethernet connection, and supplying power to your laptop. However, if you just need a few extra USB sockets, you might be better off going with a hub, as those tend to be cheaper.

How much does a laptop dock cost?

Laptop docking stations tend to be bigger and more expensive than simple USB-A or USB-C hubs, thanks to the wider array of connections. You can find them as low as $50 and they can get as expensive as $450. A reasonable price for a dock with a good selection of ports from a reputable brand will average around $200.

How do I set up my laptop dock?

Most docking stations are plug and play. First, connect the DC power cable to the dock and a wall outlet. Then look for the “host” or upstream port on the dock — it’s almost always a USB-C/Thunderbolt port and often branded with an icon of a laptop. Use the provided cable to connect to your computer. After that, you can connect your peripherals to the dock and they should be ready to use with your laptop. A few docking stations, particularly those that handle more complex monitor setups, require a driver. The instructions that come with your dock will point you to a website where you can download that companion software.

Does a laptop charge on a docking station?

Nearly all docking stations allow you to charge your laptop through the host connection (the cable running from the dock to your computer). That capability, plus the higher number of ports, is what separates a docking station from a hub. Docks can pass between 65W and 100W of power to laptops, and nearly all include a DC adapter.

Are all docking stations compatible with all laptops?

No, not all docking stations are compatible with every laptop. In our tests, the Chromebook had the biggest compatibility issues, the Dell PC had the fewest, and the MacBook fell somewhere in between. All docks list which brands and models they work with on their online product pages — be sure to also check the generation of your laptop, as some docks can’t support certain chips.

What are some popular docking station brands?

Kensington, Anker, Plugable and Belkin are reputable and well-known brands making docking stations for all laptops. Lenovo, Dell and HP all make docks that will work with their own computers as well as other brands.


Did ‘alien’ debris hit Earth? Startling claim sparks row at scientific meeting


Avi Loeb and his team say that metallic balls found near Papua New Guinea, such as the roughly 200-micrometre sphere seen in this electron microprobe image, could be of extraterrestrial origin. Credit: Avi Loeb’s photo collection

The Woodlands, Texas

A sensational claim made last year that an ‘alien’ meteorite hit Earth near Papua New Guinea in 2014 got its first in-person airing with the broader scientific community on 12 March. At the Lunar and Planetary Science Conference in The Woodlands, Texas, scientists clashed over whether a research team has indeed found fragments of a space rock that came from outside the Solar System.

The debate occurred at a packed session featuring Hairuo Fu, a graduate student at Harvard University in Cambridge, Massachusetts, who is a member of the team that found the fragments. Team leader Avi Loeb, an astrophysicist at Harvard who did not attend the conference, has made other controversial claims about extraterrestrial discoveries. Many scientists have said that they don’t want to spend much of their time analysing and refuting these claims.

During his presentation, Fu described tiny metallic blobs that Loeb’s expedition dredged from the sea floor near Papua New Guinea last year, and said that the spherules have a chemical composition of unknown origin1. He then faced questions from a long line of scientists sceptical of the claim that the material is extraterrestrial. “At the very least, it is something different from what we know,” Fu responded.

New work questions the team’s findings. In a manuscript posted on the arXiv preprint server on 8 March2, ahead of peer review, a researcher argues that the debris collected by Loeb and his co-workers is actually molten blobs generated when an asteroid hit Earth 788,000 years ago.

“What they found has all the characteristics of microtektites — little pieces of melted Earth that came from this impact,” says preprint author Steve Desch, an astrophysicist at Arizona State University in Tempe.

Meanwhile, other studies are challenging different aspects of Loeb’s claim, such as whether the meteor that reportedly produced the fragments was on the trajectory Loeb says it was. Together, the findings show how the broader scientific community is engaging with Loeb’s extraterrestrial claims, in spite of reluctance to do so.

A unique find?

‘Interstellar’ objects remained in the realm of theory until 2017, when astronomers spotted the first known celestial object to be on a trajectory that meant it could only have come from outside the Solar System. Loeb made headlines when he speculated that the object, a comet-like body named ‘Oumuamua, was an artefact sent by an extraterrestrial civilization.

‘Oumuamua passed through the Solar System far from Earth, but Loeb hoped to find another interstellar object that had hit the planet. He later proposed that a bright meteor that appeared in the sky north of Papua New Guinea in January 2014 had an interstellar trajectory and could have scattered debris in the ocean.

Avi Loeb (in hat) and colleagues recover particles from a magnetic sledge on their 2023 expedition. Credit: Avi Loeb’s photo collection

In June 2023, Loeb led a privately funded expedition to the site that used magnetic sledges to recover more than 800 metallic spherules from the sea floor. About one-quarter of the spherules had chemical compositions indicating that they came from igneous, or once-molten, rocks. Of those, a handful were unusually enriched in the elements beryllium, lanthanum and uranium. The researchers concluded that those spherules are unlike any known materials in the Solar System1.

However, Desch counters that the spherules could have come from an asteroid impact in southeast Asia. Key to his proposal2 is a kind of soil called laterite, which forms in tropical regions when heavy rainfall carries some chemical elements from the topmost layers of soil into deeper ones. This leaves the upper soil enriched in other elements, including beryllium, lanthanum and uranium — similar to the composition of the spherules collected by Loeb and his colleagues. Desch says that an asteroid known to have struck the region around 788,000 years ago3 probably hit lateritic rock and created the molten blobs found by Loeb’s team.

In an e-mail to Nature, Loeb argues that spherules from an impact 788,000 years ago should have been buried by ocean sediments. Desch counters that sedimentation rates are relatively low in the offshore area where the spherules were collected.

But others are sceptical of Desch’s proposal, too. Scientists have yet to find any confirmed tektites from lateritic rock, notes Pierre Rochette, a geoscientist at Aix-Marseille University in Aix-en-Provence, France, who is not affiliated with either team. And very few tektites are magnetic, he says, so it would be difficult for Loeb and his colleagues to have pulled up hundreds from the sea floor.

Fiery critiques

Desch was not the only scientist to challenge Loeb’s work this week.

After Fu’s conference presentation, Ben Fernando, a seismologist at Johns Hopkins University in Baltimore, Maryland, spoke and took aim at claims concerning the 2014 meteor. Fernando and his colleagues, including Desch, analysed seismic and acoustic data gathered by ground-based sensors at the time the meteor hit the atmosphere4. Data from a seismometer on nearby Manus Island, which Loeb and his team studied as they were deciding where to dredge, show no characteristics of a high-altitude fireball — but do indicate a vehicle driving past, Fernando said. “This is almost certainly a truck,” he told the meeting. A second set of observations, made using infrasound sensors that listen for clandestine nuclear tests, seems to have detected the meteor hitting the atmosphere, but suggests it happened around 170 kilometres away from where Loeb’s team calculates.

Loeb told Nature that such critiques do not take into account US Department of Defense data that he says confirm the exact trajectory of that fireball. But because those data are held by the government, they have not been independently cross-checked by other scientists.

As conference-goers poured out of the room after his talk, Fu told Nature that Loeb’s team is working on further analyses, such as isotopic studies, that could shed more light on what the spherules are. After that, Fu said, he is looking forward to graduating and working on a new project — on how the Moon was formed.


YouTube’s refreshed TV UI makes video watching more engaging for users


YouTube is redesigning its smart TV app to increase interactivity between people and their favorite channels.

In a recent blog post, YouTube described how the updated UI shrinks the main video slightly to make room for an information column housing the video’s view count, number of likes, description and comments. Yes, despite the internet’s advice, people do read the YouTube comments section. The current layout has the same column, but it obscures the right side of the screen. YouTube says in its announcement that the redesign allows users to enjoy content “without interrupting [or ruining] the viewing experience.”


How AI is being used to accelerate clinical trials



For decades, computing power followed Moore’s law, advancing at a predictable pace. The number of components on an integrated circuit doubled roughly every two years. In 2012, researchers coined the term Eroom’s law (Moore spelled backwards) to describe the contrasting path of drug development1. Over the previous 60 years, the number of drugs approved in the United States per billion dollars in R&D spending had halved every nine years. It can now take more than a billion dollars in funding and a decade of work to bring one new medication to market. Half of that time and money is spent on clinical trials, which are growing larger and more complex. And only one in seven drugs that enters phase I trials is eventually approved.

Some researchers are hoping that the fruits of Moore’s law can help to curtail Eroom’s law. Artificial intelligence (AI) has already been used to make strong inroads into the early stages of drug discovery, assisting in the search for suitable disease targets and new molecule designs. Now scientists are starting to use AI to manage clinical trials, including the tasks of writing protocols, recruiting patients and analysing data.

Reforming clinical research is “a big topic of interest in the industry”, says Lisa Moneymaker, the chief technology officer and chief product officer at Saama, a software company in Campbell, California, that uses AI to help organizations automate parts of clinical trials. “In terms of applications,” she says, “it’s like a kid in a candy store.”

Trial by design

The first step of the clinical-trials process is trial design. What dosages of drugs should be given? To how many patients? What data should be collected on them? The lab of Jimeng Sun, a computer scientist at the University of Illinois Urbana-Champaign, developed an algorithm called HINT (hierarchical interaction network) that can predict whether a trial will succeed, based on the drug molecule, target disease and patient eligibility criteria. They followed up with a system called SPOT (sequential predictive modelling of clinical trial outcome) that additionally takes into account when the trials in its training data took place and weighs more recent trials more heavily. Based on the predicted outcome, pharmaceutical companies might decide to alter a trial design, or try a different drug completely.
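
The published HINT and SPOT models encode drug molecules, disease codes and eligibility text with neural networks, but the underlying idea, training a classifier on trial features to predict whether the trial will meet its endpoint, can be sketched far more simply. The features and data below are hypothetical stand-ins, not the authors' inputs.

```python
# Minimal sketch of trial-outcome prediction in the spirit of HINT/SPOT.
# The real systems learn from molecules, disease codes and criteria text;
# here a few hypothetical tabular features stand in for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials = 500

# Hypothetical features: trial phase, planned enrolment, number of
# eligibility criteria, and a target-engagement score for the drug.
X = np.column_stack([
    rng.integers(1, 4, n_trials),          # phase (1 to 3)
    rng.integers(20, 2000, n_trials),      # planned enrolment
    rng.integers(5, 40, n_trials),         # number of eligibility criteria
    rng.random(n_trials),                  # target-engagement score
])
y = rng.integers(0, 2, n_trials)           # 1 = trial met its primary endpoint

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

SPOT's extra step of favouring recent trials could be approximated in this setting by passing recency-based sample weights to the classifier's fit call.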

A company called Intelligent Medical Objects in Rosemont, Illinois, has developed SEETrials, a method for prompting OpenAI’s large language model GPT-4 to extract safety and efficacy information from the abstracts of clinical trials. This enables trial designers to quickly see how other researchers have designed trials and what the outcomes have been. The lab of Michael Snyder, a geneticist at Stanford University in California, developed a tool last year called CliniDigest that simultaneously summarizes dozens of records from ClinicalTrials.gov, the main US registry for medical trials, adding references to the unified summary. They’ve used it to summarize how clinical researchers are using wearables such as smartwatches, sleep trackers and glucose monitors to gather patient data. “I’ve had conversations with plenty of practitioners who see wearables’ potential in trials, but do not know how to use them for highest impact,” says Alexander Rosenberg Johansen, a computer-science student in Snyder’s lab. “Best practice does not exist yet, as the field is moving so fast.”
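
The exact SEETrials prompts are not reproduced here, but the general pattern, asking a large language model to return structured safety and efficacy fields from an abstract, can be sketched as below. The JSON schema is an assumption for illustration; the model name follows the GPT-4 usage described in the article.

```python
# Sketch of prompting an LLM to pull safety/efficacy fields out of a trial
# abstract, in the spirit of SEETrials. The field schema is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

abstract = "..."  # a clinical-trial abstract supplied by the user

prompt = (
    "Extract the following from the clinical-trial abstract below and "
    "return JSON with keys: intervention, sample_size, primary_endpoint, "
    "efficacy_result, grade_3_plus_adverse_events.\n\n" + abstract
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```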

Most eligible

The most time-consuming part of a clinical trial is recruiting patients, which can take up to one-third of the study length. One in five trials doesn't even recruit the required number of people, and nearly all trials exceed their expected recruitment timelines. Some researchers would like to accelerate the process by relaxing some of the eligibility criteria while maintaining safety. A group at Stanford led by James Zou, a biomedical data scientist, developed a system called Trial Pathfinder that analyses a set of completed clinical trials and assesses how adjusting the criteria for participation, such as thresholds for blood pressure and lymphocyte counts, affects hazard ratios, or rates of negative incidents such as serious illness or death among patients. In one study2, they applied it to drug trials for a type of lung cancer. They found that adjusting the criteria as suggested by Trial Pathfinder would have doubled the number of eligible patients without increasing the hazard ratio. The study showed that the system also worked for other types of cancer, and that it actually reduced harmful outcomes because it made sicker people, who had more to gain from the drugs, eligible for treatment.
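
Trial Pathfinder itself emulates full trial protocols on real-world oncology data, but its core loop, relaxing a threshold and checking how the eligible population and the hazard ratio change, can be sketched with pandas and the lifelines survival library. The column names, cut-offs and simulated data below are hypothetical.

```python
# Sketch of Trial Pathfinder-style criteria relaxation on a patient table.
# Columns, cut-offs and data are hypothetical; the published tool works on
# real-world survival records rather than simulated values.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
patients = pd.DataFrame({
    "lymphocytes": rng.normal(1.5, 0.6, n),     # 10^9 cells per litre
    "systolic_bp": rng.normal(130, 15, n),
    "treated": rng.integers(0, 2, n),
    "months": rng.exponential(24, n),           # follow-up time
    "died": rng.integers(0, 2, n),              # event indicator
})

def evaluate(min_lymph, max_bp):
    """Return cohort size and treatment hazard ratio under given cut-offs."""
    cohort = patients[(patients.lymphocytes >= min_lymph) &
                      (patients.systolic_bp <= max_bp)]
    cph = CoxPHFitter().fit(cohort[["treated", "months", "died"]],
                            duration_col="months", event_col="died")
    return len(cohort), float(np.exp(cph.params_["treated"]))

for cutoffs in [(1.0, 140), (0.5, 160)]:        # strict vs relaxed criteria
    n_eligible, hr = evaluate(*cutoffs)
    print(cutoffs, "eligible:", n_eligible, "hazard ratio:", round(hr, 2))
```

The point of the loop is simply that each candidate relaxation is scored on both cohort size and safety before anyone changes the protocol.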

Chart: Number of drugs developed by companies based in six selected countries that made it from phase I clinical trials to regulatory submission, 2007 to 2022. Sources: IQVIA Pipeline Intelligence (Dec. 2022)/IQVIA Institute (Jan. 2023)

AI can eliminate some of the guesswork and manual labour from optimizing eligibility criteria. Zou says that sometimes even teams working at the same company and studying the same disease can come up with different criteria for a trial. But now several firms, including Roche, Genentech and AstraZeneca, are using Trial Pathfinder. More recent work from Sun’s lab in Illinois has produced AutoTrial, a method for training a large language model so that a user can provide a trial description and ask it to generate an appropriate criterion range for, say, body mass index.

Once researchers have settled on eligibility criteria, they must find eligible patients. The lab of Chunhua Weng, a biomedical informatician at Columbia University in New York City (who has also worked on optimizing eligibility criteria), has developed Criteria2Query. Through a web-based interface, users can type inclusion and exclusion criteria in natural language, or enter a trial’s identification number, and the program turns the eligibility criteria into a formal database query to find matching candidates in patient databases.
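
Criteria2Query combines parsing, terminology mapping and human review; a deliberately simplified sketch of its final step, turning structured criteria into a database query, is shown below. The table and column names are hypothetical, and the real system maps concepts to standard clinical data models rather than emitting raw SQL strings like this.

```python
# Simplified sketch of turning structured eligibility criteria into SQL,
# the final step of a Criteria2Query-style pipeline. Table and column names
# are hypothetical.
criteria = {
    "inclusion": [("age", ">=", 18), ("diagnosis", "=", "'NSCLC'")],
    "exclusion": [("pregnant", "=", "TRUE")],
}

def to_sql(criteria, table="patients"):
    include = " AND ".join(f"{c} {op} {v}" for c, op, v in criteria["inclusion"])
    exclude = " AND ".join(f"NOT ({c} {op} {v})" for c, op, v in criteria["exclusion"])
    where = " AND ".join(part for part in (include, exclude) if part)
    return f"SELECT patient_id FROM {table} WHERE {where};"

print(to_sql(criteria))
# SELECT patient_id FROM patients WHERE age >= 18 AND diagnosis = 'NSCLC' AND NOT (pregnant = TRUE);
```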

Weng has also developed methods to help patients look for trials. One system, called DQueST, has two parts. The first uses Criteria2Query to extract criteria from trial descriptions. The second part generates relevant questions for patients to help narrow down their search. Another system, TrialGPT, from Sun’s lab in collaboration with the US National Institutes of Health, is a method for prompting a large language model to find appropriate trials for a patient. Given a description of a patient and clinical trial, it first decides whether the patient fits each criterion in a trial and offers an explanation. It then aggregates these assessments into a trial-level score. It does this for many trials and ranks them for the patient.
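
In TrialGPT the criterion-level judgments and explanations come from a prompted language model; the aggregation and ranking step described above can be sketched independently of the model, with the per-criterion labels stubbed in. Everything below, including the scoring scheme, is illustrative rather than the published method.

```python
# Sketch of TrialGPT-style aggregation: per-criterion eligibility judgments
# (normally produced by a prompted LLM with explanations) are combined into
# a trial-level score, and trials are ranked for the patient.
SCORES = {"met": 1.0, "unclear": 0.5, "not met": 0.0}

def trial_score(criterion_judgments):
    """Average the per-criterion judgments into a single relevance score."""
    return sum(SCORES[j] for j in criterion_judgments) / len(criterion_judgments)

# Stubbed output of the criterion-level step for three candidate trials.
judgments = {
    "NCT-A": ["met", "met", "unclear"],
    "NCT-B": ["met", "not met", "not met"],
    "NCT-C": ["met", "met", "met", "unclear"],
}

ranked = sorted(judgments, key=lambda t: trial_score(judgments[t]), reverse=True)
for trial_id in ranked:
    print(trial_id, round(trial_score(judgments[trial_id]), 2))
```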

Helping researchers and patients find each other doesn’t just speed up clinical research. It also makes it more robust. Often trials unnecessarily exclude populations such as children, the elderly or people who are pregnant, but AI can find ways to include them. People with terminal cancer and those with rare diseases have an especially hard time finding trials to join. “These patients sometimes do more work than clinicians in diligently searching for trial opportunities,” Weng says. AI can help match them with relevant projects.

AI can also reduce the number of patients needed for a trial. A start-up called Unlearn in San Francisco, California, creates digital twins of patients in clinical trials. Based on an experimental patient’s data at the start of a trial, researchers can use the twin to predict how the same patient would have progressed in the control group and compare outcomes. This method typically reduces the number of control patients needed by between 20% and 50%, says Charles Fisher, Unlearn’s founder and chief executive. The company works with a number of small and large pharmaceutical companies. Fisher says digital twins benefit not only researchers, but also patients who enrol in trials, because they have a lower chance of receiving the placebo.
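
Unlearn's generative models are proprietary, but the basic idea, predicting a patient's untreated trajectory from baseline data and comparing it with the observed outcome on treatment, can be sketched with an off-the-shelf regressor. All variables and values below are hypothetical.

```python
# Sketch of the digital-twin idea: learn a model of untreated progression
# from historical control-arm data, then predict how a newly enrolled
# patient would have fared without treatment. Variables are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Historical control-arm data: baseline score and age -> 12-month outcome.
baseline = rng.normal(50, 10, size=(1000, 2))
outcome = baseline[:, 0] * 0.8 + rng.normal(0, 5, 1000)   # synthetic decline
twin_model = RandomForestRegressor(random_state=0).fit(baseline, outcome)

# A treated patient in the new trial: predict the counterfactual (twin)
# outcome and compare it with what was actually observed on treatment.
patient_baseline = np.array([[55.0, 62.0]])
predicted_untreated = twin_model.predict(patient_baseline)[0]
observed_on_treatment = 58.0
print("estimated treatment effect:", observed_on_treatment - predicted_untreated)
```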

Chart: Number of clinical-trial subjects by disease type, 2010 to 2022. Source: Citeline Trialtrove/IQVIA Institute (Jan. 2023)

Patient maintenance

The hurdles in clinical trials don’t end once patients enrol. Drop-out rates are high. In one analysis of 95 clinical trials, nearly 40% of patients stopped taking the prescribed medication in the first year. In a recent review article3, researchers at Novartis mentioned ways that AI can help. These include using past data to predict who is most likely to drop out so that clinicians can intervene, or using AI to analyse videos of patients taking their medication to ensure that doses are not missed.
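
The Novartis review does not publish code, but a dropout-risk model of the kind it describes could be sketched as a standard classifier over adherence signals, with the highest-risk participants flagged for a follow-up call. The features and data here are assumptions.

```python
# Sketch of predicting which participants are at risk of dropping out, so
# that site staff can intervene early. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 800
features = np.column_stack([
    rng.integers(0, 5, n),        # missed visits so far
    rng.integers(0, 10, n),       # reported side-effect events
    rng.integers(10, 200, n),     # distance to trial site (km)
])
dropped_out = rng.integers(0, 2, n)

model = LogisticRegression().fit(features, dropped_out)
risk = model.predict_proba(features)[:, 1]
flagged = np.argsort(risk)[::-1][:10]      # ten highest-risk participants
print("participants to contact first:", flagged)
```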

Chatbots can answer patients’ questions, whether during a study or in normal clinical practice. One study4 took questions and answers from Reddit’s AskDocs forum and gave the questions to ChatGPT. Health-care professionals preferred ChatGPT’s answers to the doctors’ answers nearly 80% of the time. In another study5, researchers created a tool called ChatDoctor by fine-tuning a large language model (Meta’s LLaMA-7B) on patient-doctor dialogues and giving it real-time access to online sources. ChatDoctor could answer questions about medical information that was more recent than ChatGPT’s training data.
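
The ChatDoctor authors' exact training format is not reproduced here, but the data-preparation side of that kind of fine-tuning, converting patient-doctor dialogues into instruction-style records, can be sketched as follows. The field names and file layout are assumptions for illustration.

```python
# Sketch of preparing patient-doctor dialogues as instruction-style records
# for supervised fine-tuning of a language model. The field names and file
# layout are assumptions, not the ChatDoctor authors' exact format.
import json

dialogues = [
    {"patient": "I've had a dry cough for two weeks. Should I be worried?",
     "doctor": "A persistent cough warrants a check-up, especially if you "
               "also have fever or breathlessness. Please book an appointment."},
]

with open("dialogue_train.jsonl", "w") as f:
    for d in dialogues:
        record = {
            "instruction": "Answer the patient's medical question helpfully.",
            "input": d["patient"],
            "output": d["doctor"],
        }
        f.write(json.dumps(record) + "\n")
print("wrote", len(dialogues), "training records")
```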

Putting it together

AI can help researchers manage incoming clinical-trial data. The Novartis researchers reported that it can extract data from unstructured reports, as well as annotate images or lab results, add missing data points (by predicting values in results) and identify subgroups within a population that respond uniquely to a treatment. Zou's group at Stanford has developed PLIP, an AI-powered search engine that lets users find relevant text or images within large medical documents. Zou says they've been talking with pharmaceutical companies that want to use it to organize all of the data that comes in from clinical trials, including notes and pathology photos. A patient's data might exist in different formats, scattered across different databases. Zou says they've also done work with insurance companies, developing a language model to extract billing codes from medical records, and that such techniques could also extract important clinical-trial data, such as recovery outcomes, symptoms, side effects and adverse incidents, from reports.
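
PLIP itself is built on image-text embeddings trained on pathology data, so the sketch below is only a text-only stand-in for the retrieval pattern: embed the documents, embed the query, and return the closest match. The documents and query are invented, and TF-IDF vectors stand in for learned embeddings.

```python
# Text-only sketch of embedding-style retrieval over trial documents. PLIP
# additionally embeds pathology images into the same space; here TF-IDF
# stands in for learned embeddings, and the documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Grade 3 neutropenia reported in 12% of patients in the treatment arm.",
    "Pathology slide notes: moderate tumour-infiltrating lymphocytes.",
    "Patient withdrew consent at week 8; no adverse events recorded.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

query = "serious adverse events in treated patients"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
best = scores.argmax()
print(f"best match (score {scores[best]:.2f}): {documents[best]}")
```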

To collect data for a trial, researchers sometimes have to produce more than 50 case report forms. A company in China called Taimei Technology is using AI to generate these automatically based on a trial’s protocol.

A few companies are developing platforms that integrate many of these AI approaches into one system. Xiaoyan Wang, who heads the life-science department at Intelligent Medical Objects, co-developed AutoCriteria, a method for prompting a large language model to extract eligibility requirements from clinical trial descriptions and format them into a table. This informs other AI modules in their software suite, such as those that find ideal trial sites, optimize eligibility criteria and predict trial outcomes. Soon, Wang says, the company will offer ChatTrial, a chatbot that lets researchers ask about trials in the system’s database, or what would happen if a hypothetical trial were adjusted in a certain way.

The company also helps pharmaceutical firms to prepare clinical-trial reports for submission to the US Food and Drug Administration (FDA), the organization that gives final approval for a drug’s use in the United States. What the company calls its Intelligent Systematic Literature Review extracts data from comparison trials. Another tool searches social media for what people are saying about diseases and drugs in order to demonstrate unmet needs in communities, especially those that feel underserved. Researchers can add this information to reports.

Zifeng Wang, a student in Sun's lab in Illinois, says he's raising money with Sun and another co-founder, Benjamin Danek, for a start-up called Keiji AI. A product called TrialMind will offer a chatbot to answer questions about trial design, similar to Xiaoyan Wang's ChatTrial. It will do things that might normally require a team of data scientists, such as writing code to analyse data or producing visualizations. "There are a lot of opportunities" for AI in clinical trials, he says, "especially with the recent rise of larger language models."

At the start of the pandemic, Saama worked with Pfizer on its COVID-19 vaccine trial. Using Saama’s AI-enabled technology, SDQ, they ‘cleaned’ data from more than 30,000 patients in a short time span. “It was the perfect use case to really push forward what AI could bring to the space,” Moneymaker says. The tool flags anomalous or duplicate data, using several kinds of machine-learning approaches. Whereas experts might need two months to manually discover any issues with a data set, such software can do it in less than two days.
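
Saama has not published SDQ's internals, so the sketch below is only a generic version of the two checks mentioned here: flagging exact duplicates with pandas and flagging statistical outliers with an isolation forest. The column names, thresholds and planted error are hypothetical.

```python
# Generic sketch of automated data cleaning for trial records: duplicate
# detection with pandas plus outlier flagging with an isolation forest.
# This is not SDQ itself; columns and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
records = pd.DataFrame({
    "patient_id": rng.integers(1, 500, 1000),
    "visit": rng.integers(1, 6, 1000),
    "systolic_bp": rng.normal(125, 12, 1000),
    "weight_kg": rng.normal(75, 10, 1000),
})
records.loc[5, "systolic_bp"] = 400          # implausible entry to catch

duplicates = records[records.duplicated(["patient_id", "visit"], keep=False)]
outlier_flags = IsolationForest(random_state=0).fit_predict(
    records[["systolic_bp", "weight_kg"]])
outliers = records[outlier_flags == -1]

print(len(duplicates), "possible duplicate records")
print(len(outliers), "records flagged for review")
```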

Other tools developed by Saama can predict when trials will hit certain milestones or lower drop-out rates by predicting which patients will need a nudge. Its tools can also combine all the data from a patient — such as lab tests, stats from wearable devices and notes — to assess outcomes. “The complexity of the picture of an individual patient has become so huge that it’s really not possible to analyse by hand anymore,” Moneymaker says.

Xiaoyan Wang notes that there are several ethical and practical challenges to AI’s deployment in clinical trials. AI models can be biased. Their results can be hard to reproduce. They require large amounts of training data, which could violate patient privacy or create security risks. Researchers might become too dependent on AI. Algorithms can be too complex to understand. “This lack of transparency can be problematic in clinical trials, where understanding how decisions are made is crucial for trust and validation,” she says. A recent review article6 in the International Journal of Surgery states that using AI systems in clinical trials “can’t take into account human faculties like common sense, intuition and medical training”.

Moneymaker says the processes for designing and running clinical trials have often been slow to change, but adds that the FDA has relaxed some of its regulations in the past few years, leading to “a spike of innovation”: decentralized trials and remote monitoring have increased as a result of the pandemic, opening the door for new types of data. That has coincided with an explosion of generative-AI capabilities. “I think we have not even scratched the surface of where generative-AI applicability is going to take us,” she says. “There are problems we couldn’t solve three months ago that we can solve now.”

[ad_2]

Source Article Link

Categories
Featured

Forget Sora, this is the AI video that will blow your mind – and maybe scare you

[ad_1]

Humanoid robot development has moved at a snail's pace for the better part of two decades, but a rapid acceleration is underway thanks to a collaboration between Figure AI and OpenAI, and the result is the most stunning bit of real humanoid robot video I've ever seen.

On Wednesday, startup robotics firm Figure AI released a video update (see below) of its Figure 01 robot running a new Visual Language Model (VLM) that has somehow transformed the bot from a rather uninteresting automaton into a full-fledged sci-fi bot that approaches C-3PO-level capabilities.

[ad_2]

Source Article Link

Categories
Life Style

Ancient malaria genome from Roman skeleton hints at disease’s history

[ad_1]

A coloured transmission electron micrograph showing a blue and green cell with several organelles inside a red cell.

The malaria parasite Plasmodium falciparum infecting a red blood cell. Credit: Dennis Kunkel Microscopy/Science Photo Library

Researchers have sequenced the mitochondrial genome of the deadliest form of malaria from an ancient Roman skeleton. They say the results could help to untangle the history of the disease in Europe.

It’s difficult to find signs of malaria in ancient human remains, and DNA from the malaria-causing parasite Plasmodium rarely shows up in them. As a result, there had never been a complete genomic sequence of the deadliest species, Plasmodium falciparum, from before the twentieth century — until now. “P. falciparum was eliminated in Europe a half century ago, and genetic data from European parasites — ancient or recent — has been an elusive piece in the puzzle of understanding how humans have moved parasites around the globe,” says Daniel Neafsey, who studies the genomics of malaria parasites and mosquito vectors at the Harvard T.H. Chan School of Public Health in Boston, Massachusetts.

Malaria has long been a leading cause of human deaths. “With the development of treatments such as quinine in the last hundreds of years, it seems clear [humans and malaria] are co-evolving,” says Carles Lalueza Fox, a palaeogenomicist at the Institute of Evolutionary Biology in Barcelona, Spain. “Discovering the genomes of the ancient, pre-quinine plasmodia will likely reveal information about how they have adapted to the different anti-malarial drugs.”

Ancient pathogen

There are five malaria-causing species of Plasmodium, which are thought to have arisen in Africa between 50,000 and 60,000 years ago, and then spread worldwide. Most researchers agree that they reached Europe at least 2,000 years ago, by the time of the Roman Empire.

Plasmodium falciparum “has significantly impacted human history and evolution”, says Neafsey. “So, that makes it particularly important to discover how long different societies have had to deal with [it], and how human migration and trade activities spread it.”

Researchers can glean valuable information about the origin, evolution and virulence of the parasite from DNA extracted from the ancient remains of infected people. But it is difficult to know where to look: it is not always obvious whether a person was infected with Plasmodium, and whether DNA can be recovered depends on how well it has been preserved.

In a preprint posted on the server bioRxiv1, a team of researchers led by a group at the University of Vienna identified the first complete mitochondrial genome sequence of P. falciparum from the bones of a Roman individual, known as Velia-186, who lived in Italy in the second century AD.

Plasmodium falciparum had been detected in Velia-186 in a previous study2. The authors of the latest preprint extracted the parasite’s DNA from the body’s teeth, and were able to identify 5,458 pieces of unique genetic information that they combined to get a sequence covering 99.1% of the mitochondrial genome. They also used software to compare the genome with modern samples, and found that the Velia-186 sequence is closely related to a group of present-day strains found in India.

Carried by migration

The researchers say their findings support a hypothesis that P. falciparum spread to Europe from Asia at least 2,000 years ago3. The Indian strains "were already present in Europe [then]; thus, a potential arrival with globalization episodes such as the Hellenistic period — when it is first described by Greeks — seems plausible", says Lalueza Fox.

Neafsey says the work is a “technical tour de force” and an interesting addition to the limited field of ancient malaria genomics. But he adds that the results should be interpreted with caution because there are only a few samples, and points out that a genome sequence from DNA in the parasite’s cell nuclei, rather than its mitochondria, “might indicate a more complex story of parasite movement among ancient human populations”.

Lalueza Fox suggests exploring other potential sources of Plasmodium DNA, such as old bones, antique medical equipment and even mosquito specimens in museums. “The integration of genetic data from these heterogeneous sources will provide a nuanced view of this disease,” he says. “It would be interesting to see what lessons we can learn from the past on the strains and dispersals of this pathogen.”

[ad_2]

Source Article Link