The owner of one of the winning Mega Millions tickets sold at a Chevron station in Encino in December has come forward, California Lottery officials announced Tuesday.
F. Lahijani has stepped forward to claim a share of the $395 million jackpot. The second winner, who has not yet come forward, has one year from the date of the drawing to claim the prize.
The retailer, located at 18081 Ventura Boulevard, also received a record $1.9 million bonus check for selling the only two nationwide jackpot-winning tickets in the Mega Millions drawing.
“California Lottery retail partners who sell any ticket that wins $1 million or more receive a bonus of one-half of one percent of the prize value, up to $1 million,” the lottery said in a news release.
“This was only possible because the same store sold two jackpot-winning tickets in two separate transactions,” California Lottery spokesperson Carolyn Baker said in a statement. “This has never happened before, and we want to congratulate the owners of this establishment!”
Lahijani and the owners of the Chevron station declined to speak with the media.
Those interested in trying their luck in the next lottery drawing have until 7:45 p.m. to purchase tickets, which cost $2. The drawing will be held Tuesday at 8:00 p.m.
A lucky Southern California Mega Millions player became half a million dollars richer on Friday night, according to the California Lottery.
A ticket purchased at a Chino Hills gas station matched every number except the Mega number, earning its holder a prize of $508,408.
The winning ticket was sold at the 76 gas station located at 3260 Chino Street.
Friday’s Mega Millions drawing produced the following numbers: 4, 11, 23, 33 and 49, with a Mega number of 23.
The $522 million jackpot remains unclaimed and will grow to an estimated $560 million for Tuesday’s drawing.
Cybersecurity researchers from JFrog recently discovered three malicious campaigns on Docker Hub – Docker’s cloud-based registry service for storing and sharing container images. These campaigns spanned millions of repositories that pushed generic trojan malware to developers.
The conclusion from JFrog’s findings is that keeping open repositories such as Docker Hub clean of malware is an immensely difficult task.
As the researchers explained, Docker Hub repositories have two key aspects: the image (an application that can be updated and is accessible through a fixed name), and the metadata (a short description and HTML-format documentation displayed on the repository’s main page).
Millions of bad repositories
“Usually, repository documentation aims to explain the purpose of the image and provide guidelines for its usage,” the researchers explained.
However, roughly 4.6 million repositories contained no Docker images at all, meaning they couldn’t be run by a Kubernetes cluster or a Docker engine – they were practically useless. They contained only an overview page that tried to trick developers into visiting phishing websites or other pages hosting malicious code.
Of the 4.6 million repositories, 2.81 million were linked to three campaigns: “Downloader”, “eBook Phishing”, and “Website SEO”.
In terms of the number of malicious repositories, Downloader was the biggest campaign, accounting for almost 10% of the entire share (1,453,228 repositories). It did not, however, attract as many users (9,309) as, for example, Website SEO (194,699), which took up just 1.4% of the share with a “mere” 215,451 repositories.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
With 7.1% of the share and 1,069,160 repositories, eBook Phishing was the second-largest campaign, though it only had 1,042 users.
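Taken together, the quoted counts and percentages imply JFrog’s scan covered roughly 15 million repositories in total – a figure inferred here from the article’s own numbers, not one stated in the report. A quick sketch checking the arithmetic:

```python
# Sanity-check of the campaign shares quoted above. TOTAL_REPOS is an
# assumption inferred from the quoted percentages, not a figure taken
# from the JFrog report itself.
TOTAL_REPOS = 15_000_000

campaigns = {
    "Downloader": 1_453_228,
    "eBook Phishing": 1_069_160,
    "Website SEO": 215_451,
}

for name, count in campaigns.items():
    share = 100 * count / TOTAL_REPOS
    print(f"{name}: {count:,} repositories ({share:.1f}% of the platform)")
```

The computed shares come out at 9.7%, 7.1%, and 1.4%, matching the figures quoted, and the three counts sum to roughly 2.74 million repositories, close to the 2.81 million linked to the campaigns overall.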
JFrog disclosed its findings to Docker, which prompted the project to remove the malicious repositories – 3.2 million of them.
“Unlike typical attacks targeting developers and organizations directly, the attackers in this case tried to leverage Docker Hub’s platform credibility, making it more difficult to identify the phishing and malware installation attempts,” JFrog said.
“Almost three million malicious repositories, some of them active for over three years, highlight the attackers’ continued misuse of the Docker Hub platform and the need for constant moderation on such platforms.”
Millions of devices are still infected with the PlugX malware, despite its creators abandoning it months ago, experts have warned.
Cybersecurity analysts Sekoia managed to obtain the IP address associated with the malware’s command & control (C2) server, and observed connection requests over a six-month period.
Over the course of the analysis, infected endpoints connected from roughly 90,000 unique IP addresses every day, amounting to 2.5 million unique IPs in total. The devices were located in 170 countries, it was said. However, just 15 of those countries accounted for more than 80% of total infections, with Nigeria, India, China, Iran, Indonesia, the UK, Iraq, and the United States making up the top eight.
Still at risk
While at first it might sound like there are many infected endpoints around the world, the researchers did stress that the numbers might not be entirely precise. The malware’s C2 traffic carries no unique identifiers, which skews the results, as many compromised workstations can egress through the same IP address.
Furthermore, if any of the devices use a dynamic IP system, a single device can be perceived as multiple ones. Finally, many connections could be coming in through VPN services, making country-related statistics moot.
PlugX was first observed in 2008 in cyber-espionage campaigns mounted by Chinese state-sponsored threat actors, the researchers said. The targets were mostly organizations in government, defense, and technology sectors, located in Asia. The malware was capable of command execution, file download and upload, keylogging, and accessing system information. Over the years, it grew additional features, such as the ability to autonomously spread via USB drives, which makes containment today almost impossible. The list of targets also expanded towards the West.
However, after the source code leaked in 2015, PlugX became more of a “common” malware, with many different groups, both state-sponsored and financially-motivated, using it, which is probably why the original developers abandoned it.
A 45-year-old Nebraska man, Charles O. Parks III, has been charged with numerous financial crimes, including wire fraud and money laundering, as part of a cryptomining scheme that defrauded two cloud computing providers based in Seattle and Redmond, Washington.
Ars Technica, reporting on the indictment, beat us to the punch in alleging that the cloud providers in question are probably Amazon Web Services and Microsoft Azure.
Parks registered a number of accounts under various identities to secure the computing resources, and continued registering accounts even after being kicked off the Seattle provider’s platform for non-payment and suspected fraudulent activity.
Don’t try this at home
Though Parks is estimated to have cost the two providers a combined $3.5 million between unpaid service fees and the sheer amount of energy consumed by his operation, the whole charade only netted him around $1 million. That’s probably not worth the 30-year custodial sentence facing Parks if he’s convicted on all charges.
Parks was methodical, laundering his ill-gotten Ether, Litecoin and Monero via crypto exchanges, unnamed NFT marketplaces and online payment gateways, as well as plain old bank accounts.
He was cunning, managing to socially engineer, trick and defraud his way into persuading employees at the providers to defer demands for payment and elevate his service allowances.
However, Parks was also reckless in not paying his bills in the first place, as well as splashing the cash about a little too much, buying a car, jewelry, ‘first-class hotel and travel accommodations’, and ‘other luxury goods and services’.
The decision was perhaps particularly unwise, as we’ve already highlighted that the return on investment with cryptocurrency mining isn’t great, in part because companies are wise to it now. They can throttle their resources (which is almost certainly why Parks had to do some social engineering fraud) and have the telemetry (as they almost certainly did on Parks) to look at what their systems were doing and put two and two together.
Plus, caring about the environment might be for squares or whatever when there’s money to be made, but the environmental impact of data centers generally is bleak and depressing, even when you’re not using them to mine monopoly money. What better reflection is there of capitalism in action than a race to the bottom, ultimately set to doom us all, for a meager return on investment?
Nvidia recently unveiled its DGX GB200 NVL72 supercomputer-in-a-rack at Nvidia GTC 2024 and Patrick Kennedy at Serve The Home took a selection of great photos showcasing the impressive beast.
The name of the DGX GB200 NVL72 tells you much of what you need to know. The GB200 signifies the Grace Blackwell GB200 compute structure, while the NVL72 denotes there are 72 Blackwell GPUs connected by NVLink.
The Blackwell platform contains 208 billion transistors across its two GPU dies. These are connected by 10 TB/second chip-to-chip link into a single, unified GPU. Blackwell, set to ship later this year, will reportedly offer up to 20 petaflops of FP4 power and be up to 30x faster than Hopper for AI inference tasks.
TechRadar Pro also snapped our own picture of the DGX GB200 at Nvidia GTC 2024 (Image credit: Future / Mike Moore)
120kW power load
The rack-scale system comprises 18 compute nodes, each featuring dual InfiniBand ports, four E1.S drive trays, and management ports – ten in the top stack and eight below, separated by nine NVSwitch shelves with gold handles for easy removal. Each node is powered by two Grace Arm CPUs connected to four Blackwell GPUs, for 72 GPUs in total.
The rear of the rack reveals the power delivery system designed for blind-mate power via the bus bar, liquid cooling nozzles, and NVLink connections for each component. This setup allows for slight movement to ensure proper blind mating.
The DGX GB200 NVL72 weighs 1.36 metric tons (3,000 lbs) and consumes 120kW – a power load that, as Serve The Home points out, not all data centers will be able to handle. As many can only support racks of up to 60kW, a future half-stack system seems a possibility. The rack uses 2 miles (3.2 km) of copper cabling instead of optics, lowering the system’s power draw by 20kW.
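Some back-of-the-envelope arithmetic, using only the figures quoted in this article (the per-GPU FP4 number is Nvidia’s peak figure, so this is a theoretical ceiling, not measured throughput):

```python
# Rack-level figures derived from the numbers quoted above.
gpus = 72                 # NVL72 = 72 Blackwell GPUs per rack
fp4_pflops_per_gpu = 20   # quoted peak FP4 throughput per GPU
rack_power_kw = 120       # quoted rack power load

rack_exaflops = gpus * fp4_pflops_per_gpu / 1000
kw_per_gpu = rack_power_kw / gpus

print(f"Peak FP4 per rack: {rack_exaflops:.2f} exaflops")
print(f"Power budget per GPU (incl. CPUs, switches, cooling): {kw_per_gpu:.2f} kW")
```

That works out to about 1.44 exaflops of peak FP4 per rack, with each GPU’s slice of the 120kW budget at roughly 1.67kW once CPUs, switches and cooling are amortized across it.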
You can view the rest of the photos taken by Kennedy at GTC 2024 here.
A French government agency suffered a cyberattack which has apparently resulted in the country’s largest-ever data leak incident, affecting as many as 43 million victims.
The agency is called France Travail, the country’s unemployment registry and assistance organization, which helps find jobs for the unemployed, and provides them with financial aid. The organization was created in 2008, after ANPE and ASSEDIC merged, and currently counts roughly 45,000 employees.
In a press release published earlier this week, the organization said it fell victim to a cyberattack in which sensitive data collected over the last 20 years was stolen. It warned citizens to be wary of potential identity theft, phishing attempts, and similar attacks.
Motives unknown
BleepingComputer says that an estimated 43 million individuals were affected, making this the largest data leak incident in the country’s history, surpassing the February attack on Viamedis and Almerys that affected 33 million people. The data stolen in this attack includes people’s full names, dates of birth, places of birth, social security numbers, France Travail identifiers, email addresses, postal addresses, and phone numbers. Financial and payment data was reportedly not stolen.
The attack was spotted in early March and lasted almost a month, the agency confirmed. Besides the unemployed, the hackers also stole data on job candidates.
France Travail did not say who the threat actors behind the incident are, or what their goals were. So, we don’t know if this was a ransomware attack, or just a data grab. No hacking collectives have yet assumed responsibility for the attack.
This is not the first time France Travail has suffered a devastating cyberattack resulting in data leaks. Last August, hackers made off with sensitive information on 10 million people. That attack was attributed to the Cl0p ransomware collective, which abused the MOVEit Transfer software vulnerability to breach the system.
A freedom of information (FOI) request submitted by Parliament Street, a Conservative Party-aligned think tank, has found that HMRC spent over £80 million on hybrid working technology over the last three years.
In a breakdown of its spending, HMRC disclosed that 175,250 devices were purchased for its staff in order to allow 95% of its workforce to adopt a hybrid working policy of at least one day per week at home.
Of these devices, 88,362 were laptops, 54,093 were tablet computers, 32,013 were mobile phones, and 782 were desktop computers.
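As a quick consistency check – a trivial sketch using only the figures disclosed above – the four device categories sum exactly to the 175,250 total:

```python
# Cross-checking HMRC's FOI device breakdown against the disclosed total.
devices = {
    "laptops": 88_362,
    "tablets": 54_093,
    "mobile phones": 32_013,
    "desktops": 782,
}

total = sum(devices.values())
print(f"Total devices purchased: {total:,}")  # → 175,250
```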
“Couch potato culture”
Parliament Street’s Chairman, Patrick Sullivan, said, “HMRC cannot continue to splash our hard-earned cash to fuel this absurd remote working binge. It’s time to put an end to this couch potato culture, with staff ordered back into the office as a mandatory part of their job description.”
This is despite numerous studies finding that hybrid and remote working policies actually contribute to greater productivity, a better work-life balance, and a happier workforce overall.
Earlier this year, Civil Service bosses were threatened with penalties if they did not get their staff back into offices after the widely adopted hybrid working policies of the pandemic were deemed to be no longer necessary.
Sachin Agrawal, Managing Director at Zoho UK, said, “Remote working is proven to deliver a dramatic increase to employee productivity, allowing staff to collaborate and manage important tasks wherever they may be.
“This level of tech investment should be part of a wider strategy, with employees getting access to the latest software applications, and being educated and fully trained to understand full capabilities. This ensures critical work is completed effectively and synchronised to deliver maximum value and contribute significantly to business success.”
Remote and hybrid working policies also reduce the cost of commuting significantly, which is particularly helpful in the UK where the cost-of-living crisis is forcing employees to readjust budgets in order to afford the essentials, while energy companies are reporting record profits in the billions of pounds.
“Flexible working is critical for cutting travel time and reducing overheads in terms of office costs,” said Stuart Munton, Chief for Delivery at AND Digital. “If we want to build a leaner, more effective public sector then these kind of tech investments are key.”
The use of outdated devices puts organizations at a greater risk of cyberattacks due to the lack of vulnerability patching and security updates. The public sector has seen frequent criticism for its lack of digital upgrades, with the 2017 WannaCry attack having a critical impact on the NHS largely due to the use of outdated devices.
Therefore, the spending that HMRC has made to provide updated, and therefore more secure, technology can potentially be regarded as an investment of taxpayer money rather than a waste, as the cost of recovering from a cyber attack could far exceed £80 million in terms of data loss, remediation, and lost working hours.
Millions of secrets and authentication keys were leaked on GitHub in 2023, with the majority of developers not bothering to revoke them even after being notified of the mishap, new research has claimed.
A report from GitGuardian, a company that helps developers secure their software development with automated secrets detection and remediation, claims that in 2023, GitHub users accidentally exposed 12.8 million secrets in more than 3 million public repositories.
These secrets include account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and similar.
Slow response
During development, many IT pros hardcode authentication secrets to make their lives easier – but they often forget to remove them before publishing the code on GitHub. Should any malicious actors discover these secrets, they get easy access to private resources and services, which can result in data breaches and similar incidents.
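Automated secret scanners look for exactly this class of leak. As a minimal illustration – not GitGuardian’s actual engine, which combines hundreds of detectors with validity checks – two well-documented token formats can be caught with simple regular expressions:

```python
import re

# Minimal sketch of pattern-based secret detection. The two regexes match
# well-documented token prefixes: AWS access key IDs ("AKIA" plus 16
# uppercase alphanumerics) and GitHub classic personal access tokens
# ("ghp_" plus 36 alphanumerics).
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub classic PAT": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(source: str) -> list[tuple[str, str]]:
    """Return (detector name, matched string) pairs found in source code."""
    return [
        (name, match)
        for name, pattern in SECRET_PATTERNS.items()
        for match in pattern.findall(source)
    ]

# AWS's documented example key, safe to use in demos:
print(scan('aws_key = "AKIAIOSFODNN7EXAMPLE"'))
```

Real detectors also check entropy and verify candidate tokens against the issuing service, which is how GitGuardian can tell that a leaked secret is still valid days later.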
India was the country from which most leaks originated, followed by the United States, Brazil, China, France, and Canada. The vast majority of the leaks came from the IT industry (65.9%), followed by education (20.1%). The remaining 14% was split between science, retail, manufacturing, finance, public administration, healthcare, entertainment, and transport.
Making the mistake of hardcoding secrets can happen to anyone – but what happens afterwards is perhaps even more worrying. Just 2.6% of the secrets are revoked within the first hour; practically everything else (91.6%) remains valid even after five days, at which point GitGuardian stops tracking their status. To make matters worse, the company sent 1.8 million emails to different developers and companies warning them of its findings, and just 1.8% responded by removing the secrets from the code.
Riot Games, GitHub, OpenAI, and AWS were listed as companies with the best response mechanisms.
Artificial intelligence (AI) is swiftly becoming a powerful force in the world of science and technology. This isn’t just about machines getting smarter; it’s about how they’re helping us make leaps in understanding and innovation that were once thought impossible. AI is not just a buzzword; it’s a tool that’s reshaping how we approach complex problems and opening doors to new discoveries.
At the forefront of AI’s evolution are thinkers like Jürgen Schmidhuber, who has been instrumental in developing the concepts that drive AI today. Alongside him, Demis Hassabis of DeepMind is pushing the boundaries of what AI can achieve. Their work is setting the stage for a future where AI is integral to scientific progress.
One of the most impressive areas where AI is making a mark is in the discovery of new materials. Using deep learning, AI systems have identified millions of potential new materials. This isn’t just theoretical; it’s happening right now. Neural networks, which mimic the brain’s structure, can generate images and unravel complex 3D structures from 2D data, a task that would be incredibly challenging for humans.
Take AlphaFold, for example. Developed by Google DeepMind, this AI has made waves by predicting protein structures with remarkable accuracy. Understanding these structures is essential for biological research and developing new medicines. AlphaFold’s success is a testament to the power of AI in accelerating scientific understanding and potentially leading to medical breakthroughs.
AI discovers new materials
But AI’s role isn’t limited to just finding new things; it’s also about creation. AI-driven robots can now autonomously synthesize and test new materials, speeding up the process of discovery. This means that AI isn’t just an analytical tool; it’s also a creator, capable of innovation.
The impact of AI is further amplified by its ability to learn and improve on its own. This self-improvement could dramatically increase the pace of discovery, turning what might have taken centuries into a matter of decades. Imagine the possibilities when AI can evolve and enhance its capabilities without human intervention.
Thanks to AI, researchers now have access to vast databases filled with information on these new materials. This democratization of data is crucial, as it allows scientists from all over the world to collaborate and build on each other’s work, fostering further innovation.
The potential of AI extends across various fields, including genetics, where it could help us understand and possibly cure diseases that have plagued humanity for ages. The contributions of AI pioneers like Schmidhuber and Hassabis are monumental, redefining what we consider possible.
AI is changing the landscape of scientific discovery and development. It’s not just altering the rules; it’s transforming the entire field. From revealing protein structures to creating new materials, from making coding more accessible to speeding up the pace of scientific advancement, the influence of AI is clear. As we continue to explore the capabilities of AI, one thing is certain: the future of scientific discovery is bright, with AI leading the way.