Categories
Featured

AWS wants to give its oldest data centers a second life with new recycling and repair facilities


Amazon has expanded its reverse logistics operations in Europe through its re:Cycle Reverse Logistics facility in Dublin, with the goal of extending the lifespan of its data center equipment and reducing its environmental footprint.

The facility, which tests, repairs, and reuses hardware from AWS data centers, is part of the company's broader strategy to reach net-zero carbon emissions.


Categories
Featured

AWS forced to pay millions in a major patent dispute


A US jury has ruled that Amazon's AWS willfully infringed two patents, and the company must now pay $30.5 million for violating the patent owner's rights in computer networking and streaming technology.

The infringing technologies were AWS's CloudFront content delivery network and Virtual Private Cloud, which infringed patents originally owned by Boeing but later acquired by Acceleration Bay.


Categories
Featured

Google, Microsoft, and AWS point fingers at each other over UK watchdog's cloud competition investigation


Amazon, Microsoft, and Google are all defending their business practices, currently under investigation by the UK's Competition and Markets Authority (CMA), while subtly pointing the finger at one another.

By trading blame, the three companies hope to deflect attention and avoid further scrutiny and punitive measures from the CMA.


Categories
Featured

AWS adds passkey support to strengthen MFA protection


FIDO2 passkeys have arrived for Amazon Web Services (AWS), strengthening multi-factor authentication (MFA) on the cloud platform.

The new authentication method will soon roll out as standard, and AWS root users have until the end of July 2024 to enable MFA.
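For admins auditing their accounts ahead of the deadline, the IAM `GetAccountSummary` API reports whether the root user has MFA enabled. A minimal sketch; the sample response below is illustrative, not from a real account:

```python
# Check root-user MFA status from an IAM GetAccountSummary response.
# With boto3, the summary map comes from:
#   boto3.client("iam").get_account_summary()["SummaryMap"]

def root_mfa_enabled(summary_map: dict) -> bool:
    """Return True when the AccountMFAEnabled flag is 1 (root MFA is on)."""
    return summary_map.get("AccountMFAEnabled", 0) == 1

# Illustrative response fragment (values are hypothetical):
sample = {"AccountMFAEnabled": 1, "Users": 12, "MFADevices": 3}
print(root_mfa_enabled(sample))  # True
```

Registering the passkey itself is done interactively in the AWS console; the check above only confirms the flag is set afterwards.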


Categories
Featured

The UK cloud market is dominated by AWS and Microsoft, and that could spell trouble for both


Microsoft and Amazon Web Services (AWS) have been found to be the dominant players in the UK cloud market, according to the first set of working papers issued by the country's Competition and Markets Authority (CMA) as part of its investigation into cloud services.

The working papers on the UK cloud market indicate that its value doubled between 2019 and 2022, reaching £7.5 billion (roughly $8.9–9.5 billion). Although that figure is almost certainly out of date by now, it is cause for concern that AWS and Microsoft alone appear to be the main drivers.


Categories
Featured

AWS opens a European sovereign cloud


Amazon Web Services (AWS) has confirmed plans to launch a European Sovereign Cloud aimed at promoting data residency in the region, addressing the concerns of commissioners and regulators.

The industry leader, which accounts for roughly a third of the cloud infrastructure market, revealed that it will establish its inaugural AWS sovereign cloud region in Brandenburg, Germany.


Categories
Featured

AWS makes move to host your company’s home-made Gen AI models


In an effort to position itself as one of the leading platforms for custom generative AI models, AWS has announced the launch of Custom Model Import within Bedrock, its suite of enterprise-focused GenAI services.

As the name suggests, Custom Model Import will allow organizations to import and access their own generative AI models as fully managed APIs, which means they’ll get to benefit from the same infrastructure and tools that are available for existing models in Bedrock.
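Since imported models surface as managed APIs alongside Bedrock's built-in models, they are called through the same Bedrock runtime `InvokeModel` operation, addressed by the imported model's ARN. A sketch of building such a request; the ARN, account ID, and prompt schema below are hypothetical placeholders, as the payload format depends on the imported model:

```python
import json

# Sketch: build an InvokeModel request for a custom imported Bedrock model.
# With boto3, these kwargs would be passed to:
#   boto3.client("bedrock-runtime").invoke_model(**request)

def build_invoke_request(model_arn: str, prompt: str) -> dict:
    return {
        "modelId": model_arn,  # imported models are addressed by ARN
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt, "max_tokens": 256}),
    }

request = build_invoke_request(
    "arn:aws:bedrock:us-east-1:123456789012:imported-model/example",
    "Summarize our Q3 incident reports.",
)
print(request["modelId"])
```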


Categories
Featured

“When security is a seamless part of how we do our jobs, it works best” — Why AWS wants to be the go-to security for your gen AI data


With generative AI transforming the way businesses around the world work, plan, and evolve, the need to secure the data such platforms use and generate is paramount.

Although primarily still seen as a cloud and storage leader, Amazon Web Services is looking to play a key role in ensuring businesses of all sizes remain safe against the myriad of security threats facing organizations today.


Categories
News

65 ExaFLOP AI Supercomputer being built by AWS and NVIDIA


As the artificial intelligence explosion continues, the demand for more advanced AI infrastructure continues to grow. In response, Amazon Web Services (AWS) and NVIDIA have expanded their strategic collaboration to provide enhanced AI infrastructure and services by building a powerful new AI supercomputer capable of delivering 65 exaFLOPs of processing power.

This partnership aims to integrate the latest technologies from both companies to drive AI innovation to new heights. One of the key aspects of this collaboration is AWS becoming the first cloud provider to offer NVIDIA GH200 Grace Hopper Superchips. These superchips come equipped with multi-node NVLink technology, a significant step forward in AI computing. The GH200 Grace Hopper Superchips present up to 20 TB of shared memory, a feature that can power terabyte-scale workloads, a capability that was previously unattainable in the cloud.

New AI Supercomputer under construction

In addition to hardware advancements, the partnership extends to cloud services. NVIDIA and AWS are set to host NVIDIA DGX Cloud, NVIDIA’s AI-training-as-a-service platform, on AWS. This service will feature the GH200 NVL32, providing developers with the largest shared memory in a single instance. This collaboration will allow developers to access multi-node supercomputing for training complex AI models swiftly, thereby streamlining the AI development process.

65 exaFLOPs of processing power

The partnership between AWS and NVIDIA also extends to the ambitious Project Ceiba, which aims to build the world's fastest GPU-powered AI supercomputer. AWS will host the system, which will primarily serve NVIDIA's research and development team. Integrating the Project Ceiba supercomputer with AWS services will give NVIDIA a comprehensive set of AWS capabilities for research and development, potentially leading to significant advancements in AI technology.

Summary of collaboration

  • AWS will be the first cloud provider to bring NVIDIA GH200 Grace Hopper Superchips with new multi-node NVLink technology to the cloud. The NVIDIA GH200 NVL32 multi-node platform connects 32 Grace Hopper Superchips with NVIDIA NVLink and NVSwitch technologies into one instance. The platform will be available on Amazon Elastic Compute Cloud (Amazon EC2) instances connected with Amazon’s powerful networking (EFA), supported by advanced virtualization (AWS Nitro System), and hyper-scale clustering (Amazon EC2 UltraClusters), enabling joint customers to scale to thousands of GH200 Superchips.
  • NVIDIA and AWS will collaborate to host NVIDIA DGX Cloud—NVIDIA’s AI-training-as-a-service—on AWS. It will be the first DGX Cloud featuring GH200 NVL32, providing developers the largest shared memory in a single instance. DGX Cloud on AWS will accelerate training of cutting-edge generative AI and large language models that can reach beyond 1 trillion parameters.
  • NVIDIA and AWS are partnering on Project Ceiba to design the world’s fastest GPU-powered AI supercomputer—an at-scale system with GH200 NVL32 and Amazon EFA interconnect hosted by AWS for NVIDIA’s own research and development team. This first-of-its-kind supercomputer—featuring 16,384 NVIDIA GH200 Superchips and capable of processing 65 exaflops of AI—will be used by NVIDIA to propel its next wave of generative AI innovation.
  • AWS will introduce three additional new Amazon EC2 instances: P5e instances, powered by NVIDIA H200 Tensor Core GPUs, for large-scale and cutting-edge generative AI and HPC workloads, and G6 and G6e instances, powered by NVIDIA L4 GPUs and NVIDIA L40S GPUs, respectively, for a wide set of applications such as AI fine-tuning, inference, graphics and video workloads. G6e instances are particularly suitable for developing 3D workflows, digital twins and other applications using NVIDIA Omniverse, a platform for connecting and building generative AI-enabled 3D applications.
  • “AWS and NVIDIA have collaborated for more than 13 years, beginning with the world’s first GPU cloud instance. Today, we offer the widest range of NVIDIA GPU solutions for workloads including graphics, gaming, high performance computing, machine learning, and now, generative AI,” said Adam Selipsky, CEO at AWS. “We continue to innovate with NVIDIA to make AWS the best place to run GPUs, combining next-gen NVIDIA Grace Hopper Superchips with AWS’s EFA powerful networking, EC2 UltraClusters’ hyper-scale clustering, and Nitro’s advanced virtualization capabilities.”

Amazon NVIDIA partner

To further bolster its AI offerings, AWS is set to introduce three new Amazon EC2 instances powered by NVIDIA GPUs. These include the P5e instances, powered by NVIDIA H200 Tensor Core GPUs, and the G6 and G6e instances, powered by NVIDIA L4 GPUs and NVIDIA L40S GPUs, respectively. These new instances will enable customers to build, train, and deploy their cutting-edge models on AWS, thereby expanding the possibilities for AI development.
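One way to read the lineup above is as a workload-to-instance-family mapping. The helper below is an illustrative reading of the announcement, not an official AWS selection API; the category names are invented for the sketch:

```python
# Sketch: map a workload category to the EC2 instance family described above.
# The mapping is one interpretation of the announcement: P5e (H200 GPUs) for
# large-scale generative AI and HPC; G6 (L4 GPUs) for fine-tuning and
# inference; G6e (L40S GPUs) for graphics, video, and Omniverse 3D workflows.

INSTANCE_FOR_WORKLOAD = {
    "large-scale-genai": "P5e",
    "hpc": "P5e",
    "fine-tuning": "G6",
    "inference": "G6",
    "graphics": "G6e",
    "omniverse-3d": "G6e",
}

def pick_instance(workload: str) -> str:
    try:
        return INSTANCE_FOR_WORKLOAD[workload]
    except KeyError:
        raise ValueError(f"unknown workload: {workload}")

print(pick_instance("large-scale-genai"))  # P5e
```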

AWS NVIDIA DGX Cloud hosting

Furthermore, AWS will host the NVIDIA DGX Cloud powered by the GH200 NVL32 NVLink infrastructure. This service will provide enterprises with fast access to multi-node supercomputing capabilities, enabling them to train complex AI models efficiently.

To boost generative AI development, NVIDIA has announced software on AWS, including the NVIDIA NeMo Retriever microservice and NVIDIA BioNeMo. These tools will provide developers with the resources they need to explore new frontiers in AI development.

The expanded collaboration between AWS and NVIDIA represents a significant step forward in AI innovation. By integrating their respective technologies, these companies are set to provide advanced infrastructure, software, and services for generative AI innovations. The partnership will not only enhance the capabilities of AI developers but also pave the way for new advancements in AI technology. As the collaboration continues to evolve, the possibilities for AI development could reach unprecedented levels.

Filed Under: Technology News, Top News





Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.

Categories
News

Amazon AWS Graviton4 and AWS Trainium2 chips unveiled


Amazon Web Services (AWS) has announced two new processors at its AWS re:Invent event: Graviton4 and Trainium2. These processors are specifically designed to improve the performance of machine learning training and generative AI applications, making them highly relevant amid today's artificial intelligence boom.

Amazon Graviton4

The Graviton4 chip is a significant step up from its predecessor, the Graviton3. Users can expect a 30% improvement in computing performance, which means applications will run more smoothly and quickly. This chip also boasts a 50% increase in the number of cores, allowing it to handle multiple tasks simultaneously and boost productivity. Furthermore, with a 75% increase in memory bandwidth, data transfer is more efficient, reducing delays and speeding up processing times.
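The quoted gains can be read as multipliers over Graviton3. A tiny illustration; only the percentages (+30% compute, +50% cores, +75% memory bandwidth) come from the announcement, and the Graviton3 baseline figures below are hypothetical:

```python
# Sketch: apply the Graviton4-vs-Graviton3 improvements as multipliers.
GRAVITON4_GAINS = {"compute": 1.30, "cores": 1.50, "memory_bandwidth": 1.75}

def scale_baseline(baseline: dict) -> dict:
    """Scale each baseline metric by the corresponding quoted gain."""
    return {k: baseline[k] * GRAVITON4_GAINS[k] for k in GRAVITON4_GAINS}

g3 = {"compute": 100.0, "cores": 64, "memory_bandwidth": 300.0}  # hypothetical
g4 = scale_baseline(g3)
print(g4["cores"])  # 96.0
```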

For those working with complex databases or engaging in big data analytics, the Amazon EC2 R8g instances powered by Graviton4 are designed to meet your needs. These instances are optimized to enhance the performance of demanding applications, enabling you to process and analyze data at impressive speeds.

Amazon Trainium2

Turning to the Trainium2 chip, it's a game-changer for those involved in machine learning. It offers training speeds up to four times faster than the original Trainium chips, meaning less time waiting and quicker access to insights. Trainium2 chips can also be used in EC2 UltraClusters, which can scale up to an incredible 100,000 chips. This level of scalability allows you to tackle complex training tasks, such as foundation models and large language models, with performance that rivals supercomputers.

The Amazon EC2 Trn2 instances, which come equipped with Trainium2 chips, are built for these heavy workloads. They ensure high efficiency, meaning your AI models are trained faster and with less energy consumption, supporting sustainable computing practices.

AWS doesn't just provide its own silicon; it also offers the flexibility to run applications on processors from other manufacturers such as AMD, Intel, and NVIDIA. This diverse ecosystem ensures that you can select the best chip for your specific workload, optimizing both performance and cost.

Energy Efficient

When you use AWS managed services with Graviton4, you’ll notice an improvement in the price performance of your applications. This means you get more computing power for your money, which enhances the value of your investment in cloud infrastructure.

At the heart of AWS’s new chip releases is silicon innovation. AWS is committed to providing cost-effective computing options by developing chip architectures that are tailored to specific workloads. The Graviton4 and Trainium2 chips are not only designed for top-notch performance but also for energy-efficient operation.

The introduction of the Graviton4 and Trainium2 chips is a testament to AWS's commitment to developing its cloud infrastructure. Whether you're managing high-performance databases, exploring big data, or training complex AI models, these chips are crafted to meet your needs. With AWS's focus on silicon innovation, the future looks bright for cost-effective and environmentally friendly computing solutions that don't compromise on performance.

Filed Under: Technology News, Top News




