IBM watsonx Korea Quantum Computing (KQC) deal sealed

IBM has teamed up with Korea Quantum Computing (KQC) in a strategic partnership aimed at advancing quantum computing and AI in South Korea. The alliance is more than a handshake between two companies: it pairs IBM’s AI software and quantum computing services with KQC’s ambition to push the boundaries of technology.

“We are excited to work with KQC to deploy AI and quantum systems to drive innovation across Korean industries. With this engagement, KQC clients will have the ability to train, fine-tune, and deploy advanced AI models, using IBM watsonx and advanced AI infrastructure. Additionally, by having the opportunity to access IBM quantum systems over the cloud, today—and a next-generation quantum system in the coming years—KQC members will be able to combine the power of AI and quantum to develop new applications to address their industries’ toughest problems,” said Darío Gil, IBM Senior Vice President and Director of Research.

This collaboration includes an investment in infrastructure to support the development and deployment of generative AI. Plans for the AI-optimized infrastructure include advanced GPUs and IBM’s Artificial Intelligence Unit (AIU), managed with Red Hat OpenShift to provide a cloud-native environment. The combined GPU and AIU system is being engineered to give members state-of-the-art hardware to power AI research and business opportunities.
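
As an illustration of what a cloud-native AI setup on OpenShift can look like, the sketch below uses the Kubernetes Python client to request a GPU-backed pod for a training job. The namespace, image, and GPU count are placeholders of our own, not details from the announcement.

```python
# Minimal sketch: scheduling a GPU-backed training pod on an OpenShift/Kubernetes
# cluster with the official Kubernetes Python client. All names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ai-training-job", namespace="kqc-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.com/ai/trainer:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU from the node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="kqc-demo", body=pod)
```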

Quantum Computing

By 2028, KQC plans to bring this vision to life by installing an IBM Quantum System Two at its site in Busan. This is not just about acquiring new hardware; it is about weaving quantum computing into the fabric of mainstream applications. To make that a reality, KQC is already strengthening its infrastructure with the latest GPUs and IBM’s AIU, tuned for the AI applications it expects to run.

But advanced technology needs a solid foundation, and that is where Red Hat OpenShift comes in. It is the backbone that will keep this complex infrastructure running, offering the scalable cloud services KQC needs to manage its high-tech setup. KQC is also adopting Red Hat OpenShift AI for management and runtime, and exploring generative AI technologies on the watsonx platform. These are the tools intended to fuel the next wave of innovation and efficiency in AI.

Now, let’s talk about the ripple effect. This partnership isn’t just about KQC and IBM; it’s about sparking a fire of innovation across entire industries. Korean companies in finance, healthcare, and pharmaceuticals are joining the fray, eager to collaborate on research that leverages AI and quantum computing. The goal? To craft new applications that will catapult these industries into a new era of technological prowess.

The KQC-IBM partnership is more than a milestone for Korea’s tech landscape; it signals a new phase in the application of AI and quantum computing. With the integration of Red Hat OpenShift and the watsonx platform, KQC is not just boosting its capabilities; it is setting the stage for new research and innovation. The collaboration reflects the power of partnership and a shared commitment to shaping the future of industries with the best technology available.

IBM Quantum System Two unveiled as Condor processor crosses the 1,000-qubit threshold

In the rapidly evolving world of quantum computing, IBM is making significant strides. The company recently announced that its latest quantum processor, IBM Condor, boasts 1,121 qubits, a significant increase from its previous 433-qubit chip. This development aligns with IBM’s published quantum roadmap. Qubits, the fundamental units of quantum computers, can be placed in superposition and entangled, enabling certain calculations to run far faster than on traditional computers. However, the sheer number of qubits is not the sole indicator of a quantum computer’s performance.

This cutting-edge field, once confined to theoretical research, is now seeing practical applications that could transform how we tackle complex problems. The IBM Quantum System Two, a new system that houses the Condor, is a marvel of engineering. Enclosed in a 15-foot structure, it operates in conditions that mimic the extreme cold of outer space. Initially, it will run on three 133-qubit Heron processors, but its design is future-proof, ready to integrate subsequent technological leaps.

IBM Quantum System Two computer

One of the most impressive features of the Quantum System Two is its modular architecture. This design underpins IBM’s goal of executing 100 million operations within a single quantum circuit, and the company aims to scale that figure to 1 billion operations by 2033.

To support the people who will develop the future of quantum computing, IBM has announced Qiskit 1.0, a software development kit (SDK) that sharpens the tools available to developers. The SDK makes it easier to compile quantum circuits, with AI-assisted transpilation, and introduces a batch mode that streamlines job execution. These improvements are designed to make the quantum computing workflow more user-friendly.
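
To give a feel for the developer workflow the SDK targets, here is a minimal, hedged sketch using Qiskit with the IBM Runtime batch mode. The exact class names and arguments vary between qiskit-ibm-runtime releases, so treat this as illustrative rather than definitive.

```python
# Illustrative sketch of the Qiskit workflow described above.
# API details (Batch, SamplerV2) follow recent qiskit-ibm-runtime releases and
# may differ in other versions; saved IBM Quantum credentials are assumed.
from qiskit import QuantumCircuit, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, Batch, SamplerV2 as Sampler

# A two-qubit Bell-state circuit with measurements.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)
isa_circuit = transpile(qc, backend=backend)  # compile to the device's gate set

# Batch mode groups jobs so they can be scheduled together on the backend.
with Batch(backend=backend) as batch:
    sampler = Sampler(mode=batch)
    job = sampler.run([isa_circuit])
    print(job.result()[0].data.meas.get_counts())
```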

IBM is also focused on building a robust quantum computing ecosystem. It is doing this by developing resources like Qiskit Patterns and Quantum Serverless, which aid in the creation of algorithms and applications. Additionally, IBM is pioneering the integration of generative AI into quantum code programming through watsonx, showcasing the synergy between artificial intelligence and quantum computing.

IBM Condor Qubit processor

At the forefront of this advancement is IBM’s latest creation, the IBM Condor, a powerful 1,121-qubit processor that is setting new benchmarks in computational capabilities. The IBM Condor’s large number of qubits is a clear indication of the progress IBM has made on their quantum computing roadmap. The power of a quantum computer comes from the entanglement of qubits, which allows for an exponential increase in computational capabilities. This means that quantum computers can address problems that are currently beyond the reach of classical computers.

Creating a quantum processor like the IBM Condor involves complex superconducting circuits that are etched onto silicon wafers. This is a crucial step in the advancement of quantum computing technology. However, it’s not just about having a large number of qubits. It’s also essential to achieve low error rates and maintain high fidelity in the operations of these qubits for them to be practically applied.

Although the qubit count of the IBM Condor is noteworthy, IBM has not yet shared detailed performance data for this new processor. The company has previously emphasized the importance of ‘quantum volume’ as a metric, which takes into account not only the number of qubits but also their quality, connectivity, and the error rates of operations. This metric has not been updated since 2020, leaving us waiting for more information on the processor’s capabilities.
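
For readers unfamiliar with the metric, quantum volume is reported as a power of two: if a device reliably runs "square" benchmark circuits of width and depth n, its quantum volume is 2^n. The tiny illustration below is a generic explanation of the metric, not IBM data.

```python
# Quantum volume is reported as 2**n, where n is the largest width-and-depth-n
# benchmark circuit the device passes (heavy-output probability above 2/3
# with sufficient statistical confidence).
def quantum_volume(largest_passing_width: int) -> int:
    return 2 ** largest_passing_width

print(quantum_volume(7))  # a device passing width/depth 7 reports QV = 128
```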

1,000-qubit threshold crossed: what does that mean?

The potential uses for the IBM Condor are still being explored. Experts in the field suggest that quantum computing will require millions of qubits to become commercially viable. This means that, despite the advances the IBM Condor represents, there is still a long way to go before quantum computing can transform industries at scale.

As we consider IBM’s latest development, it’s crucial to remember that the promise of quantum computing is not solely based on the number of qubits. It also includes the complexity of their interconnections and the accuracy with which they can be manipulated. The IBM Condor is a sign of the progress being made in quantum computing and signals the approach of a new era in this exciting field.

Quantum computing is an area of technology that has the potential to transform how we solve complex problems. Unlike traditional computers that use bits to process information, quantum computers use qubits, which can exist in multiple states simultaneously. This allows them to perform many calculations at once, providing a level of processing power that’s unattainable with current classical computers.
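
One way to see why this matters: describing n qubits classically requires tracking 2^n complex amplitudes, which quickly becomes impossible to store. The short calculation below, a generic illustration rather than anything from IBM, shows the growth.

```python
# Classical cost of simulating an n-qubit state: 2**n complex amplitudes,
# roughly 16 bytes each (double-precision complex numbers).
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:2d} qubits -> {amplitudes:>20,} amplitudes (~{gigabytes:,.1f} GB)")
```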

IBM’s unveiling of the IBM Condor quantum processor with 1,121 qubits is a testament to the rapid advancements in quantum technology. The IBM Condor represents a significant leap from IBM’s previous quantum processors and is a key milestone on their roadmap for the development of quantum computing.

The power of quantum computing lies in the ability of qubits to be entangled, which allows for an exponential increase in computational capabilities. This entanglement enables quantum computers to tackle problems that are currently unsolvable by traditional computers. The IBM Condor’s large number of qubits is a clear indication of the progress IBM has made in this area.

However, the number of qubits is not the only challenge in quantum computing. Achieving low error rates and maintaining high fidelity in qubit operations are also critical for the practical application of quantum processors. While the qubit count of the IBM Condor is impressive, IBM has yet to release detailed performance data for the processor. The company has previously highlighted ‘quantum volume’ as an important metric, which considers the number of qubits, their quality, connectivity, and the error rates of operations. This metric has not been updated since 2020, leaving us waiting for more information on the processor’s capabilities.

Looking ahead, IBM has laid out a comprehensive roadmap that extends to 2033. The plan includes a series of enhancements to its quantum computing systems, which will couple multiple 133-qubit Heron-class processors into larger modular machines. IBM is also forging partnerships with research institutions to explore quantum-powered applications.

IBM’s dedication to quantum computing is not just about technological prowess; it’s about providing enterprise solutions that are tailored to specific industries. As IBM’s quantum computing technology matures, it opens up possibilities for addressing some of the most challenging issues facing the world today. The advancements IBM is making today are paving the way for a future where quantum computing plays a pivotal role in solving complex problems and unlocking new opportunities.

6 Observability myths in AIOps explored by IBM

There is a tendency to treat Application Performance Monitoring (APM) as the same thing as observability. However, APM focuses on tracking a predefined set of metrics and logs, which works well for simpler, monolithic systems. Observability, on the other hand, is designed for the intricate nature of today’s microservices-based applications. It gives you a detailed view of your system’s health and performance, helping you get to the root cause of problems for a more effective fix.

AIOps, short for Artificial Intelligence for IT Operations, is an approach in the field of IT operations that utilizes artificial intelligence, machine learning, and big data analytics to automate and enhance IT operations processes. The primary goal of AIOps is to help IT teams manage the increasing complexity and scale of their operations environments, especially as businesses grow and adopt more advanced technologies.

When it comes to observability, some people believe that log files are all you need. While logs are important, they’re just one piece of the puzzle. For the best results, you should be analyzing metrics, traces, and logs in real-time. This way, you can address issues before they impact your users. Observability goes beyond logs, offering insights into how your system is running and how users are interacting with it, which is key for keeping things running smoothly.
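
As a concrete illustration of "more than logs", the sketch below emits a log line, a metric, and a trace span for the same request using the OpenTelemetry Python API. The attribute names and the checkout scenario are our own assumptions, and exporter configuration is omitted.

```python
# Illustrative only: one request producing a log, a metric, and a trace span.
# Requires the opentelemetry-api package; without an SDK configured, the calls
# are no-ops, which keeps the sketch runnable.
import logging
from opentelemetry import trace, metrics

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

tracer = trace.get_tracer("shop.checkout")
meter = metrics.get_meter("shop.checkout")
request_counter = meter.create_counter("checkout.requests")

def handle_checkout(order_id: str) -> None:
    with tracer.start_as_current_span("checkout") as span:    # trace
        span.set_attribute("order.id", order_id)
        request_counter.add(1, {"route": "/checkout"})         # metric
        log.info("processed checkout for order %s", order_id)  # log

handle_checkout("A-1001")
```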

AIOps involves a number of areas, including:

  • Data Analysis: AIOps platforms can process vast amounts of operational data from various IT sources, including performance monitoring tools, logs, and helpdesk systems. By analyzing this data, AIOps can detect patterns, anomalies, and potential issues (a minimal sketch of this kind of anomaly detection follows this list).
  • Automation: A key aspect of AIOps is automating routine processes. This can range from simple tasks, like resetting a server, to more complex processes, like orchestrating a response to a network outage.
  • Machine Learning and AI: AIOps uses machine learning algorithms to learn from data over time. This enables the system to predict and prevent potential issues before they impact the business, and also to provide actionable insights for IT decision-making.
  • Enhancing IT Operations: AIOps helps IT teams become more proactive rather than reactive. It does this by offering insights that can drive better decision-making and by automating responses to common issues, freeing up IT staff to focus on more strategic tasks.
  • Incident Management and Response: In the event of IT issues or outages, AIOps can assist in rapid diagnosis and response, often identifying the root cause of a problem more quickly than a human could.
  • Capacity Optimization: AIOps tools can analyze usage patterns and trends to optimize the allocation of IT resources, such as server and storage capacity, ensuring that resources are used efficiently and effectively.
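
To make the data-analysis point concrete, here is a minimal sketch of the kind of anomaly detection an AIOps platform might run over a metric stream, using a rolling mean and standard deviation. The window size, threshold, and sample data are illustrative assumptions, not AIOps defaults.

```python
# Minimal sketch: flag latency samples that sit far outside the recent baseline.
# Window size and the 3-sigma threshold are illustrative choices only.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    baseline = deque(maxlen=window)
    for t, value in enumerate(samples):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield t, value  # anomalous sample: index and value
        baseline.append(value)

latencies = [102, 98, 101, 99, 103] * 5 + [450] + [100, 97, 104]
print(list(detect_anomalies(latencies)))  # -> [(25, 450)]
```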

Another myth is that observability tools are always expensive. It’s true that some can be costly, but there are many options with different pricing models to suit various budgets. For instance, per-host pricing can give you a predictable cost, so you can improve your monitoring without worrying about unexpected expenses. It’s important to look at the different pricing options available to find one that fits your budget and needs.
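
A quick back-of-envelope comparison shows why the pricing model matters; the dollar figures below are made-up placeholders, not vendor prices.

```python
# Hypothetical comparison of per-host pricing vs. per-GB-ingested pricing.
# All numbers are invented for illustration only.
hosts = 40
gb_ingested_per_month = 12_000

per_host_rate = 30.0          # $/host/month (placeholder)
per_gb_rate = 0.25            # $/GB ingested (placeholder)

per_host_bill = hosts * per_host_rate
per_gb_bill = gb_ingested_per_month * per_gb_rate

print(f"Per-host pricing:  ${per_host_bill:,.2f}/month (flat, predictable)")
print(f"Per-GB pricing:    ${per_gb_bill:,.2f}/month (grows with data volume)")
```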

AIOps myths

There’s also a misconception that observability is only for Site Reliability Engineers (SREs). This isn’t the case. Observability makes data accessible to many teams, like marketing, development, DevOps, and business analysts. This means that everyone can use this data to make better decisions. By breaking down data silos, observability encourages teamwork and helps everyone contribute to making the system more reliable and successful.

  • Difference Between APM and Observability: Application Performance Monitoring (APM) is designed for monolithic runtimes, while observability caters to complex, microservices-based applications, offering a comprehensive view of the entire system.
  • Misconception of Log Files as Observability: Relying solely on log files for problem resolution is an anti-pattern. Effective monitoring involves real-time analysis of various system components and user performance to proactively address issues.
  • Cost of Observability Tools: Observability tools can be expensive, but there are pricing models that offer predictability and inclusivity, such as per-host pricing, as opposed to variable costs based on data volume or user count.
  • Observability is Not Just for SREs: Observability is not exclusively for Site Reliability Engineers (SREs). It democratizes data access across different teams, including marketing, development, DevOps, and business users, enabling them to make informed decisions.
  • Avoiding Favoritism in Application Monitoring: Traditional monitoring tools often force organizations to prioritize certain applications due to resource constraints. Observability allows for comprehensive monitoring, ensuring that all applications receive attention.
  • The Pitfalls of DIY Monitoring: Building custom monitoring solutions can slow down development and lead to lower quality applications. Automated observability solutions are recommended to maintain development speed and application performance.

In the past, monitoring tools might have focused more on certain applications because of limited resources. This could lead to an uneven emphasis. Observability changes this by allowing for equal monitoring of all applications. This ensures that no application is neglected and that performance issues are dealt with across the entire system. This balanced approach is essential for providing a good user experience.

Finally, the idea of creating a custom DIY monitoring system might seem appealing, but it comes with its own set of problems. Building your own system can take away resources from your main development work, which might lower the quality of your applications. Instead, it’s better to use automated observability solutions. They help keep your development on track and ensure your applications are performing well, all while saving you the hassle of managing a monitoring system yourself.

By understanding these aspects of observability and monitoring, you can avoid common mistakes and adopt practices that improve your system’s performance and reliability. Good observability means having a full view of your system, solving problems before they happen, and working together across different teams. With the right tools and approaches, you can make sure your applications are running perfectly and providing a great experience for your users.

IBM unveils next-generation cloud data storage platform for AI and beyond

IBM has introduced its new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet the demands of data-intensive and artificial intelligence (AI) workloads. This new system is a part of the IBM Storage for Data and AI portfolio, a collection of advanced storage solutions designed to support the increasing demands of modern data environments.

IBM’s reputation as a leader in the field of distributed file systems and object storage has been recognized by Gartner, a leading research and advisory company. For the seventh consecutive year, IBM has been named a leader in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage. This recognition highlights IBM’s commitment to providing high-performance storage solutions that are optimized for today’s data-driven world.

IBM Storage Scale System 6000

The IBM Storage Scale System 6000 is a high-performance system, providing up to 7 million IOPS and up to 256 GB/s of throughput for read-only workloads per system in a 4U footprint. The system is designed to unify data from multiple sources in near real time, optimizing performance for GPU workloads. It is particularly well-suited for storing semi-structured and unstructured data, including video, imagery, text, and instrumentation data.
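
To put those headline numbers in perspective, here is a rough back-of-envelope calculation of how long a full scan of a large dataset would take at the quoted read throughput; the 1 PB dataset size is an arbitrary example, not a benchmark result.

```python
# Back-of-envelope: time to stream a dataset at the quoted 256 GB/s read rate.
throughput_gb_per_s = 256
dataset_tb = 1000                      # 1 PB expressed in TB (example size)

seconds = dataset_tb * 1000 / throughput_gb_per_s
print(f"Scanning {dataset_tb} TB at {throughput_gb_per_s} GB/s "
      f"takes ~{seconds / 60:.0f} minutes")
```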

In terms of future developments, the IBM Storage Scale System 6000 is set to incorporate IBM FlashCore Modules (FCM) in the first half of 2024. IBM says this addition will improve capacity efficiency, with 70% lower cost and 53% less energy per terabyte. The system also features powerful inline hardware-accelerated data compression and encryption to help keep data secure.

“With our current Storage Scale Systems 3500, we are helping decrease time to discovery and increase research productivity for a growing variety of scientific disciplines. For AI research involving medical image analysis, we have decreased latency of access by as much as 60% compared to our previous storage infrastructure. For genomics and complex fluid dynamics workloads, we have increased throughput by as much as 70%,” said Jake Carroll, Chief Technology Officer, Research Computing Centre, The University of Queensland, Australia. “We get all the benefits of a high-speed parallel file system inside our supercomputing resources with the data management transparency and global data access that the IBM Storage Scale software provides.”

Carroll added, “IBM’s Storage Scale System 6000 should be a gamechanger for us. With the specs that I’ve seen, by doubling the performance and increasing the efficiency, we would be able to ask our scientific research questions with higher throughput, but with a lower TCO and lower power consumption per IOP, in the process.”

The IBM Storage Scale System 6000 is designed to be flexible, supporting a range of multi-vendor storage options including AWS, Azure, IBM Cloud, and other public clouds, in addition to IBM Storage Tape. This compatibility with a diverse range of storage options makes it a versatile solution for various data storage needs.

When compared with its competitors, the IBM Storage Scale System 6000 offers impressive performance. The system provides faster access to data with over 2.5 times the GB/s throughput and 2 times the IOPs performance of market-leading competitors. This makes it a powerful tool for organizations that require fast, efficient access to large volumes of data.

The IBM Storage Scale System 6000 is already being used in practical applications. For instance, the University of Queensland has utilized the IBM Storage Scale global data platform and IBM Storage Scale System for research in applied AI for neurodegenerative diseases and vaccine technologies. This showcases the system’s capacity to support complex, data-intensive research projects.

The Storage Scale System 6000 also integrates with NVIDIA technology, supporting NVIDIA Magnum IO GPUDirect Storage (GDS). GDS provides a direct path between GPU memory and storage, and is designed to speed up data-movement I/O when enabled.
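
For a sense of what GDS looks like from application code, the sketch below uses the RAPIDS kvikio library to read a file straight into GPU memory. The file path is a placeholder, the API reflects recent kvikio releases, and this is our own sketch rather than anything tied to the Storage Scale announcement.

```python
# Illustrative GPUDirect Storage read with RAPIDS kvikio: the file's bytes land
# directly in GPU memory, bypassing a CPU bounce buffer when GDS is available.
# The path is a placeholder; kvikio falls back to a POSIX read without GDS.
import cupy as cp
import kvikio

num_bytes = 64 * 1024 * 1024                      # 64 MiB example read
gpu_buffer = cp.empty(num_bytes, dtype=cp.uint8)  # destination in GPU memory

with kvikio.CuFile("/data/sample.bin", "r") as f:
    bytes_read = f.read(gpu_buffer)

print(f"read {bytes_read} bytes into GPU memory")
```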

The IBM Storage Scale System 6000 is a powerful, flexible, and high-performance storage solution designed to meet the demands of data-intensive and AI workloads. With its impressive performance, capacity for future expansion, and compatibility with a range of storage options, this system is well-positioned to support the data needs of modern organizations.

IBM Quantum System One PINQ² quantum computer

The inauguration of an IBM Quantum System One in Quebec by the Platform for Digital and Quantum Innovation of Quebec (PINQ²) has marked a significant milestone in the field of information technology and innovation. This development has not only strengthened Quebec’s and Canada’s position in the rapidly advancing field of quantum computing but also opened new prospects for the technological future of the province and the country.

The establishment of the IBM Quantum System One in Quebec by PINQ², a non-profit organization founded by the Ministry of Economy, Innovation and Energy of Quebec and the Université de Sherbrooke, in collaboration with IBM, is a significant step forward. The quantum computer is housed at IBM Bromont, making PINQ² the only administrator to operate an IBM Quantum System One in Canada. This also positions Quebec as the only place outside of the United States engaged in an IBM Discovery Accelerator.

IBM Quantum System One

The new quantum computer will promote the growth of Quebec’s quantum sciences ecosystem and the development of DistriQ innovation zones in Sherbrooke and Technum Québec in Bromont. These innovation zones will have access to cutting-edge technology, fostering a conducive environment for technological advancements and research.

PINQ² has also set up a high-performance computing centre (HPC) at the Humano District in Sherbrooke. This will enable PINQ² to offer a hybrid computing approach, providing businesses with a unique opportunity to access a full range of hybrid quantum computing services. This approach combines the best of classical and quantum computing, offering a more efficient and powerful computing solution.
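
To illustrate what "hybrid" means in practice, here is a toy variational loop: a classical optimizer repeatedly adjusts parameters while a stubbed quantum evaluation returns a cost. The cost function stands in for a real call to quantum hardware and is purely illustrative, not part of PINQ²'s offering.

```python
# Toy hybrid quantum-classical loop: a classical optimizer steers parameters,
# and each evaluation would normally be executed on a quantum backend.
# Here the quantum step is stubbed with a simple analytic cost for illustration.
import math
from scipy.optimize import minimize

def quantum_cost(params):
    theta, phi = params
    # Placeholder for an expectation value measured on quantum hardware.
    return math.cos(theta) ** 2 + 0.5 * math.sin(phi)

result = minimize(quantum_cost, x0=[0.1, 0.1], method="COBYLA")
print("optimal parameters:", result.x, "cost:", result.fun)
```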

In a bid to explore quantum computing solutions for sustainability challenges, PINQ² and IBM will lead a world-class quantum working group. This group will be supported by founding members Hydro-Québec and the Université de Sherbrooke, further strengthening the collaboration between academia and industry in the field of quantum computing.

To accelerate the adoption of quantum technologies, PINQ² is establishing a Centre of Excellence. This centre will give businesses and researchers access to its infrastructure, fostering a community dedicated to quantum software. The Centre of Excellence aims to make quantum software easier to use and create, foster dynamic collaboration, and set industry benchmarks in software engineering.

PINQ² is also working with a network of Canadian academic partners such as IVADO, Université de Sherbrooke, University of Saskatchewan, Quantum Algorithms Institute and Concordia University. This collaboration aims to train quantum talent and foster collaborative projects, further strengthening Canada’s position in the field of quantum computing.

In addition to these initiatives, PINQ² is creating a multidisciplinary team through the Centre of Excellence in Quantum Hybrid Software Engineering. This team will accelerate the development of quantum business solutions, contributing to the technological revolution and offering valuable services for businesses.

The IBM Quantum System One is the first integrated quantum system with a compact design optimized for stability, reliability and continuous use. It has been deployed in Germany, Japan, the United States and now Canada, marking the creation of the world’s largest commercial quantum research infrastructure.

The inauguration of the IBM Quantum System One by PINQ² in Quebec is a significant development that strengthens Quebec’s and Canada’s position in the field of quantum computing. It offers numerous opportunities for businesses, researchers, and the innovation zones of DistriQ and Technum Québec. Through its various initiatives and partnerships, PINQ² is playing a crucial role in the technological revolution, fostering the growth of the quantum sciences ecosystem in Quebec and Canada.

Source: IBM
