
Mastering cloud economics in the era of AI adoption

The acceleration of artificial intelligence (AI) adoption has had significant implications for enterprise cloud economics. As businesses invest heavily in AI, they must also manage escalating cloud costs strategically to remain competitive. In this article, we look at the steps businesses can take to navigate the economic terrain of cloud computing.

Governance and process optimization

In cloud economics, the proliferation of costs poses a significant challenge due to the lack of effective governance. To address this, businesses must take proactive steps in establishing a robust governance framework for their AI services. This involves defining a predetermined set of services tailored to the organization’s specific needs, coupled with the creation of clear service level agreements (SLAs). These SLAs outline performance metrics, availability and support for each service, ensuring transparency and accountability in the utilization of AI resources.


UK government releases new cloud SCADA security guidance for OT

The UK National Cyber Security Center (NCSC) has released new guidance on securing supervisory control and data acquisition (SCADA) cloud environments for operational technology (OT).

UK critical national infrastructure (CNI) is highly dependent on SCADA for data collection and control, and because these environments are so important, they are at a higher risk of cyber attack.


‘What if the operating system is the problem’: Linux was never created for the cloud — so engineers developed DBOS, a new operating system that is part OS, part database

Michael Stonebraker has developed several influential database management systems over the years, including Ingres, PostgreSQL, and VoltDB. Matei Zaharia is the creator of Apache Spark and co-founder and CTO of Databricks. 

Working with a team from the Massachusetts Institute of Technology and Stanford University, the pair have created a revolutionary prototype operating system called DBOS – DataBase OS.


Want to store 1PB of data in the cloud? This startup can do it for you for as little as $10,000 a month — Qumulo says it can scale to Exabytes off premise and wants to eradicate tapes once and for all

Qumulo has launched Azure Native Qumulo Cold (ANQ Cold), which it claims is the first truly cloud-native, fully managed SaaS solution for storing and retrieving infrequently accessed “cold” file data.

Fully POSIX-compliant and positioned as a cloud alternative to on-premises tape storage, ANQ Cold can be used as a standalone file service or as a backup target for any file store, including legacy on-premises scale-out NAS, and it can be integrated into a hybrid storage infrastructure, enabling access to remote data as if it were local. It can also scale to an exabyte-level file system in a single namespace.


Inference: The future of AI in the cloud

Now that it’s 2024, we can’t overlook the profound impact that Artificial Intelligence (AI) is having on our operations across businesses and market sectors. Government research has found that one in six UK organizations has embraced at least one AI technology within its workflows, and that number is expected to grow through to 2040.

With increasing AI and Generative AI (GenAI) adoption, the future of how we interact with the web hinges on our ability to harness the power of inference. Inference happens when a trained AI model uses real-time data to predict or complete a task, testing its ability to apply the knowledge gained during training. It’s the AI model’s moment of truth to show how well it can apply information from what it has learned. Whether you work in healthcare, ecommerce or technology, the ability to tap into AI insights and achieve true personalization will be crucial to customer engagement and future business success.
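To make that idea concrete, here is a minimal sketch of inference in plain Python: applying parameters a model already "learned" in training to a new data point. The weights and inputs are hypothetical, not tied to any particular framework.

```python
# Minimal illustration of inference: applying parameters a model has
# already "learned" during training to new, unseen data.
# The weights and inputs below are hypothetical.

def predict(weights, bias, features):
    """Linear-model inference: weighted sum of the features plus a bias."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# Parameters produced by a (hypothetical) prior training run.
weights = [0.4, 1.5]
bias = 0.1

# The "moment of truth": score a new data point in real time.
score = predict(weights, bias, [2.0, 3.0])
print(round(score, 2))  # prints 5.4
```

Real systems swap the hand-written sum for a trained network and a serving runtime, but the shape of the operation, learned parameters applied to fresh inputs, is the same.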

Inference: the key to true personalization


Why you should make a cloud backup this March

March 30 is World Backup Day. No, you don’t get the day off. It’s an initiative backed by some of the providers we recommend in our cloud backup guide, like Mega and Backblaze, and even Amazon, asking individuals and organizations alike to make at least one backup of their precious data.

At TechRadar Pro we believe, and maybe you do too, reader, that any person or business refusing to admit the mortality of their external hard drives and SSDs is possibly (definitely) from another planet. Backblaze data from 2021 suggests that 21% of people have never made a backup.


IBM LinuxONE 4 affordable AI and hybrid cloud platform unveiled

IBM has recently introduced the LinuxONE 4 Express, a cutting-edge platform tailored to boost the computing prowess of small and medium-sized businesses. This innovative solution is set to elevate performance, fortify security measures, and expand artificial intelligence capabilities across a variety of data center settings.

The LinuxONE 4 Express, a pre-configured, rack-mounted system, is designed to cut costs and streamline the management of diverse workloads, from managing digital assets to enhancing medical imaging with AI, and consolidating multiple tasks into one system.

At the heart of the LinuxONE 4 Express lies the IBM Telum processor, which is celebrated for its high availability, energy efficiency, and robust security features. The system promises up to 99.999999% (eight nines) availability, positioning it as a reliable choice for businesses that cannot afford downtime. Additionally, it supports confidential computing, which is essential for safeguarding sensitive data.
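To put that availability figure in perspective, an availability percentage maps directly to the downtime it permits per year. A quick standalone calculation, using IBM's stated figure:

```python
# Translate an availability percentage into the downtime it permits
# per year. 99.999999% is IBM's stated figure for the LinuxONE 4 Express.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 (ignoring leap years)

def max_downtime_seconds(availability_pct):
    """Annual downtime budget implied by an availability percentage."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

print(round(max_downtime_seconds(99.999999), 2))    # about 0.32 seconds/year
print(round(max_downtime_seconds(99.9) / 3600, 2))  # about 8.76 hours/year
```

Eight nines allow well under a second of downtime per year, compared with almost nine hours for a typical "three nines" service.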

One of the key advantages of the LinuxONE 4 Express is its support for hybrid cloud strategies. It addresses the challenges of isolated stacks and seamlessly integrates AI into cloud environments. This makes it an ideal option for businesses looking to benefit from both private and public cloud services.

Educational institutions, such as University College London, are planning to deploy the LinuxONE 4 Express for computational research and digital scholarship. They aim to utilize its high performance and scalability for complex tasks like Next Generation Sequencing and AI-powered analysis of medical data.

The LinuxONE 4 Express shines in several applications:

  • It offers advanced security for digital assets, including confidential computing capabilities, to safeguard sensitive data.
  • The IBM Telum processor’s on-chip AI inferencing allows for real-time data analysis in medical imaging, which is vital for AI-assisted diagnostics.
  • In terms of workload consolidation, the system can lead to significant cost savings, with a potential 52% reduction in total cost of ownership over five years when compared to traditional x86 servers.
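The consolidation claim is easy to model as arithmetic. In the sketch below, the 52% reduction is IBM's figure, while the five-year x86 baseline cost is a purely hypothetical number for illustration:

```python
# Back-of-envelope TCO comparison. The 52% reduction is IBM's claim;
# the $1,000,000 five-year x86 baseline is a made-up illustrative figure.

def consolidated_tco(x86_tco, reduction_pct=52):
    """Five-year TCO after applying the claimed percentage reduction."""
    return x86_tco * (1 - reduction_pct / 100)

x86_five_year_tco = 1_000_000  # hypothetical baseline, USD
linuxone_tco = consolidated_tco(x86_five_year_tco)

print(f"Consolidated TCO: ${linuxone_tco:,.0f}")             # $480,000
print(f"Savings: ${x86_five_year_tco - linuxone_tco:,.0f}")  # $520,000
```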

IBM is not only delivering a powerful system but is also investing in the development of an ecosystem by partnering with companies like AquaSecurity and Clari5. These partnerships are focused on tackling sustainability and cybersecurity challenges, enhancing the system’s features, and ensuring it meets the evolving needs of businesses.

Clients, including Saudi Business Machines (SBM), have praised the LinuxONE 4 Express for its superior performance and user-friendliness compared to x86 servers. Such endorsements underscore the system’s ability to transform business computing.

The IBM LinuxONE 4 Express will be available for purchase starting February 20, 2024, with prices beginning at $135,000. IBM is also planning to host an educational webinar on the release date to offer deeper insights into the system and discuss current industry trends. This event is a chance for businesses to learn more about the LinuxONE 4 Express and understand how it can support their computing requirements.

Filed Under: Hardware, Top News

Latest timeswonderful Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.


Build and publish AI apps and models on the cloud for free

If you would like to build AI apps and models in the cloud, you might be interested in a new workflow and system that offers a free package, allowing you to test its features before parting with your hard-earned cash. BentoML is a tool that’s making waves by making it easier for developers to take their AI models from the drawing board to real-world use. This framework is well suited to those looking to deploy a wide variety of AI applications, such as language processing and image recognition, without getting bogged down in the technicalities.

BentoML stands out because it’s designed to be efficient. It helps you move your AI models to a live environment quickly. The framework is built to handle heavy workloads, perform at high speeds, and keep costs down. It supports many different models and frameworks, which means you won’t have to worry about whether your AI application will be compatible with it.

One of the most significant advantages of BentoML is its cloud service. This service takes care of all the technical maintenance for you. It’s especially useful when you need to scale up your AI applications to handle more work. The cloud service adjusts to the workload, so you don’t have to manage the technical infrastructure yourself.

Building AI Apps and Models

Another key feature of BentoML is its support for serverless GPU computing. This is a big deal for AI applications that require a lot of computing power. With serverless GPUs, you can scale up your computing tasks without overspending. This ensures that your applications run smoothly and efficiently, even when they’re doing complex tasks.

BentoML’s cloud service can handle many different types of AI models. Whether you’re working with text, images, or speech, or even combining different types of data, BentoML has you covered. This flexibility is crucial for deploying AI applications across various industries and use cases.

The interface of BentoML is another highlight. It’s designed to be user-friendly, so you can deploy your AI models without a hassle. You can choose from different deployment options to fit your specific needs. The cloud service also includes monitoring tools, which let you keep an eye on how much CPU and memory your applications are using. This helps you make sure that your applications are running as efficiently as possible.

BentoML is an open-source framework, which means that anyone can look at the source code and contribute to its development. There’s also a lot of documentation available to help you get started and troubleshoot any issues you might run into. Currently, access to BentoML’s cloud version is limited to a waitlist, but those who support the project on Patreon get some extra benefits. This limited access ensures that users get the support and resources they need to make the most of their AI applications.

For those who need something more tailored, BentoML is flexible enough to be customized for specific projects. This means you can tweak the framework to meet the unique demands of your AI applications, ensuring they’re not just up and running but also optimized for your particular needs.

Things to consider when building AI apps

Creating and publishing AI applications and models to the cloud involves several steps, from designing and training your model to deploying and scaling it in a cloud environment. Here are some areas to consider when building your AI app or model.

1. Design and Development

Understanding Requirements:

  • Objective: Define the purpose of your AI application. Is it for data analysis, predictive modeling, image processing, or another use case?
  • Data: Determine the data you need. Consider its availability, quality, and the preprocessing steps required.

Model Selection and Training:

  • Algorithm Selection: Choose an appropriate machine learning or deep learning algorithm based on your application’s requirements.
  • Training: Train your model using your dataset. This step may require significant computational resources, especially for large datasets or complex models.

Validation and Testing:

  • Test your model to ensure it meets your accuracy and performance requirements. Consider using a separate validation dataset to prevent overfitting.
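The holdout idea can be sketched in a few lines of framework-free Python (a deterministic split for illustration; real pipelines usually shuffle the data or use a library helper such as scikit-learn's `train_test_split`):

```python
# Hold out part of the data so the model is validated on examples it
# never saw during training. Deterministic split for illustration only.

def train_validation_split(data, validation_fraction=0.2):
    """Split a dataset: the last validation_fraction of it is held out."""
    cutoff = round(len(data) * (1 - validation_fraction))
    return data[:cutoff], data[cutoff:]

samples = list(range(10))  # stand-in for real (features, label) pairs
train, validation = train_validation_split(samples)
print(len(train), len(validation))  # prints: 8 2
```

Scoring the model only on the held-out portion is what reveals overfitting: a model that memorized the training set will do noticeably worse on data it never saw.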

2. Preparing for Deployment

Optimization for Production:

  • Model Optimization: Optimize your model for better performance and efficiency. Techniques like quantization, pruning, and model simplification can be helpful.
  • Containerization: Use containerization tools like Docker to bundle your application, dependencies, and environment. This ensures consistency across different deployment environments.
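Of these techniques, quantization is the simplest to illustrate: weights stored as 32-bit floats are mapped onto a small integer range, shrinking the model at a modest cost in precision. A toy sketch of symmetric 8-bit quantization, with made-up weights and no framework dependencies:

```python
# Toy symmetric 8-bit quantization: map float weights onto the integer
# range [-127, 127], shrinking storage roughly 4x versus 32-bit floats
# at a small cost in precision. Weights are made-up illustrative values.

def quantize(weights):
    """Return integer weight codes plus the scale needed to restore them."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

weights = [0.81, -0.24, 0.05, -1.27]
codes, scale = quantize(weights)
print(codes)  # [81, -24, 5, -127]
print([round(w, 2) for w in dequantize(codes, scale)])  # close to the originals
```

Production frameworks add calibration, per-channel scales, and hardware-aware formats, but the core float-to-integer mapping is the same.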

Selecting a Cloud Provider:

  • Evaluate cloud providers (e.g., AWS, Google Cloud, Azure) based on the services they offer, such as managed machine learning services, scalability, cost, and geographic availability.

3. Cloud Deployment

Infrastructure Setup:

  • Compute Resources: Choose between CPUs, GPUs, or TPUs based on your model’s requirements.
  • Storage: Decide on the type of storage needed for your data, considering factors like access speed, scalability, and cost.

Cloud Services and Tools:

  • Managed Services: Leverage managed services for machine learning model deployment, such as AWS SageMaker, Google AI Platform, or Azure Machine Learning.
  • CI/CD Integration: Integrate continuous integration and continuous deployment pipelines to automate testing and deployment processes.

Scaling and Management:

  • Auto-scaling: Configure auto-scaling to adjust the compute resources automatically based on the load.
  • Monitoring and Logging: Implement monitoring and logging to track the application’s performance and troubleshoot issues.
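The auto-scaling rule described above boils down to a simple control loop. This framework-free sketch uses illustrative thresholds; real autoscalers in AWS, Azure, or Kubernetes expose comparable settings:

```python
# Toy auto-scaling rule: add a replica when average utilization is high,
# remove one when it is low, and stay within configured bounds.
# Thresholds and limits are illustrative, not from any specific cloud.

def scale_decision(replicas, avg_utilization,
                   scale_up_at=0.8, scale_down_at=0.3,
                   min_replicas=1, max_replicas=10):
    if avg_utilization > scale_up_at and replicas < max_replicas:
        return replicas + 1  # under load: scale out
    if avg_utilization < scale_down_at and replicas > min_replicas:
        return replicas - 1  # idle: scale in and save cost
    return replicas          # within the comfort band: hold steady

print(scale_decision(3, 0.92))  # 4
print(scale_decision(3, 0.10))  # 2
print(scale_decision(3, 0.55))  # 3
```

The gap between the two thresholds prevents "flapping", where the system would otherwise scale up and down on every small fluctuation.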

4. Security and Compliance

Data Privacy and Security:

  • Ensure your application complies with data privacy regulations (e.g., GDPR, HIPAA). Implement security measures to protect data and model integrity.

Access Control:

  • Use identity and access management (IAM) services to control access to your AI application and data securely.

5. Maintenance and Optimization

Continuous Monitoring:

  • Regularly monitor your application for any performance issues or anomalies. Use cloud monitoring tools to get insights into usage patterns and potential bottlenecks.

Updating and Iteration:

  • Continuously improve and update your AI model and application based on user feedback and new data.

Cost Management:

  • Keep an eye on cloud resource usage and costs. Use cost management tools provided by cloud providers to optimize spending.

Considerations

  • Performance vs. Cost: Balancing the performance of your AI applications with the cost of cloud resources is crucial. Opt for the right mix of compute options and managed services.
  • Latency: For real-time applications, consider the latency introduced by cloud deployment. Select cloud regions close to your users to minimize latency.
  • Scalability: Plan for scalability from the start. Cloud environments make it easier to scale, but efficient scaling requires thoughtful architecture and resource management.

BentoML is proving to be an indispensable tool for anyone looking to deploy AI applications in the cloud. Its ability to support rapid deployment, handle scalability, and cater to a wide range of AI model types makes it a valuable asset. The user-friendly interface and robust monitoring tools add to its appeal. Whether you’re a seasoned AI expert or just starting out, BentoML provides the infrastructure and flexibility needed to bring your AI models into the spotlight of technological progress.

Filed Under: Guides, Top News


How to accelerate ML with AI Cloud Infrastructure

The digital environment has never been as demanding as it is now. Ever-increasing competition creates a need for new solutions and tools that raise efficiency and maximize the output of the enterprises involved.

Machine learning (ML) is one of the core features of modern business. Although it was introduced a long time ago, it is only now unleashing its true potential, optimizing the workflow of every company that implements it.

With all the beneficial features machine learning offers today, there is still plenty of room for improvement. A recent development in the digital sphere is the powerful combination of machine learning and AI cloud services. The Gcore AI Cloud Infrastructure exemplifies this trend, offering a robust platform that elevates machine learning capabilities to new heights. What can you expect from such a merger, and how do you implement it? Let’s find out.

What Is Machine Learning?

Machine learning (ML) is a subcategory of artificial intelligence that aims to imitate the behavioral and mental patterns of humans. Gcore says ML algorithms learn from massive volumes of historical data using statistical models, which lets them make predictions, cluster data, generate new content, automate routine jobs, and more, all without explicit programming.
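A toy example of learning from data rather than explicit programming: fitting a line to historical observations by least squares, then using it to predict an unseen value. The data points are made up for illustration:

```python
# "Learning" from historical data with no hand-coded rules: fit a line
# y = a*x + b by least squares, then predict an unseen value.
# The data points are made up for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of a straight line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]  # hypothetical historical observations
a, b = fit_line(xs, ys)
print(a * 5 + b)  # predicts 10.0 for the unseen input x = 5
```

No rule saying "output is double the input" was ever written; the parameters come entirely from the data, which is the essence of ML at any scale.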

What Is AI Cloud Infrastructure?

Cloud computing has started a new era in the delivery of computing services. It introduced a new layer of convenience, as the users can reach the services, storage, databases, software, and analytics through the cloud (the Internet), without the need to build an on-premise hardware infrastructure.

According to Google, cloud computing is typically represented in three forms: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).

Cloud computing alone is one of the cornerstones of a sustainable digital presence; however, its beneficial nature has been improved by introducing AI tools.

When AI and cloud computing are merged, each amplifies the other. Cloud computing provides the resources and infrastructure to train AI models and deploy them in the cloud, while AI automates routine and complex tasks in the cloud, optimizing overall system performance.

The Benefits of AI Cloud Computing

  1. Maximized efficiency – AI algorithms automate numerous system processes, improving efficiency and reducing downtime.
  2. Improved security – AI is trained to detect data breaches and system malfunctions, preventing potential threats. It can also analyze user behavior patterns, spot anomalies, and block potentially dangerous traffic.
  3. Predictive analytics – AI analytics provides valuable insights into user behavior, current trends, and demand. Such data lets organizations make informed and timely decisions about service updates and optimization.
  4. Personalization – AI algorithms can fully personalize the user’s journey, improving the user experience and raising customer satisfaction.
  5. Scalability – with AI, cloud systems can scale resources up or down according to the number of applications, data variability, locations, and more.
  6. Cost reduction – with timely AI-driven insights, companies can optimize the use of inventory and financial resources, preventing over- or under-stocking.

Benefits of Machine Learning in AI Cloud Infrastructure

AI Cloud Infrastructure enhances the capabilities of machine learning. After the algorithms are built, the models are deployed into the cloud computing clusters. The main benefits are the following:

  • No need for large upfront investment – businesses can opt for on-demand pricing models to implement machine learning algorithms.
  • Businesses can scale production and services according to demand, growing their machine learning capabilities, and can experiment with a variety of algorithms without investing in hardware.
  • The AI cloud environment lets businesses access machine learning capabilities without advanced skills in data science or artificial intelligence.
  • The AI cloud environment provides access to powerful GPUs without additional hardware investment.

How to speed up ML with the help of AI Cloud Infrastructure?

Choose the cloud platform

Machine learning capabilities can only be fully unleashed with the right platform. There are numerous providers of cloud services, each one promising specific services, features for ML, and pricing policies.

Among the most recognized platforms are Google Cloud AI Platform, Amazon SageMaker, Microsoft Azure Machine Learning, IBM Watson Studio, and the AI IPU Cloud Infrastructure by Gcore.

When comparing platforms, it is important to check the key features and aspects: security, scalability, pre-built models, libraries, integration opportunities, flexibility, customization, and pricing options.

Exploit GPUs and TPUs

The main benefit of cloud services is the ability to use powerful hardware to accelerate machine learning without building on-premises infrastructure.

GPUs (graphics processing units) and TPUs (tensor processing units) are two types of devices that can process large amounts of data and complex operations much faster than CPUs (central processing units). This time efficiency reduces the time and cost of building algorithms and training models.

Optimize model architecture and hyperparameters

The model architecture refers to the model’s structure and design; the hyperparameters are the settings, fixed before training, that control how the model learns. Tuning the two together improves both the accuracy and the efficiency of the model.

The right cloud service helps speed up this optimization process.
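One common way to tune hyperparameters is a grid search: try every combination of candidate settings and keep the one that scores best on validation data. In this framework-free sketch, the `score` function is a stand-in for actually training and validating a model:

```python
import itertools

# Toy grid search over two hyperparameters. In a real pipeline, score()
# would train a model with the given settings and return its validation
# accuracy; here it is a stand-in with a known best point at (0.01, 3).

def score(learning_rate, num_layers):
    return -abs(learning_rate - 0.01) - 0.001 * abs(num_layers - 3)

grid = {"learning_rate": [0.1, 0.01, 0.001], "num_layers": [2, 3, 4]}

best = max(itertools.product(*grid.values()), key=lambda combo: score(*combo))
print(dict(zip(grid, best)))  # {'learning_rate': 0.01, 'num_layers': 3}
```

Because every combination trains independently, this is exactly the kind of work a cloud platform parallelizes well: nine training runs here, but potentially thousands in practice, all dispatched at once.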

Introduce cloud-based model serving and monitoring

Model serving makes a trained model available for applications to use, while model monitoring keeps track of its performance in production.

AI cloud services speed up model deployment, improve its functioning, and provide insights into its performance.

Final Thoughts

Machine learning alone is an efficient way to improve the performance of any business. Combined with AI cloud services and infrastructure, it becomes an essential tool for streamlining workloads and maximizing efficiency, thereby increasing ROI, profits, and the overall functioning of the system.

Filed Under: Guides


New Arduino Cloud Editor offers a classic Arduino IDE experience

The world of Arduino development has just been given a significant boost with the introduction of a new online platform known as the Arduino Cloud Editor. This platform is changing the way developers work with Arduino projects by providing a seamless experience that can be accessed from anywhere with an internet connection. Unlike the traditional Arduino Integrated Development Environment (IDE), the Cloud Editor is a major update aimed at improving your efficiency and the way you manage your projects.

The Cloud Editor brings together the previously separate simple and full editors into one robust online tool. This means you no longer have to switch between different versions, making your work more streamlined. Even though this new tool is available, you can still use the existing IDE if you prefer, which makes the transition to the cloud smoother.

One of the key improvements in the Cloud Editor is the way it handles libraries and examples. If you’re working on complex projects that require a variety of resources, you’ll find it easier to find and use what you need. This is a big step forward in making project management more intuitive.

Debugging is an essential part of development, and the Cloud Editor’s serial monitor has been upgraded to help you with this. It now has features like the ability to download logs and add timestamps, giving you more detailed information to solve problems and refine your projects.

Maker Plan: Grab your first month free on the Maker plan with the code “MAKER2024” at checkout (choose the monthly Maker plan; you can cancel anytime). But hurry, this offer is only valid until January 31st! Connect devices, visualize data, and control your projects from anywhere in the world. Choose a device you want to connect, and Arduino Cloud will take care of all the code necessary for setting things up. Gather real-time and historical data from your devices in one place, whether you are working with a simple project or hundreds of variables.

Another important aspect of the Cloud Editor is that it works with all devices compatible with the Arduino IDE. This means you can use it on any device that has a browser, which gives you a lot of flexibility in how and where you work on your projects.

Keeping your projects organized is easier with the Cloud Editor thanks to a centralized sketch repository. This includes a sketchbook feature that lets you create folders to keep your workspace tidy and your projects easy to manage.

The Cloud Editor also makes a distinction between standard “Sketches” and “IoT Sketches” for projects that connect with the Arduino Cloud. While it doesn’t support Chromebooks right away, there are plans to add this feature, which will make the platform even more accessible.

The Arduino Cloud itself is a comprehensive platform for IoT projects. It works with a wide range of devices and programming languages, and you can access it from your browser or mobile device. This means you can always get to your projects, no matter where you are.

If you’re new to the Cloud Editor, there’s plenty of documentation to help you get started. There’s also a promotional offer for the Maker plan, which gives new users a chance to try out the Cloud Editor at a special rate.

The launch of the Arduino Cloud Editor is a big moment for developers who use Arduino. With its wide range of features, better debugging tools, and compatibility with many devices, the Cloud Editor is likely to become a key tool for developers around the world. Whether you’re working on personal projects or complex IoT solutions, the Cloud Editor is designed to support your work and improve your experience with Arduino.

Filed Under: DIY Projects, Top News
