In an effort to position itself as one of the leading platforms for custom generative AI models, AWS has announced the launch of Custom Model Import within Bedrock, its suite of enterprise-focused GenAI services.
As the name suggests, Custom Model Import lets organizations import and access their own generative AI models as fully managed APIs, giving them the same infrastructure and tooling available to the models already hosted in Bedrock.
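Since an imported model is served behind Bedrock's standard runtime API, it can be called like any other hosted model. The sketch below is a hedged illustration using boto3's `bedrock-runtime` client; the model ARN and the Llama-style request body are placeholders for illustration, as the exact payload format depends on the imported model's architecture.

```python
# Hedged sketch: invoking a model imported via Custom Model Import.
# Imported models are addressed by the ARN Bedrock assigns them,
# not by a public model ID. The ARN below is a placeholder.
import json

MODEL_ARN = "arn:aws:bedrock:us-east-1:111122223333:imported-model/example"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a minimal invoke_model request for a Llama-style imported model.

    The body schema (prompt / max_gen_len) is an assumption based on
    Llama-family conventions; other architectures expect different fields.
    """
    body = json.dumps({"prompt": prompt, "max_gen_len": max_tokens})
    return {
        "modelId": MODEL_ARN,
        "contentType": "application/json",
        "accept": "application/json",
        "body": body,
    }

# The actual call requires AWS credentials and a completed model import:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(**build_request("Summarize Bedrock."))
# print(json.loads(response["body"].read()))
```

Because the invocation path is identical to Bedrock's built-in models, existing monitoring and guardrail tooling applies without code changes.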
AWS sees the addition of custom model support as addressing a growing trend among enterprises to develop and refine their own in-house models.
AWS Bedrock now supports custom in-house models
Besides leveraging their own proprietary models, enterprises using Bedrock's Custom Model Import will be able to use the suite's other tools for knowledge expansion, fine-tuning, and safeguarding against issues such as bias. In theory, it should give customers the best of both worlds.
Users will also be able to monitor and filter outputs for undesirable content like hate speech or violence, as well as assess the performance of models across various criteria.
The service is now available in preview and supports three popular open model architectures: Flan-T5, Llama, and Mistral. AWS has committed to adding support for more architectures in the future.
In the same breath, AWS also revealed the general availability of Titan Image Generator and the launch of Titan Text Embeddings V2. AWS says these in-house-trained models deliver lower storage and compute costs alongside improved accuracy, making the Titan family more cost-effective for companies that lack the resources to develop their own models.
Meta Llama 3 foundation models have also arrived on Bedrock, with Cohere’s Command R and Command R+ models set to arrive soon.
On the whole, it's refreshing to see AWS commit to interoperability by supporting popular third-party and now in-house models on its platform, a move that stands to benefit both the company and its customers.