
Pixtral-12B-2409 is now available on Amazon Bedrock Marketplace

AWS Machine Learning

Pixtral 12B, Mistral's inaugural VLM, delivers robust performance across a range of benchmarks, surpassing other open models and rivaling larger counterparts, according to Mistral's evaluation. Pixtral 12B is trained to understand both natural images and documents, achieving 52.5%
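Once the model is deployed from Bedrock Marketplace, requests can be sent through the Bedrock Converse API. The sketch below is a minimal, hedged example assuming a deployed endpoint; the endpoint ARN, Region, and image file are placeholders, and the exact invocation pattern for a given Marketplace model may differ.

```python
import boto3

# Hypothetical sketch: invoke a Pixtral 12B endpoint deployed from Bedrock Marketplace.
# The endpoint ARN and image path are placeholders, not values from the article.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

with open("invoice.png", "rb") as f:
    image_bytes = f.read()

response = bedrock_runtime.converse(
    modelId="arn:aws:sagemaker:us-west-2:111122223333:endpoint/pixtral-12b-endpoint",
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize the key fields in this document."},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```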

Benchmark 109

GraphStorm 0.3: Scalable, multi-task learning on graphs with user-friendly APIs

AWS Machine Learning

Based on customer feedback for the experimental APIs we released in GraphStorm 0.2, GraphStorm 0.3 introduces refactored graph ML pipeline APIs. Specifically, it adds new APIs to customize GraphStorm pipelines: you now only need 12 lines of code to implement a custom node classification training loop.
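For orientation, the sketch below is not GraphStorm's own API; it is a plain DGL/PyTorch node classification training loop, shown only to illustrate the steps (model, loss, optimizer, epochs) that GraphStorm 0.3's pipeline APIs wrap. The graph, features, and labels are synthetic placeholders.

```python
import dgl
import torch
import torch.nn.functional as F
from dgl.nn import GraphConv

# Toy graph with random features and labels; stands in for a real partitioned graph.
g = dgl.add_self_loop(dgl.rand_graph(100, 500))
features = torch.randn(100, 16)
labels = torch.randint(0, 3, (100,))

class GCN(torch.nn.Module):
    def __init__(self, in_feats, hidden, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)
        self.conv2 = GraphConv(hidden, num_classes)

    def forward(self, graph, x):
        return self.conv2(graph, F.relu(self.conv1(graph, x)))

model = GCN(16, 32, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Minimal node classification training loop.
for epoch in range(20):
    logits = model(g, features)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```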

APIs 117



Benchmark and optimize endpoint deployment in Amazon SageMaker JumpStart 

AWS Machine Learning

This post explores these relationships via a comprehensive benchmarking of LLMs available in Amazon SageMaker JumpStart, including Llama 2, Falcon, and Mistral variants. We provide theoretical principles on how accelerator specifications impact LLM benchmarking. Additionally, models are fully sharded on the supported instance.
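A hedged sketch of the kind of measurement such a benchmark involves: timing repeated invocations of a deployed JumpStart endpoint with boto3. The endpoint name and payload schema below are placeholders; real payload formats vary by model.

```python
import json
import time
import boto3

# Hypothetical sketch: measure end-to-end latency of a deployed JumpStart LLM endpoint.
smr = boto3.client("sagemaker-runtime")
payload = {
    "inputs": "Explain Amazon SageMaker JumpStart in one sentence.",
    "parameters": {"max_new_tokens": 128},
}

latencies = []
for _ in range(10):
    start = time.perf_counter()
    smr.invoke_endpoint(
        EndpointName="jumpstart-llama-2-7b-endpoint",  # placeholder name
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    latencies.append(time.perf_counter() - start)

print(f"p50 latency: {sorted(latencies)[len(latencies) // 2]:.2f}s")
```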

Benchmark 128

Evaluate RAG responses with Amazon Bedrock, LlamaIndex and RAGAS

AWS Machine Learning

Here are some examples of these metrics. For the retrieval component, context precision evaluates whether all of the ground-truth relevant items present in the retrieved contexts are ranked near the top. We can also use a foundation model as a judge to compute various metrics for both retrieval and generation.
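To make the metric concrete, here is an illustrative (non-RAGAS) computation of context precision as a rank-weighted precision over retrieved chunks, assuming binary relevance flags have already been produced, for example by an LLM judge.

```python
def context_precision(relevance):
    """Rank-weighted precision over retrieved contexts.

    `relevance` is a list of 0/1 flags, one per retrieved chunk in rank order,
    marking whether the chunk is relevant to the ground truth.
    Illustrative only; the RAGAS metric derives these flags with an LLM judge.
    """
    if not any(relevance):
        return 0.0
    score, hits = 0.0, 0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / k  # precision@k, counted only at relevant ranks
    return score / sum(relevance)

# Relevant chunks ranked 1st and 3rd out of 4 retrieved.
print(context_precision([1, 0, 1, 0]))  # ~0.83
```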

Metrics 118

Your guide to generative AI and ML at AWS re:Invent 2024

AWS Machine Learning

As attendees circulate through the GAIZ, subject matter experts and Generative AI Innovation Center strategists will be on hand to share insights, answer questions, present customer stories from an extensive catalog of reference demos, and provide personalized guidance for moving generative AI applications into production.

APIs 107

Reduce conversational AI response time through inference at the edge with AWS Local Zones

AWS Machine Learning

The device further processes this response, including text-to-speech (TTS) conversion for voice agents, before presenting it to the user. Local Zones enable applications that require very low latency or local data processing, using familiar APIs and tool sets.
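As a rough illustration of the latency argument, the sketch below times requests against two hypothetical inference endpoints, one in a Local Zone and one in the parent Region. Both URLs and the payload are placeholders, not infrastructure from the article.

```python
import statistics
import time
import requests

# Hypothetical sketch: compare round-trip latency to an inference endpoint hosted
# in an AWS Local Zone vs. one in the parent Region.
ENDPOINTS = {
    "local-zone": "https://inference.lax-local-zone.example.com/generate",
    "parent-region": "https://inference.us-west-2.example.com/generate",
}
payload = {"prompt": "What are today's store hours?", "max_tokens": 64}

for name, url in ENDPOINTS.items():
    samples = []
    for _ in range(5):
        start = time.perf_counter()
        requests.post(url, json=payload, timeout=10)
        samples.append((time.perf_counter() - start) * 1000)
    print(f"{name}: median {statistics.median(samples):.0f} ms")
```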

APIs 83

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 2

AWS Machine Learning

An alternative approach to routing is to use the native tool use capability (also known as function calling) available within the Bedrock Converse API. In this scenario, each category or data source would be defined as a ‘tool’ within the API, enabling the model to select and use these tools as needed.
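A hedged sketch of this routing pattern with boto3: each data source is declared as a tool in the Converse request, and the model's toolUse block indicates which source to query. The tool names, schemas, and model ID here are illustrative placeholders.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Each candidate data source is exposed as a tool the model can select.
tools = [
    {
        "toolSpec": {
            "name": "search_product_docs",
            "description": "Retrieve passages from the product documentation index.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            }},
        }
    },
    {
        "toolSpec": {
            "name": "query_sales_db",
            "description": "Run an aggregate query against the structured sales database.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            }},
        }
    },
]

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "How many units did we sell in Q3?"}]}],
    toolConfig={"tools": tools},
)

# The model signals its routing choice with a toolUse block in the response content.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["name"], block["toolUse"]["input"])
```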

APIs 117