
Centralize model governance with SageMaker Model Registry Resource Access Manager sharing

AWS Machine Learning

Customers can use the SageMaker Studio UI or APIs to specify the SageMaker Model Registry model to be shared and grant access to specific AWS accounts or to everyone in the organization. This sharing capability also helps achieve data, project, and team isolation while supporting software development lifecycle best practices.
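As a rough illustration of what granting cross-account access involves, the sketch below builds a resource-based policy allowing another account to read a model package group. The account IDs, group name, and ARN are placeholders, and the exact policy the Model Registry sharing flow generates may differ.

```python
# Hedged sketch: a resource-based policy granting a consumer AWS account
# read access to a shared model package group. All identifiers below are
# illustrative placeholders, not values from the post.
import json

def build_sharing_policy(group_arn: str, consumer_account: str) -> str:
    """Return a JSON policy letting the consumer account describe and
    list model packages in the shared group."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "ShareModelPackageGroup",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{consumer_account}:root"},
            "Action": [
                "sagemaker:DescribeModelPackageGroup",
                "sagemaker:DescribeModelPackage",
                "sagemaker:ListModelPackages",
            ],
            "Resource": group_arn,
        }],
    }
    return json.dumps(policy)

policy_json = build_sharing_policy(
    "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/my-models",
    "444455556666",
)
# Such a policy could then be attached with, e.g.,
# boto3.client("sagemaker").put_model_package_group_policy(
#     ModelPackageGroupName="my-models", ResourcePolicy=policy_json)
```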


Enterprise-grade natural language to SQL generation using LLMs: Balancing accuracy, latency, and scale

AWS Machine Learning

This work builds on the post Generating value from enterprise data: Best practices for Text2SQL and generative AI. The solution uses the data domain to construct prompt inputs for the generative LLM. Scoping the data domain for focused prompt construction follows a divide-and-conquer pattern.
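A minimal sketch of that divide-and-conquer idea: first scope the question to one data domain, then build a prompt containing only that domain's schema so the LLM sees a focused context. The domain names, keyword routing, and schemas below are illustrative assumptions, not the post's actual implementation.

```python
# Illustrative divide-and-conquer Text2SQL prompt construction.
# Domains, schemas, and the keyword-based router are hypothetical.
DOMAIN_SCHEMAS = {
    "sales": "CREATE TABLE orders (order_id INT, customer_id INT, total DECIMAL);",
    "hr": "CREATE TABLE employees (emp_id INT, name TEXT, dept TEXT);",
}
DOMAIN_KEYWORDS = {
    "sales": {"order", "orders", "revenue", "total"},
    "hr": {"employee", "employees", "dept", "salary"},
}

def scope_domain(question: str) -> str:
    """Pick the domain whose keywords overlap the question the most."""
    words = set(question.lower().split())
    return max(DOMAIN_KEYWORDS, key=lambda d: len(DOMAIN_KEYWORDS[d] & words))

def build_prompt(question: str) -> str:
    """Compose a Text2SQL prompt limited to the scoped domain's schema."""
    domain = scope_domain(question)
    return (f"Given the schema:\n{DOMAIN_SCHEMAS[domain]}\n"
            f"Write a SQL query answering: {question}")

prompt = build_prompt("How many orders did we ship last month?")
```

Keeping only the relevant schema in the prompt is what lets this pattern balance accuracy against latency and token cost as the number of tables grows.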


Trending Sources


Track LLM model evaluation using Amazon SageMaker managed MLflow and FMEval

AWS Machine Learning

Model runner: composes the input, invokes your model, and extracts the output. Thanks to this construct, you can evaluate any LLM by configuring the model runner according to your model. The managed MLflow tracking server functions as a standalone HTTP server that provides various REST API endpoints for monitoring, recording, and visualizing experiment runs.
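The compose/invoke/extract pattern can be sketched as a small standalone class. This is not fmeval's actual `ModelRunner` interface, which may differ in details; the injected `invoke` callable and the fake endpoint stand in for a deployed LLM.

```python
# Hedged sketch of the "model runner" construct: compose the input
# payload, invoke the model, and extract the text output. Not fmeval's
# real API; the fake endpoint below substitutes for a deployed LLM.
import json
from typing import Callable

class SimpleModelRunner:
    def __init__(self, invoke: Callable[[str], str],
                 content_template: str = '{{"prompt": "{prompt}"}}',
                 output_key: str = "generated_text"):
        self._invoke = invoke
        self._template = content_template
        self._output_key = output_key

    def predict(self, prompt: str) -> str:
        payload = self._template.format(prompt=prompt)  # compose input
        raw = self._invoke(payload)                     # invoke the model
        return json.loads(raw)[self._output_key]        # extract output

def fake_endpoint(payload: str) -> str:
    """Stand-in for an LLM endpoint: echoes the prompt back."""
    prompt = json.loads(payload)["prompt"]
    return json.dumps({"generated_text": f"echo: {prompt}"})

runner = SimpleModelRunner(fake_endpoint)
answer = runner.predict("What is MLflow?")
```

Because the endpoint call is injected, the same runner shape can wrap any model by swapping the payload template, the invoke function, and the output key.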


GraphStorm 0.3: Scalable, multi-task learning on graphs with user-friendly APIs

AWS Machine Learning

Specifically, GraphStorm 0.3 adds new APIs to customize GraphStorm pipelines: you now need only 12 lines of code to implement a custom node classification training loop. To help you get started with the new API, we have published two Jupyter notebook examples: one for node classification and one for a link prediction task.
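To make concrete what a node classification training loop involves, here is a toy sketch in plain Python: neighbor features are mean-pooled (GCN-style) and a logistic classifier is trained on the pooled features. This is not GraphStorm's API; GraphStorm's new APIs handle the data loading, distribution, and evaluation that let a real loop shrink to about a dozen lines.

```python
# Toy node classification loop (illustrative only, not GraphStorm code):
# 4-node path graph, 2 features per node, binary labels.
import math

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
feats = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.1, 0.9], 3: [0.0, 1.0]}
labels = {0: 0, 1: 0, 2: 1, 3: 1}

def aggregate(n):
    """Mean-pool a node's features with its neighbors' (GCN-style)."""
    group = [feats[n]] + [feats[m] for m in adj[n]]
    return [sum(col) / len(group) for col in zip(*group)]

w, b, lr = [0.0, 0.0], 0.0, 0.5      # logistic-regression parameters
for epoch in range(200):             # simple gradient-descent loop
    for n in adj:
        h = aggregate(n)
        z = sum(wi * xi for wi, xi in zip(w, h)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - labels[n]            # gradient of log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, h)]
        b -= lr * g

def predict(n):
    z = sum(wi * xi for wi, xi in zip(w, aggregate(n))) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(n) == labels[n] for n in adj) / len(adj)
```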


Best practices and design patterns for building machine learning workflows with Amazon SageMaker Pipelines

AWS Machine Learning

In this post, we provide best practices to maximize the value of SageMaker Pipelines and make the development experience seamless, discussing practices that can be followed while designing workflows with SageMaker Pipelines.


Enhancing LLM Capabilities with NeMo Guardrails on Amazon SageMaker JumpStart

AWS Machine Learning

Integrating Llama 3.1: the Llama 3.1 model API exposed by SageMaker JumpStart must be secured properly. Some links to security best practices are shared below, but we strongly recommend reaching out to your account team for detailed guidance and to discuss the appropriate security architecture needed for a secure and compliant deployment.


Model customization, RAG, or both: A case study with Amazon Nova

AWS Machine Learning

In this post, we seek to address this growing need by offering clear, actionable guidelines and best practices on when to use each approach, helping you make informed decisions that align with your unique requirements and objectives. The following diagram illustrates the solution architecture.
