Complete the Evaluate Gen AI model and agent performance skill badge to demonstrate your ability to use the Gen AI evaluation service. You will evaluate models to select the best model for a given task, compare models against each other, and evaluate the performance of agents. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete the assessment challenge lab to receive a skill badge that you can share with your network. When you complete this course, you can earn the badge displayed here and claim it on Credly! Boost your cloud career by showing the world the skills you have developed!
Evaluation is important at every step of your gen AI development process. In this course, you will learn how to evaluate gen AI agents built using agent frameworks.
This course delves into the complexities of assessing the quality of large language model outputs. It examines the challenges enterprises face due to the subjective and sometimes incorrect nature of LLM responses, including hallucinations and inconsistent results. The course introduces various evaluation metrics for different tasks like classification, text generation, and question answering, such as Accuracy, Precision, Recall, F1 score, ROUGE, BLEU, and Exact Match. It also explores evaluation methods offered by Vertex AI LLM Evaluation Services, including computation-based, autorater, and human evaluation, providing insights into their application and benefits. Finally, the course covers how to unit test LLM applications within Vertex AI.
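For a concrete sense of the computation-based metrics listed above, here is a minimal Python sketch of Exact Match and token-level F1 for question-answering outputs. It is only an illustration, not the Vertex AI evaluation service itself, and the text normalization is deliberately simplistic.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """Return 1.0 if the normalized strings match exactly, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))  # 1.0
print(token_f1("the capital is Paris", "Paris is the capital of France"))
```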
This course equips machine learning practitioners with the essential tools, techniques, and best practices for evaluating both generative and predictive AI models. Model evaluation is a critical discipline for ensuring that ML systems deliver reliable, accurate, and high-performing results in production. Participants will gain a deep understanding of various evaluation metrics, methodologies, and their appropriate application across different model types and tasks. The course will emphasize the unique challenges posed by generative AI models and provide strategies for tackling them effectively. By leveraging Google Cloud's Vertex AI platform, participants will learn how to implement robust evaluation processes for model selection, optimization, and continuous monitoring.
This lab tests your ability to develop a real-world Generative AI Q&A solution using a RAG framework. You will use Firestore as a vector database and deploy a Flask app as a user interface to query a food safety knowledge base.
Learn how to create hybrid search applications using Vertex AI Vector Search, combining semantic search with keyword search to return results based on both semantic meaning and keyword matching.
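As one illustration of how ranked results from a keyword index and a semantic (vector) index can be blended, the sketch below applies reciprocal rank fusion, a common fusion heuristic. The document IDs and result lists are hypothetical, and this is not the Vertex AI product API.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse multiple ranked lists of document IDs into one, using RRF scoring."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from a keyword index and a vector (semantic) index.
keyword_results = ["doc_3", "doc_1", "doc_7"]
semantic_results = ["doc_1", "doc_5", "doc_3"]
print(reciprocal_rank_fusion([keyword_results, semantic_results]))
```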
Learn how to build your own Retrieval-Augmented Generation (RAG) solutions for greater control and flexibility than out-of-the-box implementations. Create a custom RAG solution using Vertex AI APIs, vector stores, and the LangChain framework.
This course explores a Retrieval Augmented Generation (RAG) solution in BigQuery to mitigate AI hallucinations. It introduces a RAG workflow that encompasses creating embeddings, searching a vector space, and generating improved answers. The course explains the conceptual reasons behind these steps and their practical implementation with BigQuery. By the end of the course, learners will be able to build a RAG pipeline using BigQuery and generative AI models like Gemini and embedding models to address their own AI hallucination use cases.
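A minimal sketch of the retrieval step of such a pipeline is shown below, assuming hypothetical dataset, table, and model names and the default output columns of ML.GENERATE_EMBEDDING; exact syntax can vary by BigQuery release. The retrieved passages would then be supplied as grounding context to a generation call (for example, with a Gemini model) to produce the improved answer.

```python
from google.cloud import bigquery

client = bigquery.Client()
question = "How long can cooked rice be safely stored?"

# Hypothetical dataset, table, and model names; column names assume the
# defaults produced by ML.GENERATE_EMBEDDING in recent BigQuery releases.
sql = """
SELECT base.content AS passage, distance
FROM VECTOR_SEARCH(
  TABLE `my_dataset.doc_embeddings`, 'embedding',
  (SELECT ml_generate_embedding_result AS embedding
   FROM ML.GENERATE_EMBEDDING(
     MODEL `my_dataset.embedding_model`,
     (SELECT @question AS content))),
  top_k => 5)
ORDER BY distance
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("question", "STRING", question)]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.passage, row.distance)
```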
Complete the Edit images with Imagen skill badge to demonstrate your skills with Imagen's mask modes and editing modes to edit images according to certain prompts. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete the assessment challenge lab to receive a skill badge that you can share with your network. When you complete this course, you can earn the badge displayed here and claim it on Credly! Boost your cloud career by showing the world the skills you have developed!
Generate engaging media with Google's foundation models for media. Create new images with Imagen, or edit your existing photos by adding details or outpainting to create a wider view. Replace backgrounds to put your products in new scenes. And learn the basics of generating videos with Veo!
Complete the Develop solutions using Model Garden APIs skill badge to demonstrate your ability to use Vertex AI Model Garden features when building gen AI solutions. You will use partner APIs such as Anthropic Claude and Meta Llama, deploy and programmatically access foundation models like Gemma and Stable Diffusion XL, and access Vertex AI endpoints. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete the assessment challenge lab to receive a skill badge that you can share with your network. When you complete this course, you can earn the badge displayed here and claim it on Credly! Boost your cloud career by showing the world the skills you have developed!
Model tuning is an effective way to customize large models to your tasks. It's a key step to improve the model's quality and efficiency. Model tuning provides benefits such as higher quality results for your specific tasks and increased model robustness. You learn some of the tuning options available in Vertex AI and when to use them.
Model Garden is a model library that helps you discover, test, and deploy models from Google and Google partners. Learn how to explore the available models and select the right ones for your use case, and how to deploy and interact with Model Garden models through the Google Cloud console and APIs.
Complete the Extend Gemini with controlled generation and Tool use skill badge to demonstrate your proficiency in connecting models to external tools and APIs. This allows models to augment their knowledge, extend their capabilities, and interact with external systems to take actions such as sending an email. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete the assessment challenge lab to receive a skill badge that you can share with your network. When you complete this course, you can earn the badge displayed here and claim it on Credly! Boost your cloud career by showing the world the skills you have developed!
An LLM-based application can process language in a way that resembles thought. But if you want to extend its capabilities to take actions by running other functions you have coded, you will need to use function calling. This can also be referred to as tool use. Additionally, you can give a model the ability to search Google or search a data store of documents to ground its responses. In other words, to base its answers on that information. In this course, you’ll explore these concepts.
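As a hedged illustration of function calling with the Vertex AI SDK for Python, the sketch below declares one tool and inspects the function call the model requests. The project ID, model name, and weather function are assumptions for the example, not part of the course materials.

```python
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

# Describe a function the model is allowed to request; the application itself
# is responsible for actually executing it and returning the result.
get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)

model = GenerativeModel(
    "gemini-1.5-pro",  # assumed model name
    tools=[Tool(function_declarations=[get_weather])],
)

response = model.generate_content("What's the weather in Zurich right now?")
part = response.candidates[0].content.parts[0]
if part.function_call:
    # The model asks the application to call the tool with these arguments.
    print(part.function_call.name, dict(part.function_call.args))
```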
Learn a variety of strategies and techniques to engineer effective prompts for generative models.
Learn how to leverage Gemini's multimodal capabilities to process and generate text, images, and audio, and how to integrate Gemini through APIs to perform tasks such as content creation and summarization.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
In this course, you’ll learn to use the Google Agent Development Kit to build complex, multi-agent systems. You will build agents equipped with tools, and connect them with parent-child relationships and flows to define how they interact. You’ll run your agents locally and deploy them to Vertex AI Agent Engine to run as a managed agentic flow, with infrastructure decisions and resource scaling handled by Agent Engine. Please note that these labs are based on a pre-release version of this product; they may lag slightly behind it as we provide maintenance updates.
This course demonstrates how to use AI/ML models for generative AI tasks in BigQuery. Through a practical use case involving customer relationship management, you learn the workflow of solving a business problem with Gemini models. To facilitate comprehension, the course also provides step-by-step guidance through coding solutions using both SQL queries and Python notebooks.
This course explores Gemini in BigQuery, a suite of AI-driven features that assist the data-to-AI workflow. These features include data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, a practical use case, and hands-on labs, the course empowers data practitioners to boost their productivity and expedite the development pipeline.
Complete the introductory Build a Data Mesh with Dataplex skill badge course to demonstrate your skills in the following: building a data mesh with Dataplex to facilitate data governance, discovery, and security on Google Cloud. You will practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models using BigQuery ML.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
Complete the intermediate Prepare Data for ML APIs on Google Cloud skill badge course to demonstrate your skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, the Google Cloud Speech-to-Text API, and the Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, earned after testing your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
In this course, you learn about data engineering on Google Cloud, the roles and responsibilities of data engineers, and how those map to offerings provided by Google Cloud. You also learn about ways to address data engineering challenges.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to be diving deeper on developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and how to iteratively develop pipelines using Beam notebooks.
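To make the windowing idea concrete, here is a small self-contained Beam (Python SDK) sketch that assigns hypothetical click events to 60-second fixed windows and counts them per user. Real streaming pipelines would instead read from an unbounded source such as Pub/Sub, with watermarks and triggers controlling when results are emitted.

```python
import apache_beam as beam
from apache_beam.transforms import window

# Hypothetical (event_timestamp_seconds, user_id) click events.
events = [(0, "alice"), (30, "bob"), (70, "alice"), (95, "alice")]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(events)
        # Attach the event timestamp, then key each event by user with count 1.
        | "Stamp" >> beam.Map(lambda e: window.TimestampedValue((e[1], 1), e[0]))
        | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```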
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, the course introduces Notebooks and BigQuery Machine Learning (BigQuery ML). The course also explains how to productionize machine learning solutions using Vertex AI.
Processing streaming data is becoming increasingly popular, as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is introduced as a tool for handling incoming streaming data. The course also explains how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud using QwikLabs.
This course explores Google Cloud technologies to create and generate embeddings. Embeddings are numerical representations of text, images, video, and audio, and play a pivotal role in many tasks that involve identifying similar items, like Google searches, online shopping recommendations, and personalized music suggestions. Specifically, you’ll use embeddings for tasks like classification, outlier detection, clustering, and semantic search. You’ll combine semantic search with the text generation capabilities of an LLM to build Retrieval Augmented Generation (RAG) systems and question-answering solutions on your own proprietary data using Google Cloud’s Vertex AI.
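As a rough sketch of semantic search with embeddings, the example below embeds a small corpus and a query with the Vertex AI SDK and ranks documents by cosine similarity. The project ID, embedding model name, documents, and query are assumptions for illustration. In a RAG system, the top-ranked documents would then be supplied to an LLM as grounding context for answer generation.

```python
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project
model = TextEmbeddingModel.from_pretrained("text-embedding-004")  # assumed model name

documents = [
    "Refunds are processed within 5 business days.",
    "Our headquarters are located in Zurich.",
    "Premium members get free shipping on all orders.",
]
query = "How long does a refund take?"

doc_vecs = np.array([e.values for e in model.get_embeddings(documents)])
query_vec = np.array(model.get_embeddings([query])[0].values)

# Cosine similarity between the query and each document embedding.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(documents[int(np.argmax(scores))])
```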
Demonstrate your ability to implement updated prompt engineering techniques and utilize several of Gemini's key capabilities, including multimodal understanding and function calling. Then integrate generative AI into a RAG application deployed to Cloud Run. This course contains labs that are to be used as a test environment; they are deployed to test your understanding as a learner within a limited scope. These technologies can be used with fewer limitations in a real-world environment.
This course, Integrate Vertex AI Search and Conversation into Voice and Chat Apps, is composed of a set of labs that give you hands-on experience interacting with new generative AI technologies. You will learn how to create end-to-end search and conversational experiences by following examples. These technologies complement predefined intent-based chat experiences created in Dialogflow with LLM-based, generative answers that can be grounded in your own data. They also allow you to provide enterprise-grade search experiences for internal and external websites to search documents, structured data, and public websites.
In this course, you'll use text embeddings for tasks like classification, outlier detection, text clustering and semantic search. You'll combine semantic search with the text generation capabilities of an LLM to build Retrieval Augmented Generation (RAG) solutions, such as for question-answering systems, using Google Cloud's Vertex AI and Google Cloud databases.
Explore AI-powered search technologies, tools, and applications in this course. Learn semantic search using vector embeddings, hybrid search combining semantic and keyword approaches, and retrieval-augmented generation (RAG), which minimizes AI hallucinations by grounding AI agents. Gain practical experience with Vertex AI Vector Search to build your own intelligent search engine.
This course will help ML Engineers, Developers, and Data Scientists implement Large Language Models for Generative AI use cases with Vertex AI. The first two modules of this course contain links to videos and prerequisite course materials that will build your knowledge foundation in Generative AI. Please do not skip these modules. The advanced modules in this course assume you have completed these earlier modules.
This course teaches you how to create an image captioning model using deep learning. You will learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models take inspiration from physics, specifically thermodynamics. In recent years, diffusion models have become popular in both research and production, and they underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You will learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You will also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
This course gives you a synopsis of the encoder-decoder architecture, which is a powerful and widely used machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You will learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you will implement a simple encoder-decoder architecture in TensorFlow from scratch for poetry generation.
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works and how it can be used to improve the performance of many machine learning tasks, such as machine translation, text summarization, and question answering.
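As a rough illustration of the mechanism (not part of the course materials), here is a minimal NumPy sketch of scaled dot-product attention, the building block used in Transformers; the array shapes and values are arbitrary.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each output row is a weighted mix of V rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # each row sums to 1: how much each input position is attended to
```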
Earn a skill badge by passing the final quiz to demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
Text Prompt Engineering Techniques introduces you to different strategic approaches and techniques to deploy when writing prompts for text-based generative AI tasks.
This course introduces Generative AI Studio, a product on Vertex AI that helps you prototype and customize generative AI models so you can use their capabilities in your applications. In this course, you learn what Generative AI Studio is, its features and options, and how to use it by walking through demos of the product. At the end, you will complete a hands-on lab to apply what you have learned and a quiz to test your knowledge.
Complete the introductory Prompt Design in Vertex AI skill badge to demonstrate skills in the following: prompt engineering, image analysis, and multimodal generative techniques within Vertex AI. Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios.
As the use of artificial intelligence and machine learning in enterprises continues to grow, so does the importance of building it responsibly. Many find that talking about responsible AI can be easier than putting it into practice. If you want to learn how to operationalize responsible AI in your organization, this course is for you. In this course, you will learn how Google Cloud does this today, along with best practices and lessons learned, to serve as a framework for building your own responsible AI approach.
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it is important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI Principles.
A Business Leader in Generative AI can articulate the capabilities of core cloud Generative AI products and services and understand how they benefit organizations. This course provides an overview of the types of opportunities and challenges that companies often encounter in their digital transformation journey and how they can leverage Google Cloud's generative AI products to overcome these challenges.
Data pipelines typically fall under one of the EL (Extract and Load), ELT (Extract, Load, Transform), or ETL (Extract, Transform, Load) paradigms. This course describes which paradigm should be used, and when, for batch data. It also covers several technologies on Google Cloud for data transformation, including BigQuery, running Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
The two key components of any data pipeline are data lakes and data warehouses. This course highlights use cases for each type of storage and dives into the technical details of the data lake and data warehouse solutions available on Google Cloud. It also describes the role of a data engineer, the benefits of a successful data pipeline for business operations, and examines why data engineering should be done in a cloud environment. This is the first course in the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.