Complete the Extend Gemini with controlled generation and Tool use skill badge to demonstrate your proficiency in connecting models to external tools and APIs. This allows models to augment their knowledge, extend their capabilities, and interact with external systems to take actions such as sending an email. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it tests your ability to apply your knowledge in an interactive, hands-on environment. Complete the assessment challenge lab to receive a skill badge that you can share with your network. When you complete this course, you can earn the badge displayed here and claim it on Credly! Boost your cloud career by showing the world the skills you have developed!
Learn a variety of strategies and techniques to engineer effective prompts for generative models.
Learn how to leverage Gemini's multimodal capabilities to process and generate text, images, and audio, and how to integrate Gemini through APIs to perform tasks such as content creation and summarization.
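For example, here is a minimal sketch of calling Gemini through the API for summarization and image understanding. It assumes the google-genai Python SDK with an API key in the environment; the model name and file path are illustrative.

```python
# Minimal sketch: text summarization and multimodal input with the
# google-genai SDK. Assumes GOOGLE_API_KEY is set; model name is illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Text task: summarization.
summary = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize this article in three bullet points: <article text>",
)
print(summary.text)

# Multimodal task: pass an image alongside a text instruction.
with open("photo.jpg", "rb") as f:  # illustrative local file
    image_bytes = f.read()

caption = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe this image in one sentence.",
    ],
)
print(caption.text)
```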
An LLM-based application can process language in a way that resembles thought. But if you want to extend its capabilities to take actions by running other functions you have coded, you will need to use function calling, also referred to as tool use. Additionally, you can give a model the ability to search Google or a data store of documents to ground its responses, that is, to base its answers on that information. In this course, you’ll explore these concepts.
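As a concrete sketch of both ideas, the snippet below registers a Python function as a tool and, separately, grounds a response in Google Search. It assumes the google-genai SDK; the weather function is a hypothetical stub and the model name is illustrative.

```python
# Function calling (tool use) and grounding with the google-genai SDK.
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key in the environment

def get_weather(city: str) -> str:
    """Hypothetical tool: a real app would call a weather API here."""
    return f"Sunny and 22C in {city}"

# The SDK can invoke the Python function automatically when the model requests it.
response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name
    contents="Should I bring an umbrella in Paris today?",
    config=types.GenerateContentConfig(tools=[get_weather]),
)
print(response.text)

# Grounding: let the model base its answer on Google Search results.
grounded = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What changed in the latest Gemini release?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())]
    ),
)
print(grounded.text)
```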
This course introduces AI Applications. You will learn about the types of apps that you can create using AI Applications, the high-level steps that its data stores automate for you, and what advanced features can be enabled for Search apps.
In this course, you’ll learn to use the Google Agent Development Kit to build complex, multi-agent systems. You will build agents equipped with tools, and connect them with parent-child relationships and flows to define how they interact. You’ll run your agents locally and deploy them to Vertex AI Agent Engine to run as a managed agentic flow, with infrastructure decisions and resource scaling handled by Agent Engine. Please note that these labs are based on a pre-release version of this product; there may be some lag on these labs as we provide maintenance updates.
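As a rough sketch of the parent-child pattern the course builds on, the snippet below defines a tool-equipped child agent and a coordinator that delegates to it. It assumes the google-adk Python package; agent names, the model string, and the stub tool are illustrative, and the API surface may differ in the pre-release version the labs use.

```python
# Minimal parent-child agent pair with the Agent Development Kit (ADK).
from google.adk.agents import Agent

def get_exchange_rate(base: str, target: str) -> float:
    """Hypothetical stub tool; a real agent would call a currency API."""
    return 1.08

currency_agent = Agent(
    name="currency_agent",
    model="gemini-2.0-flash",  # illustrative model name
    instruction="Answer currency questions using the get_exchange_rate tool.",
    tools=[get_exchange_rate],
)

# The parent routes currency questions to its child and answers the rest itself.
root_agent = Agent(
    name="coordinator",
    model="gemini-2.0-flash",
    instruction="Delegate currency questions to currency_agent.",
    sub_agents=[currency_agent],
)
```

You can run an agent like this locally with the `adk run` or `adk web` commands, then deploy the same definition to Vertex AI Agent Engine for managed scaling.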
Demonstrate your ability to implement updated prompt engineering techniques and utilize several of Gemini's key capabilities, including multimodal understanding and function calling. Then integrate generative AI into a RAG application deployed to Cloud Run. The labs in this course serve as a test environment and are deployed with a limited scope to assess your understanding as a learner; in a real-world environment, these technologies can be used with fewer limitations.
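The core RAG pattern the challenge lab builds toward looks roughly like this: retrieve passages relevant to the question, then instruct the model to answer only from them. The retriever below is a hypothetical placeholder (a real app would query a vector store), and the google-genai SDK and model name are assumptions.

```python
# Minimal RAG sketch: retrieve context, then generate a grounded answer.
from google import genai

client = genai.Client()  # assumes an API key in the environment

def retrieve(query: str) -> list[str]:
    """Hypothetical retriever; a real app would search a vector store."""
    return ["<passage 1 relevant to the query>", "<passage 2>"]

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # illustrative model name
        contents=prompt,
    )
    return response.text
```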
Text Prompt Engineering Techniques introduces different strategic approaches and techniques you can deploy when writing prompts for text-based generative AI tasks.
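One such technique is few-shot prompting: the prompt includes worked examples so the model infers the task and output format. A minimal sketch, assuming the google-genai SDK (the model name is illustrative):

```python
# Few-shot prompting: in-prompt examples steer the model's output format.
from google import genai

client = genai.Client()  # assumes an API key in the environment

few_shot_prompt = """Classify the sentiment of each review as POSITIVE or NEGATIVE.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: POSITIVE

Review: It stopped charging after a week.
Sentiment: NEGATIVE

Review: Setup took five minutes and it just works.
Sentiment:"""

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name
    contents=few_shot_prompt,
)
print(response.text)  # expected: POSITIVE
```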
This course, Integrate Vertex AI Search and Conversation into Voice and Chat Apps, is composed of a set of labs that give you hands-on experience interacting with new generative AI technologies. You will learn how to create end-to-end search and conversational experiences by following examples. These technologies complement predefined intent-based chat experiences created in Dialogflow with LLM-based, generative answers that can be based on your own data. They also allow you to provide enterprise-grade search experiences for internal and external websites to search documents, structured data, and public websites.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Learners will get hands-on practice using Vertex AI Feature Store's streaming ingestion at the SDK layer.
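Streaming ingestion writes feature values to the online store as they arrive, so they are immediately available for serving. A minimal sketch, assuming the google-cloud-aiplatform SDK and a pre-existing featurestore and entity type (all resource names here are illustrative):

```python
# Streaming ingestion into Vertex AI Feature Store at the SDK layer.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # illustrative

# Reference an existing entity type in an existing featurestore.
entity_type = aiplatform.featurestore.EntityType(
    entity_type_name="users",           # illustrative entity type
    featurestore_id="my_featurestore",  # illustrative featurestore
)

# Write feature values for one entity directly to the online store.
entity_type.write_feature_values(
    instances={"user_123": {"age": 31, "country": "DE"}}
)
```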
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best-performing models.