Manoj Kumar
Member since 2022
Gold League
40420 points
Complete the introductory Build Real World AI Applications with Gemini and Imagen skill badge to demonstrate skills in the following: image recognition, natural language processing, image generation using Google's powerful Gemini and Imagen models, and deploying applications on the Vertex AI platform.
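As a hedged illustration of the image-generation skill the badge covers (not material from the badge itself), here is a minimal sketch using the Vertex AI SDK for Python; the project ID, location, Imagen model version, prompt, and output filename are placeholder assumptions.

```python
# Minimal sketch: generating an image with Imagen on Vertex AI.
# The project, location, model version, and prompt below are placeholders.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # version may differ
images = model.generate_images(
    prompt="A watercolor painting of a lighthouse at sunrise",
    number_of_images=1,
)
images[0].save(location="lighthouse.png")  # write the first generated image to disk
```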
Transform Your Work With Gen AI Apps is the fourth course of the Gen AI Leader learning path. This course introduces Google’s gen AI applications, such as Google Workspace with Gemini and NotebookLM. It guides you through concepts like grounding, retrieval-augmented generation (RAG), constructing effective prompts, and building automated workflows.
Gen AI: Navigate the Landscape is the third course of the Gen AI Leader learning path. Gen AI is changing how we work and interact with the world around us. But as a leader, how can you harness its power to drive real business outcomes? In this course, you explore the different layers of building gen AI solutions, Google Cloud’s offerings, and the factors to consider when selecting a solution.
Gen AI: Unlock Foundational Concepts is the second course of the Gen AI Leader learning path. In this course, you unlock the foundational concepts of generative AI by exploring the differences between AI, ML, and gen AI, and understanding how various data types enable generative AI to address business challenges. You also gain insights into Google Cloud strategies to address the limitations of foundation models and the key challenges for responsible and secure AI development and deployment.
Complete the introductory Prompt Design in Vertex AI skill badge to demonstrate skills in the following: prompt engineering, image analysis, and multimodal generative techniques within Vertex AI. Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios.
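To give a flavor of the prompt-plus-image analysis this badge covers, here is a minimal hedged sketch of a multimodal prompt sent to a Gemini model through the Vertex AI SDK for Python; the project ID, bucket URI, model name, and prompt text are illustrative assumptions.

```python
# Minimal sketch: a multimodal prompt (image + text) sent to a Gemini model
# on Vertex AI. Project, location, model name, and image URI are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.5-flash")  # model name may differ in your project
response = model.generate_content([
    Part.from_uri("gs://my-bucket/product-photo.jpg", mime_type="image/jpeg"),
    "Write a two-sentence marketing caption for this product photo.",
])
print(response.text)
```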
Google Cloud: Prompt Engineering Guide examines generative AI tools and how they work. We'll explore how to combine Google Cloud knowledge with prompt engineering to improve Gemini responses.
Gen AI: Beyond the Chatbot is the first course of the Gen AI Leader learning path and has no prerequisites. This course aims to move beyond the basic understanding of chatbots to explore the true potential of generative AI for your organization. You explore concepts like foundation models and prompt engineering, which are crucial for leveraging the power of gen AI. The course also guides you through important considerations you should make when developing a successful gen AI strategy for your organization.
Gen AI Agents: Transform Your Organization is the fifth and final course of the Gen AI Leader learning path. This course explores how organizations can use custom gen AI agents to help tackle specific business challenges. You gain hands-on practice building a basic gen AI agent, while exploring the components of these agents, such as models, reasoning loops, and tools.
In this course, we’ll show you how organizations are aligning their BI strategy to most effectively achieve business outcomes with Looker. We'll follow four iterative steps: Plan, Build, Launch, and Grow, and provide resources you can take into your own service delivery to help achieve business outcomes with Looker.
By the end of this course, you should be able to articulate Looker's value propositions and what makes it different from other analytics tools in the market. You should also be able to explain how Looker works and describe the standard components of successful service delivery.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
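To make the Beam-to-Dataflow relationship concrete, here is a minimal hedged sketch of an Apache Beam pipeline in Python (not a lab from the course); it runs locally on the DirectRunner, and the same code can target Dataflow by changing the runner and filling in real project/region options, which are left as placeholders here.

```python
# Minimal sketch: an Apache Beam pipeline that runs locally by default.
# To run on Dataflow, pass runner="DataflowRunner" plus project, region,
# and staging options (placeholders, not shown).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # add Dataflow options here when submitting to the service

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "gamma"])
        | "Lengths" >> beam.Map(lambda word: (word, len(word)))
        | "Print" >> beam.Map(print)
    )
```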
An LLM-based application can process language in a way that resembles thought. But if you want to extend its capabilities to take actions by running other functions you have coded, you will need to use function calling. This can also be referred to as tool use. Additionally, you can give a model the ability to search Google or search a data store of documents to ground its responses, that is, to base its answers on that information. In this course, you’ll explore these concepts.
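As a hedged illustration (not material from the course), here is a minimal sketch of function calling with the Vertex AI SDK for Python; the get_weather function, its schema, the project ID, and the model name are illustrative assumptions.

```python
# Minimal sketch: declaring a tool the model may call and inspecting the
# resulting function call. The get_weather schema and model name are
# illustrative assumptions.
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

get_weather = FunctionDeclaration(
    name="get_weather",
    description="Look up the current weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
weather_tool = Tool(function_declarations=[get_weather])

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "What's the weather like in Paris right now?",
    tools=[weather_tool],
)
# The model does not run the function itself; it returns a structured
# function call that your application executes, then sends the result back.
print(response.candidates[0].content.parts[0].function_call)
```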
Welcome to Intro to Data Lakes, where we discuss how to create a scalable and secure data lake on Google Cloud that allows enterprises to ingest, store, process, and analyze any type or volume of full fidelity data.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps administrators provision infrastructure. You learn how to prompt Gemini to explain infrastructure, deploy GKE clusters, and update existing infrastructure. Using a hands-on lab, you experience how Gemini improves the GKE deployment workflow. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps developers build applications. You learn how to prompt Gemini to explain code, recommend Google Cloud services, and generate code for your applications. Using a hands-on lab, you experience how Gemini improves the application development workflow. Duet AI was renamed to Gemini, our next-generation model.
This course on Integrate Vertex AI Search and Conversation into Voice and Chat Apps is composed of a set of labs that give you hands-on experience interacting with new generative AI technologies. You will learn how to create end-to-end search and conversational experiences by following examples. These technologies complement predefined intent-based chat experiences created in Dialogflow with LLM-based, generative answers that can be based on your own data. They also allow you to provide enterprise-grade search experiences for internal and external websites to search documents, structured data, and public websites.
Text Prompt Engineering Techniques introduces different strategic approaches and techniques to deploy when writing prompts for text-based generative AI tasks.
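One such technique is few-shot prompting, where the prompt includes worked examples before the new input. The following is a minimal hedged sketch, not course material; the example reviews, project ID, and model name are illustrative assumptions.

```python
# Minimal sketch: a few-shot prompt that shows the model the desired label
# format through examples before asking about a new input. The examples,
# project, and model name are illustrative assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The checkout flow was fast and painless."
Sentiment: Positive

Review: "Support never answered my ticket."
Sentiment: Negative

Review: "Setup took five minutes and everything just worked."
Sentiment:"""

model = GenerativeModel("gemini-1.5-flash")
print(model.generate_content(few_shot_prompt).text)
```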
Welcome to Data Governance, where we discuss how to implement data governance on Google Cloud.
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
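As a hedged illustration of one task mentioned above (text classification), here is a minimal sketch using the Hugging Face Transformers pipeline with a BERT-family checkpoint; the checkpoint name and example sentence are assumptions, not course material.

```python
# Minimal sketch: text classification with a fine-tuned BERT-family model
# via the Hugging Face Transformers pipeline. The checkpoint and example
# sentence are illustrative assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The keynote demo of the new model was genuinely impressive."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```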
This course gives you a synopsis of the encoder-decoder architecture, which is a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you’ll code a simple implementation of the encoder-decoder architecture in TensorFlow, from scratch, for poetry generation.
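As a structural sketch (not the course lab), here is a hedged minimal encoder-decoder definition in TensorFlow/Keras; the vocabulary size and layer dimensions are arbitrary assumptions, and no training data is included.

```python
# Minimal structural sketch of an encoder-decoder (seq2seq) model in Keras.
# Vocabulary size and dimensions are arbitrary assumptions; no training here.
from tensorflow.keras import Model, layers

vocab_size, embed_dim, units = 5000, 128, 256  # arbitrary placeholder sizes

# Encoder: embeds the source sequence and summarizes it into LSTM states.
encoder_inputs = layers.Input(shape=(None,), name="source_tokens")
enc_emb = layers.Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates the target sequence conditioned on the encoder states.
decoder_inputs = layers.Input(shape=(None,), name="target_tokens")
dec_emb = layers.Embedding(vocab_size, embed_dim)(decoder_inputs)
dec_out, _, _ = layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c]
)
outputs = layers.Dense(vocab_size, activation="softmax")(dec_out)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.summary()
```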
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering. This course is estimated to take approximately 45 minutes to complete.
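For intuition, here is a minimal hedged sketch of scaled dot-product attention in NumPy, the core operation the course describes; the matrix shapes and random inputs are arbitrary assumptions.

```python
# Minimal sketch of scaled dot-product attention in NumPy: each query attends
# to all keys, and the softmax weights mix the value vectors. Shapes are
# arbitrary assumptions.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V, weights                       # weighted sum of value vectors

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
context, attn = scaled_dot_product_attention(Q, K, V)
print(context.shape, attn.shape)  # (4, 8) (4, 6)
```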
This is an introductory-level micro-learning course that explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own gen AI apps.
This is an introductory-level micro-learning course aimed at explaining what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own gen AI apps.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.