Manoj Kumar
Member since 2022
Gold League
40,420 points
Complete the introductory Build Real World AI Applications with Gemini and Imagen skill badge to demonstrate skills in the following: image recognition, natural language processing, and image generation using Google's powerful Gemini and Imagen models, as well as deploying applications on the Vertex AI platform.
Transform Your Work With Gen AI Apps is the fourth course of the Gen AI Leader learning path. This course introduces Google’s gen AI applications, such as Google Workspace with Gemini and NotebookLM. It guides you through concepts like grounding, retrieval-augmented generation, constructing effective prompts, and building automated workflows.
Gen AI: Navigate the Landscape is the third course of the Gen AI Leader learning path. Gen AI is changing how we work and interact with the world around us. But as a leader, how can you harness its power to drive real business outcomes? In this course, you explore the different layers of building gen AI solutions, Google Cloud’s offerings, and the factors to consider when selecting a solution.
Gen AI: Unlock Foundational Concepts is the second course of the Gen AI Leader learning path. In this course, you unlock the foundational concepts of generative AI by exploring the differences between AI, ML, and gen AI, and understanding how various data types enable generative AI to address business challenges. You also gain insights into Google Cloud strategies to address the limitations of foundation models and the key challenges for responsible and secure AI development and deployment.
Complete the introductory Prompt Design in Vertex AI skill badge to demonstrate skills in the following: prompt engineering, image analysis, and multimodal generative techniques within Vertex AI. Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios.
Google Cloud: Prompt Engineering Guide examines generative AI tools and how they work. We'll explore how to combine Google Cloud knowledge with prompt engineering to improve Gemini responses.
Gen AI: Beyond the Chatbot is the first course of the Gen AI Leader learning path and has no prerequisites. This course aims to move beyond the basic understanding of chatbots to explore the true potential of generative AI for your organization. You explore concepts like foundation models and prompt engineering, which are crucial for leveraging the power of gen AI. The course also guides you through important considerations you should make when developing a successful gen AI strategy for your organization.
Gen AI Agents: Transform Your Organization is the fifth and final course of the Gen AI Leader learning path. This course explores how organizations can use custom gen AI agents to help tackle specific business challenges. You gain hands-on practice building a basic gen AI agent, while exploring the components of these agents, such as models, reasoning loops, and tools.
In this course, we’ll show you how organizations are aligning their BI strategy to most effectively achieve business outcomes with Looker. We'll follow four iterative steps, Plan, Build, Launch, and Grow, and provide resources to take into your own services delivery as you implement Looker with the goal of achieving business outcomes.
By the end of this course, you should be able to articulate Looker's value propositions and what makes it different from other analytics tools in the market. You should also be able to explain how Looker works and describe the standard components of successful service delivery.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
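To make the Beam-and-Dataflow relationship concrete, here is a minimal sketch of a Beam pipeline in Python. The runner, project ID, and bucket paths are placeholder assumptions for illustration, not part of the course materials; the same pipeline code can run locally or be submitted to Dataflow.

```python
# Minimal Apache Beam pipeline sketch (project, region, and bucket names are placeholders).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",             # use "DirectRunner" to test locally
    project="my-project-id",             # placeholder project
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # placeholder bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.txt")
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, n: f"{word}: {n}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output")
    )
```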
An LLM-based application can process language in a way that resembles thought. But if you want to extend its capabilities to take actions by running other functions you have coded, you will need to use function calling. This can also be referred to as tool use. Additionally, you can give a model the ability to search Google or search a data store of documents to ground its responses. In other words, to base its answers on that information. In this course, you’ll explore these concepts.
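As a sketch of what function calling looks like in practice, here is an assumed example using the Vertex AI Python SDK with a Gemini model; the function name, parameter schema, project ID, and model version are illustrative placeholders rather than anything prescribed by the course.

```python
# Sketch of function calling ("tool use") with the Vertex AI Python SDK.
# Project ID, model name, and the weather function are hypothetical.
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

vertexai.init(project="my-project-id", location="us-central1")

get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Look up the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
)

model = GenerativeModel(
    "gemini-1.5-pro",
    tools=[Tool(function_declarations=[get_weather])],
)

response = model.generate_content("What's the weather like in Paris right now?")

# Instead of answering directly, the model can return a structured function call;
# your application runs the real function and sends the result back to the model.
part = response.candidates[0].content.parts[0]
if part.function_call:
    print(part.function_call.name, dict(part.function_call.args))
```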
Welcome to Intro to Data Lakes, where we discuss how to create a scalable and secure data lake on Google Cloud that allows enterprises to ingest, store, process, and analyze any type or volume of full fidelity data.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps administrators provision infrastructure. You learn how to prompt Gemini to explain infrastructure, deploy GKE clusters, and update existing infrastructure. Using a hands-on lab, you experience how Gemini improves the GKE deployment workflow. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps developers build applications. You learn how to prompt Gemini to explain code, recommend Google Cloud services, and generate code for your applications. Using a hands-on lab, you experience how Gemini improves the application development workflow. Duet AI was renamed to Gemini, our next-generation model.
This course on Integrate Vertex AI Search and Conversation into Voice and Chat Apps is composed of a set of labs that give you hands-on experience interacting with new generative AI technologies. You will learn how to create end-to-end search and conversational experiences by following examples. These technologies complement predefined intent-based chat experiences created in Dialogflow with LLM-based, generative answers that can be grounded in your own data. They also allow you to provide enterprise-grade search experiences for internal and external websites to search documents, structured data, and public websites.
Text Prompt Engineering Techniques introduces you to different strategic approaches and techniques to apply when writing prompts for text-based generative AI tasks.
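One such technique is few-shot prompting, where the prompt includes a handful of worked examples before the new input so the model imitates their format. A hedged illustration using the Vertex AI SDK (the model name and sample reviews are assumptions, and vertexai.init is assumed to have been called as in the earlier sketch):

```python
# Few-shot prompt sketch: the in-prompt examples steer the model toward the desired output format.
from vertexai.generative_models import GenerativeModel

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "Setup took five minutes and it just works."
Sentiment:"""

model = GenerativeModel("gemini-1.5-flash")  # placeholder model name
response = model.generate_content(few_shot_prompt)
print(response.text)  # expected to complete the pattern, e.g. "Positive"
```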
Welcome to Data Governance, where we discuss how to implement data governance on Google Cloud.
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. Completing this course is estimated to take approximately 45 minutes.
This course gives you a synopsis of the encoder-decoder architecture, a powerful and widely used machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you implement a simple encoder-decoder architecture in TensorFlow from scratch for poetry generation.
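For orientation, here is a minimal sketch of the kind of encoder-decoder model such a lab builds in TensorFlow; the vocabulary size, embedding dimension, and LSTM units are placeholder hyperparameters, and the actual lab code may differ.

```python
# Sketch of a simple LSTM encoder-decoder in Keras (hyperparameters are placeholders).
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, units = 5000, 128, 256

# Encoder: embed the input sequence and summarize it into a final state.
encoder_inputs = tf.keras.Input(shape=(None,))
enc_emb = layers.Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generate the output sequence conditioned on the encoder state.
decoder_inputs = tf.keras.Input(shape=(None,))
dec_emb = layers.Embedding(vocab_size, embed_dim)(decoder_inputs)
dec_out, _, _ = layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
outputs = layers.Dense(vocab_size, activation="softmax")(dec_out)

model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```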
This course introduces you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You learn how attention works and how it can be used to improve the performance of many machine learning tasks, such as machine translation, text summarization, and question answering.
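The core computation behind this mechanism, as typically presented, is scaled dot-product attention, in which queries are compared against keys to produce a weighted combination of the values:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```

Here Q, K, and V are the query, key, and value matrices derived from the input sequence, and d_k is the key dimension used to scale the dot products.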
This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to improve LLM performance. It also covers Google tools to help you develop your own gen AI apps.
This is an introductory-level microlearning course aimed at explaining what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own gen AI apps.
The two key components of any data pipeline are data lakes and data warehouses. In this course, we highlight the use cases for each type of storage and dive into the technical details of the data lake and data warehouse solutions available on Google Cloud. We also describe the role of a data engineer, discuss the benefits of a successful data pipeline for business operations, and examine why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.