Sebastian Brückner
Member since 2021
Silver League
1815 points
As the use of enterprise AI and machine learning continues to grow, so does the importance of building it responsibly. The challenge is that talking about responsible AI can be much easier than putting it into practice. If you're interested in learning how to operationalize responsible AI in your organization, this course is for you. It takes an in-depth look at how Google Cloud applies its approach to responsible AI and gives you a comprehensive framework for building your own responsible AI strategy.
Earn a skill badge by passing the final quiz. By passing the quiz, you'll demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
This course introduces Vertex AI Studio, a tool for interacting with generative AI models, prototyping business ideas, and putting them into production. Through real-world use cases, interactive lessons, and hands-on labs, you'll explore the lifecycle from the first prompt to the final product and learn how to leverage Vertex AI Studio for multimodal Gemini applications, prompt design, prompt engineering, and model tuning. The goal of this course is to enable you to apply generative AI to your own projects by using Vertex AI Studio.
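As a flavor of what working with Gemini models programmatically looks like outside the Studio UI, here is a minimal sketch using the Vertex AI Python SDK; the project ID, region, and model name are placeholders, and the course itself works primarily through the Vertex AI Studio console.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and region; replace with your own values.
vertexai.init(project="my-project-id", location="us-central1")

# The model name is an assumption; use whichever Gemini model is available to you.
model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Write a two-sentence product description for a reusable water bottle."
)
print(response.text)
```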
This course teaches you how to create an image captioning model by using deep learning. During the course, you'll learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of the course, you'll be able to build your own image captioning models and use them to generate captions for images.
This course gives you a synopsis of the encoder-decoder architecture, a powerful and widely used machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You'll learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow to generate poetry from scratch.
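To illustrate the idea, below is a minimal Keras sketch of an encoder-decoder model for sequence-to-sequence tasks; the vocabulary size and layer dimensions are arbitrary placeholders, and the lab's actual poetry-generation implementation may differ.

```python
import tensorflow as tf
from tensorflow import keras

VOCAB_SIZE = 5000   # placeholder vocabulary size
EMBED_DIM = 128
UNITS = 256

# Encoder: embed the source sequence and summarize it into a single state vector.
encoder_inputs = keras.Input(shape=(None,), name="source_tokens")
x = keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
_, encoder_state = keras.layers.GRU(UNITS, return_state=True)(x)

# Decoder: generate the target sequence, conditioned on the encoder's final state.
decoder_inputs = keras.Input(shape=(None,), name="target_tokens")
y = keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
y = keras.layers.GRU(UNITS, return_sequences=True)(y, initial_state=encoder_state)
outputs = keras.layers.Dense(VOCAB_SIZE, activation="softmax")(y)

model = keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```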
Earn a skill badge by completing the Introduction to Generative AI, Introduction to Large Language Models and Introduction to Responsible AI courses. By passing the final quiz, you'll demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI Principles.
This course introduces diffusion models, a family of machine learning models that has shown great promise for image generation. Diffusion models take their inspiration from physics, specifically thermodynamics. Over the last few years, diffusion models have become popular in both research and industry, and they underpin many of the state-of-the-art image generation models and tools on Google Cloud. This course introduces the theory behind diffusion models and explains how to train and deploy them on Vertex AI.
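As a rough illustration of the thermodynamics-inspired idea, the sketch below implements the closed-form forward (noising) step of a DDPM-style diffusion process in NumPy; the noise schedule and toy image are arbitrary examples, and training the reverse (denoising) model is what the course and Vertex AI tooling actually cover.

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Sample x_t from x_0 in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    alphas = 1.0 - betas
    alpha_bar_t = np.prod(alphas[: t + 1])    # cumulative product up to step t
    noise = np.random.randn(*x0.shape)        # Gaussian noise
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise
    return x_t, noise

# Example: a toy "image" progressively noised under a linear schedule.
betas = np.linspace(1e-4, 0.02, 1000)         # example noise schedule
x0 = np.random.rand(8, 8)                     # placeholder 8x8 image
x_t, eps = forward_diffusion(x0, t=500, betas=betas)
```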
This course provides partners with the skills required to scope, design, and deploy Document AI solutions for enterprise customers, using use cases from both the procurement and lending arenas.
This course introduces the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You'll learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how the Transformer is used to build the BERT model. You'll also learn about the different tasks that BERT can be used for, such as classification, question answering, and natural language inference. The estimated duration of the course is 45 minutes.
This course introduces the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You'll learn how attention works and how it can be used to improve the performance of a variety of machine learning tasks, such as machine translation, text summarization, and question answering.
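For intuition, here is a minimal NumPy sketch of scaled dot-product attention, the building block behind the mechanism described here; the shapes and inputs are placeholders, and real models add learned projections and multiple heads.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns context vectors and attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # how strongly each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

# Toy example with a 4-token sequence and 8-dimensional representations.
Q = np.random.randn(4, 8)
K = np.random.randn(4, 8)
V = np.random.randn(4, 8)
context, attn = scaled_dot_product_attention(Q, K, V)
```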
This introductory-level microlearning course explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own generative AI applications.
This is an introductory-level microlearning course aimed at explaining what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own generative AI applications.
Get started with Go (Golang) by reviewing Go code, and then creating and deploying simple Go apps on Google Cloud. Go is an open source programming language that makes it easy to build fast, reliable, and efficient software at scale. Go runs natively on Google Cloud and is fully supported on Google Kubernetes Engine, Compute Engine, App Engine, Cloud Run, and Cloud Functions. Go is a compiled language and is faster and more efficient than interpreted languages; as a result, Go requires no installed runtime like Node, Python, or the JDK to execute.
This course introduces you to the fundamentals, practices, capabilities, and tools applicable to modern cloud-native application development using Google Cloud Run. Through a combination of lectures, hands-on labs, and supplemental materials, you will learn how to design, implement, deploy, secure, manage, and scale applications on Google Cloud using Cloud Run.
In this course, application developers learn how to design and develop cloud-native applications that seamlessly integrate components from the Google Cloud ecosystem. Through a combination of presentations, demos, and hands-on labs, participants learn how to create repeatable deployments by treating infrastructure as code, choose the appropriate application execution environment for an application, and monitor application performance. Completing one version of each lab is required. Each lab is available in Node.js. In most cases, the same labs are also provided in Python or Java. You may complete each lab in whichever language you prefer.
In this course, application developers learn how to design and develop cloud-native applications that seamlessly integrate managed services from Google Cloud. Through a combination of presentations, demos, and hands-on labs, participants learn how to develop more secure applications, implement federated identity management, and integrate application components by using messaging, event-driven processing, and API gateways. Completing one version of each lab is required. Each lab is available in Node.js. In most cases, the same labs are also provided in Python or Java. You may complete each lab in whichever language you prefer. This is the second course of the Developing Applications with Google Cloud series. After completing this course, enroll in the App Deployment, Debugging, and Performance course.
In this course, application developers learn how to design and develop cloud-native applications that seamlessly integrate managed services from Google Cloud. Through a combination of presentations, demos, and hands-on labs, participants learn how to apply best practices for application development and use the appropriate Google Cloud storage services for object storage, relational data, caching, and analytics. Completing one version of each lab is required. Each lab is available in Node.js. In most cases, the same labs are also provided in Python or Java. You may complete each lab in whichever language you prefer. This is the first course of the Developing Applications with Google Cloud series. After completing this course, enroll in the Securing and Integrating Components of your Application course.
In this course, you apply your knowledge of classification models and embeddings to build an ML pipeline that functions as a recommendation engine. This is the fifth and final course of the Advanced Machine Learning on Google Cloud series.
This course introduces the products and solutions to solve NLP problems on Google Cloud. Additionally, it explores the processes, techniques, and tools to develop an NLP project with neural networks by using Vertex AI and TensorFlow.
This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models or convolutional neural network (CNN) models. The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data. The course also looks at practical issues that arise, for example, when one doesn't have enough data and how to incorporate the latest research findings into different models. Learners will get hands-on practice building and optimizing their own image classification models on a variety of public datasets in the labs they will work on.
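As a small illustration of the CNN-plus-augmentation approach discussed here, the Keras sketch below builds a basic image classifier with light data augmentation; the input size, class count, and layer choices are placeholders rather than the course's exact lab code.

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 5   # placeholder number of classes

# Light augmentation layers help reduce overfitting on small image datasets.
augmentation = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),
])

model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    augmentation,
    keras.layers.Rescaling(1.0 / 255),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```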
This course covers how to implement the various flavors of production ML systems: static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators. This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.
One of the best ways to review something is to work with the concepts and technologies that you have learned. So, this course is set up as a workshop in which you will do end-to-end machine learning with TensorFlow on Google Cloud Platform, building an end-to-end model from data exploration all the way to deploying an ML model and getting predictions from it. This is the first course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Production Machine Learning Systems course.
This course takes a real-world approach to the ML Workflow through a case study. An ML team faces several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing. The team is presented with three options to build ML models for two use cases. The course explains why they would use AutoML, BigQuery ML, or custom training to achieve their objectives.
This course covers building ML models with TensorFlow and Keras, improving the accuracy of ML models and writing ML models for scaled use.
The course begins with a discussion about data: how to improve data quality and perform exploratory data analysis. We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code. You will understand the benefits of BigQuery ML. We then discuss how to optimize a machine learning (ML) model and how generalization and sampling can help assess the quality of ML models for custom training.
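To make the BigQuery ML idea concrete, here is a hedged sketch of training a model directly in BigQuery from Python; the dataset, table, and column names are hypothetical, and the AutoML and custom-training paths covered in the course look different.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

# Hypothetical dataset, table, and columns; BigQuery ML trains the model in SQL.
sql = """
CREATE OR REPLACE MODEL `my_dataset.taxi_fare_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['fare_amount']) AS
SELECT trip_distance, passenger_count, fare_amount
FROM `my_dataset.taxi_trips`
"""
client.query(sql).result()  # blocks until training completes
```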
This course explores what ML is and what problems it can solve. The course also discusses best practices for implementing machine learning. You're introduced to Vertex AI, a unified platform to quickly build, train, and deploy AutoML machine learning models. The course discusses the five phases of converting a candidate use case to be driven by machine learning, and why it's important not to skip them. The course ends with a discussion of the biases that ML can amplify and how to recognize them.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
Complete the introductory Implement Load Balancing on Compute Engine skill badge to demonstrate skills in the following: writing gcloud commands and using Cloud Shell, creating and deploying virtual machines in Compute Engine, and configuring network and HTTP load balancers. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
In this second installment of the Dataflow course series, we are going to be diving deeper on developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam and show how to iteratively develop pipelines using Beam notebooks.
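As a taste of the windowing concepts covered here, the sketch below is a minimal Apache Beam Python pipeline that assigns timestamps and sums values per key within fixed windows; the elements and timestamps are toy placeholders, and production pipelines would read from real sources and write to real sinks.

```python
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create events" >> beam.Create([("user1", 1), ("user2", 3), ("user1", 2)])
        # Assign toy event timestamps (in seconds) so windowing has something to act on.
        | "Add timestamps" >> beam.Map(lambda kv: TimestampedValue(kv, kv[1] * 45))
        | "Fixed 60s windows" >> beam.WindowInto(FixedWindows(60))
        | "Sum per key in window" >> beam.CombinePerKey(sum)
        | "Print results" >> beam.Map(print)
    )
```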
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionalize machine learning solutions by using Vertex AI.
In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.