Shanthapriya Kannan
Member since 2024
Bronze League
38620 points
This course helps you structure your preparation for the Associate Cloud Engineer exam. You will learn about the Google Cloud domains covered by the exam and how to create a study plan to improve your domain knowledge.
Google Cloud Fundamentals: Core Infrastructure introduces the important concepts and terminology you will encounter when working with Google Cloud. Through videos and hands-on labs, the course presents and compares many of Google Cloud's compute and storage services, along with important resource and policy management tools.
This introductory course explores the data analysis workflow on Google Cloud and the tools for exploring, analyzing, and visualizing data. You will also learn how to share your findings with stakeholders. Through case studies, hands-on labs, lectures, quizzes, and demonstrations, the course shows how to turn raw datasets into clean data and, from there, into effective charts and dashboards. Whether you are a data practitioner who wants to succeed with Google Cloud or you are looking to advance your career, this course helps you take the first step. Most learners who perform or use data analysis in their work will benefit from it.
Complete the Engineer Data for Predictive Modeling with BigQuery ML intermediate skill badge course to demonstrate the following knowledge and skills: building data transformation pipelines to BigQuery with Dataprep by Trifacta; building extract, transform, and load (ETL) workloads with Cloud Storage, Dataflow, and BigQuery; and building machine learning models with BigQuery ML.
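To give a sense of the BigQuery ML step, here is a minimal sketch, not the course's own lab code, of training a logistic regression model with a CREATE MODEL statement run through the BigQuery Python client. The project, dataset, table, and label column names are placeholders.

```python
# Minimal sketch: train a BigQuery ML logistic-regression model from Python.
# All identifiers below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

query = """
CREATE OR REPLACE MODEL `my-project.demo_dataset.purchase_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['purchased']) AS
SELECT * FROM `my-project.demo_dataset.training_data`
"""

client.query(query).result()  # blocks until the training job completes
```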
Complete the Build a Data Warehouse with BigQuery intermediate skill badge course to demonstrate the following skills: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud that recognizes your proficiency with Google Cloud products and services and shows that you have tested your knowledge in an interactive hands-on environment. Complete the skill badge course and the final assessment challenge lab to earn a skill badge you can share.
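As an illustration of one of those skills, the following hedged sketch creates a date-partitioned table with the BigQuery Python client; the project, dataset, and column names are invented for the example.

```python
# Sketch: create a date-partitioned BigQuery table from Python.
# All identifiers are placeholders; the JSON column type assumes a
# reasonably recent google-cloud-bigquery client version.
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.demo_dataset.events_partitioned",  # hypothetical table ID
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("payload", "JSON"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",  # partition by this date column
)
client.create_table(table)
```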
Complete the Implement Load Balancing on Compute Engine introductory skill badge course to demonstrate the following skills: writing gcloud commands and using Cloud Shell, creating and deploying virtual machines in Compute Engine, and configuring network and HTTP load balancers. A skill badge is an exclusive digital badge issued by Google Cloud that recognizes your proficiency with Google Cloud products and services and shows that you have tested your knowledge in an interactive hands-on environment. Complete this course and the final challenge lab assessment to earn a skill badge you can share.
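The labs themselves use gcloud in Cloud Shell; as a rough Python equivalent of the VM-creation step (not the course's own commands), one might use the Compute Engine client library. The project, zone, and instance values here are placeholders.

```python
# Hedged sketch: create a Compute Engine VM with the Python client library,
# roughly equivalent to `gcloud compute instances create`. Names are made up.
from google.cloud import compute_v1

def create_instance(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # wait for the create operation to finish

create_instance("my-project", "us-central1-a", "demo-vm")  # hypothetical values
```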
Earn a skill badge by passing the final quiz. Passing demonstrates your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
Complete the Prepare Data for ML APIs on Google Cloud introductory skill badge course to demonstrate the following skills: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud that recognizes your proficiency with Google Cloud products and services and shows that you have tested your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn a skill badge you can share.
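To give a flavor of the ML API calls covered, here is a minimal, illustrative example (not from the course materials) of calling the Cloud Natural Language API for sentiment analysis from Python, assuming application-default credentials are configured.

```python
# Minimal sketch: sentiment analysis with the Cloud Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The hands-on labs were clear and well paced.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(document=document).document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```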
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames as ways to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
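As a small taste of the windowing material, the sketch below (illustrative, not taken from the course) assigns fixed one-minute windows to a keyed, timestamped collection with the Beam Python SDK and sums values per key within each window.

```python
# Illustrative Beam (Python SDK) snippet: fixed one-minute windows over a
# keyed collection, summed per key per window. Element data is made up, and
# timestamps are attached explicitly so the windows are meaningful.
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([("user1", 3, 10), ("user2", 5, 70), ("user1", 2, 75)])
        | "Timestamp" >> beam.Map(lambda e: TimestampedValue((e[0], e[1]), e[2]))
        | "Window" >> beam.WindowInto(FixedWindows(60))  # 60-second windows
        | "SumPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```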
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher on what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework, which achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers the ways machine learning can be included in data pipelines on Google Cloud. For use cases that require little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). The course also covers how to productionize machine learning solutions by using Vertex AI.
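As one illustration of that productionizing step, the following hedged sketch uses the Vertex AI Python SDK to upload and deploy a trained model for online predictions; the project, bucket path, container image, and machine type are placeholders rather than anything prescribed by the course.

```python
# Hedged sketch: upload and deploy a model for online prediction on Vertex AI.
# Project, region, artifact path, and machine type are all placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/model/",  # hypothetical trained-model artifacts
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
endpoint = model.deploy(machine_type="n1-standard-2")
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]).predictions)
```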
In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
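The sketch below shows the general shape of such a pipeline, though not the course's own labs: reading an unbounded stream from Pub/Sub with the Beam Python SDK, windowing it, and counting elements per window. The topic name is a placeholder, and on Google Cloud this would run on the DataflowRunner rather than locally.

```python
# Illustrative streaming pipeline: unbounded Pub/Sub source, fixed windows,
# per-window element counts. The topic name is a placeholder.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
from apache_beam.transforms.window import FixedWindows
from apache_beam.transforms.combiners import CountCombineFn

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True  # unbounded sources need streaming mode

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
        | "Window" >> beam.WindowInto(FixedWindows(60))
        | "Count" >> beam.CombineGlobally(CountCombineFn()).without_defaults()
        | "Print" >> beam.Map(print)
    )
```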
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
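For the Spark side of that toolset, here is a small, hypothetical PySpark job of the kind Serverless for Apache Spark could run as a batch transformation; the bucket paths and column names are invented for the example.

```python
# Hypothetical PySpark batch job: aggregate raw JSON order records into a
# daily Parquet summary. All paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

orders = spark.read.json("gs://my-bucket/raw/orders/*.json")  # hypothetical input
daily = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)
daily.write.mode("overwrite").parquet("gs://my-bucket/curated/daily_orders/")
```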
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.