Matheus Silva

Member since 2021

Diamond League

13125 points
Work with Gemini Models in BigQuery Earned Jan 25, 2025 EST
Boost Productivity with Gemini in BigQuery Earned Jan 21, 2025 EST
Build a Data Mesh with Dataplex Earned Jan 17, 2025 EST
Introduction to Data Engineering on Google Cloud Earned Jan 8, 2025 EST
Build a Data Warehouse with BigQuery Earned Jan 1, 2025 EST
Google Cloud Platform Fundamentals: Core Infrastructure Earned Apr 25, 2023 EDT
Preparing for your Professional Data Engineer Journey Earned Jun 24, 2022 EDT
Serverless Data Processing with Dataflow: Operations Earned Jun 23, 2022 EDT
Prepare Data for ML APIs on Google Cloud Earned Jun 22, 2022 EDT
Google Cloud Big Data and Machine Learning Fundamentals Earned Jun 22, 2022 EDT
Build Batch Data Pipelines on Google Cloud Earned Jun 21, 2022 EDT
Implementing Cloud Load Balancing for Compute Engine Earned Jun 21, 2022 EDT
Engineer Data for Predictive Modeling with BigQuery ML Earned Jun 21, 2022 EDT
Serverless Data Processing with Dataflow: Develop Pipelines Earned Jun 10, 2022 EDT
Serverless Data Processing with Dataflow: Foundations Earned Jun 7, 2022 EDT
Smart Analytics, Machine Learning, and AI on Google Cloud Earned Jun 7, 2022 EDT
Build Streaming Data Pipelines on Google Cloud Earned Jun 2, 2022 EDT
Build Data Lakes and Data Warehouses on Google Cloud Earned May 26, 2022 EDT

This course demonstrates how to use AI/ML models for generative AI tasks in BigQuery. Through a practical use case involving customer relationship management, you learn the workflow of solving a business problem with Gemini models. To facilitate comprehension, the course also provides step-by-step guidance through coding solutions using both SQL queries and Python notebooks.
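
As a rough illustration of the SQL-plus-Python workflow the course walks through, the sketch below calls a Gemini-backed remote model from BigQuery using the Python client library. The project, dataset, model, and table names (my-project, demo.gemini_model, demo.reviews) are hypothetical and assume the remote model has already been created over a Vertex AI Gemini endpoint.

    # Minimal sketch: generate text with a Gemini remote model from BigQuery.
    # All resource names below are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    sql = """
    SELECT
      review_text,
      ml_generate_text_llm_result AS summary
    FROM ML.GENERATE_TEXT(
      MODEL `demo.gemini_model`,
      (SELECT CONCAT('Summarize this customer review: ', review_text) AS prompt
       FROM `demo.reviews`
       LIMIT 5),
      STRUCT(256 AS max_output_tokens, TRUE AS flatten_json_output))
    """

    for row in client.query(sql).result():
        print(row.summary)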

This course explores Gemini in BigQuery, a suite of AI-driven features that assist the data-to-AI workflow. These features include data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, a practical use case, and hands-on labs, the course empowers data practitioners to boost their productivity and expedite the development pipeline.

Complete the introductory Build a Data Mesh with Dataplex skill badge to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.

In this course, you learn about data engineering on Google Cloud, the roles and responsibilities of data engineers, and how those map to offerings provided by Google Cloud. You also learn about ways to address data engineering challenges.

Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
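
As a minimal sketch of two techniques this badge covers, the snippet below creates a date-partitioned table and flattens an array of structs with UNNEST, issuing the SQL from the BigQuery Python client; the demo dataset and table names are hypothetical.

    # Sketch: date-partitioned table plus arrays/structs, via the Python client.
    # The `demo` dataset and table names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Create a table partitioned by a DATE column.
    client.query("""
    CREATE TABLE IF NOT EXISTS `demo.orders`
    PARTITION BY order_date AS
    SELECT DATE '2024-01-01' AS order_date, 'A-100' AS order_id, 42.50 AS total
    """).result()

    # Flatten a repeated field (an array of structs) with UNNEST.
    rows = client.query("""
    WITH orders_raw AS (
      SELECT 'A-100' AS order_id,
             [STRUCT('widget' AS sku, 2 AS qty),
              STRUCT('gadget' AS sku, 1 AS qty)] AS items
    )
    SELECT order_id, item.sku, item.qty
    FROM orders_raw, UNNEST(items) AS item
    """).result()

    for row in rows:
        print(row.order_id, row.sku, row.qty)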

This content is deprecated. Please see the latest version of the course.

This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.

In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
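
One concrete troubleshooting technique from this part of the series is attaching custom metrics to a pipeline. The sketch below, a hypothetical parsing step that is runnable locally, counts malformed records with a Beam counter, which Dataflow surfaces alongside its built-in job metrics.

    # Sketch: a custom Beam counter for monitoring/troubleshooting a pipeline.
    import apache_beam as beam
    from apache_beam.metrics import Metrics

    class ParseRecord(beam.DoFn):
        def __init__(self):
            # Custom counter; appears with the job's metrics on Dataflow.
            self.bad_records = Metrics.counter(self.__class__, "bad_records")

        def process(self, line):
            parts = line.split(",")
            if len(parts) != 2:
                self.bad_records.inc()  # count malformed input instead of failing
                return
            yield {"name": parts[0], "value": int(parts[1])}

    with beam.Pipeline() as p:  # DirectRunner locally; DataflowRunner in production
        (p
         | beam.Create(["a,1", "broken-line", "b,2"])
         | beam.ParDo(ParseRecord())
         | beam.Map(print))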

Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
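
As a small example of the ML API portion of this badge, the snippet below sends one sentence to the Cloud Natural Language API for sentiment analysis; the sample text is arbitrary, and authentication is assumed to be configured via application default credentials.

    # Sketch: sentiment analysis with the Cloud Natural Language API.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    document = language_v1.Document(
        content="The tour was fantastic and the guide was very helpful.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    response = client.analyze_sentiment(request={"document": document})
    sentiment = response.document_sentiment
    print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")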

This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.

In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice implementing pipelines with Dataflow (Apache Beam) and Serverless for Apache Spark (Dataproc Serverless), and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
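
The sketch below shows the shape of a batch pipeline in the spirit of this course: read CSV files from Cloud Storage, transform them, and load the results into BigQuery with Dataflow. The project, bucket, dataset, and schema are hypothetical placeholders.

    # Sketch: batch pipeline from Cloud Storage to BigQuery with Apache Beam.
    # All resource names are hypothetical; swap the runner for local testing.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_csv(line):
        order_id, amount = line.split(",")
        return {"order_id": order_id, "amount": float(amount)}

    options = PipelineOptions(
        runner="DataflowRunner",        # or "DirectRunner" to test locally
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://my-bucket/orders/*.csv",
                                          skip_header_lines=1)
         | "Parse" >> beam.Map(parse_csv)
         | "Write" >> beam.io.WriteToBigQuery(
               "my-project:demo.orders",
               schema="order_id:STRING,amount:FLOAT",
               create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))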

Complete the introductory Implementing Cloud Load Balancing for Compute Engine skill badge to demonstrate skills in the following: creating and deploying virtual machines in Compute Engine and configuring network and application load balancers.

Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models using BigQuery ML.
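
For the BigQuery ML part of this badge, a minimal sketch of training and prediction is shown below, with the SQL issued from Python; the dataset, table, and column names are hypothetical.

    # Sketch: train a logistic regression model in BigQuery ML, then predict.
    # Dataset, table, and column names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Train: every non-label column in the SELECT becomes a feature.
    client.query("""
    CREATE OR REPLACE MODEL `demo.purchase_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['purchased']) AS
    SELECT visit_count, time_on_site, purchased
    FROM `demo.web_sessions`
    """).result()

    # Predict on a new row.
    rows = client.query("""
    SELECT predicted_purchased
    FROM ML.PREDICT(
      MODEL `demo.purchase_model`,
      (SELECT 12 AS visit_count, 340 AS time_on_site))
    """).result()

    for row in rows:
        print(row.predicted_purchased)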

In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
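
To make the windowing and trigger concepts concrete, here is a minimal, locally runnable sketch: fixed one-minute event-time windows with an early processing-time trigger, applied to a small in-memory set of timestamped events. A real streaming job would read from an unbounded source instead.

    # Sketch: fixed windows, an early trigger, and allowed lateness in Beam.
    import apache_beam as beam
    from apache_beam.transforms import window
    from apache_beam.transforms.trigger import (
        AfterWatermark, AfterProcessingTime, AccumulationMode)

    # (user, event-time in seconds); in production this would be unbounded data.
    events = [("user1", 5), ("user2", 20), ("user1", 65)]

    with beam.Pipeline() as p:
        (p
         | beam.Create(events)
         | "Stamp" >> beam.Map(lambda e: window.TimestampedValue(e[0], e[1]))
         | "Window" >> beam.WindowInto(
             window.FixedWindows(60),                      # one-minute windows
             trigger=AfterWatermark(early=AfterProcessingTime(10)),
             accumulation_mode=AccumulationMode.DISCARDING,
             allowed_lateness=120)
         | "CountPerUser" >> beam.combiners.Count.PerElement()
         | beam.Map(print))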

This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.

Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). It also covers how to productionize machine learning solutions by using Vertex AI.
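
One possible reading of the "little to no customization" path is AutoML training through the Vertex AI SDK. The hedged sketch below assumes a CSV of labeled sessions in Cloud Storage, and every resource and column name is a hypothetical placeholder.

    # Sketch: AutoML tabular training via the Vertex AI SDK (hypothetical names).
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    dataset = aiplatform.TabularDataset.create(
        display_name="web-sessions",
        gcs_source=["gs://my-bucket/sessions.csv"],
    )

    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="sessions-automl",
        optimization_prediction_type="classification",
    )

    model = job.run(
        dataset=dataset,
        target_column="purchased",        # hypothetical label column
        budget_milli_node_hours=1000,     # one node hour
    )
    print(model.resource_name)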

In this course, you get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
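
A minimal sketch of the unbounded case is shown below: read messages from a Pub/Sub subscription and stream them into BigQuery, with the pipeline running in streaming mode. The subscription, table, and schema are hypothetical placeholders.

    # Sketch: streaming pipeline from Pub/Sub to BigQuery (hypothetical names).
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)  # unbounded sources need streaming mode

    with beam.Pipeline(options=options) as p:
        (p
         | "ReadPubSub" >> beam.io.ReadFromPubSub(
               subscription="projects/my-project/subscriptions/events-sub")
         | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
         | "WriteBQ" >> beam.io.WriteToBigQuery(
               "my-project:demo.events",
               schema="event_id:STRING,ts:TIMESTAMP",
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))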

While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
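
One building block of such a lakehouse is a BigLake external table, which lets BigQuery query open-format files that stay in the data lake. The sketch below assumes a Parquet dataset in Cloud Storage and an existing BigQuery connection; all names are hypothetical.

    # Sketch: a BigLake external table over Parquet files in a lake bucket.
    # The connection, bucket, and table names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    client.query("""
    CREATE OR REPLACE EXTERNAL TABLE `demo.sales_lake`
    WITH CONNECTION `us.lake-connection`
    OPTIONS (
      format = 'PARQUET',
      uris = ['gs://my-lake-bucket/sales/*.parquet']
    )
    """).result()

    for row in client.query("SELECT COUNT(*) AS n FROM `demo.sales_lake`").result():
        print(row.n)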
