Merve Çağlarer
Member since 2022
Diamond League
47815 points
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate your skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating a cluster and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
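For a concrete picture of what "calling an ML API" looks like in practice, here is a minimal sketch using the Cloud Natural Language API's Python client. It assumes the google-cloud-language library is installed and application credentials are configured; the input text is arbitrary.

```python
# Minimal sketch: document-level sentiment analysis with the
# Cloud Natural Language API (google-cloud-language).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Wrap the raw text in a Document so the API knows how to parse it.
document = language_v1.Document(
    content="Google Cloud skill badges are a great way to learn.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# analyze_sentiment returns a score (-1.0 to 1.0) and a magnitude.
response = client.analyze_sentiment(request={"document": document})
print(f"score={response.document_sentiment.score:.2f}, "
      f"magnitude={response.document_sentiment.magnitude:.2f}")
```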
Complete the introductory Build a Data Mesh with Dataplex skill badge to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You will practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.
This 1-week, accelerated on-demand course builds upon Google Cloud Platform Big Data and Machine Learning Fundamentals. Through a combination of video lectures, demonstrations, and hands-on labs, you'll learn to build streaming data pipelines using Google Cloud Pub/Sub and Dataflow to enable real-time decision making. You will also learn how to build dashboards to render tailored output for various stakeholder audiences.
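The shape of such a pipeline is easiest to see in code. Below is a hedged sketch using the Apache Beam Python SDK (apache-beam[gcp]); the project and topic names are hypothetical placeholders.

```python
# Sketch: a streaming pipeline that reads from Pub/Sub, windows the
# stream, and counts elements per window.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)  # Pub/Sub sources require streaming mode

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/my-topic")  # hypothetical topic
        | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
        | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))  # 60-second windows
        | "CountPerWindow" >> beam.combiners.Count.Globally().without_defaults()
        | "Print" >> beam.Map(print)
    )
```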
In this second installment of the Dataflow course series, we are going to be diving deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
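As a taste of how windows, watermarks, and triggers fit together, here is a small sketch with the Beam Python SDK. The window size, early-firing interval, and lateness bound are illustrative values, not recommendations.

```python
# Sketch: a fixed event-time window with an early-firing trigger.
# AfterWatermark fires when the watermark passes the end of the window;
# the `early` trigger emits speculative results every 30 seconds of
# processing time before that.
import apache_beam as beam
from apache_beam.transforms import trigger, window

windowed = beam.WindowInto(
    window.FixedWindows(300),  # 5-minute event-time windows
    trigger=trigger.AfterWatermark(early=trigger.AfterProcessingTime(30)),
    accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
    allowed_lateness=600,  # keep window state 10 minutes for late data
)
```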
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionalize machine learning solutions by using Vertex AI.
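To make BigQuery ML less abstract: model training is expressed as a SQL DDL statement run inside BigQuery. Below is a minimal sketch through the Python client; the dataset, table, label column, and model type are hypothetical placeholders.

```python
# Sketch: training a logistic regression model with BigQuery ML.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT * FROM `my_dataset.customer_training_data`
"""

# Training runs entirely inside BigQuery; the query job blocks until done.
client.query(sql).result()
```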
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This skill badge course helps you unlock the power of data visualization and business intelligence reporting with Looker and gain hands-on experience through labs.
This is the second course in the Data to Insights course series. Here we will cover how to ingest new external datasets into BigQuery and visualize them with Looker Studio. We will also cover intermediate SQL concepts like multi-table JOINs and UNIONs which will allow you to analyze data across multiple data sources. Note: Even if you have a background in SQL, there are BigQuery specifics (like handling query cache and table wildcards) that may be new to you. After completing this course, enroll in the Achieving Advanced Insights with BigQuery course.
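Two of the BigQuery specifics mentioned above, table wildcards and the query cache, are shown in this short sketch. It queries a real public dataset; the billing project is whatever your default credentials point at.

```python
# Sketch: wildcard tables plus explicit query-cache control in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT _TABLE_SUFFIX AS year, COUNT(*) AS readings
FROM `bigquery-public-data.noaa_gsod.gsod*`   -- wildcard over per-year tables
WHERE _TABLE_SUFFIX BETWEEN '2018' AND '2020'
GROUP BY year
ORDER BY year
"""

# Disable the cache to force fresh execution (it is on by default).
job_config = bigquery.QueryJobConfig(use_query_cache=False)
for row in client.query(sql, job_config=job_config).result():
    print(row.year, row.readings)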
The third course in this series is Achieving Advanced Insights with BigQuery. Here we will build on your growing knowledge of SQL as we dive into advanced functions and how to break apart a complex query into manageable steps. We will cover the internal architecture of BigQuery (column-based sharded storage) and advanced SQL topics like nested and repeated fields through the use of Arrays and Structs. Lastly, we will dive into optimizing your queries for performance and how you can secure your data through authorized views. After completing this course, enroll in the Applying Machine Learning to your Data with Google Cloud course.
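Nested and repeated fields are the most unfamiliar of these topics for SQL newcomers, so here is a self-contained sketch of arrays and structs run through the Python client. No tables are needed; the data is constructed inline.

```python
# Sketch: BigQuery arrays of structs, flattened with UNNEST.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
WITH orders AS (
  SELECT 'order-1' AS id,
         [STRUCT('apple' AS item, 2 AS qty),
          STRUCT('pear'  AS item, 1 AS qty)] AS lines
)
SELECT id, line.item, line.qty
FROM orders, UNNEST(lines) AS line
"""

for row in client.query(sql).result():
    print(row.id, row.item, row.qty)
```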
This workload aims to upskill Google Cloud partners to perform specific modernization tasks using LookML on BigQuery. A proof of concept will take learners through the process of creating LookML visualizations on BigQuery. During this course, learners will be guided on how to write Looker Modeling Language (LookML), create semantic data models, and learn how LookML constructs SQL queries against BigQuery. At a high level, this course will focus on using basic LookML to create and access BigQuery objects and to optimize BigQuery objects with LookML.
This course explores how to leverage Looker to create data experiences and gain insights with modern business intelligence (BI) and reporting.
This learning experience guides you through the process of utilizing various data sources and multiple Google Cloud products (including BigQuery and Google Sheets using Connected Sheets) to analyze, visualize, and interpret data to answer specific questions and share insights with key decision makers.
This course explores how to implement a streaming analytics solution using Pub/Sub.
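The entry point to any streaming analytics solution is getting events into Pub/Sub. A minimal publish sketch with the google-cloud-pubsub client follows; the project and topic names are hypothetical.

```python
# Sketch: publishing a message to a Pub/Sub topic.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "events")  # hypothetical names

# Payloads must be bytes; extra keyword args become message attributes.
future = publisher.publish(
    topic_path, b'{"user": "abc", "action": "click"}', origin="web")
print("Published message ID:", future.result())
```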
Complete the introductory Monitor and Log with Google Cloud Observability skill badge to demonstrate proficiency in the following: monitoring virtual machines in Compute Engine, using Cloud Monitoring for multi-project oversight, extending monitoring and logging capabilities to Cloud Functions, creating and sending custom application metrics, and configuring Cloud Monitoring alerts based on custom metrics.
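Writing a custom application metric is the most code-centric skill in that list. Here is a hedged sketch with the google-cloud-monitoring client; the project ID and metric name are placeholders.

```python
# Sketch: writing one data point to a custom Cloud Monitoring metric.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # hypothetical project

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/checkout/latency_ms"  # custom metric
series.resource.type = "global"

point = monitoring_v3.Point({
    "interval": {"end_time": {"seconds": int(time.time())}},
    "value": {"double_value": 123.4},
})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
```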
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate your skills in the following: building data transformation pipelines to BigQuery with Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models using BigQuery ML.
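The ETL shape described above, Cloud Storage in, transform in the middle, BigQuery out, looks like this as an Apache Beam sketch. The bucket, table, and schema are hypothetical.

```python
# Sketch: extract from GCS, transform, load into BigQuery with Beam.
import apache_beam as beam

def parse_line(line):
    name, score = line.split(",")
    return {"name": name, "score": int(score)}

with beam.Pipeline() as p:
    (
        p
        | "Extract" >> beam.io.ReadFromText("gs://my-bucket/scores.csv")  # hypothetical
        | "Transform" >> beam.Map(parse_line)
        | "Load" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.scores",
            schema="name:STRING,score:INTEGER",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```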
This course continues to explore the implementation of data load and transformation pipelines for a BigQuery Data Warehouse using Cloud Data Fusion.
Welcome to Cloud Data Fusion, where we discuss how to use Cloud Data Fusion to build complex data pipelines.
In this course you will get hands-on in order to work through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This quest offers hands-on practice with Cloud Data Fusion, a cloud-native, code-free data integration platform. ETL developers, data engineers, and analysts can greatly benefit from the pre-built transformations and connectors to build and deploy their pipelines without worrying about writing code. The quest starts with a quickstart lab that familiarizes learners with the Cloud Data Fusion UI. Learners then get to try running batch and real-time pipelines, as well as using the built-in Wrangler plugin to perform some interesting transformations on data.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
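For the Dataproc Serverless side of the course, batch jobs are submitted rather than run on a long-lived cluster. Below is a hedged sketch using the google-cloud-dataproc client; the project, region, and job file are placeholders.

```python
# Sketch: submitting a PySpark batch to Serverless for Apache Spark
# (Dataproc Serverless).
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/jobs/transform.py"  # hypothetical
    )
)

# create_batch returns a long-running operation; result() waits for the job.
operation = client.create_batch(
    parent=f"projects/my-project/locations/{region}", batch=batch
)
print("Batch finished:", operation.result().state)
```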
Complete the intermediate Implement Cloud Security Fundamentals on Google Cloud skill badge to demonstrate proficiency in the following: creating and assigning roles with Identity and Access Management (IAM); creating and managing service accounts; enabling private connectivity across virtual private cloud (VPC) networks; restricting application access using Identity-Aware Proxy; managing keys and encrypted data with Cloud Key Management Service (KMS); and creating a private Kubernetes cluster.
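Of these skills, the Cloud KMS one translates most directly into code. Here is a minimal encrypt/decrypt round trip with the google-cloud-kms client; the key ring and key names are hypothetical and must already exist.

```python
# Sketch: encrypting and decrypting data with a Cloud KMS symmetric key.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path(
    "my-project", "us-central1", "my-key-ring", "my-key")  # hypothetical

# Encrypt with the primary version of the symmetric key.
encrypt_response = client.encrypt(
    request={"name": key_name, "plaintext": b"sensitive payload"})

# Decrypt round-trips through the same key.
decrypt_response = client.decrypt(
    request={"name": key_name, "ciphertext": encrypt_response.ciphertext})
assert decrypt_response.plaintext == b"sensitive payload"
```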
This self-paced training course gives participants a broad study of security controls and techniques on Google Cloud. Through recorded lectures, demonstrations, and hands-on labs, participants explore and deploy the components of a secure Google Cloud solution, including Cloud Storage access control technologies, Security Keys, Customer-Supplied Encryption Keys, API access controls, scoping, shielded VMs, encryption, and signed URLs. It also covers securing Kubernetes environments.
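Signed URLs, one of the controls listed above, grant time-limited access to a Cloud Storage object without sharing credentials. A minimal sketch with google-cloud-storage follows; the bucket and object names are hypothetical, and the credentials must be able to sign (for example, a service account key).

```python
# Sketch: generating a V4 signed URL for a Cloud Storage object.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("reports/q1.pdf")  # hypothetical

# Anyone holding this URL can GET the object for the next hour,
# without needing their own Google credentials.
url = blob.generate_signed_url(
    version="v4", expiration=timedelta(hours=1), method="GET")
print(url)
```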
In this self-paced training course, participants learn mitigations for attacks at many points in a Google Cloud-based infrastructure, including Distributed Denial-of-Service attacks, phishing attacks, and threats involving content classification and use. They also learn about the Security Command Center, cloud logging and audit logging, and using Forseti to view overall compliance with your organization's security policies.
This self-paced training course gives participants a broad study of security controls and techniques on Google Cloud. Through recorded lectures, demonstrations, and hands-on labs, participants explore and deploy the components of a secure Google Cloud solution, including Cloud Identity, Resource Manager, IAM, Virtual Private Cloud firewalls, Cloud Load Balancing, Cloud Peering, Cloud Interconnect, and VPC Service Controls. This is the first course of the Security in Google Cloud series. After completing this course, enroll in the Security Best Practices in Google Cloud course.
In this course, you learn how to do the kind of data exploration and analysis in Looker that would formerly be done primarily by SQL developers or analysts. Upon completion of this course, you will be able to leverage Looker's modern analytics platform to find and explore relevant content in your organization’s Looker instance, ask questions of your data, create new metrics as needed, and build and share visualizations and dashboards to facilitate data-driven decision making.
In this course, you will get hands-on experience applying advanced LookML concepts in Looker. You will learn how to use Liquid to customize and create dynamic dimensions and measures, create dynamic SQL derived tables and customized native derived tables, and use extends to modularize your LookML code.
This course empowers you to develop scalable, performant LookML (Looker Modeling Language) models that provide your business users with the standardized, ready-to-use data that they need to answer their questions. Upon completing this course, you will be able to start building and maintaining LookML models to curate and manage data in your organization’s Looker instance.
Complete the intermediate Manage Data Models in Looker skill badge to demonstrate skills in the following: maintaining LookML project health; utilizing SQL Runner for data validation; employing LookML best practices; optimizing queries and reports for performance; and implementing persistent derived tables and caching policies. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
In this quest, you will get hands-on experience with LookML in Looker. You will learn how to write LookML code to create new dimensions and measures, create derived tables and join them to Explores, filter Explores, and define caching policies in LookML.
Complete the introductory Build LookML Objects in Looker skill badge course to demonstrate skills in the following: building new dimensions and measures, views, and derived tables; setting measure filters and types based on requirements; updating dimensions and measures; building and refining Explores; joining views to existing Explores; and deciding which LookML objects to create based on business requirements.
In this course, you shadow a series of client meetings led by a Looker Professional Services Consultant.
By the end of this course, you should feel confident employing technical concepts to fulfill business requirements and be familiar with common complex design patterns.
In this course, you will discover additional tools for working with complex deployments, building robust solutions, and delivering even more value.
Develop technical skills beyond LookML, along with basic administration, for optimizing Looker instances.
This course reviews the processes for creating table calculations, pivots, and visualizations.
This course is designed for Looker users who want to create their own ad hoc reports. It assumes experience with everything covered in our Get Started with Looker course (logging in, finding Looks and dashboards, adjusting filters, and sending data).
In this course, you will discover Liquid, the templating language invented by Shopify, and explore how it can be used in Looker to create dynamic links, content, formatting, and more.
This hands-on course covers the main uses of extends, the three primary LookML objects that extends are used on, and some advanced usage of extends.
This course is designed to teach you about roles, permission sets, and model sets, which are used together to manage what users can do and what they can see in Looker.
This course aims to introduce you to the basic concepts of Git: what it is and how it's used in Looker. You will also develop an in-depth knowledge of caching on the Looker platform, including why caches are used and how they work.
This course provides an introduction to databases and summarizes the differences between the main database technologies. This course will also introduce you to Looker and how Looker scales as a modern data platform. In the lessons, you will build and maintain standard Looker data models and establish the foundation necessary to learn Looker's more advanced features.
Want to learn the core SQL and visualization skills of a Data Analyst? Interested in how to write queries that scale to petabyte-size datasets? Take the BigQuery for Analyst Quest and learn how to query, ingest, optimize, visualize, and even build machine learning models in SQL inside of BigQuery.
Want to scale your data analysis efforts without managing database hardware? Learn the best practices for querying and getting insights from your data warehouse with this interactive series of BigQuery labs. BigQuery is Google's fully managed, NoOps, low-cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate your skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
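One badge topic worth seeing in code is the date-partitioned table, since partitioning is what keeps warehouse queries cheap. Here is a sketch of a DDL statement run through the Python client; the dataset and columns are hypothetical.

```python
# Sketch: creating a date-partitioned BigQuery table via DDL.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
CREATE OR REPLACE TABLE `my_dataset.events`
PARTITION BY DATE(event_ts) AS
SELECT
  CURRENT_TIMESTAMP() AS event_ts,
  'signup' AS event_type
"""

client.query(sql).result()  # queries filtered on DATE(event_ts) prune partitions
```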
Complete the introductory Derive Insights from BigQuery Data skill badge to demonstrate skills in the following: writing SQL queries, querying public tables, loading sample data into BigQuery, troubleshooting common syntax errors with the query validator in BigQuery, and creating reports in Looker Studio by connecting it to BigQuery data.
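Querying a public table is the natural first step toward this badge. This minimal sketch uses the real usa_names public dataset through the Python client; billing goes to your default project.

```python
# Sketch: querying a BigQuery public dataset.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_2013`
WHERE state = 'TX'
GROUP BY name
ORDER BY total DESC
LIMIT 5
"""

for row in client.query(sql).result():
    print(row.name, row.total)
```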
This course provides an iterative approach to plan, build, launch, and grow a modern, scalable, mature analytics ecosystem and data culture in an organization that consistently achieves established business outcomes. Users will also learn how to design and build a useful, easy-to-use dashboard in Looker. It assumes experience with everything covered in our Getting Started with Looker and Building Reports in Looker courses.
In this course, we'll show you how organizations are aligning their BI strategy to most effectively achieve business outcomes with Looker. We'll follow four iterative steps (Plan, Build, Launch, and Grow) and provide resources you can take into your own services delivery to build Looker with the goal of achieving business outcomes.
By the end of this course, you should be able to articulate Looker's value propositions and what makes it different from other analytics tools in the market. You should also be able to explain how Looker works and describe the standard components of successful service delivery.