Introduction
- Google Cloud Professional Machine Learning Engineer Certification
- Google Cloud Professional ML Engineer Objective Map
Section 1: Framing ML Problems
- Translating Business Use Cases
- Machine Learning Approaches
- ML Success Metrics
- Responsible AI Practices
- Summary
- Exam Essentials
- Review Questions
Section 2: Exploring Data and Building Data Pipelines
- Visualization
- Statistics Fundamentals
- Data Quality and Reliability
- Establishing Data Constraints
- Running TFDV on Google Cloud Platform
- Organizing and Optimizing Training Datasets
- Handling Missing Data
- Data Leakage
- Summary
- Exam Essentials
- Review Questions
Section 3: Feature Engineering
- Consistent Data Preprocessing
- Encoding Structured Data Types
- Class Imbalance
- Feature Crosses
- TensorFlow Transform
- GCP Data and ETL Tools
- Summary
- Exam Essentials
- Review Questions
Section 4: Choosing the Right ML Infrastructure
- Pretrained vs. AutoML vs. Custom Models
- Pretrained Models
- AutoML
- Custom Training
- Provisioning for Predictions
- Summary
- Exam Essentials
- Review Questions
Section 5: Architecting ML Solutions
- Designing Reliable, Scalable, and Highly Available ML Solutions
- Choosing an Appropriate ML Service
- Data Collection and Data Management
- Automation and Orchestration
- Serving
- Summary
- Exam Essentials
- Review Questions
Section 6: Building Secure ML Pipelines
- Building Secure ML Systems
- Identity and Access Management
- Privacy Implications of Data Usage and Collection
- Summary
- Exam Essentials
- Review Questions
Section 7: Model Building
- Choice of Framework and Model Parallelism
- Modeling Techniques
- Transfer Learning
- Semi-supervised Learning
- Data Augmentation
- Model Generalization and Strategies to Handle Overfitting and Underfitting
- Summary
- Exam Essentials
- Review Questions
Section 8: Model Training and Hyperparameter Tuning
- Ingestion of Various File Types into Training
- Developing Models in Vertex AI Workbench by Using Common Frameworks
- Training a Model as a Job in Different Environments
- Hyperparameter Tuning
- Tracking Metrics During Training
- Retraining/Redeployment Evaluation
- Unit Testing for Model Training and Serving
- Summary
- Exam Essentials
- Review Questions
Section 9: Model Explainability on Vertex AI
- Model Explainability on Vertex AI
- Summary
- Exam Essentials
- Review Questions
Section 10: Scaling Models in Production
- Scaling Prediction Service
- Serving (Online, Batch, and Caching)
- Google Cloud Serving Options
- Hosting Third-Party Pipelines (MLflow) on Google Cloud
- Testing for Target Performance
- Configuring Triggers and Pipeline Schedules
- Summary
- Exam Essentials
- Review Questions
Section 11: Designing ML Training Pipelines
- Orchestration Frameworks
- Identification of Components, Parameters, Triggers, and Compute Needs
- System Design with Kubeflow/TFX
- Hybrid or Multicloud Strategies
- Summary
- Exam Essentials
- Review Questions
Section 12: Model Monitoring, Tracking, and Auditing Metadata
- Model Monitoring
- Model Monitoring on Vertex AI
- Logging Strategy
- Model and Dataset Lineage
- Vertex AI Experiments
- Vertex AI Debugging
- Summary
- Exam Essentials
- Review Questions
Section 13: Maintaining ML Solutions
- MLOps Maturity
- Retraining and Versioning Models
- Feature Store
- Vertex AI Permissions Model
- Common Training and Serving Errors
- Summary
- Exam Essentials
- Review Questions
Section 14: BigQuery ML
- BigQuery – Data Access
- BigQuery ML Algorithms
- Explainability in BigQuery ML
- BigQuery ML vs. Vertex AI Tables
- Interoperability with Vertex AI
- BigQuery Design Patterns
- Summary
- Exam Essentials
- Review Questions