Key Facts for Google Cloud Platform Engineer – These are key facts and concepts around Google Cloud Platform cloud engineering that will help with a quick revision for your Google Associate Cloud Engineer study.

 

  1. The command-line command to create a Cloud Storage bucket is gsutil mb, where gsutil is the command-line tool for accessing and manipulating Cloud Storage, and mb is the specific command for creating, or making, a bucket.
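    For example, a minimal sketch (the bucket name, location and storage class below are placeholders):
      # -l sets the bucket location; -c sets the default storage class
      gsutil mb -l us-central1 -c regional gs://my-example-bucket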
  2. Adding virtual machines to an instance group can be triggered in an autoscaling policy by any of the following (a sample command follows this list):
    • CPU utilization
    • Stackdriver metrics
    • Load balancing serving capacity
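    For instance, a CPU-based policy might be attached to a managed instance group along these lines (the group name, zone and thresholds are placeholders):
      # autoscale on CPU: add VMs when average utilization exceeds 75%
      gcloud compute instance-groups managed set-autoscaling my-mig \
        --zone us-central1-a --min-num-replicas 2 --max-num-replicas 10 \
        --target-cpu-utilization 0.75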
  3. Datastore options in GCP that support transactions and the ability to perform relational database operations using a fully compliant SQL data store – Cloud Spanner & Cloud SQL.
  4. Instance templates are used to create a group of identical VMs. An instance template includes the following configuration parameters or attributes of a VM (a sample command follows this list):
    1. Machine type
    2. Boot disk image
    3. Container image
    4. Zone
    5. Labels
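    A sketch of creating a template with some of these attributes (all names and values are placeholders):
      # machine type, boot disk image and labels are baked into the template
      gcloud compute instance-templates create my-template \
        --machine-type n1-standard-1 \
        --image-family debian-9 --image-project debian-cloud \
        --labels env=dev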
  5. The most efficient way for administrators to implement an object management policy that requires objects stored in Cloud Storage to be migrated from regional storage to nearline storage 90 days after the object is created is a lifecycle management configuration policy specifying an age of 90 days and SetStorageClass as nearline, as sketched below.
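    A sketch of such a policy, saved as lifecycle.json and applied with gsutil (bucket and file names are placeholders):
      {
        "lifecycle": {
          "rule": [{
            "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
            "condition": {"age": 90, "matchesStorageClass": ["REGIONAL"]}
          }]
        }
      }
      # apply the policy to a bucket
      gsutil lifecycle set lifecycle.json gs://my-example-bucket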
  6. Command to synchronize the contents of two buckets: gsutil rsync.
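    For example (bucket names are placeholders; -r recurses, -d deletes destination objects missing from the source):
      gsutil rsync -r -d gs://source-bucket gs://destination-bucket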
  7. All of the following are components of firewall rules: a) direction of traffic, b) priority, c) action on match, d) enforcement status, e) target, f) source, g) protocol.
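    A rule exercising most of these components might look like this (the rule name, tag and source range are placeholders):
      # allow inbound SSH from one range to instances tagged ssh-server
      gcloud compute firewall-rules create allow-ssh \
        --direction=INGRESS --priority=1000 --action=ALLOW --rules=tcp:22 \
        --source-ranges=203.0.113.0/24 --target-tags=ssh-server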
  8. VPCs are global resources and subnets are regional resources.
  9. Web application deployment where you do not want to manage servers or clusters – a good option is a PaaS, App Engine.
  10. A data warehouse needing SQL query capabilities over petabytes of data, but with no servers or clusters to manage – such requirements can be met by BigQuery.
  11. In the Internet of Things space, devices will stream large volumes of data into GCP, and the data needs to be filtered, transformed and analysed before being stored in GCP Datastore – Cloud Dataflow.
  12. Cloud Dataflow allows for stream and batch processing of data and is well suited for ETL work.
  13. Dataproc is a managed Hadoop and Spark service that is used for big data analytics.
  14. Buckets, directories and subdirectories are used to organise storage.
  15. gcloud is the command-line tool for IAM, and list-grantable-roles lists the roles that can be granted on a resource: gcloud iam list-grantable-roles <resource>.
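    For example, run against a project (the project ID is a placeholder; the command expects a full resource name):
      gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/my-project-id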
  16. Cloud Endpoints is an API Service
  17. Cloud Interconnect is a network service.
  18. Compute Engine Virtual Machine is a zonal resource.
  19. Within zonal and regional geographic scopes, GCP network latencies are generally less than 1 millisecond.
  20. To create a custom role, a user must possess the iam.roles.create permission.
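    A user holding that permission could then create a role along these lines (role ID, project and permission list are placeholders):
      gcloud iam roles create customViewer --project my-project-id \
        --title "Custom Viewer" \
        --permissions compute.instances.get,compute.instances.list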
  21. Project is the base-level organizing entity for creating and using GCP resources and services.
  22. Organisations, folders and projects are the components used to manage an organizational hierarchy.
  23. gcloud compute regions describe gets a list of the CPU types available in a particular region, including their quotas.
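    For example (the region name is a placeholder; CPU quotas appear in the command's output):
      gcloud compute regions describe us-central1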
  24. Cloud Functions respond to events in Cloud Storage, making them a good choice for taking action after a file is loaded.
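    A sketch of deploying such a function (the function name, runtime and bucket are placeholders):
      # run the function whenever an object is finalised in the bucket
      gcloud functions deploy process-file --runtime python37 \
        --trigger-resource my-example-bucket \
        --trigger-event google.storage.object.finalize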
  25. Billing is set up at the project level in the GCP resource hierarchy.
  26. Cloud Dataproc is the managed Spark service.
  27. Cloud Dataflow is for stream and batch processing.
  28. Rate quotas reset at regular intervals.
  29. There are two types of quotas in billing: rate quotas and allocation quotas.
  30. In Kubernetes Engine, a node pool is a subset of node instances within a cluster that all have the same configuration.
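    For example, adding a pool of identically configured high-memory nodes (names, zone and sizes are placeholders):
      gcloud container node-pools create high-mem-pool --cluster my-cluster \
        --zone us-central1-a --machine-type n1-highmem-4 --num-nodes 3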
  31. Code for Cloud Functions can be written in Node.js and Python.
  32. Preemptible virtual machines may be shut down at any time, and will always be shut down after running for 24 hours.
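    For example (the instance name and zone are placeholders):
      gcloud compute instances create my-preemptible-vm --preemptible --zone us-central1-a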
  33. After deciding to use Cloud Key Management Service (KMS), and before you can start to create cryptographic keys, you must enable the KMS API and set up billing.
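    A sketch of those first steps (key ring and key names are placeholders):
      gcloud services enable cloudkms.googleapis.com
      gcloud kms keyrings create my-keyring --location global
      gcloud kms keys create my-key --keyring my-keyring --location global --purpose encryption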
  34. GCP Service for storing and managing Docker containers is Container Registry.
  35. Google Cloud Interconnect – Dedicated is used to provide a dedicated connection between a customer’s data center and a Google data center.
  36. In case a remote component in your network fails and a gsutil command you submit fails because of the resulting transient error, by default gsutil will retry using a truncated binary exponential backoff strategy:
    • Wait a random period between [0..1] seconds and retry;
    • If that fails, wait a random period between [0..2] seconds and retry;
    • If that fails, wait a random period between [0..4] seconds and retry;
    • And so on, up to a configurable maximum number of retries (default = 23), with each retry period bounded by a configurable maximum period of time (default = 60 seconds).

Thus, by default, gsutil will retry 23 times over 1+2+4+8+16+32+60… seconds for about 10 minutes. You can adjust the number of retries and the maximum delay of any individual retry by editing the num_retries and max_retry_delay configuration variables in the “[Boto]” section of the .boto config file. Most users shouldn’t need to change these values.

For data transfers (the gsutil cp and rsync commands), gsutil provides additional retry functionality in the form of resumable transfers. Essentially, a transfer that was interrupted because of a transient error can be restarted without starting over from scratch. For more details, see the “RESUMABLE TRANSFERS” section of gsutil help.
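For example, the relevant section of the .boto file might look like this (the values shown are illustrative, not recommendations):

  [Boto]
  num_retries = 10
  max_retry_delay = 120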
