Nando’s Peri-Peri Chicken Wings Recipe

For those who are not familiar with Nando’s, it is one of our favourite food chains: a restaurant serving a variety of chicken dishes that lets you indulge with the feeling that you are eating healthy, since the majority of the dishes are grilled. The restaurants have a good ambience, well-sized portions and, most importantly, reasonable prices.

Coming to the Nando’s chain and brand, it is a South African restaurant chain that specialises in Portuguese-African food, such as peri-peri style chicken dishes. Founded in Johannesburg in 1987, Nando’s operates over 1000 outlets in 35 countries. Their logo is the famous Portuguese symbol, the Rooster of Barcelos.

A couple of weeks back, after a quick registration on their site, Nando’s delivered a little fun kit with the marinade sauce, tissues, a recipe card and a takeaway bag. On the recipe card was the step-by-step guide to one of my favourite dishes, the “PERi-PERi Wing Platter”, described as “24 flame-grilled PERi-PERi wings” on the Nando’s UK menu.

What is Peri-Peri?

PERi-PERi, also known as the African Bird’s Eye Chilli, is the key to the flame-grilled PERi-PERi chicken.

PERi-PERi chilli seeds are rich in Vitamins A, B, and C.
They also have capsaicin, which enhances mood: your pupils dilate, your metabolic rate increases, and there’s a rush of endorphins when you consume it! PERi-PERi is also a natural preservative.

PERi-PERi, when mixed with salt, garlic, lemon, onion, oil, and vinegar, goes on to make the signature sauce, which is readily available at several retail stores.

Official Recipe Video

You’ll Need:

  • 12 Chicken wings
  • Nando’s Medium Peri-Peri sauce (125 ml bottle)
  • 1 tbsp baking powder
  • 1 tbsp salt

Marination Time – 1 hour or overnight

What to do:

  1. Put the wings in a bowl and rub all over with the Peri-Peri sauce, salt and baking powder.
  2. Cover the bowl with clingfilm and put in the fridge for at least 1 hour to marinate, or preferably overnight.
  3. Preheat the oven to 180°C (not fan) or gas mark 4.
  4. Transfer the chicken wings to a roasting tray (see the roasting tray image below this recipe).
  5. Cook for about 30 mins until the wings are just cooked.
  6. Heat up a grill pan on the stove (or fire up your BBQ!)
  7. When it’s smoking hot, put 6 wings in the grill pan
  8. Leave the wings without moving them to get Nando’s unique grill marks.
  9. Turn over once the skin has grill marks and repeat on the other side.
  10. Generously brush your chosen Nando’s sauce over both sides.
  11. Turn the oven down to 100°C (below gas mark 1) and keep the grilled wings warm while you repeat with the remaining wings.

That is all, folks: there you have it, the Nando’s Peri-Peri Chicken Wings recipe.

Roasting Tray


Ciccionateland – a celebration of food

Just bumped into a wonderful blog, with a like-minded symphony of food, travel and delight. Ciccionateland is a blog with an intriguing name.

First I dashed to its “What is …” page to quench my intrigue. The interesting name has an explanation, and supposedly is “a brand-new Italian slang for delicious and high-fat food.”

The author is clearly up to something different, with a clear intent to deviate from the norm. This curiosity took me to the vivid pages, which cover Venice, Verona, Manchester, China & Stockholm. Indeed, a variety.

I did run through the latest article about Venice, and it presents a very interesting suggestion about gelato, something that has clearly eluded me; a side of Venice I personally would not have stumbled into. The vivid images prove that the author is sincere to the subject and enjoyed the variations of gelato to his/her heart’s content.

I look forward to more such content and twists on travel and food. One other remarkable thing about the blog: its content was completely translatable through a simple browser, so no user difficulties there. Please do like, share and subscribe to the creativity that is Ciccionateland.

Key Facts for Google Cloud Platform Engineer

Key Facts for Google Cloud Platform Engineer – These are key facts and concepts around Google Cloud Platform cloud engineering that will help in a quick revision for your Google Associate Cloud Engineer study.

  1. The command-line command to create a Cloud Storage bucket is gsutil mb, where gsutil is the command-line tool for accessing and manipulating Cloud Storage; mb is the specific command for creating, or making, a bucket. An example follows.
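    • A minimal sketch (the bucket name and location are hypothetical): gsutil mb -l us-central1 gs://ace-exam-bucket1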
  2. Adding Virtual Machines to an instance group can be triggered in an autoscaling policy by all of the following :
    • CPU Utilisation 
    • Stackdriver metrics
    • Load balancing serving capacity
  3. Datastore options in GCP for transactions and the ability to perform relational database operations using a fully compliant SQL data store: Spanner & Cloud SQL.
  4. Instance templates are used to create a group of identical VMs. The instance templates include the following configuration parameters or attributes of a VM:
    1. Machine type
    2. Boot disk image
    3. Container image
    4. Zone
    5. Labels
  5. The most efficient way for administrators to implement an object management policy that requires objects stored in Cloud Storage to be migrated from regional storage to nearline storage 90 days after the object is created is a lifecycle management configuration policy specifying an age of 90 days and SetStorageClass as nearline, as sketched below.
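    • A minimal sketch, assuming a hypothetical bucket name: save the policy as lifecycle.json, then apply it with gsutil.
      {"lifecycle": {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"}, "condition": {"age": 90}}]}}
      gsutil lifecycle set lifecycle.json gs://ace-exam-bucket1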
  6. The command to synchronize the contents of two buckets is gsutil rsync, as in the sketch below.
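    • A hedged example (bucket names are hypothetical); -r recurses into subdirectories: gsutil rsync -r gs://ace-exam-bucket1 gs://ace-exam-bucket2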
  7. All of the following are components of firewall rules: a) direction of traffic, b) priority, c) action on match, d) enforcement status, e) target, f) source, g) protocol.
  8. VPCs are global resources and subnets are regional resources.
  9. For a web application deployment where you do not want to manage servers or clusters, a good option is a PaaS: App Engine.
  10. A data warehouse needing SQL query capabilities over petabytes of data, but with no servers or clusters to manage: such requirements can be met by BigQuery.
  11. In the Internet of Things space, you will stream large volumes of data into GCP. If the data needs to be filtered, transformed and analysed before being stored in GCP Datastore, use Cloud Dataflow.
  12. Cloud Dataflow allows for stream and batch processing of data and is well suited for ETL work.
  13. Dataproc is a managed Hadoop and Spark service that is used for big data analytics.
  14. Buckets, directories and subdirectories are used to organise storage
  15. gcloud is the command-line tool for IAM, and list-grantable-roles will list the roles that can be granted on a resource: gcloud iam list-grantable-roles <resource>. An example follows.
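    • For example, against a hypothetical project (the full resource name format is required): gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/ace-exam-project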
  16. Cloud Endpoints is an API Service
  17. Cloud Interconnect is a network service.
  18. Compute Engine Virtual Machine is a zonal resource.
  19. Within the zonal and regional geographic scopes, GCP network latencies are generally less than 1 millisecond.
  20. To create a custom role, a user must possess the iam.roles.create permission.
  21. Project is the base-level organizing entity for creating and using GCP resources and services.
  22. Organisations, folders and projects are the components used to manage an organizational hierarchy.
  23. gcloud compute regions describe gets a list of all CPU types available in a particular zone.
  24. Cloud Functions respond to events in Cloud Storage, making them a good choice for taking action after a file is loaded.
  25. Billing is set up at the project level in the GCP resource hierarchy.
  26. Cloud Dataproc is the managed Spark Service
  27. Cloud Dataflow is for stream and batch processing.
  28. Rate quotas reset at regular intervals.
  29. There are two types of quotas in billing, Rate Quotas and Allocation Quotas.
  30. In Kubernetes Engine, a node pool is a subset of node instances within a cluster that all have the same configuration.
  31. Code for Cloud Functions can be written in Node.js and Python
  32. Preemptible virtual machines may be shut down at any time, but will always be shut down after running for 24 hours.
  33. After deciding to use Cloud Key Management Service, and before you can start to create cryptographic keys, you must enable the KMS API (Google Cloud Key Management Service) and set up billing.
  34. GCP Service for storing and managing Docker containers is Container Registry.
  35. Once you have opened the GCP console at console.cloud.google.com, and before performing tasks on VMs, you must verify that the selected project is the one you want to work with. All operations you perform apply to resources in the selected project.
  36. A one-time task you will need to complete before using the console is setting up billing. You will be able to create a project only if billing is enabled.
  37. A name for the VM, a machine type, a region and a zone are the minimal set of information you will need when creating a VM.
  38. Different zones may have different machine types available.
  39. Billing different departments for the cost of the VMs used by their applications is possible with labels and descriptions.
  40. Google Cloud Interconnect – Dedicated is used to provide a dedicated connection between customer’s data center and a Google data center
  41. The purpose of instance groups in a Kubernetes cluster is to create sets of VMs that can be managed as a unit.
  42. A Kubernetes cluster has a single cluster master and one or more nodes to execute workloads.
  43. A pod is a single instance of a running process in a cluster
  44. To ensure that applications calling Kubernetes services are not disrupted as pods are created and destroyed, services provide a stable endpoint for a set of pods.
  45. ReplicaSets are controllers that are responsible for maintaining the correct number of pods.
  46. Deployments are versions of application code running on a cluster. 
  47. To maintain availability even if there is a major network outage in a data center, multizone/multiregion clusters are available in Kubernetes Engine and are used to provide resiliency to an application.
  48. Starting with an existing template, filling in parameters, and generating the gcloud command is the most reliable way to deploy a Kubernetes cluster with GPUs.
  49. gcloud beta container clusters create ch07-cluster-1 --num-nodes=4 will create a cluster named ch07-cluster-1 with four nodes.
  50. Application name, container image, and initial command can all be specified when creating a deployment from Cloud Console. Time to live (TTL) is not specified and is not an attribute of deployments.
  51. Deployment configuration files created in Cloud Console use YAML format.
  52. When working with Kubernetes Engine, a cloud engineer may need to configure nodes, pods, services, clusters and container images.
  53. After observing performance degradation, in order to see details of a specific cluster, open Cloud Console and click the cluster name.
  54. You can find the number of vCPUs on the cluster listing in the Total Cores column, or on the Details page in the Node Pool section in the size parameter.
  55. High level characteristics of a cluster — gcloud container clusters list
  56. gcloud container clusters get-credentials is the correct command to configure kubectl to use GCP credentials for the cluster.
  57. Clicking the Edit button allows you to change, add, or remove labels from a Kubernetes cluster.
  58. When resizing, the gcloud container clusters resize command requires the name of the cluster, the size, and the node pool to modify.
  59. Pods are used to implement replicas of a deployment. It is best practice to modify deployments, which are configured with a specification of the number of replicas that should always run.
  60. In the Kubernetes Engine Navigation menu, you would select Workloads in order to see a list of deployments.
  61. The 4 actions available for deployments are Autoscale, Expose, Rolling Update and Scale.
  62. Command to list deployments is kubectl get deployments
  63. You can specify the container image, cluster name and application name, along with labels, initial command and namespace.
  64. The Deployment Details page includes services.
  65. The kubectl run command is used to start a deployment. It takes a name for the deployment, an image and a port.
  66. Command for a service that is not functioning as expected and needs to be removed from the cluster: kubectl delete service m1-classified
  67. Container Registry is the service for managing images that can be used in other services like Kubernetes Engine and Compute Engine.
  68. gcloud container images list is used to list container images from the command line.
  69. gcloud container images describe gets a detailed description of a container image.
  70. kubectl expose deployment makes a service accessible, for example:
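    • A hedged sketch (deployment name and ports are hypothetical): kubectl expose deployment ace-exam-deployment --type=LoadBalancer --port=80 --target-port=8080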
  71. Autoscaling is the most cost-effective and least burdensome way to respond to changes in demand for a service.
  72. The version aspect of App Engine components is what you would use to minimize disruptions during updates to the service. Versions support migration: an app can have multiple versions, and by deploying with the --migrate parameter you can migrate traffic to the new version.
  73. Autoscaling enables setting a maximum and minimum number of instances, which is the best way to ensure that you have enough instances to meet demand without spending more than you have to.
  74. Applications have one or more services. Services have one or more versions. Versions are executed on one or more instances when the application is running. Hence in the hierarchy of App Engine components, instance is the lowest-level component.
  75. gcloud app deploy is used to deploy an App Engine app from the command line.
  76. The app.yaml file is used to configure an App Engine application, so if the Python version associated with the app were to be upgraded, it would be upgraded via that file.
  77.  A project can support only one App Engine app
  78. The best way to get the code out as soon as possible without exposing it to customers, would be to deploy with gcloud app deploy --no-promote
  79. App Engine applications are accessible from URLs that consist of the project name followed by appspot.com.
  80. Related to App Engine, max_concurrent_requests lets you specify the maximum number of concurrent requests before another instance is started. target_throughput_utilization functions similarly but uses a 0.05 to 0.95 scale to specify maximum throughput utilization. max_instances specifies the maximum number of instances but not the criteria for adding instances. max_pending_latency is based on the time a request waits, not the number of requests.
  81. App Engine Basic scaling only allows for idle time and maximum instances
  82. In App Engine, the runtime parameter specifies the language environment to execute in. The script to execute is specified by the script parameter. The URL to access the application is based on the project name and the domain appspot.com.
  83. There are two kinds of instances in App Engine Standard: resident instances are used with manual scaling, while dynamic instances are used with autoscaling and basic scaling.
  84. For Apps running in App Engine, using dynamic instances by specifying autoscaling or basic scaling will automatically adjust the number of instances in use based on load.
  85. For apps running in App Engine, gcloud app services set-traffic can allocate some users to a new version without exposing all users to it.
  86. For apps running in App Engine, the --split-by parameter to gcloud app services set-traffic is used to specify the method to use when splitting traffic.
  87. For apps running in App Engine, the --splits parameter to gcloud app services set-traffic is used to specify the percentage of traffic that should go to each instance.
  88. For apps running in App Engine, --migrate is the parameter for specifying that traffic should be moved, or migrated, to the newer instance.
  89. The cookie used for cookie based splitting in App Engine is called GOOGAPPUID
  90. From the App Engine console you can view the list of services and versions as well as information about the utilization of each instance.
  91. All three methods listed, IP address, HTTP cookie, and random splitting, are allowed methods for splitting traffic in App Engine.
  92. A new app will require several backend services: three business logic services and access to relational databases. Each service will provide a single function, and it will require the other services to complete a business task. Service execution time depends on the size of the input and is expected to take up to 30 minutes in some cases. App Engine is designed to support multiple tightly coupled services comprising an application.
  93. Cloud Functions is designed to support single-purpose functions that operate independently, in response to isolated events in the Google Cloud, and complete within a specified period of time.
  94. In Cloud Functions, a timeout period that is too low would explain why smaller files are processed in time but the largest are not.
  95. In Cloud Functions, an event is an action that occurs in GCP, such as a file being written to Cloud Storage or a message being added to a Cloud Pub/Sub topic.
  96. In Cloud Functions, a trigger is a declaration that a certain function should execute when an event occurs. 
  97. The GCP products listed do generate events that can have triggers associated with them: Cloud Storage, Cloud Pub/Sub, Firebase, and HTTP.
  98. Python, Node.js 6, Node.js 8 are supported in Cloud Functions.
  99. In Cloud Functions, an HTTP trigger can be invoked by making a request using DELETE, POST and GET.
  100. When Cloud Storage works with Cloud Functions, upload (or finalize), delete, metadata update and archive are the 4 events supported:
    1. google.storage.object.finalize
    2. google.storage.object.delete
    3. google.storage.object.archive
    4. google.storage.object.metadataUpdate
  101. The following feature cannot be specified in a parameter and must be implemented in function code: the file type to apply the function to.
  102. Cloud Functions can have between 128MB and 2GB of memory allocated.
  103. By default, Cloud Functions can run for up to 1 minute before timing out. You can, however, set the timeout parameter for a cloud function for periods of up to 9 minutes, as in the sketch below.
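    • A hedged example (function and topic names are hypothetical): gcloud functions deploy pdf_converter --runtime python37 --trigger-topic ace-exam-topic --timeout=540s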
  104. Python Cloud Functions is currently in beta. The standard set of gcloud commands does not include commands for alpha or beta release features by default. You will need to explicitly install beta features using the gcloud components install beta command.
  105. google.storage.object.finalize, which occurs after a file is uploaded.
  106. If you are defining a cloud function to write a record to a database when a file in Cloud Storage is archived, you need only the runtime, trigger-resource and trigger-event parameters, as in the sketch below.
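    • A hedged example (function and bucket names are hypothetical): gcloud functions deploy archive_logger --runtime python37 --trigger-resource gs://ace-exam-bucket1 --trigger-event google.storage.object.archive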
  107. If you would like to stop using a cloud function and delete it from your project, use gcloud functions delete.
  108. As part of the Python code for a cloud function that works with Cloud Pub/Sub, a decode function will be required, because messages in Pub/Sub topics are encoded to allow binary data to be used in places where text data is expected. Messages need to be decoded to access the data in the message.

  109. Bigtable is a wide-column database that can ingest large volumes of data consistently.
  110. Once a bucket is created as either regional or multi-regional, it cannot be changed to the other.
  111. The goal is to reduce cost, so you would want to use the least costly storage option. Coldline has the lowest per-gigabyte charge, at $0.007/GB/month.
  112. Memorystore is a managed Redis cache. The cache can be used to store the results of queries. Follow-on queries that reference the data stored in the cache can read it from the cache, which is much faster than reading from persistent disks. SSDs have significantly lower latency than hard disk drives and should be used for performance-sensitive applications like databases. 
  113. With versioning on a bucket, the latest version of the object is called the live version.
  114. Lifecycle configurations can change storage class from regional to nearline or coldline. Once a bucket is created as regional or multiregional, it cannot be changed to the other.
  115. When transactions and support for tabular data are important, Cloud SQL and Spanner are relational databases well suited for transaction-processing applications.
  116. Sample command for deployment of a Python cloud function called pub_sub_function_test:
    • gcloud functions deploy pub_sub_function_test --runtime python37 --trigger-topic gcp-ace-exam-test-topic
  117. There is only one type of event that is triggered in Cloud Pub/Sub, and that is when a message is published.
  118. Both MySQL and PostgreSQL are Cloud SQL options
  119. nam3 is a single super region
  120. us-central1 is a region
  121. us-west1-a is a zone
  122. The multiregional and multi-super-regional location nam-eur-asia1 is one of the most expensive Cloud Spanner configurations.
  123. BigQuery, Datastore, and Firebase are all fully managed services that do not require you to specify configuration information for VMs.
  124. Document data model is used by Datastore.
  125. BigQuery is a managed service designed for data warehouses and analytics. It uses standard SQL for querying and can support tens of petabytes of data.
  126. Bigtable can support tens of petabytes of data, but it does not use SQL as a query language. 
  127. Firestore is a document database that has mobile supporting features, like data synchronization.
  128. Consistency, cost, read/write patterns, transaction support and latency are features of storage which should be considered when choosing additional storage.
  129. Once a bucket has its storage class set to coldline, it cannot be changed to another storage class.
  130. To use BigQuery to store data, you must have a data set to store it in; one can be created as sketched below.
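    • A minimal sketch (the data set name is hypothetical): bq mk ace_exam_dataset1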
  131. With a second-generation instance in Cloud SQL, you can configure the MySQL version, connectivity, machine type, automatic backups, failover replicas, database flags, maintenance windows, and labels.
  132. Access charges are used with nearline and coldline storage
  133. Memorystore can be configured to use between 1GB and 300GB of memory.
  134. Your company has a web application that allows job seekers to upload résumé files. Some files are in Microsoft Word, some are PDFs, and others are text files. You would like to store all résumés as PDFs. And the solution for this one is, implement a Cloud Function on Cloud Storage to execute on a finalize event. The function checks the file type, and if it is not PDF, the function calls a PDF converter function and writes the PDF version to the bucket that has the original.
  135. Options for uploading code to a cloud function are as follows:
    1. Inline editor
    2. Zip upload
    3. Cloud source repository
  136. The HTTP trigger allows for the use of POST, GET, and PUT calls to invoke a cloud function.
  137. Cloud SQL is a fully managed relational database service, but database administrators still have to perform some tasks, creating databases being one of them.
  138. Cloud SQL is controlled using the gcloud command; the sequence of terms in gcloud commands is gcloud, followed by the service (in this case sql), followed by a resource (in this case backups), and a command or verb (in this case create). The command used to create a backup of a Cloud SQL database is gcloud sql backups create.
  139. This command will run an automatic backup on an instance called ace-exam-mysql. The base command is gcloud sql instances patch, which is followed by the instance name and a start time passed to the --backup-start-time parameter: gcloud sql instances patch ace-exam-mysql --backup-start-time 03:00
  140. GQL, a SQL-like query language, is used for Datastore.
  141. Exporting data from Datastore uses the following command: gcloud datastore export --namespaces='[NAMESPACE]' gs://[BUCKET_NAME]
  142. BigQuery analyzes a query and displays an estimate of the amount of data scanned. This is important because BigQuery charges for data scanned in queries.
  143. To get an estimate of the volume of data scanned by BigQuery from the command line, use the bq command structure that includes the location and the --dry_run option. This option calculates an estimate without actually running the query: bq --location=[LOCATION] query --use_legacy_sql=false --dry_run [SQL_QUERY]
  144. You are using Cloud Console and want to check on some jobs running in BigQuery. You navigate to the BigQuery part of the console; Job History is the menu item you would click to view jobs.
  145. To estimate the cost of running a BigQuery query: BigQuery provides an estimate of the amount of data scanned, and the Pricing Calculator gives a cost estimate for scanning that volume of data.
  146. You have just created a Cloud Spanner instance. You have been tasked with creating a way to store data about a product catalog, the next step is to create a database within the instance. Once a database is created, tables can be created, and data can be loaded into tables. 
  147. Your software team is developing a distributed application and wants to send messages from one application to another. Once the consuming application reads a message, it should be deleted. You want your system to be robust to failure, so messages should be available for at least three days before they are discarded. This involves sending messages to a topic, so the subscription model is a good fit; Pub/Sub has a configurable retention period that supports the three-day requirement.
  148. Pub/Sub works with topics, which receive and hold messages, and subscriptions, which make messages available to consuming applications.
  149. The command-line tools for the Bigtable environment can be set up using gcloud components install cbt, which installs the Bigtable command-line tool.
  150. cbt createtable iot-ingest-data creates a table named iot-ingest-data in Bigtable.
  151. Cloud Dataproc is a managed service for Spark and Hadoop. Cassandra is a big data distributed database but is not offered as a managed service by Google
  152. gcloud dataproc clusters create spark-nightly-analysis --zone us-west2-a is the command to create a Dataproc cluster.
  153. Command to rename an object stored in a bucket: gsutil mv gs://[BUCKET_NAME]/[OLD_OBJECT_NAME] gs://[BUCKET_NAME]/[NEW_OBJECT_NAME]
  154. Dataproc with Spark and its machine learning library is ideal for use cases such as analyzing data to help sell more products.
  155. gsutil mb is used to create buckets in Cloud Storage
  156. gsutil cp is the command to copy files from your local device to a bucket in Cloud Storage, assuming you have the Cloud SDK installed.
  157. If you are migrating a large number of files from a local storage system to Cloud Storage, you can upload files and folders using the Cloud Console.
  158. When exporting a database from Cloud SQL, the export file format options are CSV and SQL
  159. SQL format exports a database as a series of SQL data definition commands. These commands can be executed in another relational database without having to first create a schema.
  160. gcloud sql export sql ace-exam-mysql1 gs://ace-exam-buckete1/ace-exam-mysql-export.sql --database=mysql will export a MySQL database called ace-exam-mysql1 to a file called ace-exam-mysql-export.sql in a bucket named ace-exam-buckete1.
  161. The command required to back up data from your Datastore database to an object storage system, when your data is stored in the default namespace: gcloud datastore export --namespaces="(default)" gs://ace-exam-bucket1
  162. The Datastore export command creates a metadata file with information about the data exported, and a folder that has the data itself.
  163. XML is not an option in the BigQuery export process; CSV, Avro and JSON are valid options.
  164. CSV, Avro and Parquet are valid BigQuery load formats.
  165. bq load --autodetect --source_format=[FORMAT] [DATASET].[TABLE] [PATH_TO_SOURCE] has BigQuery analyze the data and makes it available for analysis in BigQuery.
  166. You have set up a Cloud Spanner process to export data to Cloud Storage. You notice that each time the process runs you incur charges for another GCP service, which you think is related to the export process. Dataflow is a pipeline service for processing streaming and batch data that implements workflows used by Cloud Spanner.
  167. Exporting from Dataproc exports data about the cluster configuration. Dataproc supports Apache Spark, which has libraries for machine learning. 
  168. The correct command to create a Pub/Sub topic is gcloud pubsub topics create, for example:
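    • A hedged example, using the topic name referenced in the next item: gcloud pubsub topics create ace-exam-topic1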
  169. gcloud pubsub subscriptions create --topic=ace-exam-topic1 ace-exam-sub1 will create a subscription on the topic ace-exam-topic1.
  170. A direct advantage of using a message queue in distributed systems: it decouples services, so if one lags, it does not cause other services to lag.
  171. gcloud components install beta installs beta gcloud commands.
  172. The BigQuery parameter to automatically detect the schema of a file import is --autodetect.
  173. Avro supports Deflate and Snappy compression when exporting from BigQuery. CSV supports Gzip and no compression. 
  174. As a developer on a project using Bigtable for an IoT application, you will need to export data from Bigtable to make some data available for analysis with another tool; a Java program designed for importing and exporting data from Bigtable is used for this.
  175. The A record is used to map a domain name to an IPv4 address. The AAAA record is used for IPv6 addresses.
  176. DNSSEC is a secure protocol designed to prevent spoofing and cache poisoning.
  177. The TTL parameter in a DNS record specifies the time a record can be in a cache before the data should be queried again.
  178. Command to create a DNS zone on the command line: gcloud beta dns managed-zones create.
  179. --visibility=private is the parameter that can be set to make a DNS zone private.
  180. Virtual private clouds are global. By default, they have subnets in all regions. Resources in any region can be accessed through the VPC.
  181. IP ranges are assigned to subnets; CIDR ranges are defined per subnet and determine the number of addresses available in it.
  182.  Dynamic routing is the parameter that specifies whether routes are learned regionally or globally. 
  183. gcloud compute networks create is the command to create a VPC.
  184. The Flow Log option of the create vpc command determines whether logs are sent to Stackdriver.
  185. Shared VPCs can be created at the organisation or folder level of the resource hierarchy.
  186. While creating a VM that should exist in a custom subnet, one needs to specify the subnet in the Networking tab of the Management, Security, Disks, Networking, Sole Tenancy section of the form.
  187. VPC peering is used for interproject communications.
  188. The target is the part of the firewall rule that can reference a network tag to determine the set of instances affected by the rule.
  189. Direction specifies whether the rule is applied to incoming or outgoing traffic.
  190. The CIDR range 0.0.0.0/0 matches all IP addresses.
  191. gcloud compute firewall-rules create: the product you are working with is compute and the resource you are creating is a firewall rule.
  192. When using gcloud to create a firewall rule, the --network parameter is used to specify the network it should apply to.
  193. gcloud compute firewall-rules create fwr1 --allow=udp:20000-30000 --direction=ingress. The service endpoints will accept any UDP traffic, and each endpoint will use a port in the range of 20000–30000.
  194. You want it to apply only if there is not another rule that would deny that traffic. 65535 is the appropriate priority because it is the largest number allowed in the range of values for priorities. The larger the number, the lower the priority; having the lowest priority ensures that other rules that match will apply.
  195. The VPN create option is available in the Hybrid Connectivity section.
  196. If you want to configure the GCP end of the VPN, the Google Compute Engine VPN section of the Create VPN form is where you specify information about the Google Cloud end of the VPN connection.
  197. Global dynamic routing is used to learn all routes on a network, and is what you want if the router on a tunnel you are creating should learn routes from all GCP regions on the network. The autonomous system number (ASN) is a number used to identify a cloud router on a network.
  198. When using gcloud to create a VPN, you need to create forwarding rules, tunnels, and gateways, so all the gcloud commands listed would be used: gcloud compute forwarding-rules, gcloud compute target-vpn-gateways, and gcloud compute vpn-tunnels.
  199. When you create a cloud router, you need to assign an ASN for the BGP protocol.
  200. In case a remote component in your network has failed and a gsutil command you submit fails because of a transient error, by default the command will retry using a truncated binary exponential back-off strategy:
    • Wait a random period between [0..1] seconds and retry;
    • If that fails, wait a random period between [0..2] seconds and retry;
    • If that fails, wait a random period between [0..4] seconds and retry;
    • And so on, up to a configurable maximum number of retries (default = 23), with each retry period bounded by a configurable maximum period of time (default = 60 seconds).

Thus, by default, gsutil will retry 23 times over 1+2+4+8+16+32+60… seconds for about 10 minutes. You can adjust the number of retries and the maximum delay of any individual retry by editing the num_retries and max_retry_delay configuration variables in the “[Boto]” section of the .boto config file, as sketched below. Most users shouldn’t need to change these values. For data transfers (the gsutil cp and rsync commands), gsutil provides additional retry functionality in the form of resumable transfers. Essentially, a transfer that was interrupted because of a transient error can be restarted without starting over from scratch. For more details about this, see the “RESUMABLE TRANSFERS” section of gsutil help.
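A minimal .boto sketch (the values here are illustrative, not recommendations):

[Boto]
num_retries = 10
max_retry_delay = 120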



Mangalorean Pork Chilly Recipe

Mangalorean pork chilly is a spicy and typical side or starter dish which sends the taste buds racing. Much to my amazement, this dish is a signature Mangalorean/Goan dish, and a Google search reflects that. There are definitely other references to Chinese and Bong variations; the Chinese one is a sweeter variation, and sweet and meat do not go hand in hand for me.

One of the first times I tasted this amazing dish was on my last trip to Mangalore, and I would highly recommend visiting Managala Bar, Mangalore. It serves an amazing pork chilly with a chilled beer in an amazing ambience. For more details there is a helpful Zomato link below.

You’ll Need:

  • 1 kg Pork
  • 1 Green Bell Pepper or Capsicum
  • 2-3 medium onions
  • 6 green chillies
  • 3 tablespoons of ginger & garlic paste
  • 6 flakes garlic
  • 1 tsp bafat powder
  • 1 tsp pepper powder
  • 2 tablespoons soya sauce (Preferably Dark Soya Sauce)
  • 1 tablespoon chilli sauce / schezwan sauce
  • Salt
  • Vinegar (1 tablespoon)
  • Optional : Garnish with spring Onion

What to do:

  • If you have the luxury of a friendly butcher ensure the meat pieces are cut into small chunks or use the favourite words “curry pieces” and wash. (Some butchers will magically know what you need)
  • After washing the meat, ensure the excess water is drained in the colander.
  • Transfer it to a glass vessel, add ginger garlic paste, salt and vinegar, and let it marinate for 30 minutes, or at most overnight. Marinating food too long can result in a tough, dry, or poor texture.
  • After marination, boil for 20 minutes, adding very little water.
  • Optionally, once the meat cools down, if needed slice pork further into thin slices and keep aside.
  • In a pan with 3 tablespoons of oil, fry the thinly sliced onions, green bell pepper and green chillies and the flakes of garlic for 2 minutes. Once done keep aside.
  • Now in the same pan, add sufficient oil for a shallow fry (for those who are a little lost, shallow frying means the oil/fat covers only the lower part of the food in contact with the vessel). Fry till the meat slices are light brown on both sides. Ensure the meat is cooked before wrapping up this stage.
  • Now it is time to bring everything together: in a separate non-stick vessel add the vegetables, followed by the bafat powder, pepper powder, soya sauce and chilli sauce. Mix well. Cook on a medium flame for 2 mins.
  • Add the fried meat to this saucy mixture and let it cook for 5 more minutes. 
  • Optionally garnish with spring onion.

Enjoy !


Best options to transfer money abroad

Recently I stumbled on a very neat solution which compares pretty much all the options available in the market to transfer money from country to country, and then produces a list of the best sites in real time. The market is flooded with options, and you will be shocked to see that a good option on a particular day does not remain the same the next day.

Firstly, I would like to clearly call out that this is not a paid review. It is something I found really helpful and believe will help a lot of students, or people who need to transfer money for savings, maintenance or medical purposes.

“Monito is a comparison tool that enables people to find, compare and review money transfer services”

monito.com

Factors that influence money transfer in real time

  • Total cost
  • Fees
  • Margin on the exchange rate

The Monito search engine compares all the options and simply presents a neat view of the best options to transfer money abroad; funnily enough, they do not transfer money at all and just give you links to the sites.

Over and above that, they clearly call out the pesky promo codes which you would otherwise need to hunt for ages. In short, you are armed with the best options; this did save me a good amount of money and helped me understand the market.

A little sneak peek of the site –

Example: UK to India transfer worth 1000 GBP

Other interesting views provided –

  • Cheapest
  • Fastest
  • Best Rated
  • Best Rate

They already cover 154 countries and over 300 providers, which is a massive amount of content and value.

With their experience in the industry, they have launched the Monito Score, a composite evaluation based on 56 criteria organized in seven categories:

  • Fees & Exchange Rates
  • Ease of use
  • Credibility & Security
  • Service & Coverage
  • Customer satisfaction
  • Customer support
  • Transparency

I must say I am simply in love with the solution they have delivered, and it is very inspiring. Please do make use of it.


Money Transfer Promo Codes / Free Cash Offers

Referral links which will give you a kick start for your transfers, with some free cash to start with.

Click on the links to get started

Terms and Conditions of the rewards are as per the money transfer sites and could change.

World Remit – Once you’ve sent 100 GBP using the above link, you’ll be emailed a 20 GBP WorldRemit voucher code


Interacting with Google Cloud Platform

There are four ways you can interact with Google Cloud Platform; each of them is listed below:

  1. GCP Console
  2. SDK and Cloud Shell
  3. Mobile App
  4. APIs. 

 1. GCP Console

  • The GCP Console is a web-based administrative interface. If you build an application in GCP, you interact with the console; it is not exposed to the end users of your app.
  • It lets you view and manage all your projects and all the resources they use. 
  • It also lets you enable, disable and explore the APIs of GCP services. 
  • It gives you access to Cloud Shell. 

2. A. Cloud Shell

That’s a command-line interface to GCP that’s easily accessed from your browser. 

From Cloud Shell, you can use the tools provided by the Google Cloud Software Development Kit (SDK) without having to first install them somewhere.

2. B. Google Cloud SDK – Software Development Kit 

The Google Cloud SDK is a set of tools that you can use to manage your resources and your applications on GCP. 

These include the gcloud tool, which provides the main command line interface for Google Cloud Platform products and services. 

 GSUTIL

gsutil is a Python application that lets you access Cloud Storage from the command line. You can use gsutil to do a wide range of bucket and object management tasks, including:

  • Creating and deleting buckets.
  • Uploading, downloading, and deleting objects.
  • Listing buckets and objects.
  • Moving, copying, and renaming objects.
  • Editing object and bucket ACLs.
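As a quick sketch of what those tasks look like in practice (the bucket and object names are hypothetical):

gsutil mb gs://ace-exam-bucket1                  # create a bucket
gsutil cp report.txt gs://ace-exam-bucket1       # upload an object
gsutil ls gs://ace-exam-bucket1                  # list objects in the bucket
gsutil mv gs://ace-exam-bucket1/report.txt gs://ace-exam-bucket1/final-report.txt   # rename an object
gsutil acl get gs://ace-exam-bucket1/final-report.txt   # view the object's ACL
gsutil rm gs://ace-exam-bucket1/final-report.txt        # delete an object
gsutil rb gs://ace-exam-bucket1                  # delete the now-empty bucket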

BQ

bq is a Python-based, command-line tool for BigQuery.
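As a quick sketch of bq in use, the following query runs against a well-known public dataset (the table is illustrative; any standard SQL query works the same way):

bq query --use_legacy_sql=false 'SELECT name, SUM(number) AS total FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name ORDER BY total DESC LIMIT 5'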

The Cloud Shell virtual machine comes with all these commands already installed. You can also install the SDK on your own computers: your laptop, your on-premises servers or virtual machines, and other clouds. The SDK is also available as a Docker image, which is a really easy and clean way to work with it.

3. APIs

The services that make up GCP offer application programming interfaces so that the code you write can control them. These APIs are what’s called RESTful. In other words they follow the representational state transfer paradigm. 

Basically, it means that your code can use Google services in much the same way that web browsers talk to web servers. 

The APIs name resources in GCP with URLs. Your code can pass information to the APIs using JSON, which is a very popular way of passing textual information over the web.

And there’s an open system for user login and access control. The GCP Console lets you turn on and off APIs.

Many APIs are off by default, and many are associated with quotas and limits. These restrictions help protect you from using resources inadvertently. 

You can enable only those APIs you need and you can request increases in quotas when you need more resources. 
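As a quick sketch of how this looks from the command line (the Compute Engine API is just an example), the gcloud services commands cover listing and enabling APIs:

gcloud services list --available                 # see which APIs can be enabled
gcloud services enable compute.googleapis.com    # turn on the Compute Engine API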

The GCP Console includes a tool called the APIs Explorer that helps you learn about the APIs interactively. It lets you see what APIs are available and in what versions. 

These APIs expect parameters, and documentation on them is built in. You can try the APIs interactively, even with user authentication. Suppose you have explored an API and you’re ready to build an application that uses it; Google provides client libraries that take a lot of the drudgery out of the task of calling GCP from your code.

 Libraries

There are two kinds of libraries. 

  • Cloud Client Library
  • Google API Client Library

The Cloud Client Libraries are Google Cloud’s latest and recommended libraries for its APIs. They adopt the native styles and idioms of each language, providing an optimized developer experience by using each supported language’s natural conventions and styles. They also reduce the boilerplate code you have to write, because they’re designed to let you work with service metaphors in mind, rather than implementation details or service API concepts.

On the other hand, sometimes a Cloud Client Library doesn’t support the newest services and features. In that case, you can use the Google API Client Library for your desired languages. These libraries are designed for generality and completeness. 

Following are the Cloud Client Libraries available,

  1. Java
  2. Node.js
  3. Python
  4. C#
  5. Go
  6. Ruby
  7. PHP

 Mobile App

There’s a mobile App for Android and iOS that lets you examine and manage the resources you’re using in GCP. 

It lets you build dashboards so that you can get the information you need at a glance.


Casereccia Pollo Piccante

Considering the overwhelming response to our first blog article on one of my favourite pasta recipes – Strozzapreti Pesto Rosso – it is only apt that we tempt your taste buds with another special one: an Italian chicken pasta preparation which adds another favourite to the collection. Most importantly, you can cook this beauty in under 8 to 10 minutes, with all the ingredients chopped and in hand.

Whenever I visit a Zizzi restaurant in the UK, my choice is pretty clear: for mains it is reserved between two types of pasta and one risotto. Call me predictable, but they are clearly irresistible. A quick peek into my “main” targets:


Strozzapreti Pesto Rosso – Spicy chicken, red pesto, mascarpone & spring onions.

Casereccia Pollo Piccante – Spicy harissa chicken in a creamy sauce with heritage tomatoes & spinach.

Risotto Pesce – King prawns, mussels & squid rings, with tomato, chilli & white wine.

Breaking Down the Recipe

Casareccia Pasta is a Sicilian twisted tube-shaped pasta. From the end, it looks like an “S.”
Its shape catches and holds sauce very well. This helps to make it a particularly good pasta for baking, as there is less chance of it being dry.

Piccante means spicy, hot or sharp, and I believe it refers to the roasted chilli half which is used to dress the finished dish.

You’ll Need:

Serves 1:

  • 200 gram Casareccia Pasta
  • Cooking oil
  • Torn Cooked Spicy Chicken
  • 5 Cherry Plum Tomatoes
  • 10 gram Harissa Sauce
  • 115ml Double Cream
  • Salt
  • Pepper
  • Spinach
  • Roasted Chilli Half

What to do:

  • Tear your chicken breast into small pieces and lightly coat with chilli paste (at times I make use of sriracha sauce or schezwan sauce; ensure it is used in moderation, as it will affect the taste of the dish).
  • For added flavour season the chicken with freshly ground pepper and a little salt.
  • In a pan, add the olive oil and sauté the chicken breast for about a minute or two. Ensure the chicken is cooked.
  • Take 200 grams of casareccia pasta and cook it separately for 8 to 10 minutes in boiled, salted water. Once it is cooked, drain the pasta in a strainer, mix with a tablespoon of olive oil and leave to settle aside.
  • Add 5 cherry plum tomatoes and stir in the pan.
  • Add 10 gram harissa sauce and stir for a minute.
  • Add 115ml double cream and stir for a minute.
  • Mix all the ingredients together until the cream turns a nice orange colour
  • Add salt and pepper to taste.
  • Reduce the sauce as per your favoured consistency.
  • Finally, add the cooked pasta into the pan and add a few leaves of spinach.
  • Add a roasted chilli half to decorate the dish.
  • Voila! You have the beautiful Casereccia Pollo Piccante ready in under 10 minutes (if you have all the ingredients ready)

Live in action:
