2024-11-28 GSP662
Overview
There are many ways to deploy web sites within Google Cloud. Each solution offers different features, capabilities, and levels of control. Compute Engine offers a deep level of control over the infrastructure used to run a web site, but also requires a little more operational management compared to solutions like Google Kubernetes Engine (GKE), App Engine, or others. With Compute Engine, you have fine-grained control of aspects of the infrastructure, including the virtual machines, load balancers, and more.
In this lab you will deploy a sample application, the "Fancy Store" ecommerce website, to show how a website can be deployed and scaled easily with Compute Engine.
In this lab you learn how to:
At the end of the lab, you will have instances inside managed instance groups to provide autohealing, load balancing, autoscaling, and rolling updates for your website.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may cause extra charges to be incurred to your personal account.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method.
On the left is the Lab Details panel with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
{{{user_0.username | "Username"}}}
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
{{{user_0.password | "Password"}}}
You can also find the Password in the Lab Details panel.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.
Your Cloud Platform project in this session is set to {{{project_0.project_id | "PROJECT_ID"}}}
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
gcloud auth list
Output:
ACTIVE: *
ACCOUNT: {{{user_0.username | "ACCOUNT"}}}
To set the active account, run:
$ gcloud config set account `ACCOUNT`
gcloud config list project
Output:
[core]
project = {{{project_0.project_id | "PROJECT_ID"}}}
Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones.
Run the following gcloud commands in Cloud Shell to set the default region and zone for your lab:
gcloud config set compute/zone "{{{project_0.default_zone|ZONE}}}"
export ZONE=$(gcloud config get compute/zone)
gcloud config set compute/region "{{{project_0.default_region|REGION}}}"
export REGION=$(gcloud config get compute/region)
gcloud services enable compute.googleapis.com
You will use a Cloud Storage bucket to house your built code as well as your startup scripts.
gsutil mb gs://fancy-store-$DEVSHELL_PROJECT_ID
The $DEVSHELL_PROJECT_ID environment variable within Cloud Shell helps ensure the names of objects are unique. Since all Project IDs within Google Cloud must be unique, appending the Project ID should make other names unique as well.
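As a minimal local sketch of this naming scheme (not part of the lab itself), the bucket name is just the fixed prefix plus the project ID. Here "example-project-123456" is a hypothetical project ID; in Cloud Shell, $DEVSHELL_PROJECT_ID is set automatically to your real project:

```shell
# Hypothetical project ID; Cloud Shell sets DEVSHELL_PROJECT_ID for you.
DEVSHELL_PROJECT_ID="example-project-123456"

# Project IDs are globally unique, so the derived bucket name is too.
BUCKET_NAME="fancy-store-${DEVSHELL_PROJECT_ID}"
echo "gs://${BUCKET_NAME}"
```

Running this prints gs://fancy-store-example-project-123456, the same shape of name the gsutil mb command above creates.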
Click Check my progress to verify the objective.
Create Cloud Storage bucket
Use the existing Fancy Store ecommerce website based on the monolith-to-microservices repository as the basis for your website.
Clone the source code so you can focus on the aspects of deploying to Compute Engine. Later on in this lab, you will perform a small update to the code to demonstrate the simplicity of updating on Compute Engine.
Clone the source code, then navigate to the monolith-to-microservices directory:
git clone https://github.com/googlecodelabs/monolith-to-microservices.git
cd ~/monolith-to-microservices
./setup.sh
It will take a few minutes for this script to finish.
nvm install --lts
Navigate to the microservices directory and start the web server:
cd microservices
npm start
You should see the following output:
Products microservice listening on port 8082!
Frontend microservice listening on port 8080!
Orders microservice listening on port 8081!
This opens a new window where you can see the frontend of Fancy Store.
Now it’s time to start deploying some Compute Engine instances!
In the following steps you will:
A startup script will be used to instruct the instance what to do each time it is started. This way the instances are automatically configured.
Create a file called startup-script.sh:
touch ~/monolith-to-microservices/startup-script.sh
Navigate to the monolith-to-microservices folder.
Add the following code to the startup-script.sh file. You will edit some of the code after it's added:
#!/bin/bash
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor psmisc
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Storage bucket.
mkdir /fancy-store
gsutil -m cp -r gs://fancy-store-[DEVSHELL_PROJECT_ID]/monolith-to-microservices/microservices/* /fancy-store/
# Install app dependencies.
cd /fancy-store/
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/fancy-store
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
Find the text [DEVSHELL_PROJECT_ID] in the file and replace it with your Project ID. The line of code within startup-script.sh should now resemble:
gs://fancy-store-{{{project_0.project_id | Project ID}}}/monolith-to-microservices/microservices/* /fancy-store/
Save the startup-script.sh
file, but do not close it yet.
Look at the bottom right of the Cloud Shell Code Editor, and ensure "End of Line Sequence" is set to "LF" and not "CRLF".
Close the startup-script.sh file.
Return to the Cloud Shell terminal and run the following to copy the startup-script.sh file into your bucket:
gsutil cp ~/monolith-to-microservices/startup-script.sh gs://fancy-store-$DEVSHELL_PROJECT_ID
It will now be accessible at https://storage.googleapis.com/[BUCKET_NAME]/startup-script.sh, where [BUCKET_NAME] represents the name of the Cloud Storage bucket. By default, this is only viewable by authorized users and service accounts, and is therefore inaccessible through a web browser. Compute Engine instances will automatically be able to access it through their service account.
The startup script performs the following tasks:
When instances launch, they pull code from the Cloud Storage bucket, so you can store some configuration variables within the .env file of the code.
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
The node_modules dependencies directories are deleted to ensure the copy is as fast and efficient as possible. These are recreated on the instances when they start up.
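The cleanup glob above removes exactly the second-level node_modules directories and nothing else. This is a local sketch of the same pattern against a hypothetical /tmp scratch directory, so you can see what survives:

```shell
# Build a tiny stand-in for the repo layout (hypothetical scratch path).
demo=/tmp/fancy-copy-demo
mkdir -p "$demo/microservices/node_modules" "$demo/react-app/node_modules"
touch "$demo/microservices/package.json"

# Same shape as the lab command: strip every */node_modules before copying,
# since "npm install" recreates them on each instance at startup.
rm -rf "$demo"/*/node_modules

ls "$demo/microservices"   # package.json survives; node_modules is gone
```

Only the dependency directories disappear; the application source that the instances actually need is what gets copied to the bucket.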
Click Check my progress to verify the objective.
Copy startup script and code to Cloud Storage bucket
The first instance to be deployed will be the backend instance, which will house the Orders and Products microservices.
Create an e2-standard-2 instance that is configured to use the startup script. It is tagged as a backend instance so you can apply specific firewall rules to it later:
gcloud compute instances create backend \
  --zone=$ZONE \
  --machine-type=e2-standard-2 \
  --tags=backend \
  --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
Before you deploy the frontend of the application, you need to update the configuration to point to the backend you just deployed.
Run the following and note the EXTERNAL_IP listed for the backend instance:
gcloud compute instances list
Example output:
NAME: backend
ZONE: {{{project_0.default_zone | ZONE}}}
MACHINE_TYPE: e2-standard-2
PREEMPTIBLE:
INTERNAL_IP: 10.142.0.2
EXTERNAL_IP: 35.237.245.193
STATUS: RUNNING
Copy the external IP for the backend.
In the Cloud Shell Explorer, navigate to monolith-to-microservices > react-app.
In the Code Editor, select View > Toggle Hidden Files in order to see the .env file.
In the next step, you edit the .env file to point to the External IP of the backend. [BACKEND_ADDRESS] represents the External IP address of the backend instance determined from the above gcloud command.
In the .env file, replace localhost with your [BACKEND_ADDRESS]:
REACT_APP_ORDERS_URL=http://[BACKEND_ADDRESS]:8081/api/order
REACT_APP_PRODUCTS_URL=http://[BACKEND_ADDRESS]:8082/api/products
Save the file.
In Cloud Shell, run the following to rebuild react-app, which will update the frontend code:
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
Now that the code is configured, deploy the frontend instance.
Create the frontend instance with a similar command as before. This instance is tagged as frontend for firewall purposes:
gcloud compute instances create frontend \
  --zone=$ZONE \
  --machine-type=e2-standard-2 \
  --tags=frontend \
  --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
gcloud compute firewall-rules create fw-fe \
  --allow tcp:8080 \
  --target-tags=frontend
gcloud compute firewall-rules create fw-be \
  --allow tcp:8081-8082 \
  --target-tags=backend
The website should now be fully functional.
To browse to the frontend, you need to know its address. Run the following and look for the EXTERNAL_IP of the frontend instance:
gcloud compute instances list
Example output:
NAME: backend
ZONE: us-central1-f
MACHINE_TYPE: e2-standard-2
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.2
EXTERNAL_IP: 34.27.178.79
STATUS: RUNNING
NAME: frontend
ZONE: us-central1-f
MACHINE_TYPE: e2-standard-2
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.3
EXTERNAL_IP: 34.172.241.242
STATUS: RUNNING
It may take a couple of minutes for the instance to start and be configured.
Wait 3 minutes, then open a new browser tab and browse to http://[FRONTEND_ADDRESS]:8080 to access the website, where [FRONTEND_ADDRESS] is the frontend EXTERNAL_IP determined above.
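The frontend URL can be assembled from the instance list output mechanically. This sketch simulates that extraction against the example output above with awk; with real credentials, gcloud's own filtering (--filter and --format) does the same job server-side:

```shell
# Simulated: the relevant lines from "gcloud compute instances list".
example_output='NAME: frontend
INTERNAL_IP: 10.128.0.3
EXTERNAL_IP: 34.172.241.242
STATUS: RUNNING'

# Pull the EXTERNAL_IP field and build the frontend URL on port 8080.
FRONTEND_ADDRESS=$(printf '%s\n' "$example_output" | awk '/^EXTERNAL_IP:/ {print $2}')
echo "http://${FRONTEND_ADDRESS}:8080"
```

With cloud credentials you could fetch the address directly, e.g. gcloud compute instances list --filter="name=frontend" --format="value(EXTERNAL_IP)".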
Try navigating to the Products and Orders pages; these should now work.
Click Check my progress to verify the objective.
Deploy instances and configure network
To allow the application to scale, managed instance groups will be created and will use the frontend and backend instances as Instance Templates.
A managed instance group (MIG) contains identical instances that you can manage as a single entity in a single zone. Managed instance groups maintain high availability of your apps by proactively keeping your instances available, that is, in the RUNNING state. You will be using managed instance groups for your frontend and backend instances to provide autohealing, load balancing, autoscaling, and rolling updates.
Before you can create a managed instance group, you have to first create an instance template that will be the foundation for the group. Instance templates allow you to define the machine type, boot disk image or container image, network, and other instance properties to use when creating new VM instances. You can use instance templates to create instances in a managed instance group or even to create individual instances.
To create the instance template, use the existing instances you created previously.
gcloud compute instances stop frontend --zone=$ZONE
gcloud compute instances stop backend --zone=$ZONE
gcloud compute instance-templates create fancy-fe \
  --source-instance-zone=$ZONE \
  --source-instance=frontend
gcloud compute instance-templates create fancy-be \
  --source-instance-zone=$ZONE \
  --source-instance=backend
gcloud compute instance-templates list
Example output:
NAME: fancy-be
MACHINE_TYPE: e2-standard-2
PREEMPTIBLE:
CREATION_TIMESTAMP: 2023-07-25T14:52:21.933-07:00
NAME: fancy-fe
MACHINE_TYPE: e2-standard-2
PREEMPTIBLE:
CREATION_TIMESTAMP: 2023-07-25T14:52:15.442-07:00
Delete the backend VM to save resource space:
gcloud compute instances delete backend --zone=$ZONE
Normally, you could delete the frontend VM as well, but you will use it to update the instance template later in the lab.
gcloud compute instance-groups managed create fancy-fe-mig \
  --zone=$ZONE \
  --base-instance-name fancy-fe \
  --size 2 \
  --template fancy-fe
gcloud compute instance-groups managed create fancy-be-mig \
  --zone=$ZONE \
  --base-instance-name fancy-be \
  --size 2 \
  --template fancy-be
These managed instance groups will use the instance templates and are configured for two instances each within each group to start. The instances are automatically named based on the base-instance-name specified, with random characters appended.
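To make the naming concrete, this toy sketch builds a name of the same shape the MIG produces (e.g. fancy-fe-x151). The suffix here is generated locally and is purely illustrative; the real suffixes are chosen by the MIG controller, not by you:

```shell
# The base name you passed via --base-instance-name above.
base_instance_name="fancy-fe"

# Four lowercase alphanumerics, similar in shape to real MIG suffixes.
suffix=$(head -c 1024 /dev/urandom | tr -dc 'a-z0-9' | head -c 4)
echo "${base_instance_name}-${suffix}"
```

Each run prints a different name like fancy-fe-k3p9, which is why you list the group's instances later rather than predicting their names.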
The frontend microservice runs on port 8080, and the backend microservice runs on port 8081 for order and port 8082 for products:
gcloud compute instance-groups set-named-ports fancy-fe-mig \
  --zone=$ZONE \
  --named-ports frontend:8080
gcloud compute instance-groups set-named-ports fancy-be-mig \
  --zone=$ZONE \
  --named-ports order:8081,products:8082
Since these are non-standard ports, you specify named ports to identify them. Named ports are key:value pair metadata representing the service name and the port that it's running on. Named ports can be assigned to an instance group, which indicates that the service is available on all instances in the group. This information is used by the HTTP Load Balancing service that will be configured later.
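The service:port string passed to --named-ports is plain key:value metadata. As a small sketch (not something the lab requires), splitting the exact string used above shows the pairs the load balancer will later look up by name:

```shell
# The same value passed to --named-ports for the backend group.
named_ports="order:8081,products:8082"

# Split on commas into pairs, then on ":" into service name and port.
IFS=',' read -ra pairs <<< "$named_ports"
for pair in "${pairs[@]}"; do
  echo "service=${pair%%:*} port=${pair##*:}"
done
```

This prints one line per service (order on 8081, products on 8082), the same mapping the backend services reference via --port-name later in the lab.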
To improve the availability of the application itself and to verify it is responding, configure an autohealing policy for the managed instance groups.
An autohealing policy relies on an application-based health check to verify that an app is responding as expected. Checking that an app responds is more precise than simply verifying that an instance is in a RUNNING state, which is the default behavior.
In contrast, health checking for autohealing causes Compute Engine to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
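The "conservative" part comes down to the unhealthy-threshold: an instance is only marked unhealthy after several consecutive failed probes, not one. This toy sketch simulates that counting logic with a hardcoded probe sequence (1 = probe succeeded, 0 = probe failed), mirroring the --unhealthy-threshold 3 setting used in the commands below:

```shell
# Three consecutive failures mark the instance unhealthy.
unhealthy_threshold=3
consecutive_failures=0
state="HEALTHY"

# Simulated probe results: one success, then three failures in a row.
for probe in 1 0 0 0; do
  if [ "$probe" -eq 1 ]; then
    consecutive_failures=0          # any success resets the counter
  else
    consecutive_failures=$((consecutive_failures + 1))
  fi
  if [ "$consecutive_failures" -ge "$unhealthy_threshold" ]; then
    state="UNHEALTHY"               # autohealing would now replace the VM
  fi
done
echo "$state"
```

With a 30s check interval, an instance therefore has at least ~90 seconds of consecutive failure before Compute Engine recreates it, which is why this check can be stricter-to-trigger than a load balancing health check.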
Create health checks that repair the instance if it returns unhealthy 3 consecutive times, for the frontend and backend:
gcloud compute health-checks create http fancy-fe-hc \
  --port 8080 \
  --check-interval 30s \
  --healthy-threshold 1 \
  --timeout 10s \
  --unhealthy-threshold 3
gcloud compute health-checks create http fancy-be-hc \
  --port 8081 \
  --request-path=/api/order \
  --check-interval 30s \
  --healthy-threshold 1 \
  --timeout 10s \
  --unhealthy-threshold 3
gcloud compute firewall-rules create allow-health-check \
  --allow tcp:8080-8081 \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --network default
gcloud compute instance-groups managed update fancy-fe-mig \
  --zone=$ZONE \
  --health-check fancy-fe-hc \
  --initial-delay 300
gcloud compute instance-groups managed update fancy-be-mig \
  --zone=$ZONE \
  --health-check fancy-be-hc \
  --initial-delay 300
Click Check my progress to verify the objective.
Create managed instance groups
To complement your managed instance groups, use HTTP(S) Load Balancers to serve traffic to the frontend and backend microservices, and use mappings to send traffic to the proper backend services based on pathing rules. This exposes a single load balanced IP for all services.
You can learn more about the load balancing options on Google Cloud: Overview of Load Balancing.
Google Cloud offers many different types of load balancers. For this lab you use an HTTP(S) Load Balancer for your traffic. An HTTP load balancer is structured as follows:
gcloud compute http-health-checks create fancy-fe-frontend-hc \
  --request-path / \
  --port 8080
gcloud compute http-health-checks create fancy-be-order-hc \
  --request-path /api/order \
  --port 8081
gcloud compute http-health-checks create fancy-be-products-hc \
  --request-path /api/products \
  --port 8082
gcloud compute backend-services create fancy-fe-frontend \
  --http-health-checks fancy-fe-frontend-hc \
  --port-name frontend \
  --global
gcloud compute backend-services create fancy-be-order \
  --http-health-checks fancy-be-order-hc \
  --port-name order \
  --global
gcloud compute backend-services create fancy-be-products \
  --http-health-checks fancy-be-products-hc \
  --port-name products \
  --global
gcloud compute backend-services add-backend fancy-fe-frontend \
  --instance-group-zone=$ZONE \
  --instance-group fancy-fe-mig \
  --global
gcloud compute backend-services add-backend fancy-be-order \
  --instance-group-zone=$ZONE \
  --instance-group fancy-be-mig \
  --global
gcloud compute backend-services add-backend fancy-be-products \
  --instance-group-zone=$ZONE \
  --instance-group fancy-be-mig \
  --global
gcloud compute url-maps create fancy-map \
  --default-service fancy-fe-frontend
Create a path matcher to allow the /api/order and /api/products paths to route to their respective services:
gcloud compute url-maps add-path-matcher fancy-map \
  --default-service fancy-fe-frontend \
  --path-matcher-name order \
  --path-rules "/api/order=fancy-be-order,/api/products=fancy-be-products"
gcloud compute target-http-proxies create fancy-proxy \
  --url-map fancy-map
gcloud compute forwarding-rules create fancy-http-rule \
  --global \
  --target-http-proxy fancy-proxy \
  --ports 80
Click Check my progress to verify the objective.
Create HTTP(S) load balancers
Now that you have a new static IP address, update the code on the frontend to point to this new address instead of the ephemeral address used earlier that pointed to the backend instance.
Navigate to the react-app folder, which houses the .env file that holds the configuration:
cd ~/monolith-to-microservices/react-app/
gcloud compute forwarding-rules list --global
Example output:
NAME: fancy-http-rule
REGION:
IP_ADDRESS: 34.111.203.235
IP_PROTOCOL: TCP
TARGET: fancy-proxy
Edit the .env file again to point to the public IP of the Load Balancer. [LB_IP] represents the IP address of the Load Balancer determined above.
REACT_APP_ORDERS_URL=http://[LB_IP]/api/order
REACT_APP_PRODUCTS_URL=http://[LB_IP]/api/products
Save the file.
Rebuild react-app, which will update the frontend code:
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
Now that there is new code andconfiguration, you want the frontend instances within the managed instance group to pull the new code.
Since your instances pull the code at startup, you can issue a rolling restart command:
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
  --zone=$ZONE \
  --max-unavailable 100%
Note: In this example you specifically state that all machines can be replaced immediately through the --max-unavailable parameter. Without this parameter, the command would keep an instance alive while restarting others to ensure availability. For testing purposes, you specify to replace all immediately for speed.
Click Check my progress to verify the objective.
Update the frontend instances
Wait a few minutes after issuing the rolling-action replace command in order to give the instances time to be processed, and then check the status of the managed instance group. Run the following to confirm the service is listed as HEALTHY:
watch -n 2 gcloud compute backend-services get-health fancy-fe-frontend --global
Example output:
backend: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instanceGroups/fancy-fe-mig
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-x151
    ipAddress: 10.128.0.7
    port: 8080
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-cgrt
    ipAddress: 10.128.0.11
    port: 8080
  kind: compute#backendServiceGroupHealth
If neither instance enters a HEALTHY state after waiting a little while, something is wrong with the setup of the frontend instances, and accessing them on port 8080 doesn't work. Test this by browsing to the instances directly on port 8080.
Once both instances appear in the list with a HEALTHY status, exit the watch command by pressing CTRL+C. The Load Balancer IP can be found with:
gcloud compute forwarding-rules list --global
You’ll be checking the application later in the lab.
So far, you have created two managed instance groups with two instances each. This configuration is fully functional, but it is a static configuration regardless of load. Next, you create an autoscaling policy based on utilization to automatically scale each managed instance group.
gcloud compute instance-groups managed set-autoscaling \
  fancy-fe-mig \
  --zone=$ZONE \
  --max-num-replicas 2 \
  --target-load-balancing-utilization 0.60
gcloud compute instance-groups managed set-autoscaling \
  fancy-be-mig \
  --zone=$ZONE \
  --max-num-replicas 2 \
  --target-load-balancing-utilization 0.60
These commands create an autoscaler on the managed instance groups that automatically adds instances when utilization is above 60%, and removes instances when the load balancer is below 60% utilization.
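The scaling decision reduces to comparing measured utilization against the 0.60 target. This toy sketch (with two hypothetical utilization samples; the real signal comes from load balancing serving capacity, not a script) shows which side of the target triggers which action:

```shell
# Target from --target-load-balancing-utilization above.
target=0.60

# Hypothetical samples: one below target, one above.
for utilization in 0.45 0.72; do
  decision=$(awk -v u="$utilization" -v t="$target" \
    'BEGIN { if (u > t) print "scale out"; else print "hold or scale in" }')
  echo "utilization=${utilization} -> ${decision}"
done
```

The 0.72 sample would add instances (up to --max-num-replicas) and the 0.45 sample would not; the autoscaler applies this continuously against live measurements.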
Another feature that can help with scaling is to enable a Content Delivery Network service, to provide caching for the frontend.
gcloud compute backend-services update fancy-fe-frontend \
  --enable-cdn --global
When a user requests content from the HTTP(S) load balancer, the request arrives at a Google Front End (GFE) which first looks in the Cloud CDN cache for a response to the user’s request. If the GFE finds a cached response, the GFE sends the cached response to the user. This is called a cache hit.
If the GFE can't find a cached response for the request, the GFE makes a request directly to the backend. If the response to this request is cacheable, the GFE stores the response in the Cloud CDN cache so that the cache can be used for subsequent requests.
Click Check my progress to verify the objective.
Scaling Compute Engine
Existing instance templates are not editable; however, since your instances are stateless and all configuration is done through the startup script, you only need to change the instance template if you want to change the template settings. Now you're going to make a simple change to use a smaller machine type and push that out.
Complete the following steps to:
Update the frontend instance, which acts as the basis for the instance template. During the update, put a file on the updated version of the instance template's image, then update the instance template, roll out the new template, and then confirm the file exists on the managed instance group instances.
Modify the machine type of your instance template, by switching from the e2-standard-2 machine type to e2-small.
gcloud compute instances set-machine-type frontend \
  --zone=$ZONE \
  --machine-type e2-small
gcloud compute instance-templates create fancy-fe-new \
  --region=$REGION \
  --source-instance=frontend \
  --source-instance-zone=$ZONE
gcloud compute instance-groups managed rolling-action start-update fancy-fe-mig \
  --zone=$ZONE \
  --version template=fancy-fe-new
watch -n 2 gcloud compute instance-groups managed list-instances fancy-fe-mig \
  --zone=$ZONE
This will take a few moments.
Once you have at least 1 instance in the following condition:
Copy the name of one of the machines listed for use in the next command.
CTRL+C to exit the watch
process.
Run the following to see if the virtual machine is using the new machine type (e2-small), where [VM_NAME] is the newly created instance:
gcloud compute instances describe [VM_NAME] --zone=$ZONE | grep machineType
Expected example output:
machineType: https://www.googleapis.com/compute/v1/projects/project-name/zones/us-central1-f/machineTypes/e2-small
Scenario: Your marketing team has asked you to change the homepage for your site. They think it should be more informative of who your company is and what you actually sell.
Task: Add some text to the homepage to make the marketing team happy! It looks like one of the developers has already created the changes with the file name index.js.new. You can just copy this file to index.js and the changes should be reflected. Follow the instructions below to make the appropriate changes.
cd ~/monolith-to-microservices/react-app/src/pages/Home
mv index.js.new index.js
cat ~/monolith-to-microservices/react-app/src/pages/Home/index.js
The resulting code should look like this:
/*
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
import React from "react";
import { Box, Paper, Typography } from "@mui/material";
export default function Home() {
  return (
    <Box sx={{ flexGrow: 1 }}>
      <Paper
        elevation={3}
        sx={{
          width: "800px",
          margin: "0 auto",
          padding: (theme) => theme.spacing(3, 2),
        }}
      >
        <Typography variant="h5">Welcome to the Fancy Store!</Typography>
        <br />
        <Typography variant="body1">
          Take a look at our wide variety of products.
        </Typography>
      </Paper>
    </Box>
  );
}
You updated the React components, but you need to build the React app to generate the static files.
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
  --zone=$ZONE \
  --max-unavailable=100%
Note: In this example of a rolling replace, you specifically state that all machines can be replaced immediately through the --max-unavailable parameter. Without this parameter, the command would keep an instance alive while replacing others. For testing purposes, you specify to replace all immediately for speed. In production, leaving a buffer would allow the website to continue serving traffic while updating.
Click Check my progress to verify the objective.
Update the website
Wait a few minutes after issuing the rolling-action replace command in order to give the instances time to be processed, and then check the status of the managed instance group. Run the following to confirm the service is listed as HEALTHY:
watch -n 2 gcloud compute backend-services get-health fancy-fe-frontend --global
Example output:
backend: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instanceGroups/fancy-fe-mig
status:
  healthStatus:
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-x151
    ipAddress: 10.128.0.7
    port: 8080
  - healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/my-gce-codelab/zones/us-central1-a/instances/fancy-fe-cgrt
    ipAddress: 10.128.0.11
    port: 8080
  kind: compute#backendServiceGroupHealth
Once items appear in the list with HEALTHY status, exit the watch command by pressing CTRL+C.
Browse to the website via http://[LB_IP], where [LB_IP] is the IP_ADDRESS specified for the Load Balancer, which can be found with the following command:
gcloud compute forwarding-rules list --global
The new website changes should now be visible.
In order to confirm the health check works, log in to an instance and stop the services.
gcloud compute instance-groups list-instances fancy-fe-mig --zone=$ZONE
gcloud compute ssh [INSTANCE_NAME] --zone=$ZONE
Type in "y" to confirm, and press Enter twice to not use a password.
Within the instance, use supervisorctl to stop the application:
sudo supervisorctl stop nodeapp; sudo killall node
exit
watch -n 2 gcloud compute operations list \
  --filter='operationType~compute.instances.repair.*'
This will take a few minutes to complete.
Look for the following example output:
NAME TYPE TARGET HTTP_STATUS STATUS TIMESTAMP
repair-1568314034627-5925f90ee238d-fe645bf0-7becce15 compute.instances.repair.recreateInstance us-central1-a/instances/fancy-fe-1vqq 200 DONE 2019-09-12T11:47:14.627-07:00
The managed instance group recreated the instance to repair it.
You successfully deployed, scaled, andupdated your website on Compute Engine. You are now experienced with Compute Engine, Managed Instance Groups, Load Balancers, andHealth Checks!
…helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated April 26, 2024
Lab Last Tested December 15, 2023
Copyright 2024 Google LLC. All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.