Cards you may also be interested in
Fire and Explosion Insurance (Bảo hiểm phòng cháy chữa cháy)
Fire and explosion insurance is a type of insurance that compensates for damage to or loss of property caused by fire or explosion. It is also the policy that the fire prevention and fighting police require you to buy. Typically, every 3 months, 6 months, or 1 year the fire police inspect the premises and require the establishment to have it. So what exactly do they check with regard to the insurance? In practice, after a few reminders, it is usually enough for the inspectors to see that the establishment has bought fire insurance. See also: bảo hiểm phòng cháy chữa cháy 2022

A more thorough inspection goes deeper into the policy itself, such as the sum insured, the premium, and the deductible, because state regulations set the following minimum requirements:

Sum insured: The minimum sum insured is the monetary value, at market prices, of assets such as factory buildings, machinery and equipment, goods, office buildings, electrical systems, and fire protection systems at the time the insurance contract is concluded. See also: quy định bảo hiểm phòng cháy chữa cháy

If the market value of the assets cannot be determined, the sum insured is agreed between the parties as follows:
a) For assets such as factory buildings, machinery and equipment, office buildings, electrical systems, and fire protection systems: the sum insured is the monetary value of the assets based on their remaining value or their replacement value at the time the contract is concluded. See also: Phí bảo hiểm phòng cháy chữa cháy
b) For assets that are goods: the sum insured is the monetary value of the assets based on valid invoices, vouchers, or related documents.

What is the deductible in fire and explosion insurance? The deductible is the amount the policyholder must bear itself in each insured event. For establishments with fire and explosion hazards (other than nuclear facilities) whose total sum insured for the assets at one location is under VND 1,000 billion, the deductible is prescribed as follows:
Bảo hiểm phòng cháy chữa cháy 【Những lưu ý bạn cần biết!】 (baohiempetrolimex.com)
Books to Read While Waiting for a Warm Spring
Editor Comment
The fallout from COVID-19 is considerable across society: remote work, the closure of schools and some public institutions, and an economic slump. It has grown into a global problem rather than a Korean one, and the hours spent idly at home these days can feel listless. Yet stepping away from the crowd and savoring time that is entirely your own can be like clearing the fine dust from a packed city center; you may even discover a new value in being "alone." If you still don't know what to do with the time, how about watching a film recommended by <선데이 라이언>, or picking up a book for the first time in a while and feeling the slow passage of time with your whole body? In an age flooded with images, reading text and picturing it in your head is a novelty in itself. Here are books to steady body and mind at a time of rising anxiety, in the hope that everything settles down soon and a warm spring arrives.

<우리가 이 도시의 주인공은 아닐지라도> (Even If We Are Not the Protagonists of This City)
"A restaurant with a Michelin star is no doubt excellent, but it is probably not the most important place in that city."
An essay on the city and the lives of city dwellers by Park Chan-yong, an eleven-year lifestyle editor at <매거진 B>. He observes city people in unremarkable neighborhood restaurants and reflects on the currents of the era while wandering the so-called hip districts of Euljiro and Seongsu. His prose reads easily; moderately pessimistic, he pores over the city inside and out and gathers stories from its corners that are neither glamorous nor polished. Even if we are not the protagonists of this city, we are at least the protagonists of this book; that is his point.

<그 겨울의 일주일> (A Week in Winter)
"I never thought my life would turn out like this."
Hotel Stonehouse is a place of healing for people carrying all kinds of stories. The book, which follows guests who each arrive with their own troubles, is the final work of Maeve Binchy, Ireland's best-loved author, published after her death. An utterly ordinary yet special week for people each with their own history. Their stories, which encourage and console, suit this bleak winter well. The warm prose never pleads for your tears, yet it quietly thaws body and mind and offers a happy stretch of time.

<빵 고르듯 살고 싶다> (I Want to Live the Way I Pick Bread)
"What is your favorite bread?"
If the title alone made you want to read this, you are almost certainly a bread lover. The author, whose favorite is plain white bread, sorts the emotions and moments of everyday life into eight kinds of bread: the small thrill of picking a single loaf in a bakery you stumbled into, or of carrying an empty tray around a favorite shop. Even an unremarkable, unexceptional day holds small and endearing happiness. It is a trivial question, but if you sit with "Which bread do you like best?", you soon find yourself making the light, soft resolution to live the way you pick bread. A book whose toasty sentences make the day feel precious.

<망가진 대로 괜찮잖아요> (It's Okay to Stay Broken)
"There is a line in the last scene of the film <노팅힐>..."
Art consoles people. Is there something that soothes you when times are hard: opening a book, putting on music, watching a film? This book introduces works that once comforted someone and suggests that we keep getting through tomorrow together. Twenty-five writers talk about the books, music, and films that helped them through difficult times, and the episodes attached to them. If tonight is another sleepless dawn, how about opening this book and letting it offer a warm word? Browsing the works that consoled each writer, you may well end up finding the story that fits you.

<포근한 봄 졸음이 떠돌아라> (Let the Cozy Spring Drowsiness Drift)
"In the cat's fur, soft as pollen, the fine scent (香氣) of spring lingers."
The March volume of the twelve-month poetry-and-painting series, in which you can feel the breath of poet and painter together. Opening, in the season when spring stretches awake, with a painting by Gustave Caillebotte and Yun Dong-ju's poem "Spring," the book pairs one poem a day with a masterpiece from the 1st to the 31st, by nineteen poets including Baek Seok, Jeong Ji-yong, and Kim So-wol. The artwork is beautiful and the poems tug at the heart, so you linger on every page, even over the shortest lines. Reading verse suffused with warm wind and the smell of spring, you feel as though a tender, cozy spring has already arrived.

<오늘, 내일, 모레 정도의 삶> (A Life of Today, Tomorrow, Maybe the Day After)
"Many people go through life feeling that it is unhappy."
<빅이슈> is the magazine that helps homeless people get back on their feet: half the proceeds of every copy sold go to the vendor. The book captures, in rough but earnest prose, the life of author Lim Sang-cheol, who once dreamed of being an artist, from homelessness to working as a vendor, a life in which not even a single day can be safely predicted. Sentences that look at his own sorrow calmly and state it plainly do not ask for pity; they simply tell his honest story as one person. The today we pass by without a thought may be someone else's miraculous day. Like the author, who says his life is still in progress, the book reminds us that happiness and unhappiness hinge on our own choices.

For more details, follow the link to <아이즈매거진>.
How to Delete a TikTok Account Permanently in 2021
How to Delete a TikTok Account
Deleting a TikTok account for good is straightforward; the four steps below walk you through it.

How to delete a TikTok account?
If you downloaded TikTok just to watch videos and never signed up for an account, you can simply delete the app from your mobile phone. If you do have an account, follow the four steps below to actually delete it:
1. Open the TikTok app, and tap the "Me" profile button in the bottom right-hand corner of the app.
2. Tap the three-dot menu in the top-right corner of the screen.
3. Select "Manage my account" and then tap "Delete account" at the bottom of the screen.
4. Follow the on-screen prompts and tap "Delete account" again to confirm your decision.
[June-2021]Braindump2go New Professional-Cloud-Architect PDF and VCE Dumps Free Share(Q200-Q232)
QUESTION 200
You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do?
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.
Answer: D

QUESTION 201
You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)
A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication
Answer: CD

QUESTION 202
You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
Answer: B

QUESTION 203
Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?
A. App Engine
B. GKE On-Prem
C. Compute Engine
D. Google Kubernetes Engine
Answer: D

QUESTION 204
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?
A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
Answer: D
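As a rough illustration of the scheduling approach described in options A and B of Question 204, the sketch below stops the non-production VMs after office hours and resizes them with gcloud. Instance names, the zone, and the e2-small machine type are hypothetical placeholders, not part of the question.

```sh
#!/bin/bash
# Hypothetical example: wind down acceptance/development VMs outside office hours.
# Instance names and zones are placeholders.
for vm in dev-app-1 acc-app-1; do
  # A machine type can only be changed while the instance is stopped.
  gcloud compute instances stop "$vm" --zone=europe-west1-b --quiet
  gcloud compute instances set-machine-type "$vm" \
      --zone=europe-west1-b --machine-type=e2-small
done
# Next morning (e.g., via cron or Cloud Scheduler + Cloud Functions), start them again:
# gcloud compute instances start dev-app-1 acc-app-1 --zone=europe-west1-b
```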
QUESTION 205
You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local database. 7. Start the Compute Engine application. 8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises application. 6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute Engine.
Answer: A

QUESTION 206
Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
Answer: D

QUESTION 207
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?
A. Enable Virtual Private Cloud (VPC) flow logging.
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.
Answer: B
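For Question 207, Firewall Insights only has data to show once logging is turned on for the rules in question. A minimal, hypothetical gcloud sketch (the rule name is a placeholder):

```sh
# Enable Firewall Rules Logging on an existing rule so Firewall Insights
# has log rows to analyze. "allow-web-ingress" is a hypothetical rule name.
gcloud compute firewall-rules update allow-web-ingress --enable-logging

# Optionally confirm the logging configuration of the rule.
gcloud compute firewall-rules describe allow-web-ingress \
    --format="value(name,logConfig.enable)"
```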
QUESTION 208
Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use the Classless Inter-domain Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.
D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.
Answer: C

QUESTION 209
You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don't want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?
A. Start a new rolling restart operation.
B. Start a new rolling replace operation.
C. Start a new rolling update. Select the Proactive update mode.
D. Start a new rolling update. Select the Opportunistic update mode.
Answer: C

QUESTION 210
Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do?
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.
Answer: D
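To make the regional persistent disk options in Question 210 concrete, here is a hypothetical gcloud sketch. The disk name, template name, region, zones, size, and type are placeholders, and flag spellings should be checked against your gcloud version.

```sh
# Create a regional persistent disk, synchronously replicated across
# two zones in the same region (all names/zones are placeholders).
gcloud compute disks create app-data-disk \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b \
    --size=200GB --type=pd-ssd

# During a zonal outage: create a replacement VM from the instance
# template in a surviving zone, then attach the regional disk to it.
gcloud compute instances create app-server-failover \
    --zone=us-central1-b \
    --source-instance-template=app-template

gcloud compute instances attach-disk app-server-failover \
    --zone=us-central1-b \
    --disk=app-data-disk --disk-scope=regional --force-attach
```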
QUESTION 211
Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company's data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company's Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.
Answer: A

QUESTION 212
You need to migrate Hadoop jobs for your company's Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?
A. Create a Dataproc cluster using standard worker instances.
B. Create a Dataproc cluster using preemptible worker instances.
C. Manually deploy a Hadoop cluster on Compute Engine using standard instances.
D. Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.
Answer: A

QUESTION 213
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated. The network configuration is shown below. Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration:
• NIC1: VPC: VPC #2, SUBNETWORK: subnet #2
• NIC2: VPC: VPC #3, SUBNETWORK: subnet #3
Update firewall rules to enable traffic between instances.
C. Create two VPN tunnels via Cloud VPN:
• one between VPC #1 and VPC #2
• one between VPC #2 and VPC #3
Update firewall rules to enable traffic between the instances.
D. Peer all three VPCs:
• Peer VPC #1 with VPC #2.
• Peer VPC #2 with VPC #3.
Update firewall rules to enable traffic between the instances.
Answer: B

QUESTION 214
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
Answer: B
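As a hypothetical illustration of the Dataproc options in Question 212, creating a managed Hadoop/Spark cluster and submitting an existing job takes only a couple of commands. The cluster name, region, worker counts, and jar path below are placeholders, and flag names may differ between gcloud releases.

```sh
# Create a managed Dataproc cluster; secondary workers can be preemptible
# to cut cost, as option B suggests (names and counts are placeholders).
gcloud dataproc clusters create hadoop-migration \
    --region=us-central1 \
    --num-workers=2 \
    --num-secondary-workers=2

# Submit an existing Hadoop job unchanged to the managed cluster.
gcloud dataproc jobs submit hadoop \
    --cluster=hadoop-migration --region=us-central1 \
    --jar=gs://my-bucket/jobs/wordcount.jar -- arg1 arg2
```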
QUESTION 215
You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error.
D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error.
Answer: C

QUESTION 216
You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?
A. Use a persistent disk for each instance.
B. Use a regional persistent disk for each instance.
C. Create a Cloud Filestore instance and mount it in each instance.
D. Create a Cloud Storage bucket and mount it in each instance using gcsfuse.
Answer: D

QUESTION 217
Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads.
C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads.
D. Reinstall Istio using the default istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
Answer: A

QUESTION 218
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
Answer: A
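A hypothetical sketch of the locked retention policy from option A of Question 218, using gsutil. The bucket name is a placeholder, and because locking is irreversible, commands like these should be double-checked before being run for real.

```sh
# Set a 5-year retention period on the bucket (bucket name is a placeholder).
gsutil retention set 5y gs://mortgage-approval-docs

# Lock the retention policy so it can no longer be shortened or removed.
# This is permanent for the life of the bucket.
gsutil retention lock gs://mortgage-approval-docs

# Verify the policy.
gsutil retention get gs://mortgage-approval-docs
```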
QUESTION 219
Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
Answer: A

QUESTION 220
Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?
A. Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
B. Restart the affected instances on a staggered schedule.
C. SSH to each instance and restart the application process.
D. Increase the maximum number of instances in the autoscaling group.
Answer: A

QUESTION 221
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
A. Cloud Run and BigQuery
B. Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D. A Compute Engine autoscaling managed instance group and Cloud Bigtable
Answer: D
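For the Cloud Build options in Question 219, a trigger tied to the develop branch can be created from the CLI once the GitHub repository is connected to Cloud Build. This is a hypothetical sketch: the trigger name, repository owner/name, and config path are placeholders, and the exact flags vary between gcloud releases.

```sh
# Create a Cloud Build trigger that fires on pushes to the develop branch
# of a connected GitHub repository (owner, repo, and file names are placeholders).
gcloud builds triggers create github \
    --name=develop-ci \
    --repo-owner=example-org \
    --repo-name=shop-microservices \
    --branch-pattern='^develop$' \
    --build-config=cloudbuild.yaml
```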
QUESTION 222
You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
Answer: A

QUESTION 223
Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs.
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs.
Answer: C

QUESTION 224
Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
Answer: B
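To make Question 222's Deployment-plus-Service pattern concrete, here is a hypothetical kubectl sketch (the image, names, and ports are placeholders, and the --replicas flag on kubectl create deployment requires a reasonably recent kubectl). Other Pods in the cluster can then reach the microservice at a stable Service DNS name such as checkout.default.svc.cluster.local, independent of how many replicas are running.

```sh
# Run the microservice as a Deployment with a chosen number of replicas
# (image and names are placeholders).
kubectl create deployment checkout \
    --image=gcr.io/example-project/checkout:v1 --replicas=3

# Expose it inside the cluster with a ClusterIP Service; other
# microservices address it by the Service DNS name, e.g.
# http://checkout.default.svc.cluster.local:80
kubectl expose deployment checkout --port=80 --target-port=8080
```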
QUESTION 225
Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly such as an unwanted firewall change or server breach is detected. You want to follow Google-recommended practices. What should you do?
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
Answer: C

QUESTION 226
You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do?
A. Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance.
B. Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP.
C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance.
D. Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.
Answer: D

QUESTION 227
Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do?
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer: C

QUESTION 228
You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice should suddenly crash. What should you do?
A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value.
B. Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate.
C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
D. Configure Istio's traffic management features to steer the traffic away from a crashing microservice.
Answer: C
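Option C of Question 226 can be illustrated with a single hypothetical command; the instance name and zone are placeholders, and the caller needs the IAP-secured Tunnel User role on the project.

```sh
# SSH into a VM that has no external IP by tunnelling through
# Identity-Aware Proxy (instance name and zone are placeholders).
gcloud compute ssh private-vm-1 \
    --zone=us-central1-a \
    --tunnel-through-iap
```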
QUESTION 229
Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?
A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine
Answer: A

QUESTION 230
Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version number on every backward-incompatible change. Use the current version number for the new API.
Answer: A

QUESTION 231
Your company has developed a monolithic, 3-tier application to allow external users to upload and share files. The solution cannot be easily enhanced and lacks reliability. The development team would like to re-architect the application to adopt microservices and a fully managed service approach, but they need to convince their leadership that the effort is worthwhile. Which advantage(s) should they highlight to leadership?
A. The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines.
B. The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster.
C. The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the solution if necessary.
D. The process can be automated with Migrate for Compute Engine.
Answer: C

QUESTION 232
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the auto scaling works.
C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
D. Use Cloud Debugger in the development environment to understand the latency between the different microservices.
Answer: B
2021 Latest Braindump2go Professional-Cloud-Architect PDF and VCE Dumps Free Share: https://drive.google.com/drive/folders/1kpEammLORyWlbsrFj1myvn2AVB18xtIR?usp=sharing