
What is Digital Marketing?

One can best describe digital marketing by comparing it to traditional marketing, which utilizes mediums like newspaper ads and billboards (see https://comingfly.blogspot.com/2021/07/what-is-digital-marketing.html).

In contrast, digital marketing uses a variety of digital outlets such as:
Google and other search engines
Search engine optimisation (SEO)
Pay-per-click (PPC) advertising
Websites
Social media advertising (Facebook, Instagram, etc.)
Email marketing
Mobile marketing (apps)
Content marketing and digital PR (blog posts, LinkedIn articles, etc.)
Lead magnets (free digital products in exchange for contact info)
Conversion rate optimisation (CRO)

Each channel gives businesses another chance to put themselves in front of the right users and increase traffic to their sites. For example, effective SEO strategies will help your entire site or a specific web page rank more highly on search engine results, increasing your company's visibility.
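Conversion rate optimisation, the last item in the list above, ultimately comes down to a simple ratio: conversions divided by visitors. The short Python sketch below compares two hypothetical landing-page variants; the visitor and conversion counts are invented purely for illustration and are not taken from any real campaign.

# Hypothetical figures for illustration only - not real campaign data.
variants = {
    "original_page": {"visitors": 4800, "conversions": 96},
    "new_headline": {"visitors": 5100, "conversions": 153},
}

for name, stats in variants.items():
    rate = stats["conversions"] / stats["visitors"] * 100
    print(f"{name}: {rate:.2f}% conversion rate")

baseline = variants["original_page"]["conversions"] / variants["original_page"]["visitors"]
candidate = variants["new_headline"]["conversions"] / variants["new_headline"]["visitors"]
print(f"Relative uplift of the new headline: {(candidate - baseline) / baseline * 100:.1f}%")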
Cards you may also be interested in
What are the features of on-demand video interviewing software?
By combining human psychological assessment factors with technology-enabled ones, an on-demand video interviewing solution gives recruiters automation and accurate analysis at every stage of the hiring process. Below are some of the key features of video on demand interview software that illustrate its functionality.

Job Posting Management
Developing, sharing, and managing job descriptions involves a long list of tasks on which most recruiters spend much of their time. On-demand video interview solutions can speed this up with job description (JD) templates and third-party site integrations. Using JD templates, recruiters can create comprehensive, attractive JDs in seconds: they enter their requirements, and the platform automatically assembles the JD for posting. Third-party integrations also let recruiters track job boards, job sites, and other channels from a single platform, with full visibility into the number of applicants and the tools to manage them accordingly. In short, video on demand interview software gives recruiters a smooth path to job posting management.

Resume Sifting Automation
A large number of applications for a single position means a large amount of data to review. Recruiters spend hours finding the applicant who meets their required criteria, which is much like finding a needle in a haystack. With an on-demand video interviewing system, recruiters can streamline this process using machine learning algorithms and text analysis. The platform automatically scans multiple resumes and looks for suitable matches to the given criteria within the unstructured text, so recruiters can surface the right resumes for the next round without any bias.
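Vendors do not publish their matching algorithms, so the Python sketch below is only a minimal illustration of the general idea behind keyword-based resume sifting: score each resume by how many of the recruiter's required criteria appear in its text. The criteria and resume excerpts are hypothetical, and a production system would layer on far more sophisticated text analysis (synonyms, embeddings, bias controls).

def score_resume(resume_text, required_criteria):
    """Return the fraction of required criteria mentioned in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for criterion in required_criteria if criterion.lower() in text)
    return hits / len(required_criteria)

# Hypothetical criteria and resume excerpts for illustration.
criteria = ["python", "sql", "recruiting analytics", "stakeholder management"]
resumes = {
    "candidate_a": "Built recruiting analytics dashboards in Python and SQL for a 2,000-person firm.",
    "candidate_b": "Led stakeholder management and vendor negotiations for a retail rollout.",
}

ranked = sorted(resumes, key=lambda name: score_resume(resumes[name], criteria), reverse=True)
for name in ranked:
    print(f"{name}: {score_resume(resumes[name], criteria):.0%} of criteria matched")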
[June-2021] Braindump2go New Professional-Cloud-Architect PDF and VCE Dumps Free Share (Q200-Q232)
QUESTION 200 You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do? A.Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies. B.Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance. C.Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard. D.Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies. Answer: D QUESTION 201 You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.) A.Sharding B.Read replicas C.Binary logging D.Automated backups E.Semisynchronous replication Answer: CD QUESTION 202 You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do? A.Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier. B.When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information. C.Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks. D.Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value. Answer: B QUESTION 203 Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to? A.App Engine B.GKE On-Prem C.Compute Engine D.Google Kubernetes Engine Answer: D QUESTION 204 Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do? A.Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task. 
B.Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours. C.Deploy the development and acceptance applications on a managed instance group and enable autoscaling. D.Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments. Answer: D QUESTION 205 You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do? A.1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local database. 7. Start the Compute Engine application. 8. Stop the on-premises application. B.1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises application. 6. Start the Compute Engine application. C.1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance. D.1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute Engine. Answer: A QUESTION 206 Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do? A.Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway. B.Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet. C.Implement a Cloud NAT solution to remove the need for external IP addresses entirely. D.Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list. Answer: D QUESTION 207 Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. 
You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue? A.Enable Virtual Private Cloud (VPC) flow logging. B.Enable Firewall Rules Logging for the firewall rules you want to monitor. C.Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role. D.Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output. Answer: B QUESTION 208 Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do? A.1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network. B.1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use the Classless Inter-domain Routing (CIDR) of the office network. C.1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business. D.1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts. Answer: C QUESTION 209 You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don't want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do? A.Start a new rolling restart operation. B.Start a new rolling replace operation. C.Start a new rolling update. Select the Proactive update mode. D.Start a new rolling update. Select the Opportunistic update mode. Answer: C QUESTION 210 Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do? A.Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone. B.Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data. C.Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region. D.Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. 
Use the regional persistent disk for the application data, Answer: D QUESTION 211 Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company's data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company's Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established? A.Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space. B.Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space. C.Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space. D.Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space. Answer: A QUESTION 212 You need to migrate Hadoop jobs for your company's Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do? A.Create a Dataproc cluster using standard worker instances. B.Create a Dataproc cluster using preemptible worker instances. C.Manually deploy a Hadoop cluster on Compute Engine using standard instances. D.Manually deploy a Hadoop cluster on Compute Engine using preemptible instances. Answer: A QUESTION 213 Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated. The network configuration is shown below. Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this? A.Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1. B.Add two additional NICs to Instance #1 with the following configuration: • NIC1 ○ VPC: VPC #2 ○ SUBNETWORK: subnet #2 • NIC2 ○ VPC: VPC #3 ○ SUBNETWORK: subnet #3 Update firewall rules to enable traffic between instances. C.Create two VPN tunnels via CloudVPN: • 1 between VPC #1 and VPC #2. • 1 between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances. D.Peer all three VPCs: • Peer VPC #1 with VPC #2. • Peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances. Answer: B QUESTION 214 You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do? A.Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available. B.Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates. C.Create an instance with the latest available Debian image. 
Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available. D.Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available. Answer: B QUESTION 215 You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do? A.1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods. B.1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods. C.1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error. D.1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error. Answer: C QUESTION 216 You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do? A.Use a persistent disk for each instance. B.Use a regional persistent disk for each instance. C.Create a Cloud Filestore instance and mount it in each instance. D.Create a Cloud Storage bucket and mount it in each instance using gcsfuse. Answer: D QUESTION 217 Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do? A.Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices. B.Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads. C.Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads. D.Reinstall istio using the default istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console. Answer: A QUESTION 218 You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. 
Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do? A.Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy. B.Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files. C.Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years. D.Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files. Answer: A QUESTION 219 Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do? A.Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster. B.Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster. C.Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions. D.Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions. Answer: A QUESTION 220 Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend? A.Change the autoscaling metric to agent.googleapis.com/memory/percent_used. B.Restart the affected instances on a staggered schedule. C.SSH to each instance and restart the application process. D.Increase the maximum number of instances in the autoscaling group. Answer: A QUESTION 221 You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. 
The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application? A.Cloud Run and BigQuery B.Cloud Run and Cloud Bigtable C.A Compute Engine autoscaling managed instance group and BigQuery D.A Compute Engine autoscaling managed instance group and Cloud Bigtable Answer: D QUESTION 222 You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do? A.Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster. B.Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster. C.Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster. D.Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster. Answer: A QUESTION 223 Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do? A.1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs. B.1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team. C.1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team. D.1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs. Answer: C QUESTION 224 Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do? A.Store static content such as HTML and images in Cloud CDN. 
Host the APIs on App Engine and store the user data in Cloud SQL. B.Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner. C.Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL. D.Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore. Answer: B QUESTION 225 Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly such as an unwanted firewall change or server breach is detected. You want to follow Google-recommended practices. What should you do? A.Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events. B.Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events. C.Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events. D.Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events. Answer: C QUESTION 226 You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do? A.Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance. B.Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP. C.Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance. D.Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance. Answer: D QUESTION 227 Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do? A.Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder. B.Assign the development team group only the Project Viewer role on the Finance folder. C.Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization. D.Assign the development team group only the Project Owner role on the Shopping folder. Answer: C QUESTION 228 You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice should suddenly crash. What should you do? A.Add a taint to one of the nodes of the Kubernetes cluster. 
For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value. B.Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate. C.Destroy one of the nodes of the Kubernetes cluster to observe the behavior. D.Configure Istio's traffic management features to steer the traffic away from a crashing microservice. Answer: C QUESTION 229 Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application? A.App Engine B.Cloud Endpoints C.Compute Engine D.Google Kubernetes Engine Answer: A QUESTION 230 Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do? A.Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API. B.Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API. C.Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change. D.Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version number on every backward-incompatible change. Use the current version number for the new API. Answer: A QUESTION 231 Your company has developed a monolithic, 3-tier application to allow external users to upload and share files. The solution cannot be easily enhanced and lacks reliability. The development team would like to re-architect the application to adopt microservices and a fully managed service approach, but they need to convince their leadership that the effort is worthwhile. Which advantage(s) should they highlight to leadership? A.The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines. B.The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster. C.The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the solution if necessary. D.The process can be automated with Migrate for Compute Engine. Answer: C QUESTION 232 Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do? 
A.Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results. B.Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the auto scaling works. C.Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address. D.Use Cloud Debugger in the development environment to understand the latency between the different microservices. Answer: B 2021 Latest Braindump2go Professional-Cloud-Architect PDF and VCE Dumps Free Share: https://drive.google.com/drive/folders/1kpEammLORyWlbsrFj1myvn2AVB18xtIR?usp=sharing
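As a practical aside, Question 218 above settles on a locked bucket retention policy as the way to keep approval documents immutable for five years. A rough sketch of how that could look with the google-cloud-storage Python client is shown below; the bucket name is a placeholder, and locking a retention policy is irreversible, so treat this as an illustration rather than something to run against a real bucket.

from google.cloud import storage

FIVE_YEARS_IN_SECONDS = 5 * 365 * 24 * 60 * 60

client = storage.Client()
bucket = client.get_bucket("mortgage-approval-docs")  # placeholder bucket name

# Objects cannot be deleted or overwritten until the retention period expires.
bucket.retention_period = FIVE_YEARS_IN_SECONDS
bucket.patch()

# Locking makes the retention policy permanent for the life of the bucket.
bucket.lock_retention_policy()
print("Retention policy locked:", bucket.retention_policy_locked)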
[October-2021] Braindump2go New PL-900 PDF and VCE Dumps Free Share (Q179-Q195)
QUESTION 179 A company builds and sells residential apartments. The company uses Dynamics 365 Sales to manage sales opportunities. Management must receive notifications on their mobile devices when sales opportunities are created. You need to recommend the appropriate Power Platform components to address the requirements. Which two components should you recommend to invoke the notification process? Each correct answer presents part of the solution. NOTE: Each selection is worth one point. A.AI Builder B.Power Automate C.Common Data Service connector D.Power BI Answer: BC QUESTION 180 Hotspot Question You are creating a number of Power Automate flows. You need to select the triggers for the flows. Which flow types should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer: QUESTION 181 A company is building an interactive chatbot to answer questions about product and product warranties. You need to create conversation paths for questions about product warranties. Which tool should you use? A.Authoring canvas B.Azure Bot Framework C.Power Platform admin center D.Power Virtual Agents portal E.Conversation node Answer: D QUESTION 182 Hotspot Question You are developing a Power Virtual Agents chatbot for a company. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer: QUESTION 183 You are building a Power Virtual Agents chatbot for a company. You are working with an existing topic and would like to call an action. Which technology is available to perform the action? A.Power Virtual Agent Entity B.Power BI C.Power Apps D.Power Automate Answer: D QUESTION 184 A company uses Power Platform. You must ensure that users cannot share customer data with other users. You must also ensure that uses cannot connect to data sources unless you grant the user explicit permissions to access a data source. You need to recommend solutions to meet the company's security requirements. Which two types of policies should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A.Office cloud policies B.Group Policy Objects C.environment-level policies D.tenant-level policies E.preset security policies Answer: CD QUESTION 185 You create a Power Virtual Agents chatbot. You need to share the bot with other team members so that they can try out the bot before you share the bot with customers. What should you use? A.demo website B.live production website C.test chat feature Answer: C QUESTION 186 You create a Power Bl dashboard that displays Common Data Model data. You need to share the Power Bl dashboard with coworkers and allow the coworkers to collaborate. What are two possible ways to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A.Create a Power Automate flow to export the data into a SQL Server database. B.Publish the dashboard as an app to your coworkers. C.Export the data to Microsoft Excel. Make required changes and then re-import the data. D.Create a Power Bl workspace and grant coworkers permissions. Answer: AB QUESTION 187 You are a district manager for a large retail organization. You train each store manager to use Power BI to track sales and daily sales targets. A store manager remembers learning about the Analyze in Excel option but cannot find the option in their Power BI dashboard. 
You need to help the user resolve the issue. How should you advise the user? A.Install the Power Bl Desktop app. B.Navigate to the report used by the dashboard. C.Select the Spotlight button on the dashboard tile. D.Subscribe to the dashboard and follow the email link. Answer: B QUESTION 188 You are creating visuals in Power BI. You create area charts, pie charts, and donut charts that use your company's data. You need to display the charts to others at the company. Which two objects can you add the charts to? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A.Power Bl service B.Power Bl reports C.Power Bl desktop D.Power Bl dashboards Answer: BD QUESTION 189 You create a Power App portal. When a user signs into the portal the following error displays: user not found You confirm that the user's sign in information is correct. You need to determine the cause of the error. What should you do? A.Disable custom error messages. B.Create a custom error message. C.Enable diagnostic tools in Lifecycle Services. D.Enable Maintenance mode. Answer: C QUESTION 190 You create a canvas app that allows contractors to submit time they work against a project. Contractors must be able to use the canvas app to enter time. Contractors must not be able to perform any other actions in the app. You need to configure permissions for the contractors. Which type of permissions should you use? A.application-level B.task-level C.record-level D.field-level Answer: D QUESTION 191 Hotspot Question You have version 1.0.0.0 of a published Power Apps app. You create and publish version 2.0.0.0 of the app. A customer goes through the process of restoring the previous version of the app. How many versions of the app are displayed in the Version tab for the app? To answer, select the appropriate option in the answer area. Answer: QUESTION 192 Drag and Drop Question A company has locations in multiple regions. The company develops solutions based on Power Apps and Power Automate. You need to recommend features to support the implementation. Which Power Platform features should you recommend? To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer: QUESTION 193 Drag and Drop Question A travel company plans to use the Power Platform to create tools that help travel agents book customer travel. You need to recommend solutions for the company. What should you recommend? To answer, drag the appropriate tools to the correct requirements. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer: QUESTION 194 Drag and Drop Question A manufacturing company is evaluating Al Builder. You need to select Al Builder models to address specified requirements. Which model types should you use? To answer, drag the appropriate model types to the correct requirements. Each model type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer: QUESTION 195 Hotspot Question You are planning to use the Business Card Reader and Sentiment Analysis prebuilt AI models to build solutions. 
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer: 2021 Latest Braindump2go PL-900 PDF and PL-900 VCE Dumps Free Share: https://drive.google.com/drive/folders/1IOmmERLjCXhozbt-Vq8PQdAbCPhQMXPo?usp=sharing
[October-2021] Braindump2go New 1Y0-312 PDF and VCE Dumps Free Share (Q119-Q133)
QUESTION 119 In which location is the resulting .VHD file stored by the Enterprise Layer Manager (ELM), when creating layers or layered images with Citrix App Layering? A.On a Common Internet File System (CIFS) share location on the local domain file share server B.On the Repository located on the local storage of the Enterprise Layer Manager (ELM) C.On the Server Message Block (SMB) file share configured from the Citrix App Layering management console D.On local storage attached to the hypervisor platform being used to create the images Answer: B QUESTION 120 Scenario: A Citrix Engineer is managing a large Citrix Virtual Apps and Desktops environment. Recently, the users are complaining about slow sessions on random days. The engineer wants to use a Citrix tool in order to have a baseline comparison of the users' performance on a daily basis. Which tool can the engineer use to accomplish this? A.Citrix Diagnostic B.Citrix Optimizer C.Citrix Analytics D.Citrix Diagnostic Facility (CDF) Control Answer: C QUESTION 121 Which statement is valid regarding back-up of StoreFront Configuration? A.Only a partial configuration of StoreFront can be exported. B.Configuration on a StoreFront server will NOT be overwritten completely by the imported settings. C.Configuration exports can be imported on other machines with StoreFront installed. D.StoreFront configuration exports can include only single server deployments and NOT server group configurations. Answer: C QUESTION 122 Scenario: A Citrix Engineer is maintaining a Citrix Virtual Apps and Desktops environment. The engineer has deployed Workspace Environment Management (WEM) within the infrastructure. The engineer has recently noticed that the two most commonly used applications are consuming excessive disk contention and causing bottlenecks on the servers. Which WEM System Optimization does the engineer need to use to address this issue? A.Session-Based Computing/Hosted Virtual Desktop (SBC/HVD) Tuning B.Memory Management C.Process Management D.I/O Management Answer: A Explanation: SBC/HVD (Session-Based Computing/Hosted Virtual Desktop) tuning allows you to optimize the performance of sessions running on Citrix Virtual Apps and Desktops. While designed to improve performance, some of the options might result in slight degradation of the user experience. QUESTION 123 What are two valid options to manage antivirus definition updates in a Citrix App Layering deployment? (Choose two.) A.Enable auto-updates and store updates in the Elastic Layer assigned to each user. B.Disable auto-updates and redeploy the layer for each update on a schedule approved by the security team. C.Enable auto-updates and store updates in the user's User Layer. D.Disable auto-updates and clear the checkbox for "Reinstall the layer" when redeploying the layers. Answer: BC QUESTION 124 Scenario: A Citrix Engineer has been hired to work on daily troubleshooting issues like session hang, Virtual Delivery Agent (VDA) registration issues, and non-responsive application issues. The engineer will also need recent trends to prepare health check reports. Which component/console should the engineer consider using? A.Citrix Diagnostic Facility (CDF) Tracing B.Citrix Insights C.Citrix Director D.Citrix Studio Answer: A QUESTION 125 Scenario: A Citrix Engineer needs to configure passthrough for user authentication on the Workspace front. The engineer confirmed that Single Sign-on process is running on the user machine and all the other settings are in place on the machine. 
Which two steps should the engineer follow to make passthrough authentication work for the users? (Choose two.) A.Configure Any Domain setting under Trusted domains in StoreFront. B.Set the TrustManagedAnonymousXmlServiceRequests to true on the Delivery Controllers. C.Configure Optimal Gateway Routing on the StoreFront server. D.Set the TrustRequestsSentToTheXmlServicePort to true on the Delivery Controllers. Answer: C QUESTION 126 Scenario: A Citrix Engineer has implemented a multi-zone Citrix Virtual Apps and Desktops site. The setup is as follows: - The Primary zone has been implemented in Sweden. - The Satellite zones have been implemented in Finland, Norway and Denmark. - The Application named `Sys-QA' is hosted on Virtual Delivery Agent (VDA) machines which are available in all the Satellite zones only. - The Application home zone for Sys-CA has been configured as Finland. A user with user ID `test1' has a disconnected session for Sys-QA in Norway. Where will the session launch, if user test1 tries to reconnect to application Sys-QA? A.Denmark B.Finland C.Norway D.Sweden Answer: C QUESTION 127 Which two statements are true regarding Publishing Platform Layer? (Choose two.) A.It can be updated directly using the Citrix App Layering Management console. B.Its purpose is to create a template that outputs to a virtual machine (VM) or a virtual disk. C.It has limited use-case scenarios, such as making minor updates to an App Layer. D.It is used every time changes are made, and a Platform Layer is being published. Answer: AB QUESTION 128 Which statement is correct regarding Citrix App Layering templates and layered images? A.Templates are NOT required when creating a VHD file for a Citrix Provisioning deployment. B.To minimize management effort, use the same template to generate layered images for each hypervisor platform used. C.Templates are NOT supported for use with physical machine imaging platforms. D.After machines are provisioned from a layered image, delete that layered image to reduce storage consumption. Answer: C QUESTION 129 Which two statements are valid for Enterprise Layer Manager (ELM) storage? (Choose two.) A.The appliance local storage size is fixed. B.When installing the appliance, it comes equipped with an additional 300GB data disk. C.This storage also stores Elastic Layers and their .JSON files. D.This storage is used to store all OS, Platform and App Layers, and versions. Answer: BD QUESTION 130 Which tool allows a Citrix Engineer to monitor and identify inconsistent or suspicious activities on the network? A.Citrix Monitor B.Citrix ADC C.Citrix Director D.Citrix Analytics Answer: D QUESTION 131 Which three statements are true about building multiple Elastic Layers? (Choose three.) A.There is no limit on how many Elastic Layers can be built as long as there is adequate SMB storage space and network bandwidth. B.They are usually built to perform the majority of the application layer workload for users. C.They appear identical to other application layers from a user perspective D.They greatly reduce the number of golden images required to be managed by Citrix App Layering. E.They are primarily for applications with compatibility issues for specific users or groups. Answer: BDE QUESTION 132 A Citrix Engineer configured two Workspace Environment Management (WEM) brokers that are load- balanced by Citrix ADC. How do the WEM brokers synchronize their information? A.The secondary WEM broker periodically checks in with the primary WEM broker to get the latest settings. 
B.The WEM brokers alternatively synchronize with each other on a set schedule. C.The WEM brokers individually connect to the WEM database to keep their settings updated. D.The WEM brokers share their Local Host Cache information Answer: B QUESTION 133 The main transformer setting within Workspace Environment Management (WEM) changes the WEM Agent machine so that it __________. A.only runs white-listed applications B.functions in kiosk mode C.can be accessed remotely using HDX D.intelligently adjusts RAM, CPU, and I/O resources Answer: B 2021 Latest Braindump2go 1Y0-312 PDF and 1Y0-312 VCE Dumps Free Share: https://drive.google.com/drive/folders/1lmrRADUgTWsS2iN7Huc9cwk3lDd7K6VA?usp=sharing
Global Lateral Flow Assays Market Size By Product, By Application, By Geographic Scope, And Forecast
The Lateral Flow Assays Market report is analyzed on the basis of its market share by value and volume. The report includes regional, country, and global analyses of all Lateral Flow Assays segments and covers all the major geographies influencing the Lateral Flow Assays Market. The major insights presented in the report are the dominating factors, potential growth opportunities, restraints, and challenges, supported by Porter's Five Forces Analysis, competitiveness analysis, an assessment of key features of the competitive landscape, and product analysis.

The research methodology included in the report and the resulting data are intended to meet the needs of your business. The investment research data offered in the report enables stakeholders and investors in the Lateral Flow Assays Market to focus on ongoing and upcoming investment opportunities and draws their attention to investment scenarios in the Lateral Flow Assays Market. The strategic intelligence functions promote the expansion of your business and help to better understand the potential of different industries in the Lateral Flow Assays Market.

The report provides qualitative and quantitative analysis of Lateral Flow Assays Market scenarios by geography and the performance of the different regions. The research study is customized to meet the business needs of market participants. Further, the report highlights specifications and challenges, including multiple methodologies for extracting precise data and facts, in-depth interviews, and studies of the competitive landscape of the Lateral Flow Assays Market.

The report covers the following key players in the Lateral Flow Assays Market:
• Abbott Laboratories (Abbott)
• Becton, Dickinson, and Company
• Biomerieux
• Bio-Rad Laboratories
• Danaher Corporation
• Hologic
• Johnson & Johnson
• Qiagen
• Thermo Fisher Scientific
• PerkinElmer
• Siemens Healthineers

Segmentation of Lateral Flow Assays Market:

Lateral Flow Assays Market, By Product
• Benchtop Readers
• Digital/Mobile Readers
• Kits & Reagents
(The report breaks these product segments out for North America, including the United States and Canada, and for Europe, including Germany, France, the U.K., Italy, and Russia.)

Lateral Flow Assays Market, By Application (End Users)
• Hospitals and Clinics
• Diagnostic Laboratories
• Pharmaceutical and Biotech Companies
(The report breaks these application segments out for North America, including the United States and Canada, and for Europe, including Germany and France.)

Lateral Flow Assays Geographic Market Analysis:
- North America (USA, Canada, Mexico)
- Europe (Great Britain, France, Germany, Spain, Italy, Central and Eastern Europe, CIS)
- Asia Pacific (China, Japan, South Korea, ASEAN, India, rest of Asia Pacific)
- Latin America (Brazil, rest of LA)
- Middle East and Africa (Turkey, GCC, rest of the Middle East)

The report highlights various aspects of the Lateral Flow Assays Market and answers relevant questions on the Lateral Flow Assays Market:
1. What are the best investment opportunities to bring new products to market and provide advanced services in the Lateral Flow Assays Market?
2. What value propositions are relevant to the client or market segment that a company should focus on when launching new research or investment funds in the Lateral Flow Assays Market?
3. What policy changes will help stakeholders strengthen their supply chain and demand network?
4. Which regions would need more products and services in certain segments during the forecast period?
5. What strategies have helped established players reduce supplier, purchasing, and logistics costs?
6. What C-suite perspective is being used to put companies on a new growth path?
7. What government measures are promoting the Lateral Flow Assays Market, or what government regulations may call into question the status of regional and global industries in the Lateral Flow Assays Market?
8. How will the political and economic crisis affect the opportunities in the Lateral Flow Assays Growth Zones?
[October-2021] Braindump2go New SAA-C02 PDF and VCE Dumps Free Share (Q724-Q745)
QUESTION 724 A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC. A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests. What should the solutions architect do to resolve this issue? A.Disable session affinity (sticky sessions) on the ALB B.Replace the ALB with a Network Load Balancer C.Increase the number of EC2 instances in each Availability Zone D.Adjust the frequency of the health checks on the ALB's target group Answer: B

QUESTION 725 A startup company is using the AWS Cloud to develop a traffic control monitoring system for a large city. The system must be highly available and must provide near-real-time results for residents and city officials even during peak events. Gigabytes of data will come in daily from IoT devices that run at intersections and freeway ramps across the city. The system must process the data sequentially to provide the correct timeline. However, results need to show only what has happened in the last 24 hours. Which solution will meet these requirements MOST cost-effectively? A.Deploy Amazon Kinesis Data Firehose to accept incoming data from the IoT devices and write the data to Amazon S3. Build a web dashboard to display the data from the last 24 hours B.Deploy an Amazon API Gateway API endpoint and an AWS Lambda function to process incoming data from the IoT devices and store the data in Amazon DynamoDB. Build a web dashboard to display the data from the last 24 hours C.Deploy an Amazon API Gateway API endpoint and an Amazon Simple Notification Service (Amazon SNS) topic to process incoming data from the IoT devices. Write the data to Amazon Redshift. Build a web dashboard to display the data from the last 24 hours D.Deploy an Amazon Simple Queue Service (Amazon SQS) FIFO queue and an AWS Lambda function to process incoming data from the IoT devices and store the data in an Amazon RDS DB instance. Build a web dashboard to display the data from the last 24 hours Answer: D

QUESTION 726 A company has designed an application where users provide small sets of textual data by calling a public API. The application runs on AWS and includes a public Amazon API Gateway API that forwards requests to an AWS Lambda function for processing. The Lambda function then writes the data to an Amazon Aurora Serverless database for consumption. The company is concerned that it could lose some user data if a Lambda function fails to process the request properly or reaches a concurrency limit. What should a solutions architect recommend to resolve this concern?
QUESTION 726
A company has designed an application where users provide small sets of textual data by calling a public API. The application runs on AWS and includes a public Amazon API Gateway API that forwards requests to an AWS Lambda function for processing. The Lambda function then writes the data to an Amazon Aurora Serverless database for consumption. The company is concerned that it could lose some user data if a Lambda function fails to process the request properly or reaches a concurrency limit.
What should a solutions architect recommend to resolve this concern?
A.Split the existing Lambda function into two Lambda functions. Configure one function to receive API Gateway requests and put relevant items into Amazon Simple Queue Service (Amazon SQS). Configure the other function to read items from Amazon SQS and save the data into Aurora.
B.Configure the Lambda function to receive API Gateway requests and write relevant items to Amazon ElastiCache. Configure ElastiCache to save the data into Aurora.
C.Increase the memory for the Lambda function. Configure Aurora to use the Multi-AZ feature.
D.Split the existing Lambda function into two Lambda functions. Configure one function to receive API Gateway requests and put relevant items into Amazon Simple Notification Service (Amazon SNS). Configure the other function to read items from Amazon SNS and save the data into Aurora.
Answer: A
QUESTION 727
A developer has a script to generate daily reports that users previously ran manually. The script consistently completes in under 10 minutes. The developer needs to automate this process in a cost-effective manner.
Which combination of services should the developer use? (Select TWO.)
A.AWS Lambda
B.AWS CloudTrail
C.Cron on an Amazon EC2 instance
D.Amazon EC2 On-Demand Instance with user data
E.Amazon EventBridge (Amazon CloudWatch Events)
Answer: CE
QUESTION 728
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?
A.Configure a CloudFront signed URL
B.Configure a CloudFront signed cookie.
C.Configure a CloudFront field-level encryption profile
D.Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy
Answer: C
QUESTION 729
A company has an Amazon S3 bucket that contains confidential information in its production AWS account. The company has turned on AWS CloudTrail for the account. The account sends a copy of its logs to Amazon CloudWatch Logs. The company has configured the S3 bucket to log read and write data events.
A company auditor discovers that some objects in the S3 bucket have been deleted. A solutions architect must provide the auditor with information about who deleted the objects.
What should the solutions architect do to provide this information?
A.Create a CloudWatch Logs filter to extract the S3 write API calls against the S3 bucket
B.Query the CloudTrail logs with Amazon Athena to identify the S3 write API calls against the S3 bucket
C.Use AWS Trusted Advisor to perform security checks for S3 write API calls that deleted the content
D.Use AWS Config to track configuration changes on the S3 bucket. Use these details to track the S3 write API calls that deleted the content
Answer: B
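For Question 729, the chosen answer queries CloudTrail data events with Athena. A minimal sketch of what such a query might look like, assuming a CloudTrail table has already been created in Athena; the table name, database, bucket names, and output location below are hypothetical:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical Athena table built over the CloudTrail log prefix in S3.
query = """
SELECT useridentity.arn, eventtime, requestparameters
FROM cloudtrail_logs
WHERE eventsource = 's3.amazonaws.com'
  AND eventname IN ('DeleteObject', 'DeleteObjects')
  AND json_extract_scalar(requestparameters, '$.bucketName') = 'confidential-bucket'
ORDER BY eventtime DESC
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/cloudtrail/"},
)
print("Query execution ID:", response["QueryExecutionId"])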
QUESTION 730
A company has three AWS accounts: Management, Development, and Production. These accounts use AWS services only in the us-east-1 Region. All accounts have a VPC with VPC Flow Logs configured to publish data to an Amazon S3 bucket in each separate account. For compliance reasons, the company needs an ongoing method to aggregate all the VPC flow logs across all accounts into one destination S3 bucket in the Management account.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
A.Add S3 Same-Region Replication rules in each S3 bucket that stores VPC flow logs to replicate objects to the destination S3 bucket. Configure the destination S3 bucket to allow objects to be received from the S3 buckets in other accounts.
B.Set up an IAM user in the Management account. Grant permissions to the IAM user to access the S3 buckets that contain the VPC flow logs. Run the aws s3 sync command in the AWS CLI to copy the objects to the destination S3 bucket.
C.Use an S3 inventory report to specify which objects in the S3 buckets to copy. Perform an S3 batch operation to copy the objects into the destination S3 bucket in the Management account with a single request.
D.Create an AWS Lambda function in the Management account. Grant S3 GET permissions on the source S3 buckets. Grant S3 PUT permissions on the destination S3 bucket. Configure the function to invoke when objects are loaded in the source S3 buckets.
Answer: A
QUESTION 731
A company is running a multi-tier web application on AWS. The application runs its database on Amazon Aurora MySQL. The application and database tiers are in the us-east-1 Region. A database administrator who monitors the Aurora DB cluster finds that an intermittent increase in read traffic is creating high CPU utilization on the read replica. The result is increased read latency for the application. The memory and disk utilization of the DB instance are stable throughout the event of increased latency.
What should a solutions architect do to improve the read scalability?
A.Reboot the DB cluster
B.Create a cross-Region read replica
C.Configure Aurora Auto Scaling for the read replica
D.Increase the provisioned read IOPS for the DB instance
Answer: B
QUESTION 732
A developer is creating an AWS Lambda function to perform dynamic updates to a database when an item is added to an Amazon Simple Queue Service (Amazon SQS) queue. A solutions architect must recommend a solution that tracks any usage of database credentials in AWS CloudTrail. The solution also must provide auditing capabilities.
Which solution will meet these requirements?
A.Store the encrypted credentials in a Lambda environment variable
B.Create an Amazon DynamoDB table to store the credentials. Encrypt the table
C.Store the credentials as a secure string in AWS Systems Manager Parameter Store
D.Use an AWS Key Management Service (AWS KMS) key store to store the credentials
Answer: D
QUESTION 733
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public subnet. However, the company wants a solution that will reduce the data output costs.
Which solution will meet these requirements MOST cost-effectively?
A.Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
B.Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
C.Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic.
D.Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic.
Answer: C
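The gateway endpoint in Question 733's answer keeps S3 traffic on the AWS network and avoids NAT gateway data processing charges for that traffic. A minimal boto3 sketch, assuming hypothetical VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; a gateway endpoint adds routes for the S3 prefix list
# to the route tables you associate with it.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route table of the private subnet
)
print("Created endpoint:", response["VpcEndpoint"]["VpcEndpointId"])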
QUESTION 734
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range.
Which design change should the solutions architect recommend?
A.Add read replicas to the table.
B.Use a global secondary index (GSI).
C.Request strongly consistent reads for the table
D.Request eventually consistent reads for the table.
Answer: C
QUESTION 735
A company wants to share data that is collected from self-driving cars with the automobile community. The data will be made available from within an Amazon S3 bucket. The company wants to minimize its cost of making this data available to other AWS accounts.
What should a solutions architect do to accomplish this goal?
A.Create an S3 VPC endpoint for the bucket.
B.Configure the S3 bucket to be a Requester Pays bucket.
C.Create an Amazon CloudFront distribution in front of the S3 bucket.
D.Require that the files be accessible only with the use of the BitTorrent protocol.
Answer: A
QUESTION 736
A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company wants to provide its customers with different versions of content based on the devices that the customers use to access the website.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
A.Configure Amazon CloudFront to cache multiple versions of the content.
B.Configure a host header in a Network Load Balancer to forward traffic to different instances.
C.Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D.Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances.
E.Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.
Answer: BD
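Option C of Question 736 mentions a Lambda@Edge function keyed on the User-Agent header. A minimal sketch of what such a viewer-request handler might look like; the URI prefix is hypothetical, and a real deployment would more likely rely on CloudFront's device-detection headers or a fuller User-Agent parser:

def handler(event, context):
    """Lambda@Edge viewer-request handler: rewrite the URI for mobile clients."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # Header keys are lowercased in the CloudFront event structure.
    user_agent = ""
    if "user-agent" in headers:
        user_agent = headers["user-agent"][0]["value"]

    # Naive device check, for illustration only.
    if "Mobile" in user_agent or "Android" in user_agent:
        request["uri"] = "/mobile" + request["uri"]  # e.g. /index.html -> /mobile/index.html
    return request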
QUESTION 737
A company has developed a new content-sharing application that runs on Amazon Elastic Container Service (Amazon ECS). The application runs on Amazon Linux Docker tasks that use the Amazon EC2 launch type. The application requires a storage solution that has the following characteristics:
- Accessibility for multiple ECS tasks through bind mounts
- Resiliency across Availability Zones
- Burstable throughput of up to 3 Gbps
- Ability to be scaled up over time
Which storage solution meets these requirements?
A.Launch an Amazon FSx for Windows File Server Multi-AZ instance. Configure the ECS task definitions to mount the Amazon FSx instance volume at launch.
B.Launch an Amazon Elastic File System (Amazon EFS) instance. Configure the ECS task definitions to mount the EFS instance volume at launch.
C.Create a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach set to enabled. Attach the EBS volume to the ECS EC2 instance. Configure ECS task definitions to mount the EBS instance volume at launch.
D.Launch an EC2 instance with several Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes attached in a RAID 0 configuration. Configure the EC2 instance as an NFS storage server. Configure ECS task definitions to mount the volumes at launch.
Answer: B
QUESTION 738
An airline that is based in the United States provides services for routes in North America and Europe. The airline is developing a new read-intensive application that customers can use to find flights on either continent. The application requires strong read consistency and needs scalable database capacity to accommodate changes in user demand. The airline needs the database service to synchronize with the least possible latency between the two continents and to provide a simple failover mechanism to a second AWS Region.
Which solution will meet these requirements?
A.Deploy Microsoft SQL Server on Amazon EC2 instances in a Region in North America. Use SQL Server binary log replication on an EC2 instance in a Region in Europe.
B.Create an Amazon DynamoDB global table. Add a Region from North America and a Region from Europe to the table. Query data with strongly consistent reads.
C.Use an Amazon Aurora MySQL global database. Deploy the read-write node in a Region in North America, and deploy read-only endpoints in Regions in North America and Europe. Query data with global read consistency.
D.Create a subscriber application that uses Amazon Kinesis Data Streams for an Amazon Redshift cluster in a Region in North America. Create a second subscriber application for the Amazon Redshift cluster in a Region in Europe. Process all database modifications through Kinesis Data Streams.
Answer: C
QUESTION 739
A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?
A.Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled
B.Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically.
C.Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only.
D.Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-only mode.
Answer: A
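Question 739's answer relies on S3 Object Lock, which must be enabled when the bucket is created and can then apply a default retention rule to new objects. A minimal boto3 sketch, assuming a hypothetical bucket name and retention period:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "regulated-documents-example"  # hypothetical name

# Object Lock can only be turned on at bucket creation; it also enables Versioning.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention: COMPLIANCE mode prevents modification or deletion
# of locked object versions until the retention period expires.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)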
QUESTION 740
A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.
Which combination of actions should the company take to meet these requirements? (Select TWO.)
A.Refactor the application as serverless with AWS Lambda functions running .NET Core.
B.Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C.Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D.Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E.Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.
Answer: AD
QUESTION 741
A company wants to enforce strict security guidelines on accessing AWS Cloud resources as the company migrates production workloads from its data centers. Company management wants all users to receive permissions according to their job roles and functions.
Which solution meets these requirements with the LEAST operational overhead?
A.Create an AWS Single Sign-On deployment. Connect to the on-premises Active Directory to centrally manage users and permissions across the company.
B.Create an IAM role for each job function. Require each employee to call the sts:AssumeRole action in the AWS Management Console to perform their job role.
C.Create individual IAM user accounts for each employee. Create an IAM policy for each job function, and attach the policy to all IAM users based on their job role.
D.Create individual IAM user accounts for each employee. Create IAM policies for each job function. Create IAM groups, and attach associated policies to each group. Assign the IAM users to a group based on their job role.
Answer: D
QUESTION 742
A company provides machine learning solutions. The company's users need to download large data sets from the company's Amazon S3 bucket. These downloads often take a long time, especially when the users are running many simulations on a subset of those datasets. Users download the datasets to Amazon EC2 instances in the same AWS Region as the S3 bucket. Multiple users typically use the same datasets at the same time.
Which solution will reduce the time that is required to access the datasets?
A.Configure the S3 bucket to use the S3 Standard storage class with S3 Transfer Acceleration activated.
B.Configure the S3 bucket to use the S3 Intelligent-Tiering storage class with S3 Transfer Acceleration activated.
C.Create an Amazon Elastic File System (Amazon EFS) network file system. Migrate the datasets by using AWS DataSync.
D.Move the datasets onto a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. Attach the volume to all the EC2 instances.
Answer: C
QUESTION 743
A company needs to retain its AWS CloudTrail logs for 3 years. The company is enforcing CloudTrail across a set of AWS accounts by using AWS Organizations from the parent account. The CloudTrail target S3 bucket is configured with S3 Versioning enabled. An S3 Lifecycle policy is in place to delete current objects after 3 years.
After the fourth year of use of the S3 bucket, the S3 bucket metrics show that the number of objects has continued to rise. However, the number of new CloudTrail logs that are delivered to the S3 bucket has remained consistent.
Which solution will delete objects that are older than 3 years in the MOST cost-effective manner?
A.Configure the organization's centralized CloudTrail trail to expire objects after 3 years.
B.Configure the S3 Lifecycle policy to delete previous versions as well as current versions.
C.Create an AWS Lambda function to enumerate and delete objects from Amazon S3 that are older than 3 years.
D.Configure the parent account as the owner of all objects that are delivered to the S3 bucket.
Answer: B
QUESTION 744
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A.Update the ALB's network ACL to accept only HTTPS traffic
B.Create a rule that replaces the HTTP in the URL with HTTPS.
C.Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D.Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Answer: C
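The listener rule in Question 744's answer can be expressed as a redirect action on the ALB's HTTP (port 80) listener. A minimal boto3 sketch, assuming a hypothetical listener ARN; here the default action of the existing HTTP listener is changed to a permanent redirect to HTTPS:

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical ARN of the ALB's port 80 (HTTP) listener.
http_listener_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "listener/app/my-alb/0123456789abcdef/0123456789abcdef"
)

elbv2.modify_listener(
    ListenerArn=http_listener_arn,
    DefaultActions=[
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "StatusCode": "HTTP_301",  # permanent redirect
            },
        }
    ],
)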
QUESTION 745
A company is deploying an application that processes large quantities of data in batches as needed. The company plans to use Amazon EC2 instances for the workload. The network architecture must support a highly scalable solution and prevent groups of nodes from sharing the same underlying hardware.
Which combination of network solutions will meet these requirements? (Select TWO.)
A.Create Capacity Reservations for the EC2 instances to run in a placement group
B.Run the EC2 instances in a spread placement group.
C.Run the EC2 instances in a cluster placement group.
D.Place the EC2 instances in an EC2 Auto Scaling group.
E.Run the EC2 instances in a partition placement group.
Answer: BC
2021 Latest Braindump2go SAA-C02 PDF and SAA-C02 VCE Dumps Free Share: https://drive.google.com/drive/folders/1_5IK3H_eM74C6AKwU7sKaLn1rrn8xTfm?usp=sharing