SERVICEREPAIR02

LG Refrigerator Service Center in Mulund

A refrigerator is an essential home appliance, and when it develops a fault it can be hard to manage. If you have service experience, you may be able to fix it yourself; if you don't, you need a well-trained technician. The best service center is nearby: contact our LG Refrigerator Service Center in Mulund for the best service. We provide well-trained, skilled technicians who can fix issues such as a bad smell, ice overflow, and water leakage. We charge just Rs. 300/- per service, with doorstep service available. Contact us on 8688821387 or 9390110225.
Cards you may also be interested in
[October-2021] New Braindump2go AZ-400 PDF and VCE Dumps [Q214-Q223]

QUESTION 214
You have an Azure DevOps organization that contains a project named Project1.
You need to create a published wiki in Project1. What should you do first?
A. Modify the Storage settings of Project1.
B. In Project1, create an Azure DevOps pipeline.
C. In Project1, create an Azure DevOps repository.
D. Modify the Team configuration settings of Project1.
Answer: C

QUESTION 215
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure DevOps organization named Contoso and an Azure subscription. The subscription contains an Azure virtual machine scale set named VMSS1 that is configured for autoscaling. You have a project in Azure DevOps named Project1. Project1 is used to build a web app named App1 and deploy App1 to VMSS1.
You need to ensure that an email alert is generated whenever VMSS1 scales in or out.
Solution: From Azure DevOps, configure the Service hooks settings for Project1.
Does this meet the goal?
A. Yes
B. No
Answer: B

QUESTION 216
Note: This question is part of the same series as Question 215.
You have an Azure DevOps organization named Contoso and an Azure subscription. The subscription contains an Azure virtual machine scale set named VMSS1 that is configured for autoscaling. You have a project in Azure DevOps named Project1. Project1 is used to build a web app named App1 and deploy App1 to VMSS1.
You need to ensure that an email alert is generated whenever VMSS1 scales in or out.
Solution: From Azure Monitor, configure the autoscale settings.
Does this meet the goal?
A. Yes
B. No
Answer: B

QUESTION 217
You have an Azure solution that contains a build pipeline in Azure Pipelines. You experience intermittent delays before the build pipeline starts.
You need to reduce the time it takes to start the build pipeline. What should you do?
A. Split the build pipeline into multiple stages.
B. Purchase an additional parallel job.
C. Create a new agent pool.
D. Enable self-hosted build agents.
Answer: C

QUESTION 218
You are evaluating the use of code review assignments in GitHub.
Which two requirements can be met by using code review assignments? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Automatically choose and assign reviewers based on a list of available personnel.
B. Automatically choose and assign reviewers based on who has the most completed review requests.
C. Ensure that each team member reviews an equal number of pull requests during any 30-day period.
D. Automatically choose and assign reviewers based on who received the least recent review requests.
Answer: AC

QUESTION 219
You have an Azure subscription that contains multiple Azure services.
You need to send an SMS alert when scheduled maintenance is planned for the Azure services.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Create an Azure Service Health alert.
B. Enable Azure Security Center.
C. Create and configure an action group.
D. Create and configure an Azure Monitor alert rule.
Answer: AD

QUESTION 220
You have a project in Azure DevOps that has a release pipeline.
You need to integrate work item tracking and an Agile project management system to meet the following requirements:
- Ensure that developers can track whether their commits are deployed to production.
- Report the deployment status.
- Minimize integration effort.
Which system should you use?
A. Trello
B. Jira
C. Basecamp
D. Asana
Answer: B

QUESTION 221
You have several Azure Active Directory (Azure AD) accounts.
You need to ensure that users use multi-factor authentication (MFA) to access Azure apps from untrusted networks.
What should you configure in Azure AD?
A. access reviews
B. managed identities
C. entitlement management
D. conditional access
Answer: D

QUESTION 222
You configure Azure Application Insights and the shared service plan tier for a web app. You enable Smart Detection.
You confirm that standard metrics are visible in the logs, but when you test a failure, you do not receive a Smart Detection notification.
What prevents the Smart Detection notification from being sent?
A. You must restart the web app before Smart Detection is enabled.
B. Smart Detection uses the first 24 hours to establish the normal behavior of the web app.
C. You must enable the Snapshot Debugger for the web app.
D. The web app is configured to use the shared service plan tier.
Answer: B

QUESTION 223
Note: This question is part of the same series as Question 215.
You have an Azure DevOps organization named Contoso and an Azure subscription. The subscription contains an Azure virtual machine scale set named VMSS1 that is configured for autoscaling. You have a project in Azure DevOps named Project1. Project1 is used to build a web app named App1 and deploy App1 to VMSS1.
You need to ensure that an email alert is generated whenever VMSS1 scales in or out.
Solution: From Azure DevOps, configure the Notifications settings for Project1.
Does this meet the goal?
A. Yes
B. No
Answer: B

2021 Latest Braindump2go AZ-400 PDF and AZ-400 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1kLhX5N_Pt_noAKZD50xUpnSEA5Tt62TZ?usp=sharing
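Questions 215, 216, and 223 all probe the same point: scale-in/scale-out email alerts live in Azure Monitor's autoscale settings, not in Azure DevOps project settings. As a minimal sketch only, the notification fragment of an autoscale setting can be modeled as a plain payload; field names follow the Microsoft.Insights autoscaleSettings REST schema as I understand it, and the email address is a placeholder:

```python
# Sketch of the "notifications" block inside an Azure Monitor autoscale
# setting for a scale set such as VMSS1. The schema field names are an
# assumption drawn from the Microsoft.Insights autoscaleSettings REST API;
# the address below is a placeholder, not from the source.
def build_autoscale_notification(custom_emails):
    """Return the notification fragment that emails on every scale event."""
    return {
        "operation": "Scale",  # fires on both scale-in and scale-out
        "email": {
            "sendToSubscriptionAdministrator": True,
            "sendToSubscriptionCoAdministrators": False,
            "customEmails": list(custom_emails),
        },
        "webhooks": [],  # optional HTTP callbacks could be added here
    }

notification = build_autoscale_notification(["devops-team@contoso.com"])
```

This fragment would be embedded in the full autoscale-setting resource (alongside the scaling profiles) when the setting is created or updated, which is why "From Azure Monitor, configure the autoscale settings" is the direction these series questions keep testing.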
[October-2021] New Braindump2go CLF-C01 PDF and VCE Dumps [Q25-Q45]

QUESTION 25
A large organization has a single AWS account.
What are the advantages of reconfiguring the single account into multiple AWS accounts? (Choose two.)
A. It allows for administrative isolation between different workloads.
B. Discounts can be applied on a quarterly basis by submitting cases in the AWS Management Console.
C. Transitioning objects from Amazon S3 to Amazon S3 Glacier in separate AWS accounts will be less expensive.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account.
E. Amazon QuickSight offers access to a cost tool that provides application-specific recommendations for environments running in multiple accounts.
Answer: AC

QUESTION 26
An online retail company recently deployed a production web application. The system administrator needs to block common attack patterns such as SQL injection and cross-site scripting.
Which AWS service should the administrator use to address these concerns?
A. AWS WAF
B. Amazon VPC
C. Amazon GuardDuty
D. Amazon CloudWatch
Answer: A

QUESTION 27
What does Amazon CloudFront provide?
A. Automatic scaling for all resources to power an application from a single unified interface
B. Secure delivery of data, videos, applications, and APIs to users globally with low latency
C. Ability to directly manage traffic globally through a variety of routing types, including latency-based routing, geo DNS, geoproximity, and weighted round robin
D. Automatic distribution of incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and AWS Lambda functions
Answer: B

QUESTION 28
Which phrase describes agility as a benefit of building in the AWS Cloud?
A. The ability to pay only when computing resources are consumed, based on the volume of resources that are consumed
B. The ability to eliminate guessing about infrastructure capacity needs
C. The ability to support innovation through a reduction in the time that is required to make IT resources available to developers
D. The ability to deploy an application in multiple AWS Regions around the world in minutes
Answer:

QUESTION 29
A company is undergoing a security audit. The audit includes security validation and compliance validation of the AWS infrastructure and services that the company uses. The auditor needs to locate compliance-related information and must download AWS security and compliance documents. These documents include the System and Organization Control (SOC) reports.
Which AWS service or group can provide these documents?
A. AWS Abuse team
B. AWS Artifact
C. AWS Support
D. AWS Config
Answer: B

QUESTION 30
Which AWS Trusted Advisor checks are available to users with AWS Basic Support? (Choose two.)
A. Service limits
B. High utilization Amazon EC2 instances
C. Security groups - specific ports unrestricted
D. Load balancer optimization
E. Large number of rules in an EC2 security group
Answer: AC

QUESTION 31
A company has a centralized group of users with large file storage requirements that have exceeded the space available on premises. The company wants to extend its file storage capabilities for this group while retaining the performance benefit of sharing content locally.
What is the MOST operationally efficient AWS solution for this scenario?
A. Create an Amazon S3 bucket for each user. Mount each bucket by using an S3 file system mounting utility.
B. Configure and deploy an AWS Storage Gateway file gateway. Connect each user's workstation to the file gateway.
C. Move each user's working environment to Amazon WorkSpaces. Set up an Amazon WorkDocs account for each user.
D. Deploy an Amazon EC2 instance and attach an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume. Share the EBS volume directly with the users.
Answer: B

QUESTION 32
Which network security features are supported by Amazon VPC? (Choose two.)
A. Network ACLs
B. Internet gateways
C. VPC peering
D. Security groups
E. Firewall rules
Answer: AD

QUESTION 33
A company wants to build a new architecture with AWS services. The company needs to compare service costs at various scales.
Which AWS service, tool, or feature should the company use to meet this requirement?
A. AWS Compute Optimizer
B. AWS Pricing Calculator
C. AWS Trusted Advisor
D. Cost Explorer rightsizing recommendations
Answer: B

QUESTION 34
An Elastic Load Balancer allows the distribution of web traffic across multiple:
A. AWS Regions.
B. Availability Zones.
C. Dedicated Hosts.
D. Amazon S3 buckets.
Answer: B

QUESTION 35
Which characteristic of the AWS Cloud helps users eliminate underutilized CPU capacity?
A. Agility
B. Elasticity
C. Reliability
D. Durability
Answer: B

QUESTION 36
Which AWS services make use of global edge locations? (Choose two.)
A. AWS Fargate
B. Amazon CloudFront
C. AWS Global Accelerator
D. AWS Wavelength
E. Amazon VPC
Answer: BC

QUESTION 37
Which of the following are economic benefits of using AWS Cloud? (Choose two.)
A. Consumption-based pricing
B. Perpetual licenses
C. Economies of scale
D. AWS Enterprise Support at no additional cost
E. Bring-your-own-hardware model
Answer: AC

QUESTION 38
A company is using Amazon EC2 Auto Scaling to scale its Amazon EC2 instances.
Which benefit of the AWS Cloud does this example illustrate?
A. High availability
B. Elasticity
C. Reliability
D. Global reach
Answer: B

QUESTION 39
A company is running and managing its own Docker environment on Amazon EC2 instances. The company wants an alternative that will help manage cluster size, scheduling, and environment maintenance.
Which AWS service meets these requirements?
A. AWS Lambda
B. Amazon RDS
C. AWS Fargate
D. Amazon Athena
Answer: C

QUESTION 40
A company hosts an application on an Amazon EC2 instance. The EC2 instance needs to access several AWS resources, including Amazon S3 and Amazon DynamoDB.
What is the MOST operationally efficient solution to delegate permissions?
A. Create an IAM role with the required permissions. Attach the role to the EC2 instance.
B. Create an IAM user and use its access key and secret access key in the application.
C. Create an IAM user and use its access key and secret access key to create a CLI profile in the EC2 instance.
D. Create an IAM role with the required permissions. Attach the role to the administrative IAM user.
Answer: A

QUESTION 41
Who is responsible for managing IAM user access and secret keys according to the AWS shared responsibility model?
A. IAM access and secret keys are static, so there is no need to rotate them.
B. The customer is responsible for rotating keys.
C. AWS will rotate the keys whenever required.
D. The AWS Support team will rotate keys when requested by the customer.
Answer: B

QUESTION 42
A company is running a Microsoft SQL Server instance on premises and is migrating its application to AWS. The company lacks the resources needed to refactor the application, but management wants to reduce operational overhead as part of the migration.
Which database service would MOST effectively support these requirements?
A. Amazon DynamoDB
B. Amazon Redshift
C. Microsoft SQL Server on Amazon EC2
D. Amazon RDS for SQL Server
Answer: D

QUESTION 43
A company wants to increase its ability to recover its infrastructure in the case of a natural disaster.
Which pillar of the AWS Well-Architected Framework does this ability represent?
A. Cost optimization
B. Performance efficiency
C. Reliability
D. Security
Answer: C

QUESTION 44
Which AWS service provides the capability to view end-to-end performance metrics and troubleshoot distributed applications?
A. AWS Cloud9
B. AWS CodeStar
C. AWS Cloud Map
D. AWS X-Ray
Answer: D

QUESTION 45
Which tasks require use of the AWS account root user? (Choose two.)
A. Changing an AWS Support plan
B. Modifying an Amazon EC2 instance type
C. Grouping resources in AWS Systems Manager
D. Running applications in Amazon Elastic Kubernetes Service (Amazon EKS)
E. Closing an AWS account
Answer: AE

2021 Latest Braindump2go CLF-C01 PDF and CLF-C01 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1krJU57a_UPVWcWZmf7UYjIepWf04kaJg?usp=sharing
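Question 40's recommended answer, an IAM role attached to the EC2 instance, rests on a standard trust policy that lets the EC2 service assume the role; the instance's application then receives temporary credentials automatically instead of embedded access keys. A minimal sketch of that trust policy (the JSON grammar is standard IAM policy language; no role or account names from the source are involved):

```python
# Sketch of the trust policy behind Question 40, option A: the document
# that allows the EC2 service to assume an IAM role attached to an instance.
# This is the standard IAM policy JSON shape; nothing here is account-specific.
import json

def ec2_role_trust_policy():
    """Trust policy letting the EC2 service principal assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

# Serialized form, as it would be passed when creating the role.
policy_json = json.dumps(ec2_role_trust_policy(), indent=2)
```

The permissions policies (S3, DynamoDB access) are attached to the role separately; the trust policy only controls who may assume it, which is what makes option A more operationally efficient than distributing long-lived access keys as in options B and C.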
Membervio Review (Neil Napier): Good or Bad?
Membervio Review - What is Membervio?
Membervio helps you create your own stunning premium membership site with online courses, coaching programs, and much more, with built-in payment integrations and automated account management. Simply get access and create the course of your choice to get started. Then you can customize the course to suit your target audience from your dashboard and name your price, and instantly share your ready-to-convert courses to start earning right away. They will also help you profit faster from other people's knowledge by including five done-for-you (DFY) courses that you can sell while keeping 100% of the revenue. Membervio can do things that no other tool on the market can do. Best of all, you can grab it for a low one-time price during this special launch, which runs until Oct 19th. So act fast!
https://app.box.com/s/qe3h3gnzxye8c6cvpclgh2vzcn8vnhgu
https://pbase.com/topics/affiliatesayhi/membervio_review
https://www.plurk.com/p/olp3m0
https://www.flickr.com/photos/edwardrecommends/51600719725/in/dateposted/
https://affiliatesayhi.tumblr.com/post/665311599412871168/membervio-review-neil-napier-1-teachable
https://vk.com/wall516421045_25
https://www.deviantart.com/affiliatesayhi/art/Membervio-Review-895159496
https://tapas.io/edwardrecommends
https://www.evensi.com/page/membervio-review/10079877981
https://descubre.beqbe.com/membervio-review
https://slashdot.org/submission/14792211/membervio-review
https://dribbble.com/shots/16670899-Membervio-Review
Thanks for reading my Membervio Review!
[October-2021] New Braindump2go DOP-C01 PDF and VCE Dumps [Q552-Q557]

QUESTION 552
A company manages an application that stores logs in Amazon CloudWatch Logs. The company wants to archive the logs in Amazon S3. Logs are rarely accessed after 90 days and must be retained for 10 years.
Which combination of steps should a DevOps engineer take to meet these requirements? (Choose two.)
A. Configure a CloudWatch Logs subscription filter to use AWS Glue to transfer all logs to an S3 bucket.
B. Configure a CloudWatch Logs subscription filter to use Amazon Kinesis Data Firehose to stream all logs to an S3 bucket.
C. Configure a CloudWatch Logs subscription filter to stream all logs to an S3 bucket.
D. Configure the S3 bucket lifecycle policy to transition logs to S3 Glacier after 90 days and to expire logs after 3,650 days.
E. Configure the S3 bucket lifecycle policy to transition logs to Reduced Redundancy after 90 days and to expire logs after 3,650 days.
Answer: BC

QUESTION 553
A company gives its employees limited rights to AWS. DevOps engineers have the ability to assume an administrator role. For tracking purposes, the security team wants to receive a near-real-time notification when the administrator role is assumed.
How should this be accomplished?
A. Configure AWS Config to publish logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and send a notification to the security team when the administrator role is assumed.
B. Configure Amazon GuardDuty to monitor when the administrator role is assumed and send a notification to the security team.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) event rule using an AWS Management Console sign-in events event pattern that publishes a message to an Amazon SNS topic if the administrator role is assumed.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) event rule using an AWS API call via AWS CloudTrail event pattern to trigger an AWS Lambda function that publishes a message to an Amazon SNS topic if the administrator role is assumed.
Answer: C

QUESTION 554
A development team manages website deployments using AWS CodeDeploy blue/green deployments. The application is running on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. When deploying a new revision, the team notices the deployment eventually fails, but it takes a long time to fail. After further inspection, the team discovers the AllowTraffic lifecycle event ran for an hour and eventually failed without providing any other information. The team wants to ensure failure notices are delivered more quickly while maintaining application availability even upon failure.
Which combination of actions should be taken to meet these requirements? (Choose two.)
A. Change the deployment configuration to CodeDeployDefault.AllAtOnce to speed up the deployment process by deploying to all of the instances at the same time.
B. Create a CodeDeploy trigger for the deployment failure event and make the deployment fail as soon as a single health check failure is detected.
C. Reduce the HealthCheckIntervalSeconds and UnhealthyThresholdCount values within the target group health checks to decrease the amount of time it takes for the application to be considered unhealthy.
D. Use the appspec.yml file to run a script on the AllowTraffic hook to perform lighter health checks on the application instead of making CodeDeploy wait for the target group health checks to pass.
E. Use the appspec.yml file to run a script on the BeforeAllowTraffic hook to perform health checks on the application and fail the deployment if the health checks performed by the script are not successful.
Answer: AC

QUESTION 555
A company is running a number of internet-facing APIs that use an AWS Lambda authorizer to control access. A security team wants to be alerted when a large number of requests are failing authorization, as this may indicate API abuse. Given the magnitude of API requests, the team wants to be alerted only if the number of HTTP 403 Forbidden responses goes above 2% of overall API calls.
Which solution will accomplish this?
A. Use the default Amazon API Gateway 403Error and Count metrics sent to Amazon CloudWatch, and use metric math to create a CloudWatch alarm. Use the (403Error/Count)*100 mathematical expression when defining the alarm. Set the alarm threshold to be greater than 2.
B. Write a Lambda function that fetches the default Amazon API Gateway 403Error and Count metrics sent to Amazon CloudWatch, calculate the percentage of errors, then push a custom metric to CloudWatch named Custom403Percent. Create a CloudWatch alarm based on this custom metric. Set the alarm threshold to be greater than 2.
C. Configure Amazon API Gateway to send custom access logs to Amazon CloudWatch Logs. Create a log filter to produce a custom metric for the HTTP 403 response code named Custom403Error. Use this custom metric and the default API Gateway Count metric sent to CloudWatch, and use metric math to create a CloudWatch alarm. Use the (Custom403Error/Count)*100 mathematical expression when defining the alarm. Set the alarm threshold to be greater than 2.
D. Configure Amazon API Gateway to enable custom Amazon CloudWatch metrics, enable the ALL_STATUS_CODE option, and define an APICustom prefix. Use CloudWatch metric math to create a CloudWatch alarm. Use the (APICustom403Error/Count)*100 mathematical expression when defining the alarm. Set the alarm threshold to be greater than 2.
Answer: C

QUESTION 556
A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present.
Which solution will accomplish this?
A. Create an AWS CloudFormation template that defines an AWS Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script pointing to the CloudFormation template in Amazon S3.
B. Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization.
C. Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action.
D. Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.
Answer: A

QUESTION 557
A company's application is running on Amazon EC2 instances in an Auto Scaling group. A DevOps engineer needs to ensure there are at least four application servers running at all times. Whenever an update has to be made to the application, the engineer creates a new AMI with the updated configuration and updates the AWS CloudFormation template with the new AMI ID. After the stack finishes, the engineer manually terminates the old instances one by one, verifying that the new instance is operational before proceeding. The engineer needs to automate this process.
Which action will allow for the LEAST number of manual steps moving forward?
A. Update the CloudFormation template to include the UpdatePolicy attribute with the AutoScalingRollingUpdate policy.
B. Update the CloudFormation template to include the UpdatePolicy attribute with the AutoScalingReplacingUpdate policy.
C. Use an Auto Scaling lifecycle hook to verify that the previous instance is operational before allowing the DevOps engineer's selected instance to terminate.
D. Use an Auto Scaling lifecycle hook to confirm there are at least four running instances before allowing the DevOps engineer's selected instance to terminate.
Answer: B

2021 Latest Braindump2go DOP-C01 PDF and DOP-C01 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1hd6oWmIDwjJEZd1HiDEA_vw9HTVc_nAH?usp=sharing
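Option C of Question 555 hinges on CloudWatch metric math: a custom log-filter metric divided by API Gateway's built-in Count metric, scaled to a percentage. As a sketch only, the list of metric data queries such an alarm would be built from can look like this (Custom403Error and its namespace are the question's invented names; AWS/ApiGateway Count is a real metric, and the API name is a placeholder):

```python
# Sketch of the metric data queries behind Question 555, option C. The shape
# matches what CloudWatch alarms accept for metric math (each entry has an Id
# plus either a MetricStat or an Expression). The custom metric name and
# namespace are the question's own inventions, not verified AWS defaults.
def error_rate_alarm_metrics(api_name):
    """Queries for an alarm on (Custom403Error/Count)*100 with threshold > 2."""
    return [
        {"Id": "e403", "ReturnData": False,
         "MetricStat": {"Metric": {"Namespace": "CustomMetrics",
                                   "MetricName": "Custom403Error"},
                        "Period": 300, "Stat": "Sum"}},
        {"Id": "total", "ReturnData": False,
         "MetricStat": {"Metric": {"Namespace": "AWS/ApiGateway",
                                   "MetricName": "Count",
                                   "Dimensions": [{"Name": "ApiName",
                                                   "Value": api_name}]},
                        "Period": 300, "Stat": "Sum"}},
        # Only the derived percentage is returned; the alarm compares it to 2.
        {"Id": "pct", "Expression": "(e403/total)*100",
         "Label": "403 percentage", "ReturnData": True},
    ]

metrics = error_rate_alarm_metrics("App1Api")
```

Keeping ReturnData false on the raw series and true only on the expression is what makes the alarm evaluate the percentage rather than either input metric.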
[October-2021] New Braindump2go MLS-C01 PDF and VCE Dumps [Q158-Q171]

QUESTION 158
A company needs to quickly make sense of a large amount of data and gain insight from it. The data is in different formats, the schemas change frequently, and new data sources are added regularly. The company wants to use AWS services to explore multiple data sources, suggest schemas, and enrich and transform the data. The solution should require the least possible coding effort for the data flows and the least possible infrastructure management.
Which combination of AWS services will meet these requirements?
A. Amazon EMR for data discovery, enrichment, and transformation; Amazon Athena for querying and analyzing the results in Amazon S3 using standard SQL; Amazon QuickSight for reporting and getting insights
B. Amazon Kinesis Data Analytics for data ingestion; Amazon EMR for data discovery, enrichment, and transformation; Amazon Redshift for querying and analyzing the results in Amazon S3
C. AWS Glue for data discovery, enrichment, and transformation; Amazon Athena for querying and analyzing the results in Amazon S3 using standard SQL; Amazon QuickSight for reporting and getting insights
D. AWS Data Pipeline for data transfer; AWS Step Functions for orchestrating AWS Lambda jobs for data discovery, enrichment, and transformation; Amazon Athena for querying and analyzing the results in Amazon S3 using standard SQL; Amazon QuickSight for reporting and getting insights
Answer: A

QUESTION 159
A company is converting a large number of unstructured paper receipts into images. The company wants to create a model based on natural language processing (NLP) to find relevant entities such as date, location, and notes, as well as some custom entities such as receipt numbers. The company is using optical character recognition (OCR) to extract text for data labeling. However, documents are in different structures and formats, and the company is facing challenges with setting up the manual workflows for each document type. Additionally, the company trained a named entity recognition (NER) model for custom entity detection using a small sample size. This model has a very low confidence score and will require retraining with a large dataset.
Which solution for text extraction and entity detection will require the LEAST amount of effort?
A. Extract text from receipt images by using Amazon Textract. Use the Amazon SageMaker BlazingText algorithm to train on the text for entities and custom entities.
B. Extract text from receipt images by using a deep learning OCR model from the AWS Marketplace. Use the NER deep learning model to extract entities.
C. Extract text from receipt images by using Amazon Textract. Use Amazon Comprehend for entity detection, and use Amazon Comprehend custom entity recognition for custom entity detection.
D. Extract text from receipt images by using a deep learning OCR model from the AWS Marketplace. Use Amazon Comprehend for entity detection, and use Amazon Comprehend custom entity recognition for custom entity detection.
Answer: C

QUESTION 160
A company is building a predictive maintenance model based on machine learning (ML). The data is stored in a fully private Amazon S3 bucket that is encrypted at rest with AWS Key Management Service (AWS KMS) CMKs. An ML specialist must run data preprocessing by using an Amazon SageMaker Processing job that is triggered from code in an Amazon SageMaker notebook. The job should read data from Amazon S3, process it, and upload it back to the same S3 bucket. The preprocessing code is stored in a container image in Amazon Elastic Container Registry (Amazon ECR). The ML specialist needs to grant permissions to ensure a smooth data preprocessing workflow.
Which set of actions should the ML specialist take to meet these requirements?
A. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs, S3 read and write access to the relevant S3 bucket, and appropriate KMS and ECR permissions. Attach the role to the SageMaker notebook instance. Create an Amazon SageMaker Processing job from the notebook.
B. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs. Attach the role to the SageMaker notebook instance. Create an Amazon SageMaker Processing job with an IAM role that has read and write permissions to the relevant S3 bucket, and appropriate KMS and ECR permissions.
C. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs and to access Amazon ECR. Attach the role to the SageMaker notebook instance. Set up both an S3 endpoint and a KMS endpoint in the default VPC. Create Amazon SageMaker Processing jobs from the notebook.
D. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs. Attach the role to the SageMaker notebook instance. Set up an S3 endpoint in the default VPC. Create Amazon SageMaker Processing jobs with the access key and secret key of the IAM user with appropriate KMS and ECR permissions.
Answer: D

QUESTION 161
A data scientist has been running an Amazon SageMaker notebook instance for a few weeks. During this time, a new version of Jupyter Notebook was released along with additional software updates. The security team mandates that all running SageMaker notebook instances use the latest security and software updates provided by SageMaker.
How can the data scientist meet these requirements?
A. Call the CreateNotebookInstanceLifecycleConfig API operation
B. Create a new SageMaker notebook instance and mount the Amazon Elastic Block Store (Amazon EBS) volume from the original instance
C. Stop and then restart the SageMaker notebook instance
D. Call the UpdateNotebookInstanceLifecycleConfig API operation
Answer: C

QUESTION 162
A library is developing an automatic book-borrowing system that uses Amazon Rekognition. Images of library members' faces are stored in an Amazon S3 bucket. When members borrow books, the Amazon Rekognition CompareFaces API operation compares real faces against the stored faces in Amazon S3. The library needs to improve security by making sure that images are encrypted at rest. Also, when the images are used with Amazon Rekognition, they need to be encrypted in transit. The library also must ensure that the images are not used to improve Amazon Rekognition as a service.
How should a machine learning specialist architect the solution to satisfy these requirements?
A. Enable server-side encryption on the S3 bucket. Submit an AWS Support ticket to opt out of allowing images to be used for improving the service, and follow the process provided by AWS Support.
B. Switch to using an Amazon Rekognition collection to store the images. Use the IndexFaces and SearchFacesByImage API operations instead of the CompareFaces API operation.
C. Switch to using the AWS GovCloud (US) Region for Amazon S3 to store images and for Amazon Rekognition to compare faces. Set up a VPN connection and only call the Amazon Rekognition API operations through the VPN.
D. Enable client-side encryption on the S3 bucket. Set up a VPN connection and only call the Amazon Rekognition API operations through the VPN.
Answer: B

QUESTION 163
A company is building a line-counting application for use in a quick-service restaurant. The company wants to use video cameras pointed at the line of customers at a given register to measure how many people are in line and deliver notifications to managers if the line grows too long. The restaurant locations have limited bandwidth for connections to external services and cannot accommodate multiple video streams without impacting other operations.
Which solution should a machine learning specialist implement to meet these requirements?
A. Install cameras compatible with Amazon Kinesis Video Streams to stream the data to AWS over the restaurant's existing internet connection. Write an AWS Lambda function to take an image and send it to Amazon Rekognition to count the number of faces in the image. Send an Amazon Simple Notification Service (Amazon SNS) notification if the line is too long.
B. Deploy AWS DeepLens cameras in the restaurant to capture video. Enable Amazon Rekognition on the AWS DeepLens device, and use it to trigger a local AWS Lambda function when a person is recognized. Use the Lambda function to send an Amazon Simple Notification Service (Amazon SNS) notification if the line is too long.
C. Build a custom model in Amazon SageMaker to recognize the number of people in an image. Install cameras compatible with Amazon Kinesis Video Streams in the restaurant. Write an AWS Lambda function to take an image. Use the SageMaker endpoint to call the model to count people. Send an Amazon Simple Notification Service (Amazon SNS) notification if the line is too long.
D. Build a custom model in Amazon SageMaker to recognize the number of people in an image. Deploy AWS DeepLens cameras in the restaurant. Deploy the model to the cameras. Deploy an AWS Lambda function to the cameras to use the model to count people and send an Amazon Simple Notification Service (Amazon SNS) notification if the line is too long.
Answer: A

QUESTION 164
A company has set up and deployed its machine learning (ML) model into production with an endpoint using Amazon SageMaker hosting services. The ML team has configured automatic scaling for its SageMaker instances to support workload changes. During testing, the team notices that additional instances are being launched before the new instances are ready. This behavior needs to change as soon as possible.
How can the ML team solve this issue?
A. Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
B. Replace the current endpoint with a multi-model endpoint using SageMaker.
C. Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inference endpoint.
D. Increase the cooldown period for the scale-out activity.
Answer: A

QUESTION 165
A telecommunications company is developing a mobile app for its customers. The company is using an Amazon SageMaker hosted endpoint for machine learning model inferences. Developers want to introduce a new version of the model for a limited number of users who subscribed to a preview feature of the app. After the new version of the model is tested as a preview, developers will evaluate its accuracy. If a new version of the model has better accuracy, developers need to be able to gradually release the new version for all users over a fixed period of time.
How can the company implement the testing model with the LEAST amount of operational overhead?
A. Update the ProductionVariant data type with the new version of the model by using the CreateEndpointConfig operation with the InitialVariantWeight parameter set to 0. Specify the TargetVariant parameter for InvokeEndpoint calls for users who subscribed to the preview feature. When the new version of the model is ready for release, gradually increase InitialVariantWeight until all users have the updated version.
B. Configure two SageMaker hosted endpoints that serve the different versions of the model. Create an Application Load Balancer (ALB) to route traffic to both endpoints based on the TargetVariant query string parameter. Reconfigure the app to send the TargetVariant query string parameter for users who subscribed to the preview feature. When the new version of the model is ready for release, change the ALB's routing algorithm to weighted until all users have the updated version.
C. Update the DesiredWeightsAndCapacity data type with the new version of the model by using the UpdateEndpointWeightsAndCapacities operation with the DesiredWeight parameter set to 0.
Specify the TargetVariant parameter for InvokeEndpoint calls for users who subscribed to the preview feature. When the new version of the model is ready for release, gradually increase DesiredWeight until all users have the updated version. D.Configure two SageMaker hosted endpoints that serve the different versions of the model. Create an Amazon Route 53 record that is configured with a simple routing policy and that points to the current version of the model. Configure the mobile app to use the endpoint URL for users who subscribed to the preview feature and to use the Route 53 record for other users. When the new version of the model is ready for release, add a new model version endpoint to Route 53, and switch the policy to weighted until all users have the updated version. Answer: D QUESTION 166 A company offers an online shopping service to its customers. The company wants to enhance the site's security by requesting additional information when customers access the site from locations that are different from their normal location. The company wants to update the process to call a machine learning (ML) model to determine when additional information should be requested. The company has several terabytes of data from its existing ecommerce web servers containing the source IP addresses for each request made to the web server. For authenticated requests, the records also contain the login name of the requesting user. Which approach should an ML specialist take to implement the new security feature in the web application? A.Use Amazon SageMaker Ground Truth to label each record as either a successful or failed access attempt. Use Amazon SageMaker to train a binary classification model using the factorization machines (FM) algorithm. B.Use Amazon SageMaker to train a model using the IP Insights algorithm. Schedule updates and retraining of the model using new log data nightly. 
C.Use Amazon SageMaker Ground Truth to label each record as either a successful or failed access attempt. Use Amazon SageMaker to train a binary classification model using the IP Insights algorithm. D.Use Amazon SageMaker to train a model using the Object2Vec algorithm. Schedule updates and retraining of the model using new log data nightly. Answer: C QUESTION 167 A retail company wants to combine its customer orders with the product description data from its product catalog. The structure and format of the records in each dataset is different. A data analyst tried to use a spreadsheet to combine the datasets, but the effort resulted in duplicate records and records that were not properly combined. The company needs a solution that it can use to combine similar records from the two datasets and remove any duplicates. Which solution will meet these requirements? A.Use an AWS Lambda function to process the data. Use two arrays to compare equal strings in the fields from the two datasets and remove any duplicates. B.Create AWS Glue crawlers for reading and populating the AWS Glue Data Catalog. Call the AWS Glue SearchTables API operation to perform a fuzzy-matching search on the two datasets, and cleanse the data accordingly. C.Create AWS Glue crawlers for reading and populating the AWS Glue Data Catalog. Use the FindMatches transform to cleanse the data. D.Create an AWS Lake Formation custom transform. Run a transformation for matching products from the Lake Formation console to cleanse the data automatically. Answer: D QUESTION 168 A company provisions Amazon SageMaker notebook instances for its data science team and creates Amazon VPC interface endpoints to ensure communication between the VPC and the notebook instances. All connections to the Amazon SageMaker API are contained entirely and securely using the AWS network. However, the data science team realizes that individuals outside the VPC can still connect to the notebook instances across the internet. 
Which set of actions should the data science team take to fix the issue? A.Modify the notebook instances' security group to allow traffic only from the CIDR ranges of the VPC. Apply this security group to all of the notebook instances' VPC interfaces. B.Create an IAM policy that allows the sagemaker:CreatePresignedNotebookInstanceUrl and sagemaker:DescribeNotebookInstance actions from only the VPC endpoints. Apply this policy to all IAM users, groups, and roles used to access the notebook instances. C.Add a NAT gateway to the VPC. Convert all of the subnets where the Amazon SageMaker notebook instances are hosted to private subnets. Stop and start all of the notebook instances to reassign only private IP addresses. D.Change the network ACL of the subnet the notebook is hosted in to restrict access to anyone outside the VPC. Answer: B QUESTION 169 A company will use Amazon SageMaker to train and host a machine learning (ML) model for a marketing campaign. The majority of data is sensitive customer data. The data must be encrypted at rest. The company wants AWS to maintain the root of trust for the master keys and wants encryption key usage to be logged. Which implementation will meet these requirements? A.Use encryption keys that are stored in AWS CloudHSM to encrypt the ML data volumes, and to encrypt the model artifacts and data in Amazon S3. B.Use SageMaker built-in transient keys to encrypt the ML data volumes. Enable default encryption for new Amazon Elastic Block Store (Amazon EBS) volumes. C.Use customer managed keys in AWS Key Management Service (AWS KMS) to encrypt the ML data volumes, and to encrypt the model artifacts and data in Amazon S3. D.Use AWS Security Token Service (AWS STS) to create temporary tokens to encrypt the ML storage volumes, and to encrypt the model artifacts and data in Amazon S3. 
Answer: C QUESTION 170 A machine learning specialist stores IoT soil sensor data in an Amazon DynamoDB table and stores weather event data as JSON files in Amazon S3. The dataset in DynamoDB is 10 GB in size and the dataset in Amazon S3 is 5 GB in size. The specialist wants to train a model on this data to help predict soil moisture levels as a function of weather events using Amazon SageMaker. Which solution will accomplish the necessary transformation to train the Amazon SageMaker model with the LEAST amount of administrative overhead? A.Launch an Amazon EMR cluster. Create an Apache Hive external table for the DynamoDB table and S3 data. Join the Hive tables and write the results out to Amazon S3. B.Crawl the data using AWS Glue crawlers. Write an AWS Glue ETL job that merges the two tables and writes the output to an Amazon Redshift cluster. C.Enable Amazon DynamoDB Streams on the sensor table. Write an AWS Lambda function that consumes the stream and appends the results to the existing weather files in Amazon S3. D.Crawl the data using AWS Glue crawlers. Write an AWS Glue ETL job that merges the two tables and writes the output in CSV format to Amazon S3. Answer: C QUESTION 171 A company sells thousands of products on a public website and wants to automatically identify products with potential durability problems. The company has 1,000 reviews with date, star rating, review text, review summary, and customer email fields, but many reviews are incomplete and have empty fields. Each review has already been labeled with the correct durability result. A machine learning specialist must train a model to identify reviews expressing concerns over product durability. The first model needs to be trained and ready to review in 2 days. What is the MOST direct approach to solve this problem within 2 days? A.Train a custom classifier by using Amazon Comprehend. B.Build a recurrent neural network (RNN) in Amazon SageMaker by using Gluon and Apache MXNet. 
C.Train a built-in BlazingText model using Word2Vec mode in Amazon SageMaker. D.Use a built-in seq2seq model in Amazon SageMaker. Answer: B 2021 Latest Braindump2go MLS-C01 PDF and MLS-C01 VCE Dumps Free Share: https://drive.google.com/drive/folders/1eX--L9LzE21hzqPIkigeo1QoAGNWL4vd?usp=sharing
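The gradual-rollout answer to Question 165 above works by repeatedly raising DesiredWeight for the new production variant through the UpdateEndpointWeightsAndCapacities operation. The weight schedule itself can be sketched in plain Python; this is only an illustration under assumed names (the function and step count are invented here), and the actual per-step update calls to SageMaker are indicated in comments rather than executed:

```python
def rollout_schedule(steps):
    """Linear schedule for shifting traffic from an old production
    variant to a new one in equal increments.

    Returns a list of (old_weight, new_weight) pairs. Each pair would be
    applied as the DesiredWeight of the two variants via the SageMaker
    UpdateEndpointWeightsAndCapacities operation, starting from the new
    variant at weight 0 (preview-only traffic selected by TargetVariant)
    and ending with all traffic on the new variant.
    """
    return [
        (round(1 - i / steps, 2), round(i / steps, 2))
        for i in range(steps + 1)
    ]

# Four increments: 0% -> 25% -> 50% -> 75% -> 100% of traffic on the new variant.
schedule = rollout_schedule(4)
```

Because the endpoint itself is not replaced at any step, this rollout carries no redeployment overhead, which is why it beats the ALB and Route 53 alternatives in the question on operational effort.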
[October-2021]New Braindump2go DAS-C01 PDF and VCE Dumps[Q122-Q132]
QUESTION 122 A company has a marketing department and a finance department. The departments are storing data in Amazon S3 in their own AWS accounts in AWS Organizations. Both departments use AWS Lake Formation to catalog and secure their data. The departments have some databases and tables that share common names. The marketing department needs to securely access some tables from the finance department. Which two steps are required for this process? (Choose two.) A.The finance department grants Lake Formation permissions for the tables to the external account for the marketing department. B.The finance department creates cross-account IAM permissions to the table for the marketing department role. C.The marketing department creates an IAM role that has permissions to the Lake Formation tables. Answer: AB QUESTION 123 A human resources company maintains a 10-node Amazon Redshift cluster to run analytics queries on the company's data. The Amazon Redshift cluster contains a product table and a transactions table, and both tables have a product_sku column. The tables are over 100 GB in size. The majority of queries run on both tables. Which distribution style should the company use for the two tables to achieve optimal query performance? A.An EVEN distribution style for both tables B.A KEY distribution style for both tables C.An ALL distribution style for the product table and an EVEN distribution style for the transactions table D.An EVEN distribution style for the product table and an KEY distribution style for the transactions table Answer: B QUESTION 124 A company receives data from its vendor in JSON format with a timestamp in the file name. The vendor uploads the data to an Amazon S3 bucket, and the data is registered into the company's data lake for analysis and reporting. The company has configured an S3 Lifecycle policy to archive all files to S3 Glacier after 5 days. 
The company wants to ensure that its AWS Glue crawler catalogs data only from S3 Standard storage and ignores the archived files. A data analytics specialist must implement a solution to achieve this goal without changing the current S3 bucket configuration. Which solution meets these requirements? A.Use the exclude patterns feature of AWS Glue to identify the S3 Glacier files for the crawler to exclude. B.Schedule an automation job that uses AWS Lambda to move files from the original S3 bucket to a new S3 bucket for S3 Glacier storage. C.Use the excludeStorageClasses property in the AWS Glue Data Catalog table to exclude files on S3 Glacier storage. D.Use the include patterns feature of AWS Glue to identify the S3 Standard files for the crawler to include. Answer: A QUESTION 125 A company analyzes historical data and needs to query data that is stored in Amazon S3. New data is generated daily as .csv files that are stored in Amazon S3. The company's analysts are using Amazon Athena to perform SQL queries against a recent subset of the overall data. The amount of data that is ingested into Amazon S3 has increased substantially over time, and the query latency also has increased. Which solutions could the company implement to improve query performance? (Choose two.) A.Use MySQL Workbench on an Amazon EC2 instance, and connect to Athena by using a JDBC or ODBC connector. Run the query from MySQL Workbench instead of Athena directly. B.Use Athena to extract the data and store it in Apache Parquet format on a daily basis. Query the extracted data. C.Run a daily AWS Glue ETL job to convert the data files to Apache Parquet and to partition the converted files. Create a periodic AWS Glue crawler to automatically crawl the partitioned data on a daily basis. D.Run a daily AWS Glue ETL job to compress the data files by using the .gzip format. Query the compressed data. E.Run a daily AWS Glue ETL job to compress the data files by using the .lzo format. 
Query the compressed data. Answer: BC QUESTION 126 A company is sending historical datasets to Amazon S3 for storage. A data engineer at the company wants to make these datasets available for analysis using Amazon Athena. The engineer also wants to encrypt the Athena query results in an S3 results location by using AWS solutions for encryption. The requirements for encrypting the query results are as follows: - Use custom keys for encryption of the primary dataset query results. - Use generic encryption for all other query results. - Provide an audit trail for the primary dataset queries that shows when the keys were used and by whom. Which solution meets these requirements? A.Use server-side encryption with S3 managed encryption keys (SSE-S3) for the primary dataset. Use SSE-S3 for the other datasets. B.Use server-side encryption with customer-provided encryption keys (SSE-C) for the primary dataset. Use server-side encryption with S3 managed encryption keys (SSE-S3) for the other datasets. C.Use server-side encryption with AWS KMS managed customer master keys (SSE-KMS CMKs) for the primary dataset. Use server-side encryption with S3 managed encryption keys (SSE-S3) for the other datasets. D.Use client-side encryption with AWS Key Management Service (AWS KMS) customer managed keys for the primary dataset. Use S3 client-side encryption with client-side keys for the other datasets. Answer: A QUESTION 127 A large telecommunications company is planning to set up a data catalog and metadata management for multiple data sources running on AWS. The catalog will be used to maintain the metadata of all the objects stored in the data stores. The data stores are composed of structured sources like Amazon RDS and Amazon Redshift, and semistructured sources like JSON and XML files stored in Amazon S3. The catalog must be updated on a regular basis, be able to detect the changes to object metadata, and require the least possible administration. 
Which solution meets these requirements? A.Use Amazon Aurora as the data catalog. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the data catalog in Aurora. Schedule the Lambda functions periodically. B.Use the AWS Glue Data Catalog as the central metadata repository. Use AWS Glue crawlers to connect to multiple data stores and update the Data Catalog with metadata changes. Schedule the crawlers periodically to update the metadata catalog. C.Use Amazon DynamoDB as the data catalog. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the DynamoDB catalog. Schedule the Lambda functions periodically. D.Use the AWS Glue Data Catalog as the central metadata repository. Extract the schema for RDS and Amazon Redshift sources and build the Data Catalog. Use AWS crawlers for data stored in Amazon S3 to infer the schema and automatically update the Data Catalog. Answer: D QUESTION 128 An ecommerce company is migrating its business intelligence environment from on premises to the AWS Cloud. The company will use Amazon Redshift in a public subnet and Amazon QuickSight. The tables already are loaded into Amazon Redshift and can be accessed by a SQL tool. The company starts QuickSight for the first time. During the creation of the data source, a data analytics specialist enters all the information and tries to validate the connection. An error with the following message occurs: "Creating a connection to your data source timed out." How should the data analytics specialist resolve this error? A.Grant the SELECT permission on Amazon Redshift tables. B.Add the QuickSight IP address range into the Amazon Redshift security group. C.Create an IAM role for QuickSight to access Amazon Redshift. D.Use a QuickSight admin user for creating the dataset. 
Answer: A QUESTION 129 A power utility company is deploying thousands of smart meters to obtain real-time updates about power consumption. The company is using Amazon Kinesis Data Streams to collect the data streams from smart meters. The consumer application uses the Kinesis Client Library (KCL) to retrieve the stream data. The company has only one consumer application. The company observes an average of 1 second of latency from the moment that a record is written to the stream until the record is read by a consumer application. The company must reduce this latency to 500 milliseconds. Which solution meets these requirements? A.Use enhanced fan-out in Kinesis Data Streams. B.Increase the number of shards for the Kinesis data stream. C.Reduce the propagation delay by overriding the KCL default settings. D.Develop consumers by using Amazon Kinesis Data Firehose. Answer: C QUESTION 130 A company needs to collect streaming data from several sources and store the data in the AWS Cloud. The dataset is heavily structured, but analysts need to perform several complex SQL queries and need consistent performance. Some of the data is queried more frequently than the rest. The company wants a solution that meets its performance requirements in a cost-effective manner. Which solution meets these requirements? A.Use Amazon Managed Streaming for Apache Kafka to ingest the data to save it to Amazon S3. Use Amazon Athena to perform SQL queries over the ingested data. B.Use Amazon Managed Streaming for Apache Kafka to ingest the data to save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads. C.Use Amazon Kinesis Data Firehose to ingest the data to save it to Amazon Redshift. Enable Amazon Redshift workload management (WLM) to prioritize workloads. D.Use Amazon Kinesis Data Firehose to ingest the data to save it to Amazon S3. Load frequently queried data to Amazon Redshift using the COPY command. 
Use Amazon Redshift Spectrum for less frequently queried data. Answer: B QUESTION 131 A manufacturing company uses Amazon Connect to manage its contact center and Salesforce to manage its customer relationship management (CRM) data. The data engineering team must build a pipeline to ingest data from the contact center and CRM system into a data lake that is built on Amazon S3. What is the MOST efficient way to collect data in the data lake with the LEAST operational overhead? A.Use Amazon Kinesis Data Streams to ingest Amazon Connect data and Amazon AppFlow to ingest Salesforce data. B.Use Amazon Kinesis Data Firehose to ingest Amazon Connect data and Amazon Kinesis Data Streams to ingest Salesforce data. C.Use Amazon Kinesis Data Firehose to ingest Amazon Connect data and Amazon AppFlow to ingest Salesforce data. D.Use Amazon AppFlow to ingest Amazon Connect data and Amazon Kinesis Data Firehose to ingest Salesforce data. Answer: B QUESTION 132 A manufacturing company wants to create an operational analytics dashboard to visualize metrics from equipment in near-real time. The company uses Amazon Kinesis Data Streams to stream the data to other applications. The dashboard must automatically refresh every 5 seconds. A data analytics specialist must design a solution that requires the least possible implementation effort. Which solution meets these requirements? A.Use Amazon Kinesis Data Firehose to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard. B.Use Apache Spark Streaming on Amazon EMR to read the data in near-real time. Develop a custom application for the dashboard by using D3.js. C.Use Amazon Kinesis Data Firehose to push the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Visualize the data by using a Kibana dashboard. D.Use AWS Glue streaming ETL to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard. 
Answer: B 2021 Latest Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps Free Share: https://drive.google.com/drive/folders/1WbSRm3ZlrRzjwyqX7auaqgEhLLzmD-2w?usp=sharing
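Question 125's recommended fix (option C) has a daily AWS Glue ETL job write Parquet files into a partitioned layout that a crawler then registers, so Athena can prune partitions instead of scanning everything. A minimal sketch of the Hive-style key layout such a job would write to; the bucket name and helper function are hypothetical, and the actual conversion to Parquet is done by the Glue job itself:

```python
from datetime import date

def partition_prefix(d, base="s3://example-bucket/data"):
    # Hive-style year=/month=/day= keys: a Glue crawler registers these
    # path segments as partition columns, and Athena skips any partition
    # not matched by the query's WHERE clause, reducing data scanned.
    return f"{base}/year={d.year}/month={d.month:02d}/day={d.day:02d}/"

# Target prefix for the daily job's Parquet output, e.g. for 5 Oct 2021.
prefix = partition_prefix(date(2021, 10, 5))
```

Columnar Parquet plus partition pruning addresses both causes of the rising latency in the question: less data read per file, and fewer files read per query.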
[October-2021]New Braindump2go 300-815 PDF and VCE Dumps[Q105-Q119]
QUESTION 105 The SIP session refresh timer allows the RTP session to stay active during an active call. The Cisco UCM sends either SIP-INVITE or SIP-UPDATE messages at regular intervals throughout the active duration of the call. During a troubleshooting session, the engineer finds that the Cisco UCM is sending SIP-UPDATE as the SIP session refresher, and the engineer would like to use SIP-INVITE as the session refresher. What configuration should be made in the Cisco UCM to achieve this? A.Enable SIP Rel1XX Options on the SIP profile. B.Enable Send send-receive SDP in mid-call INVITE on the SIP profile. C.Change Session Refresh Method on the SIP profile to INVITE. D.Increase Retry INVITE to 20 seconds on the SIP profile. Answer: C QUESTION 106 Refer to the exhibit. ILS has been configured between two hubs using this configuration. The hubs appear to register successfully, but ILS is not functioning as expected. Which configuration step is missing? A.A password has never been set for ILS. B.Use TLS Certificates must be selected. C.Trust certificates for ILS have not been installed on the clusters. D.The Cluster IDs have not been set to unique values. Answer: D QUESTION 107 A new deployment is using Mobile Voice Access (MVA) for a specific user on the sales team, but the user is having issues when dialing DTMF. Which DTMF method must be configured to resolve the issue? A.gateway B.out-of-band C.channel D.in-band Answer: B QUESTION 108 A single site reports that when they dial select numbers, the call connects, but they do not get audio. The administrator finds that the calls are not routing out of the normal gateway but out of another site's gateway due to a TEHO configuration. What is the next step to diagnose and solve the issue? A.Verify that IP routing is correct between the gateway and the IP phone. B.Verify that the route pattern is not blocking calls to the destination number. C.Verify that the dial peer of the gateway has the correct destination pattern configured. 
D.Verify that the route pattern has the correct calling-party transformation mask. Answer: C QUESTION 109 An engineer is configuring Cisco UCM to forward parked calls back to the user who parked the call if it is not retrieved after a specified time interval. Which action must be taken to accomplish this task? A.Configure device pools. B.Configure service parameters. C.Configure enterprise softkeys. D.Configure class of control. Answer: B QUESTION 110 Refer to the exhibit. An engineer is troubleshooting an issue with the caller not hearing a PSTN announcement before the SIP call has completed setup. How must the engineer resolve this issue using the reliable provisional response of SIP? A.voice service voip sip send 180 sdp B.voice service voip sip rel1xx require 100rel C.sip-ua disable-early-media 180 D.voice service voip sip no rel1xx Answer: B QUESTION 111 Users are reporting that several inter-site calls are failing, and the message "not enough bandwidth" is showing on the display. Voice traffic between locations goes through the corporate WAN, and Call Admission Control is enabled to limit the number of calls between sites. How is the issue solved without increasing bandwidth utilization on the WAN links? A.Disable Call Admission Control and let the calls use the amount of bandwidth they require. B.Configure Call Queuing so that the user waits until there is bandwidth available. C.Configure AAR to reroute calls that are denied by Call Admission Control through the PSTN. D.Reroute all calls through the PSTN and avoid using the WAN. Answer: C QUESTION 112 An engineer must configure a Cisco UCM hunt list so that calls to users in a line group are routed to the first idle user and then the next. Which distribution algorithm must be configured to accomplish this task? A.top down B.circular C.broadcast D.longest idle time Answer: A QUESTION 113 An administrator configured Cisco Unified Mobility to block access to remote destinations for certain caller IDs. 
A user reports that a blocked caller was able to reach a remote destination. Which action resolves the issue? A.Configure Single Number Reach. B.Configure an access list. C.Configure a mobility identity. D.Configure Mobile Voice Access. Answer: B QUESTION 114 Refer to the exhibit. An engineer is troubleshooting a call-establishment problem between Cisco Unified Border Element and Cisco UCM. Which command set corrects the issue? A.SIP binding in SIP configuration mode: voice service voip sip bind control source-interface GigabitEthernet0/0/0 bind media source-interface GigabitEthernet0/0/0 B.SIP binding in SIP configuration mode: voice service voip sip bind control source-interface GigabitEthernet0/0/1 bind media source-interface GigabitEthernet0/0/1 C.SIP binding in dial-peer configuration mode: dial-peer voice 300 voip voice-class sip bind control source-interface GigabitEthernet0/0/1 voice-class sip bind media source-interface GigabitEthernet0/0/1 D.SIP binding in dial-peer configuration mode: dial-peer voice 100 voip voice-class sip bind control source-interface GigabitEthernet0/0/0 voice-class sip bind media source-interface GigabitEthernet0/0/0 Answer: D QUESTION 115 Refer to the exhibit. Which change to the translation rule is needed to strip only the leading 9 from the digit string 9123548? A.rule 1 /^9\(.*\)/\1/ B.rule 1 /.*\(3548$\)/\1/ C.rule 1 /^9\(\d*\)/\1/ D.rule 1 /^9123548/\1/ Answer: A QUESTION 116 A customer has multisite deployments with a globalized dial plan. The customer wants to route PSTN calls via the gateway assigned to each site. Which two actions will fulfill the requirement? (Choose two.) A.Create one route group for each site and one global route list for PSTN calls that points to the local route group. B.Create a route group which has all the gateways and associate it to the device pool of every site. C.Create one global route list for PSTN calls that points to one global PSTN route group. 
D.Create a hunt group and assign it to each site route pattern. E.Assign one route group as a local route group in the device pool of the corresponding site. Answer: AE QUESTION 117 Refer to the exhibit. A company needs to ensure that all calls are normalized to E.164 format. Which configuration will ensure that the resulting digit string 14085554001 is created and will be routed to the E.164 routing schema? A.Called Party Transformation Mask of +14085554XXX B.Called Party Transformation Mask of 1408555[35]XXX C.Calling Party Transformation Mask of +1408555XXXX D.Calling Party Transformation Mask of +14085554XXX Answer: A QUESTION 118 An engineer set up and successfully tested a TEHO solution on the Cisco UCM. PSTN calls are routed correctly using the IP WAN as close to the final PSTN destination as possible. However, suddenly, calls start using the backup local gateway instead. What is causing the issue? A.WAN connectivity B.LAN connectivity C.route pattern D.route list and route group Answer: A QUESTION 119 An administrator is asked to configure egress call routing by applying globalization and localization on Cisco UCM. How should this be accomplished? A.Localize the calling and called numbers to PSTN format and globalize the calling and called numbers in the gateway. B.Globalize the calling and called numbers to PSTN format and localize the calling number in the gateway. C.Localize the calling and called numbers to E.164 format and globalize the called number in the gateway. D.Globalize the calling and called numbers to E.164 format and localize the called number in the gateway. Answer: D 2021 Latest Braindump2go 300-815 PDF and 300-815 VCE Dumps Free Share: https://drive.google.com/drive/folders/1IHjHEsMRfmKZVssEobUIr0a8XtPy0qWv?usp=sharing
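The intent of rule A in Question 115 can be checked with an ordinary regular-expression engine. Note this is only an approximation: Python's re syntax differs from the IOS voice translation-rule notation (groups use unescaped parentheses and the /match/replace/ delimiters are gone), and the helper name here is invented for illustration:

```python
import re

def translate(rule_match, rule_repl, digits):
    # Rough stand-in for applying a Cisco IOS voice translation rule
    # /match/replace/ to a digit string: substitute the matched portion.
    return re.sub(rule_match, rule_repl, digits)

# Rule A anchors on the leading 9 and keeps the remainder via the
# back-reference \1, so only the first digit is stripped.
stripped = translate(r"^9(.*)", r"\1", "9123548")
```

A string that does not begin with 9 is left untouched, which is exactly the "strip only the leading 9" behavior the question asks for; rule D, by contrast, would replace the entire matched string.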