danidee

District Managers Visiting Your Retail Job, As Told By Sailor Moon

No, seriously. What do district managers even do?! I used to work at Party City, and whenever the DM would come, we'd all lose our damn minds. But then he'd just walk in, look at the walls, ask a few questions, then leave.
Please tell me I'm not the only person with retail experience that totally knows this feel.

I hate you, District Manager Tuxedo Mask! You're pointless!

10 Comments
@poojas If Tuxedo Mask was middle management, so many things in Sailor Moon would've made more sense.
@danidee yeah those are not transferable skills XD
@shannonl5 EXACTLY. All the district managers (or well, for the most part) are nice at Starbucks. It's just that 99.9% of them are outside hires, so they're pulling their contextual experience from places like Nordstroms or McDonald's or something else somewhat irrelevant.
@danidee oooooof. I knew one Starbucks DM who was really nice and had worked his way up, but the impression I get is that they're usually hired from outside the company, so they have no context for what goes on day to day
@shannonl5 Omg, the amount of cluelessness is sadly enough not too shocking at all. I remember when I worked at Starbucks, a bottle of cinnamon syrup fell off a shelf, burst, and leaked into the oven vents without anyone realizing it, and my district manager went "I think something's burning. I don't know what it is, but it smells delightful!"
Cards you may also be interested in
(April-2021) Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps (Q88-Q113)
QUESTION 88
An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code. A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application. How should the data analyst meet this requirement while minimizing costs?
A. Store the contents of the mapping file in an Amazon DynamoDB table. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Change the SQL query in the application to include the new field in the SELECT statement.
B. Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the file's S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.
C. Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.
D. Store the contents of the mapping file in an Amazon DynamoDB table. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Forward the record from the Lambda function to the original application destination.
Answer: C

QUESTION 89
A company has collected more than 100 TB of log files in the last 24 months. The files are stored as raw text in a dedicated Amazon S3 bucket. Each object has a key of the form year-month-day_log_HHmmss.txt, where HHmmss represents the time the log file was initially created. A table was created in Amazon Athena that points to the S3 bucket. One-time queries are run against a subset of columns in the table several times an hour. A data analyst must make changes to reduce the cost of running these queries. Management wants a solution with minimal maintenance overhead. Which combination of steps should the data analyst take to meet these requirements? (Choose three.)
A. Convert the log files to Apache Avro format.
B. Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
C. Convert the log files to Apache Parquet format.
D. Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
E. Drop and recreate the table with the PARTITIONED BY clause. Run the ALTER TABLE ADD PARTITION statement.
F. Drop and recreate the table with the PARTITIONED BY clause. Run the MSCK REPAIR TABLE statement.
Answer: BCF
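To make QUESTION 89's partition-plus-Parquet combination concrete, here is a minimal Python sketch using the boto3 Athena client. The bucket, database, and result-location names are hypothetical placeholders, not values from the question.

```python
# Hedged sketch of the QUESTION 89 pattern: a partitioned Parquet table plus
# MSCK REPAIR TABLE. Bucket, database, and table names are hypothetical.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str) -> str:
    """Submit a query to Athena and return its execution ID."""
    response = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "logs_db"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return response["QueryExecutionId"]

# Recreate the table with a partition column matching the date=year-month-day/
# key prefix, stored as Parquet so queries scan only the needed columns.
create_table = """
CREATE EXTERNAL TABLE IF NOT EXISTS app_logs (
    message string
)
PARTITIONED BY (`date` string)
STORED AS PARQUET
LOCATION 's3://example-log-bucket/'
"""
run_query(create_table)

# Discover the date=... prefixes and register them as partitions.
run_query("MSCK REPAIR TABLE app_logs")
```

MSCK REPAIR TABLE only discovers prefixes in Hive style (date=2021-04-01/), which is why option B's key prefix matters as much as the DDL.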
QUESTION 90
A company has an application that ingests streaming data. The company needs to analyze this stream over a 5-minute timeframe to evaluate the stream for anomalies with Random Cut Forest (RCF) and summarize the current count of status codes. The source and summarized data should be persisted for future use. Which approach would enable the desired outcome while keeping data persistence costs low?
A. Ingest the data stream with Amazon Kinesis Data Streams. Have an AWS Lambda consumer evaluate the stream, collect the number of status codes, and evaluate the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
B. Ingest the data stream with Amazon Kinesis Data Streams. Have a Kinesis Data Analytics application evaluate the stream over a 5-minute window using the RCF function and summarize the count of status codes. Persist the source and results to Amazon S3 through output delivery to Kinesis Data Firehose.
C. Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 1 minute or 1 MB in Amazon S3. Ensure Amazon S3 triggers an event to invoke an AWS Lambda consumer that evaluates the batch data, collects the number of status codes, and evaluates the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
D. Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 5 minutes or 1 MB into Amazon S3. Have a Kinesis Data Analytics application evaluate the stream over a 1-minute window using the RCF function and summarize the count of status codes. Persist the results to Amazon S3 through a Kinesis Data Analytics output to an AWS Lambda integration.
Answer: B

QUESTION 91
An online retailer needs to deploy a product sales reporting solution. The source data is exported from an external online transaction processing (OLTP) system for reporting. Roll-up data is calculated each day for the previous day's activities. The reporting system has the following requirements:
- Have the daily roll-up data readily available for 1 year.
- After 1 year, archive the daily roll-up data for occasional but immediate access.
- The source data exports stored in the reporting system must be retained for 5 years. Query access will be needed only for re-evaluation, which may occur within the first 90 days.
Which combination of actions will meet these requirements while keeping storage costs to a minimum? (Choose two.)
A. Store the source data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
B. Store the source data initially in the Amazon S3 Glacier storage class. Apply a lifecycle configuration that changes the storage class from Amazon S3 Glacier to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
C. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 1 year after data creation.
D. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) 1 year after data creation.
E. Store the daily roll-up data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier 1 year after data creation.
Answer: BE
QUESTION 92
A company needs to store objects containing log data in JSON format. The objects are generated by eight applications running in AWS. Six of the applications generate a total of 500 KiB of data per second, and two of the applications can generate up to 2 MiB of data per second. A data engineer wants to implement a scalable solution to capture and store usage data in an Amazon S3 bucket. The usage data objects need to be reformatted, converted to .csv format, and then compressed before they are stored in Amazon S3. The company requires the solution to include the least custom code possible and has authorized the data engineer to request a service quota increase if needed. Which solution meets these requirements?
A. Configure an Amazon Kinesis Data Firehose delivery stream for each application. Write AWS Lambda functions to read log data objects from the stream for each application. Have the function perform reformatting and .csv conversion. Enable compression on all the delivery streams.
B. Configure an Amazon Kinesis data stream with one shard per application. Write an AWS Lambda function to read usage data objects from the shards. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
C. Configure an Amazon Kinesis data stream for each application. Write an AWS Lambda function to read usage data objects from the stream for each application. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
D. Store usage data objects in an Amazon DynamoDB table. Configure a DynamoDB stream to copy the objects to an S3 bucket. Configure an AWS Lambda function to be triggered when objects are written to the S3 bucket. Have the function convert the objects into .csv format.
Answer: B

QUESTION 93
A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing. Which AWS Glue feature should the data analytics specialist use to meet this requirement?
A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers
Answer: B
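QUESTION 93's options include AWS Glue job bookmarks, the feature AWS documents for tracking previously processed data between job runs. For readers unfamiliar with it, here is a minimal sketch of enabling bookmarks when starting a run; the job name is a hypothetical placeholder.

```python
# Hedged sketch: start a Glue job with job bookmarks enabled so that only
# S3 objects not seen in earlier runs are processed. Job name is hypothetical.
import boto3

glue = boto3.client("glue")

response = glue.start_job_run(
    JobName="ingest-compressed-files",  # hypothetical job
    Arguments={
        # With bookmarks enabled, Glue persists state between runs and skips
        # data it has already processed.
        "--job-bookmark-option": "job-bookmark-enable",
    },
)
print("Started run:", response["JobRunId"])
```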
QUESTION 94
A telecommunications company is looking for an anomaly-detection solution to identify fraudulent calls. The company currently uses Amazon Kinesis to stream voice call records in a JSON format from its on-premises database to Amazon S3. The existing dataset contains voice call records with 200 columns. To detect fraudulent calls, the solution would need to look at 5 of these columns only. The company is interested in a cost-effective solution using AWS that requires minimal effort and experience in anomaly-detection algorithms. Which solution meets these requirements?
A. Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon Athena to create a table with a subset of columns. Use Amazon QuickSight to visualize the data and then use Amazon QuickSight machine learning-powered anomaly detection.
B. Use Kinesis Data Firehose to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls and store the output in Amazon RDS. Use Amazon Athena to build a dataset and Amazon QuickSight to visualize the results.
C. Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon SageMaker to build an anomaly detection model that can detect fraudulent calls by ingesting data from Amazon S3.
D. Use Kinesis Data Analytics to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls. Connect Amazon QuickSight to Kinesis Data Analytics to visualize the anomaly scores.
Answer: A

QUESTION 95
An online retailer is rebuilding its inventory management system and inventory reordering system to automatically reorder products by using Amazon Kinesis Data Streams. The inventory management system uses the Kinesis Producer Library (KPL) to publish data to a stream. The inventory reordering system uses the Kinesis Client Library (KCL) to consume data from the stream. The stream has been configured to scale as needed. Just before production deployment, the retailer discovers that the inventory reordering system is receiving duplicated data. Which factors could be causing the duplicated data? (Choose two.)
A. The producer has a network-related timeout.
B. The stream's value for the IteratorAgeMilliseconds metric is too high.
C. There was a change in the number of shards, record processors, or both.
D. The AggregationEnabled configuration property was set to true.
E. The max_records configuration property was set to a number that is too high.
Answer: BD

QUESTION 96
A large retailer has successfully migrated to an Amazon S3 data lake architecture. The company's marketing team is using Amazon Redshift and Amazon QuickSight to analyze data, and derive and visualize insights. To ensure the marketing team has the most up-to-date actionable information, a data analyst implements nightly refreshes of Amazon Redshift using terabytes of updates from the previous day. After the first nightly refresh, users report that half of the most popular dashboards that had been running correctly before the refresh are now running much slower. Amazon CloudWatch does not show any alerts. What is the MOST likely cause for the performance degradation?
A. The dashboards are suffering from inefficient SQL queries.
B. The cluster is undersized for the queries being run by the dashboards.
C. The nightly data refreshes are causing a lingering transaction that cannot be automatically closed by Amazon Redshift due to ongoing user workloads.
D. The nightly data refreshes left the dashboard tables in need of a vacuum operation that could not be automatically performed by Amazon Redshift due to ongoing user workloads.
Answer: B

QUESTION 97
A marketing company is storing its campaign response data in Amazon S3. A consistent set of sources has generated the data for each campaign. The data is saved into Amazon S3 as .csv files. A business analyst will use Amazon Athena to analyze each campaign's data. The company needs the cost of ongoing data analysis with Athena to be minimized. Which combination of actions should a data analytics specialist take to meet these requirements? (Choose two.)
A. Convert the .csv files to Apache Parquet.
B. Convert the .csv files to Apache Avro.
C. Partition the data by campaign.
D. Partition the data by source.
E. Compress the .csv files.
Answer: BC
QUESTION 98
An online retail company is migrating its reporting system to AWS. The company's legacy system runs data processing on online transactions using a complex series of nested Apache Hive queries. Transactional data is exported from the online system to the reporting system several times a day. Schemas in the files are stable between updates. A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To keep storage costs low, the data analyst decides to store the data in Amazon S3. It is vital that the data from the reports and associated analytics is completely up to date based on the data in Amazon S3. Which solution meets these requirements?
A. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an AWS Glue crawler over Amazon S3 that runs when data is refreshed to ensure that data changes are updated. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
B. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an Amazon EMR cluster with consistent view enabled. Run emrfs sync before each analytics step to ensure data changes are updated. Create an EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
C. Create an Amazon Athena table with CREATE TABLE AS SELECT (CTAS) to ensure data is refreshed from underlying queries against the raw dataset. Create an AWS Glue Data Catalog to manage the Hive metadata over the CTAS table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
D. Use an S3 Select query to ensure that the data is properly updated. Create an AWS Glue Data Catalog to manage the Hive metadata over the S3 Select table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
Answer: A

QUESTION 99
A media company is using Amazon QuickSight dashboards to visualize its national sales data. The dashboard is using a dataset with these fields: ID, date, time_zone, city, state, country, longitude, latitude, sales_volume, and number_of_items. To modify ongoing campaigns, the company wants an interactive and intuitive visualization of which states across the country recorded a significantly lower sales volume compared to the national average. Which addition to the company's QuickSight dashboard will meet this requirement?
A. A geospatial color-coded chart of sales volume data across the country.
B. A pivot table of sales volume data summed up at the state level.
C. A drill-down layer for state-level sales volume data.
D. A drill through to other dashboards containing state-level sales volume data.
Answer: B

QUESTION 100
A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company's business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics. The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team's goals with the least operational overhead. Which solution meets these requirements?
A. Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process. Use the AWS Glue crawler to catalog the data in Amazon S3. Use an AWS Glue job to enrich and store the result in a separate S3 bucket in Apache Parquet format. Use Amazon Athena to query the data.
B. Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RDS. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3. Use Amazon Athena to query the data.
C. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Create an Amazon Redshift cluster and use Amazon Redshift Spectrum to query the data.
D. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Use Amazon Athena to query the data.
Answer: B
QUESTION 101
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds. Which architecture meets these requirements?
A. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
B. Use an AWS Lambda function to read from the Kinesis data stream to calculate the average per second and send the alarm to Amazon SNS.
C. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
D. Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.
Answer: C

QUESTION 102
An IoT company wants to release a new device that will collect data to track sleep overnight on an intelligent mattress. Sensors will send data that will be uploaded to an Amazon S3 bucket. About 2 MB of data is generated each night for each bed. Data must be processed and summarized for each user, and the results need to be available as soon as possible. Part of the process consists of time windowing and other functions. Based on tests with a Python script, every run will require about 1 GB of memory and will complete within a couple of minutes. Which solution will run the script in the MOST cost-effective way?
A. AWS Lambda with a Python script
B. AWS Glue with a Scala job
C. Amazon EMR with an Apache Spark script
D. AWS Glue with a PySpark job
Answer: A
QUESTION 103
A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift. The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when load spikes occur, locks can exist and data can be missed. Currently, the AWS Glue job is configured to run without retries, with a timeout of 5 minutes and concurrency of 1. How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?
A. Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
B. Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
C. Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
D. Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.
Answer: B

QUESTION 104
A retail company leverages Amazon Athena for ad-hoc queries against an AWS Glue Data Catalog. The data analytics team manages the data catalog and data access for the company. The data analytics team wants to separate queries and manage the cost of running those queries by different workloads and teams. Ideally, the data analysts want to group the queries run by different users within a team, store the query results in individual Amazon S3 buckets specific to each team, and enforce cost constraints on the queries run against the Data Catalog. Which solution meets these requirements?
A. Create IAM groups and resource tags for each team within the company. Set up IAM policies that control user access and actions on the Data Catalog resources.
B. Create Athena resource groups for each team within the company and assign users to these groups. Add S3 bucket names and other query configurations to the properties list for the resource groups.
C. Create Athena workgroups for each team within the company. Set up IAM workgroup policies that control user access and actions on the workgroup resources.
D. Create Athena query groups for each team within the company and assign users to the groups.
Answer: A

QUESTION 105
A manufacturing company uses Amazon S3 to store its data. The company wants to use AWS Lake Formation to provide granular-level security on those data assets. The data is in Apache Parquet format. The company has set a deadline for a consultant to build a data lake. How should the consultant create the MOST cost-effective solution that meets these requirements?
A. Run Lake Formation blueprints to move the data to Lake Formation. Once Lake Formation has the data, apply permissions on Lake Formation.
B. To create the data catalog, run an AWS Glue crawler on the existing Parquet data. Register the Amazon S3 path and then apply permissions through Lake Formation to provide granular-level security.
C. Install Apache Ranger on an Amazon EC2 instance and integrate it with Amazon EMR. Using Ranger policies, create role-based access control for the existing data assets in Amazon S3.
D. Create multiple IAM roles for different users and groups. Assign IAM roles to different data assets in Amazon S3 to create table-based and column-based access controls.
Answer: C

QUESTION 106
A company has an application that uses the Amazon Kinesis Client Library (KCL) to read records from a Kinesis data stream. After a successful marketing campaign, the application experienced a significant increase in usage. As a result, a data analyst had to split some shards in the data stream. When the shards were split, the application started throwing ExpiredIteratorException errors sporadically. What should the data analyst do to resolve this?
A. Increase the number of threads that process the stream records.
B. Increase the provisioned read capacity units assigned to the stream's Amazon DynamoDB table.
C. Increase the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.
D. Decrease the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.
Answer: C
QUESTION 107
A company is building a service to monitor fleets of vehicles. The company collects IoT data from a device in each vehicle and loads the data into Amazon Redshift in near-real time. Fleet owners upload .csv files containing vehicle reference data into Amazon S3 at different times throughout the day. A nightly process loads the vehicle reference data from Amazon S3 into Amazon Redshift. The company joins the IoT data from the device and the vehicle reference data to power reporting and dashboards. Fleet owners are frustrated by waiting a day for the dashboards to update. Which solution would provide the SHORTEST delay between uploading reference data to Amazon S3 and the change showing up in the owners' dashboards?
A. Use S3 event notifications to trigger an AWS Lambda function to copy the vehicle reference data into Amazon Redshift immediately when the reference data is uploaded to Amazon S3.
B. Create and schedule an AWS Glue Spark job to run every 5 minutes. The job inserts reference data into Amazon Redshift.
C. Send reference data to Amazon Kinesis Data Streams. Configure the Kinesis data stream to directly load the reference data into Amazon Redshift in real time.
D. Send the reference data to an Amazon Kinesis Data Firehose delivery stream. Configure Kinesis with a buffer interval of 60 seconds and to directly load the data into Amazon Redshift.
Answer: A
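As a companion to QUESTION 107's answer A, here is a hedged Python sketch of the Lambda function that an S3 event notification would invoke, issuing a COPY via the Redshift Data API. The cluster, database, table, and IAM role names are hypothetical placeholders.

```python
# Hedged sketch: an S3 event notification invokes this Lambda handler, which
# COPYs the newly uploaded reference file into Redshift via the Data API.
# Cluster, database, table, and role identifiers below are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        copy_sql = (
            f"COPY vehicle_reference FROM 's3://{bucket}/{key}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "  # hypothetical
            "CSV IGNOREHEADER 1"
        )
        redshift_data.execute_statement(
            ClusterIdentifier="reporting-cluster",  # hypothetical cluster
            Database="fleet",
            DbUser="loader",
            Sql=copy_sql,
        )
```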
QUESTION 108
A company is migrating from an on-premises Apache Hadoop cluster to an Amazon EMR cluster. The cluster runs only during business hours. Due to a company requirement to avoid intraday cluster failures, the EMR cluster must be highly available. When the cluster is terminated at the end of each business day, the data must persist. Which configurations would enable the EMR cluster to meet these requirements? (Choose three.)
A. EMR File System (EMRFS) for storage
B. Hadoop Distributed File System (HDFS) for storage
C. AWS Glue Data Catalog as the metastore for Apache Hive
D. MySQL database on the master node as the metastore for Apache Hive
E. Multiple master nodes in a single Availability Zone
F. Multiple master nodes in multiple Availability Zones
Answer: BCF

QUESTION 109
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users. The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query from at any point is 200 GB. Which configuration will provide the MOST cost-effective solution that meets these requirements?
A. Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000 reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option.
B. Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source with a direct query option.
C. Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source and import the data into SPICE. Automatically refresh every 24 hours.
D. Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source and import the data into SPICE. Automatically refresh every 24 hours.
Answer: C

QUESTION 110
A central government organization is collecting events from various internal applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to separate the data. For security reasons, the Kafka cluster has been configured to only allow TLS encrypted data, and it encrypts the data at rest. A recent application update showed that one of the applications was configured incorrectly, resulting in writing data to a Kafka topic that belongs to another application. This resulted in multiple errors in the analytics pipeline as data from different applications appeared on the same topic. After this incident, the organization wants to prevent applications from writing to a topic different than the one they should write to. Which solution meets these requirements with the least amount of effort?
A. Create a different Amazon EC2 security group for each application. Configure each security group to have access to a specific topic in the Amazon MSK cluster. Attach the security group to each application based on the topic that the applications should read and write to.
B. Install Kafka Connect on each application instance and configure each Kafka Connect instance to write to a specific topic only.
C. Use Kafka ACLs and configure read and write permissions for each topic. Use the distinguished name of the clients' TLS certificates as the principal of the ACL.
D. Create a different Amazon EC2 security group for each application. Create an Amazon MSK cluster and Kafka topic for each application. Configure each security group to have access to the specific cluster.
Answer: B

QUESTION 111
A company wants to collect and process events data from different departments in near-real time. Before storing the data in Amazon S3, the company needs to clean the data by standardizing the format of the address and timestamp columns. The data varies in size based on the overall load at each particular point in time. A single data record can be 100 KB-10 MB. How should a data analytics specialist design the solution for data ingestion?
A. Use Amazon Kinesis Data Streams. Configure a stream for the raw data. Use a Kinesis Agent to write data to the stream. Create an Amazon Kinesis Data Analytics application that reads data from the raw stream, cleanses it, and stores the output to Amazon S3.
B. Use Amazon Kinesis Data Firehose. Configure a Firehose delivery stream with a preprocessing AWS Lambda function for data cleansing. Use a Kinesis Agent to write data to the delivery stream. Configure Kinesis Data Firehose to deliver the data to Amazon S3.
C. Use Amazon Managed Streaming for Apache Kafka. Configure a topic for the raw data. Use a Kafka producer to write data to the topic. Create an application on Amazon EC2 that reads data from the topic by using the Apache Kafka consumer API, cleanses the data, and writes to Amazon S3.
D. Use Amazon Simple Queue Service (Amazon SQS). Configure an AWS Lambda function to read events from the SQS queue and upload the events to Amazon S3.
Answer: B
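QUESTION 111's answer B hinges on a Firehose preprocessing Lambda function. Below is a minimal sketch of one; the event and return shapes follow the Firehose transformation contract, while the specific cleansing rules (upper-casing addresses, converting an assumed epoch-seconds timestamp to ISO 8601) are illustrative assumptions.

```python
# Hedged sketch of a Kinesis Data Firehose transformation Lambda that
# standardizes address and timestamp columns before delivery to S3.
# The payload field names and cleansing rules are assumptions.
import base64
import json
from datetime import datetime, timezone

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Standardize the address and timestamp columns (illustrative rules).
        payload["address"] = payload.get("address", "").strip().upper()
        payload["timestamp"] = datetime.fromtimestamp(
            payload["timestamp"], tz=timezone.utc  # assumed epoch seconds
        ).isoformat()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # tells Firehose the record transformed cleanly
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode()
            ).decode(),
        })
    return {"records": output}
```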
QUESTION 112
An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations. Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: "Command Failed with Exit Code 1." Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs crosses the safe threshold of 50% usage quickly and reaches 90-95% soon after. The average memory usage across all executors continues to be less than 4%. The data engineer also notices the following error while examining the related Amazon CloudWatch Logs. What should the data engineer do to solve the failure in the MOST cost-effective way?
A. Change the worker type from Standard to G.2X.
B. Modify the AWS Glue ETL code to use the 'groupFiles': 'inPartition' feature.
C. Increase the fetch size setting by using AWS Glue dynamic frames.
D. Modify maximum capacity to increase the total maximum data processing units (DPUs) used.
Answer: D

QUESTION 113
A transport company wants to track vehicular movements by capturing geolocation records. The records are 10 B in size, and up to 10,000 records are captured each second. Data transmission delays of a few minutes are acceptable, considering unreliable network conditions. The transport company decided to use Amazon Kinesis Data Streams to ingest the data. The company is looking for a reliable mechanism to send data to Kinesis Data Streams while maximizing the throughput efficiency of the Kinesis shards. Which solution will meet the company's requirements?
A. Kinesis Agent
B. Kinesis Producer Library (KPL)
C. Kinesis Data Firehose
D. Kinesis SDK
Answer: B

2021 Latest Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1WbSRm3ZlrRzjwyqX7auaqgEhLLzmD-2w?usp=sharing
(April-2021) Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps (Q29-Q49)
QUESTION 29
A user wants to create a super metric and apply it to a custom group to capture the total of CPU Demand (MHz) of virtual machines that are children of the custom group. Which super metric function would be used to accomplish this?
A. Average
B. Max
C. Sum
D. Count
Answer: C

QUESTION 30
Review the exhibit. When the Cluster Metric Load or Cluster Object Load exceeds 100%, what is the next step a vRealize Operations administrator should take?
A. Reduce the vRealize Operations data retention time.
B. Add an additional vRealize Operations data node.
C. Increase vRealize Operations polling time.
D. Remove a vCenter from the vSphere management pack.
Answer: B

QUESTION 31
Which object attributes are used in vRealize Operations Compliance analysis?
A. tags
B. properties
C. user access lists
D. host profiles
Answer: B

QUESTION 32
Based on the highlighted HIPAA compliance template above, how many hosts are in a compliant state?
A. 5
B. 24
C. 29
D. 31
Answer: A

QUESTION 33
How can vRealize Operations tags be used?
A. to be dynamically assigned to objects
B. to group virtual machines in vCenter
C. to set object access controls
D. to filter objects within dashboard widgets
Answer: B

QUESTION 34
The default collection cycle is set. When changing the Cluster Time Remaining settings, how long will it take before time remaining and risk level are recalculated?
A. 5 minutes
B. 1 hour
C. 12 hours
D. 24 hours
Answer: A

QUESTION 35
What is a prerequisite for using Business Intent?
A. DRS clusters
B. storage policies
C. vSphere 6.7
D. vCenter tags
Answer: D

QUESTION 36
What can be configured within a policy?
A. alert notifications
B. symptom definition threshold overrides
C. custom group membership criteria
D. symptom definition operator overrides
Answer: B

QUESTION 37
Which organizational construct within vRealize Operations has a user-configured dynamic membership criteria?
A. Resource Pool
B. Tags
C. Custom group
D. Custom Datacenter
Answer: C

QUESTION 38
How should a remote collector be added to a vRealize Operations installation?
A. Log in as Admin on a master node and enable High Availability.
B. Open the Setup Wizard from the login page.
C. Navigate to a newly deployed node and click Expand an Existing Installation.
D. Navigate to the Admin interface of a data node.
Answer: C

QUESTION 39
Refer to the exhibit. How is vSphere Usable Capacity calculated?
A. Demand plus Reservation
B. Total Capacity minus High Availability
C. Total Capacity minus Overhead
D. Demand plus High Availability
Answer: B

QUESTION 40
A view is created in vRealize Operations to track virtual machine maximum and average contention for the past thirty days. Which method is used to enhance the view to easily spot VMs with high contention values?
A. Set a tag on virtual machines and filter on the tag.
B. Edit the view and set filters for the transformation value maximum and average contention.
C. Create a custom group to dynamically track virtual machines.
D. Configure Metric Coloring in the Advanced Settings of the view.
Answer: C

QUESTION 41
Refer to the exhibit. A user has installed and configured the Telegraf agent on a Windows domain controller. No application data is being collected. Which two actions should the user take to see the application data? (Choose two.)
A. Verify the vCenter adapter collection status.
B. Re-configure the agent on the Windows virtual machine manually.
C. Verify Active Directory Service status.
D. Configure ICMP Remote Check.
E. Validate time synchronization between vRealize Application Remote Collector and vRealize Operations.
Answer: AE

QUESTION 42
Which dashboard widget provides a two-dimensional relationship?
A. Heat Map
B. Object Selector
C. Scoreboard
D. Top N
Answer: A

QUESTION 43
What must an administrator do to use the Troubleshoot with Logs Dashboard in vRealize Operations?
A. Configure the vRealize Log Insight agent.
B. Enable Log Forwarding within vRealize Operations.
C. Configure vRealize Operations within vRealize Log Insight.
D. Configure symptoms and alerts within vRealize Operations.
Answer: C

QUESTION 44
vRealize Operations places a tagless virtual machine on a tagged host. Which setting causes this behavior?
A. Host-Based Business Intent
B. Consolidated Operational Intent
C. Balanced Operational Intent
D. Cluster-Based Business Intent
Answer: A

QUESTION 45
The default collection cycle is set. How often are cost calculations run?
A. every 5 minutes
B. daily
C. weekly
D. monthly
Answer: B

QUESTION 46
vRealize Operations is actively collecting data from vCenter, and the entire inventory is licensed. Why would backup VMDKs of an active virtual machine in the vCenter appear in Orphaned Disks?
A. They are related to the VM.
B. They are named the same as the VM.
C. They are not in vCenter inventory.
D. They are not actively being utilized.
Answer: C

QUESTION 47
In which two locations should all nodes be when deploying an analytics node? (Choose two.)
A. same data center
B. same vCenter
C. remote data center
D. same subnet
E. different subnet
Answer: AD

QUESTION 48
Which type of view allows a user to create a view to provide tabular data about specific objects?
A. Distribution
B. Text
C. List
D. Trend
Answer: C

QUESTION 49
Which Operational Intent setting drives maximum application performance by avoiding resource spikes?
A. Moderate
B. Consolidate
C. Over provision
D. Balance
Answer: B

2021 Latest Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1i-g5X8oxKPFi-1oyAVi68bVlC5njt8PF?usp=sharing
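As a rough companion to QUESTION 29's Sum super metric, the sketch below totals cpu|demandmhz across VM resources via the vRealize Operations Suite API using Python. Treat it as a sketch only: the host, credentials, endpoint paths, and JSON field names are all assumptions to verify against the Suite API documentation for your vRealize Operations version.

```python
# Hedged sketch: sum the latest CPU Demand (MHz) over VirtualMachine resources
# through the vROps Suite API. Endpoints and response shapes are assumptions.
import requests

VROPS = "https://vrops.example.com"  # hypothetical host

# Acquire an API token (assumed endpoint and payload shape).
auth = requests.post(
    f"{VROPS}/suite-api/api/auth/token/acquire",
    json={"username": "admin", "password": "changeme"},  # hypothetical creds
    verify=False,
).json()
headers = {
    "Authorization": f"vRealizeOpsToken {auth['token']}",
    "Accept": "application/json",
}

# List VM resources, then pull the latest CPU demand stat for each (assumed paths).
vms = requests.get(
    f"{VROPS}/suite-api/api/resources",
    params={"resourceKind": "VirtualMachine"},
    headers=headers,
    verify=False,
).json()

total_mhz = 0.0
for vm in vms.get("resourceList", []):
    stats = requests.get(
        f"{VROPS}/suite-api/api/resources/{vm['identifier']}/stats/latest",
        params={"statKey": "cpu|demandmhz"},
        headers=headers,
        verify=False,
    ).json()
    for stat in stats.get("values", []):
        for entry in stat.get("stat-list", {}).get("stat", []):
            total_mhz += entry.get("data", [0])[-1]

print(f"Total CPU demand across VMs: {total_mhz:.0f} MHz")
```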
Learn Data Science From Industry Experts
What is Data Science?
Data science is the discipline that combines technical methods, domain knowledge, algorithms, and an understanding of math and statistics to extract meaningful insights from data. The arrival of new technologies has produced enormous growth in data, which creates an opportunity to analyze it and derive useful insights. Analyzing data at this scale requires specialists such as Data Scientists, who use statistical and machine learning tools to analyze data drawn from sectors like social media, e-commerce sites, and Internet searches. In short, Data Science is the practice of extracting important information from large amounts of data using a range of technical methods and algorithms.

Why do we need Data Science?
Today, Data Science has become an important factor in an organization's progress. It helps organizations make better decisions to improve their business. With the support of Data Scientists, organizations derive important insights from huge amounts of data, which lets them evaluate themselves and their performance in the market. Data Science also helps an organization understand its customers' needs and serve them better. As more businesses incorporate Data Science into their plans, the field has generated a large number of jobs.

Data Science career outlook
Data Science specialists are in great demand in today's IT industry. Some of the roles associated with Data Science are:
- Data Scientist
- Data Engineer
- Data Analyst
- Machine Learning Engineer
- Statistician

Learn Data Science online
Data Science professionals are in high demand, and many IT professionals are looking to build a career in this field. So where can you learn Data Science? There are various platforms available, including SSDN Technologies, a pioneer in online Data Science training that offers a Data Science course in Delhi. SSDN Technologies provides well-designed courses taught by industry professionals that cover the field thoroughly.
(April-2021) Braindump2go AZ-303 PDF and AZ-303 VCE Dumps (Q223-Q233)
QUESTION 223
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription. You have an on-premises file server named Server1 that runs Windows Server 2019. You manage Server1 by using Windows Admin Center. You need to ensure that if Server1 fails, you can recover Server1 files from Azure.
Solution: You register Windows Admin Center in Azure and configure Azure Backup.
Does this meet the goal?
A. Yes
B. No
Answer: B

QUESTION 224
You have an application that is hosted across multiple Azure regions. You need to ensure that users connect automatically to their nearest application host based on network latency. What should you implement?
A. Azure Application Gateway
B. Azure Load Balancer
C. Azure Traffic Manager
D. Azure Bastion
Answer: C

QUESTION 225
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company is deploying an on-premises application named App1. Users will access App1 by using a URL of https://app1.contoso.com. You register App1 in Azure Active Directory (Azure AD) and publish App1 by using the Azure AD Application Proxy. You need to ensure that App1 appears in the My Apps portal for all the users.
Solution: You modify Users and Groups for App1.
Does this meet the goal?
A. Yes
B. No
Answer: A

QUESTION 226
You create a social media application that users can use to upload images and other content. Users report that adult content is being posted in an area of the site that is accessible to and intended for young children. You need to automatically detect and flag potentially offensive content. The solution must not require any custom coding other than code to scan and evaluate images. What should you implement?
A. Bing Visual Search
B. Bing Image Search
C. Custom Vision Search
D. Computer Vision API
Answer: D

QUESTION 227
You have an Azure subscription named Subscription1. Subscription1 contains the resource groups in the following table. RG1 has a web app named WebApp1. WebApp1 is located in West Europe. You move WebApp1 to RG2. What is the effect of the move?
A. The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1.
B. The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1.
C. The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1.
D. The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1.
Answer: D
QUESTION 228
You have an Azure App Service API that allows users to upload documents to the cloud with a mobile device. A mobile app connects to the service by using REST API calls. When a new document is uploaded to the service, the service extracts the document metadata. Usage statistics for the app show significant increases in app usage. The extraction process is CPU-intensive. You plan to modify the API to use a queue. You need to ensure that the solution scales, handles request spikes, and reduces costs between request spikes. What should you do?
A. Configure a CPU Optimized virtual machine (VM) and install the Web App service on the new instance.
B. Configure a series of CPU Optimized virtual machine (VM) instances and install extraction logic to process a queue.
C. Move the extraction logic into an Azure Function. Create a queue triggered function to process the queue.
D. Configure Azure Container Service to retrieve items from a queue and run across a pool of virtual machine (VM) nodes using the extraction logic.
Answer: C

QUESTION 229
You have an Azure App Service named WebApp1. You plan to add a WebJob named WebJob1 to WebApp1. You need to ensure that WebJob1 is triggered every 15 minutes. What should you do?
A. Change the Web.config file to include the 0 */15 * * * * CRON expression
B. From the application settings of WebApp1, add a default document named Settings.job. Add the 1-31 1-12 1-7 0 */15 * CRON expression to the JOB file
C. Add a file named Settings.job to the ZIP file that contains the WebJob script. Add the 0 */15 * * * * CRON expression to the JOB file
D. Create an Azure Automation account and add a schedule to the account. Set the recurrence for the schedule
Answer: C

QUESTION 230
You have an Azure App Service named WebApp1. You plan to add a WebJob named WebJob1 to WebApp1. You need to ensure that WebJob1 is triggered every 15 minutes. What should you do?
A. Change the Web.config file to include the 1-31 1-12 1-7 0 */15 * CRON expression
B. From the properties of WebJob1, change the CRON expression to 0 */15 * * * *.
C. Add a file named Settings.job to the ZIP file that contains the WebJob script. Add the 1-31 1-12 1-7 0 */15 * CRON expression to the JOB file
D. Create an Azure Automation account and add a schedule to the account. Set the recurrence for the schedule
Answer: B

QUESTION 231
You have an on-premises web app named App1 that is behind a firewall. The firewall blocks all incoming network traffic. You need to expose App1 to the internet via Azure. The solution must meet the following requirements:
- Ensure that access to App1 requires authentication by using Azure.
- Avoid deploying additional services and servers to the on-premises network.
What should you use?
A. Azure Application Gateway
B. Azure Relay
C. Azure Front Door Service
D. Azure Active Directory (Azure AD) Application Proxy
Answer: D

QUESTION 232
Your company is developing an e-commerce Azure App Service Web App to support hundreds of restaurant locations around the world. You are designing the messaging solution architecture to support the e-commerce transactions and messages. The solution will include the following features: You need to design a solution for the Inventory Distribution feature. Which Azure service should you use?
A. Azure Service Bus
B. Azure Relay
C. Azure Event Grid
D. Azure Event Hub
Answer: A
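QUESTION 232 lands on Azure Service Bus for inventory distribution. For readers who have not used it, this is a minimal Python sketch of publishing an inventory message to a queue with the azure-servicebus SDK; the connection string, queue name, and message fields are hypothetical placeholders.

```python
# Hedged sketch: send one inventory-change message to a Service Bus queue.
# Connection string, queue name, and payload fields are hypothetical.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "Endpoint=sb://example.servicebus.windows.net/;..."  # hypothetical
QUEUE = "inventory-distribution"  # hypothetical queue

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        msg = ServiceBusMessage(json.dumps({"sku": "ABC-123", "delta": -2}))
        sender.send_messages(msg)  # durable, ordered queue delivery
```

Queues fit this scenario because each inventory update should be consumed exactly once by the downstream processor, whereas Event Grid and Event Hubs are aimed at broadcast and telemetry patterns.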
QUESTION 233
You are responsible for mobile app development for a company. The company develops apps on iOS and Android. You plan to integrate push notifications into every app. You need to be able to send users alerts from a backend server. Which two options can you use to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Azure Web App
B. Azure Mobile App Service
C. Azure SQL Database
D. Azure Notification Hubs
E. a virtual machine
Answer: BD

QUESTION 234
Hotspot Question
You need to design an authentication solution that will integrate on-premises Active Directory and Azure Active Directory (Azure AD). The solution must meet the following requirements:
- Active Directory users must not be able to sign in to Azure AD-integrated apps outside of the sign-in hours configured in the Active Directory user accounts.
- Active Directory users must authenticate by using multi-factor authentication (MFA) when they sign in to Azure AD-integrated apps.
- Administrators must be able to obtain Azure AD-generated reports that list the Active Directory users who have leaked credentials.
- The infrastructure required to implement and maintain the solution must be minimized.
What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:

QUESTION 235
Hotspot Question
You have an Azure subscription that contains the resources shown in the following table. You plan to deploy an Azure virtual machine that will have the following configurations:
- Name: VM1
- Azure region: Central US
- Image: Ubuntu Server 18.04 LTS
- Operating system disk size: 1 TB
- Virtual machine generation: Gen 2
- Operating system disk type: Standard SSD
You need to protect VM1 by using Azure Disk Encryption and Azure Backup. On VM1, which configurations should you change? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:

2021 Latest Braindump2go AZ-303 PDF and AZ-303 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1l4-Ncx3vdn9Ra2pN5d9Lnjv3pxbJpxZB?usp=sharing
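Rounding out the WebJob scheduling items (QUESTIONS 229 and 230 above): a triggered WebJob is scheduled by a Settings.job file inside the WebJob ZIP, carrying a six-field NCRONTAB expression (seconds first). The sketch below builds such a ZIP with Python's standard library; the file names and script body are hypothetical.

```python
# Hedged sketch: package a WebJob ZIP containing a Settings.job schedule.
# File names and the script body are hypothetical placeholders.
import json
import zipfile

with zipfile.ZipFile("webjob1.zip", "w") as zf:
    # Hypothetical WebJob script that the App Service runtime will execute.
    zf.writestr("run.py", "print('hello from WebJob1')\n")
    # {second} {minute} {hour} {day} {month} {day-of-week}: every 15 minutes.
    zf.writestr("Settings.job", json.dumps({"schedule": "0 */15 * * * *"}))
```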
(April-2021) Braindump2go 350-401 PDF and 350-401 VCE Dumps (Q409-Q433)
QUESTION 409
A customer has 20 stores located throughout a city. Each store has a single Cisco AP managed by a central WLC. The customer wants to gather analytics for users in each store. Which technique supports these requirements?
A. angle of arrival
B. presence
C. hyperlocation
D. trilateration
Answer: D

QUESTION 410
A customer has a pair of Cisco 5520 WLCs set up in an SSO cluster to manage all APs. Guest traffic is anchored to a Cisco 3504 WLC located in a DMZ. Which action is needed to ensure that the EoIP tunnel remains in an UP state in the event of failover on the SSO cluster?
A. Use the mobility MAC when the mobility peer is configured
B. Use the same mobility domain on all WLCs
C. Enable default gateway reachability check
D. Configure back-to-back connectivity on the RP ports
Answer: B

QUESTION 411
Refer to the exhibit. A network administrator configured RSPAN to troubleshoot an issue between switch1 and switch2. The switches are connected using interface GigabitEthernet 1/1. An external packet capture device is connected to switch2 interface GigabitEthernet 1/2. Which two commands must be added to complete this configuration? (Choose two.)
A. Option A
B. Option B
C. Option C
D. Option D
Answer: BD

QUESTION 412
Refer to the exhibit. Which Python code snippet prints the descriptions of disabled interfaces only?
A. Option A
B. Option B
C. Option C
D. Option D
Answer: B

QUESTION 413
Refer to the exhibit. Which outcome is achieved with this Python code?
A. displays the output of the show command in an unformatted way
B. displays the output of the show command in a formatted way
C. connects to a Cisco device using Telnet and exports the routing table information
D. connects to a Cisco device using SSH and exports the routing table information
Answer: B

QUESTION 414
Which resource is able to be shared among virtual machines deployed on the same physical server?
A. disk
B. operating system
C. VM configuration file
D. applications
Answer: A

QUESTION 415
Refer to the exhibit. An engineer must deny HTTP traffic from host A to host B while allowing all other communication between the hosts. Which command set accomplishes this task?
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A

QUESTION 416
Refer to the exhibit. An engineer must create a script that appends the output of the show process cpu sorted command to a file. Which action completes the configuration?
A. action 4.0 syslog command "show process cpu sorted | append flash:high-cpu-file"
B. action 4.0 cli command "show process cpu sorted | append flash:high-cpu-file"
C. action 4.0 ens-event "show process cpu sorted | append flash:high-cpu-file"
D. action 4.0 publish-event "show process cpu sorted | append flash:high-cpu-file"
Answer: B

QUESTION 417
Refer to the exhibit. Which action completes the configuration to achieve a dynamic continuous mapped NAT for all users?
A. Configure a match-host type NAT pool
B. Reconfigure the pool to use the 192.168.1.0 address range
C. Increase the NAT pool size to support 254 usable addresses
D. Configure a one-to-one type NAT pool
Answer: C

QUESTION 418
Which function is handled by vManage in the Cisco SD-WAN fabric?
A. Establishes BFD sessions to test liveliness of links and nodes
B. Distributes policies that govern data forwarding
C. Performs remote software upgrades for WAN Edge, vSmart, and vBond
D. Establishes IPsec tunnels with nodes
Answer: B

QUESTION 419
Refer to the exhibit. An engineer is configuring an EtherChannel between Switch1 and Switch2 and notices the console message on Switch2.
Based on the output, which action resolves this issue?
A. Configure fewer member ports on Switch2.
B. Configure the same port channel interface number on both switches.
C. Configure the same EtherChannel protocol on both switches.
D. Configure more member ports on Switch1.
Answer: B

QUESTION 420
How do cloud deployments differ from on-prem deployments?
A. Cloud deployments require longer implementation times than on-premises deployments.
B. Cloud deployments are more customizable than on-premises deployments.
C. Cloud deployments require less frequent upgrades than on-premises deployments.
D. Cloud deployments have lower upfront costs than on-premises deployments.
Answer: B

QUESTION 421
Refer to the exhibit. Extended access-list 100 is configured on interface GigabitEthernet 0/0 in an inbound direction, but it does not have the expected behavior of allowing only packets to or from 192.168.0.0/16. Which command set properly configures the access list?
A. Option A
B. Option B
C. Option C
D. Option D
Answer: D

QUESTION 422
An engineer is concerned with the deployment of a new application that is sensitive to inter-packet delay variance. Which command configures the router to be the destination of jitter measurements?
A. Router(config)# ip sla responder udp-connect 172.29.139.134 5000
B. Router(config)# ip sla responder tcp-connect 172.29.139.134 5000
C. Router(config)# ip sla responder udp-echo 172.29.139.134 5000
D. Router(config)# ip sla responder tcp-echo 172.29.139.134 5000
Answer: C

QUESTION 423
What is a characteristic of a WLC that is in master controller mode?
A. All new APs that join the WLAN are assigned to the master controller.
B. The master controller is responsible for load balancing all connecting clients to other controllers.
C. All wireless LAN controllers are managed by the master controller.
D. Configuration on the master controller is executed on all wireless LAN controllers.
Answer: A

QUESTION 424
Refer to the exhibit. The connection between SW1 and SW2 is not operational. Which two actions resolve the issue? (Choose two.)
A. configure switchport mode access on SW2
B. configure switchport nonegotiate on SW2
C. configure switchport mode trunk on SW2
D. configure switchport nonegotiate on SW1
E. configure switchport mode dynamic desirable on SW2
Answer: CE

QUESTION 425
An engineer must create an EEM applet that sends a syslog message in the event a change happens in the network due to trouble with an OSPF process. Which action should the engineer use?
A. action 1 syslog msg "OSPF ROUTING ERROR"
B. action 1 syslog send "OSPF ROUTING ERROR"
C. action 1 syslog pattern "OSPF ROUTING ERROR"
D. action 1 syslog write "OSPF ROUTING ERROR"
Answer: C

QUESTION 426
An engineer runs the sample code, and the terminal returns this output. Which change to the sample code corrects this issue?
A. Change the JSON method from load() to loads().
B. Enclose null in the test_json string in double quotes
C. Use a single set of double quotes and condense test_json to a single line
D. Call the read() method explicitly on the test_json string
Answer: D

QUESTION 427
In a Cisco DNA Center Plug and Play environment, why would a device be labeled unclaimed?
A. The device has not been assigned a workflow.
B. The device could not be added to the fabric.
C. The device had an error and could not be provisioned.
D. The device is from a third-party vendor.
Answer: A
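QUESTION 416's correct action embeds a CLI command inside an EEM applet. For readers who want to try this outside the exam, below is a hedged Python sketch that pushes such an applet with Netmiko; the host, credentials, and the SNMP CPU-threshold event that triggers the applet are illustrative assumptions, not part of the original question.

```python
# Hedged sketch: push an EEM applet whose action appends the sorted CPU
# process list to flash (per QUESTION 416). Host, credentials, and the
# SNMP trigger event are hypothetical.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",    # hypothetical router
    "username": "admin",
    "password": "changeme",
}

# Applet: when 5-minute CPU (assumed OID) reaches 80%, snapshot the process list.
eem_applet = [
    "event manager applet HIGH-CPU",
    "event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.7.1 get-type exact "
    "entry-op ge entry-val 80 poll-interval 60",
    'action 4.0 cli command "show process cpu sorted | append flash:high-cpu-file"',
]

with ConnectHandler(**device) as conn:
    output = conn.send_config_set(eem_applet)
    print(output)
```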
QUESTION 428
Which of the following statements regarding BFD are correct? (Select 2 choices.)
A. BFD is supported by OSPF, EIGRP, BGP, and IS-IS.
B. BFD detects link failures in less than one second.
C. BFD can bypass a failed peer without relying on a routing protocol.
D. BFD creates one session per routing protocol per interface.
E. BFD is supported only on physical interfaces.
F. BFD consumes more CPU resources than routing protocol timers do.
Answer: AB

QUESTION 429
An engineer measures the Wi-Fi coverage at a customer site. The RSSI values are recorded as follows: Which two statements does the engineer use to explain these values to the customer? (Choose two.)
A. The signal strength at location B is 10 dB better than location C.
B. Location D has the strongest RF signal strength.
C. The signal strength at location C is too weak to support web surfing.
D. The RF signal strength at location B is 50% weaker than location A.
E. The RF signal strength at location C is 10 times stronger than location B.
Answer: DE

QUESTION 430
What is an advantage of using BFD?
A. It detects local link failure at layer 1 and updates the routing table.
B. It detects local link failure at layer 3 and updates routing protocols.
C. It has sub-second failure detection for layer 1 and layer 3 problems.
D. It has sub-second failure detection for layer 1 and layer 2 problems.
Answer: C

QUESTION 431
Which three resources must the hypervisor make available to the virtual machines? (Choose three.)
A. memory
B. bandwidth
C. IP address
D. processor
E. storage
F. secure access
Answer: ABE

QUESTION 432
What is the function of vBond in a Cisco SD-WAN deployment?
A. initiating connections with SD-WAN routers automatically
B. pushing of configuration toward SD-WAN routers
C. onboarding of SD-WAN routers into the SD-WAN overlay
D. gathering telemetry data from SD-WAN routers
Answer: A

QUESTION 433
Which three methods does Cisco DNA Center use to discover devices? (Choose three.)
A. CDP
B. SNMP
C. LLDP
D. Ping
E. NETCONF
F. specified range of IP addresses
Answer: ACF

2021 Latest Braindump2go 350-401 PDF and 350-401 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1EIsykNTrKvqjDVs9JMySv052qbrCpe8V?usp=sharing
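Finally, QUESTION 412 asks for Python that prints the descriptions of disabled interfaces only. The exhibit is not reproduced in this dump, so the sketch below assumes RESTCONF-style ietf-interfaces JSON; the sample payload is an illustrative assumption, but the filtering idiom is the point of the question.

```python
# Hedged sketch for the QUESTION 412 pattern: keep only interfaces whose
# "enabled" flag is false and print their descriptions. The payload below is
# a hypothetical stand-in for the missing exhibit.
interfaces_json = {
    "ietf-interfaces:interfaces": {
        "interface": [
            {"name": "Gi1", "description": "uplink", "enabled": True},
            {"name": "Gi2", "description": "unused port", "enabled": False},
        ]
    }
}

for intf in interfaces_json["ietf-interfaces:interfaces"]["interface"]:
    if not intf["enabled"]:  # keep only shut/disabled interfaces
        print(intf["description"])
```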