akayhelp
(April-2021)Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps(Q29-Q49)
QUESTION 29
A user wants to create a super metric and apply it to a custom group to capture the total of CPU Demand (MHz) of virtual machines that are children of the custom group. Which super metric function would be used to accomplish this?
A. Average
B. Max
C. Sum
D. Count
Answer: C

QUESTION 30
Review the exhibit. When the Cluster Metric Load or Cluster Object Load exceeds 100%, what is the next step a vRealize Operations administrator should take?
A. Reduce the vRealize Operations data retention time.
B. Add an additional vRealize Operations data node.
C. Increase vRealize Operations polling time.
D. Remove a vCenter from the vSphere management pack.
Answer: B

QUESTION 31
Which object attributes are used in vRealize Operations Compliance analysis?
A. tags
B. properties
C. user access lists
D. host profiles
Answer: B

QUESTION 32
Based on the highlighted HIPAA compliance template above, how many hosts are in a compliant state?
A. 5
B. 24
C. 29
D. 31
Answer: A

QUESTION 33
How can vRealize Operations tags be used?
A. to be dynamically assigned to objects
B. to group virtual machines in vCenter
C. to set object access controls
D. to filter objects within dashboard widgets
Answer: B

QUESTION 34
The default collection cycle is set. When changing the Cluster Time Remaining settings, how long will it take before time remaining and risk level are recalculated?
A. 5 minutes
B. 1 hour
C. 12 hours
D. 24 hours
Answer: A

QUESTION 35
What is a prerequisite for using Business Intent?
A. DRS clusters
B. storage policies
C. vSphere 6.7
D. vCenter tags
Answer: D

QUESTION 36
What can be configured within a policy?
A. alert notifications
B. symptom definition threshold overrides
C. custom group membership criteria
D. symptom definition operator overrides
Answer: B

QUESTION 37
Which organizational construct within vRealize Operations has user-configured dynamic membership criteria?
A. Resource Pool
B. Tags
C. Custom group
D. Custom Datacenter
Answer: C

QUESTION 38
How should a remote collector be added to a vRealize Operations installation?
A. Log in as Admin on a master node and enable High Availability.
B. Open the Setup Wizard from the login page.
C. Navigate to a newly deployed node and click Expand an Existing Installation.
D. Navigate to the Admin interface of a data node.
Answer: C

QUESTION 39
Refer to the exhibit. How is vSphere Usable Capacity calculated?
A. Demand plus Reservation
B. Total Capacity minus High Availability
C. Total Capacity minus Overhead
D. Demand plus High Availability
Answer: B

QUESTION 40
A view is created in vRealize Operations to track virtual machine maximum and average contention for the past thirty days. Which method is used to enhance the view to easily spot VMs with high contention values?
A. Set a tag on virtual machines and filter on the tag.
B. Edit the view and set filters for the transformation value maximum and average contention.
C. Create a custom group to dynamically track virtual machines.
D. Configure Metric Coloring in the Advanced Settings of the view.
Answer: C

QUESTION 41
Refer to the exhibit. A user has installed and configured the Telegraf agent on a Windows domain controller. No application data is being collected. Which two actions should the user take to see the application data? (Choose two.)
A. Verify the vCenter adapter collection status.
B. Re-configure the agent on the Windows virtual machine manually.
C. Verify Active Directory Service status.
D. Configure ICMP Remote Check.
E. Validate time synchronization between vRealize Application Remote Collector and vRealize Operations.
Answer: AE

QUESTION 42
Which dashboard widget provides a two-dimensional relationship?
A. Heat Map
B. Object Selector
C. Scoreboard
D. Top N
Answer: A

QUESTION 43
What must an administrator do to use the Troubleshoot with Logs Dashboard in vRealize Operations?
A. Configure the vRealize Log Insight agent.
B. Enable Log Forwarding within vRealize Operations.
C. Configure vRealize Operations within vRealize Log Insight.
D. Configure symptoms and alerts within vRealize Operations.
Answer: C

QUESTION 44
vRealize Operations places a tagless virtual machine on a tagged host. Which setting causes this behavior?
A. Host-Based Business Intent
B. Consolidated Operational Intent
C. Balanced Operational Intent
D. Cluster-Based Business Intent
Answer: A

QUESTION 45
The default collection cycle is set. How often are cost calculations run?
A. every 5 minutes
B. daily
C. weekly
D. monthly
Answer: B

QUESTION 46
vRealize Operations is actively collecting data from vCenter, and the entire inventory is licensed. Why would backup VMDKs of an active virtual machine in the vCenter appear in Orphaned Disks?
A. They are related to the VM.
B. They are named the same as the VM.
C. They are not in vCenter inventory.
D. They are not actively being utilized.
Answer: C

QUESTION 47
In which two locations should all nodes be when deploying an analytics node? (Choose two.)
A. same data center
B. same vCenter
C. remote data center
D. same subnet
E. different subnet
Answer: AD

QUESTION 48
Which type of view allows a user to provide tabular data about specific objects?
A. Distribution
B. Text
C. List
D. Trend
Answer: C

QUESTION 49
Which Operational Intent setting drives maximum application performance by avoiding resource spikes?
A. Moderate
B. Consolidate
C. Over provision
D. Balance
Answer: B

2021 Latest Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1i-g5X8oxKPFi-1oyAVi68bVlC5njt8PF?usp=sharing
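QUESTION 29 turns on what the Sum super metric function actually computes. As a toy illustration only (plain Python, not vRealize super metric syntax), summing a metric across the children of a group looks like this; the VM names and MHz values are hypothetical:

```python
# Toy illustration of QUESTION 29's Sum super metric (NOT vRealize
# syntax): total a metric across the child objects of a custom group.
# The VM names and CPU Demand (MHz) values below are hypothetical.

def super_metric_sum(children):
    """Total of a metric across a group's child objects."""
    return sum(children.values())

cpu_demand_mhz = {"vm-web01": 850.0, "vm-db01": 1200.0, "vm-app01": 430.0}
print(super_metric_sum(cpu_demand_mhz))  # 2480.0
```

Average, Max, and Count would swap `sum()` for the corresponding aggregate, which is why Sum is the function that yields a total.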
(April-2021)Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps(Q88-Q113)
QUESTION 88
An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code. A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application. How should the data analyst meet this requirement while minimizing costs?
A. Store the contents of the mapping file in an Amazon DynamoDB table. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Change the SQL query in the application to include the new field in the SELECT statement.
B. Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the file's S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.
C. Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.
D. Store the contents of the mapping file in an Amazon DynamoDB table. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Forward the record from the Lambda function to the original application destination.
Answer: C

QUESTION 89
A company has collected more than 100 TB of log files in the last 24 months.
The files are stored as raw text in a dedicated Amazon S3 bucket. Each object has a key of the form year-month-day_log_HHmmss.txt, where HHmmss represents the time the log file was initially created. A table was created in Amazon Athena that points to the S3 bucket. One-time queries are run against a subset of columns in the table several times an hour. A data analyst must make changes to reduce the cost of running these queries. Management wants a solution with minimal maintenance overhead. Which combination of steps should the data analyst take to meet these requirements? (Choose three.)
A. Convert the log files to Apache Avro format.
B. Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
C. Convert the log files to Apache Parquet format.
D. Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
E. Drop and recreate the table with the PARTITIONED BY clause. Run the ALTER TABLE ADD PARTITION statement.
F. Drop and recreate the table with the PARTITIONED BY clause. Run the MSCK REPAIR TABLE statement.
Answer: BCF

QUESTION 90
A company has an application that ingests streaming data. The company needs to analyze this stream over a 5-minute timeframe to evaluate the stream for anomalies with Random Cut Forest (RCF) and summarize the current count of status codes. The source and summarized data should be persisted for future use. Which approach would enable the desired outcome while keeping data persistence costs low?
A. Ingest the data stream with Amazon Kinesis Data Streams. Have an AWS Lambda consumer evaluate the stream, collect the number of status codes, and evaluate the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
B. Ingest the data stream with Amazon Kinesis Data Streams. Have a Kinesis Data Analytics application evaluate the stream over a 5-minute window using the RCF function and summarize the count of status codes.
Persist the source and results to Amazon S3 through output delivery to Kinesis Data Firehose.
C. Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 1 minute or 1 MB in Amazon S3. Ensure Amazon S3 triggers an event to invoke an AWS Lambda consumer that evaluates the batch data, collects the number of status codes, and evaluates the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
D. Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 5 minutes or 1 MB into Amazon S3. Have a Kinesis Data Analytics application evaluate the stream over a 1-minute window using the RCF function and summarize the count of status codes. Persist the results to Amazon S3 through a Kinesis Data Analytics output to an AWS Lambda integration.
Answer: B

QUESTION 91
An online retailer needs to deploy a product sales reporting solution. The source data is exported from an external online transaction processing (OLTP) system for reporting. Roll-up data is calculated each day for the previous day's activities. The reporting system has the following requirements:
- Have the daily roll-up data readily available for 1 year.
- After 1 year, archive the daily roll-up data for occasional but immediate access.
- The source data exports stored in the reporting system must be retained for 5 years. Query access will be needed only for re-evaluation, which may occur within the first 90 days.
Which combination of actions will meet these requirements while keeping storage costs to a minimum? (Choose two.)
A. Store the source data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
B. Store the source data initially in the Amazon S3 Glacier storage class.
Apply a lifecycle configuration that changes the storage class from Amazon S3 Glacier to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
C. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 1 year after data creation.
D. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) 1 year after data creation.
E. Store the daily roll-up data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier 1 year after data creation.
Answer: BE

QUESTION 92
A company needs to store objects containing log data in JSON format. The objects are generated by eight applications running in AWS. Six of the applications generate a total of 500 KiB of data per second, and two of the applications can generate up to 2 MiB of data per second. A data engineer wants to implement a scalable solution to capture and store usage data in an Amazon S3 bucket. The usage data objects need to be reformatted, converted to .csv format, and then compressed before they are stored in Amazon S3. The company requires the solution to include the least custom code possible and has authorized the data engineer to request a service quota increase if needed. Which solution meets these requirements?
A. Configure an Amazon Kinesis Data Firehose delivery stream for each application. Write AWS Lambda functions to read log data objects from the stream for each application. Have the function perform reformatting and .csv conversion. Enable compression on all the delivery streams.
B. Configure an Amazon Kinesis data stream with one shard per application.
Write an AWS Lambda function to read usage data objects from the shards. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
C. Configure an Amazon Kinesis data stream for each application. Write an AWS Lambda function to read usage data objects from the stream for each application. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
D. Store usage data objects in an Amazon DynamoDB table. Configure a DynamoDB stream to copy the objects to an S3 bucket. Configure an AWS Lambda function to be triggered when objects are written to the S3 bucket. Have the function convert the objects into .csv format.
Answer: B

QUESTION 93
A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing. Which AWS Glue feature should the data analytics specialist use to meet this requirement?
A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers
Answer: B

QUESTION 94
A telecommunications company is looking for an anomaly-detection solution to identify fraudulent calls. The company currently uses Amazon Kinesis to stream voice call records in a JSON format from its on-premises database to Amazon S3. The existing dataset contains voice call records with 200 columns. To detect fraudulent calls, the solution would need to look at only 5 of these columns. The company is interested in a cost-effective solution using AWS that requires minimal effort and experience in anomaly-detection algorithms. Which solution meets these requirements?
A. Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon Athena to create a table with a subset of columns.
Use Amazon QuickSight to visualize the data and then use Amazon QuickSight machine learning-powered anomaly detection.
B. Use Kinesis Data Firehose to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls and store the output in Amazon RDS. Use Amazon Athena to build a dataset and Amazon QuickSight to visualize the results.
C. Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon SageMaker to build an anomaly detection model that can detect fraudulent calls by ingesting data from Amazon S3.
D. Use Kinesis Data Analytics to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls. Connect Amazon QuickSight to Kinesis Data Analytics to visualize the anomaly scores.
Answer: A

QUESTION 95
An online retailer is rebuilding its inventory management system and inventory reordering system to automatically reorder products by using Amazon Kinesis Data Streams. The inventory management system uses the Kinesis Producer Library (KPL) to publish data to a stream. The inventory reordering system uses the Kinesis Client Library (KCL) to consume data from the stream. The stream has been configured to scale as needed. Just before production deployment, the retailer discovers that the inventory reordering system is receiving duplicated data. Which factors could be causing the duplicated data? (Choose two.)
A. The producer has a network-related timeout.
B. The stream's value for the IteratorAgeMilliseconds metric is too high.
C. There was a change in the number of shards, record processors, or both.
D. The AggregationEnabled configuration property was set to true.
E. The max_records configuration property was set to a number that is too high.
Answer: BD

QUESTION 96
A large retailer has successfully migrated to an Amazon S3 data lake architecture.
The company's marketing team is using Amazon Redshift and Amazon QuickSight to analyze data, and derive and visualize insights. To ensure the marketing team has the most up-to-date actionable information, a data analyst implements nightly refreshes of Amazon Redshift using terabytes of updates from the previous day. After the first nightly refresh, users report that half of the most popular dashboards that had been running correctly before the refresh are now running much slower. Amazon CloudWatch does not show any alerts. What is the MOST likely cause of the performance degradation?
A. The dashboards are suffering from inefficient SQL queries.
B. The cluster is undersized for the queries being run by the dashboards.
C. The nightly data refreshes are causing a lingering transaction that cannot be automatically closed by Amazon Redshift due to ongoing user workloads.
D. The nightly data refreshes left the dashboard tables in need of a vacuum operation that could not be automatically performed by Amazon Redshift due to ongoing user workloads.
Answer: B

QUESTION 97
A marketing company is storing its campaign response data in Amazon S3. A consistent set of sources has generated the data for each campaign. The data is saved into Amazon S3 as .csv files. A business analyst will use Amazon Athena to analyze each campaign's data. The company needs the cost of ongoing data analysis with Athena to be minimized. Which combination of actions should a data analytics specialist take to meet these requirements? (Choose two.)
A. Convert the .csv files to Apache Parquet.
B. Convert the .csv files to Apache Avro.
C. Partition the data by campaign.
D. Partition the data by source.
E. Compress the .csv files.
Answer: BC

QUESTION 98
An online retail company is migrating its reporting system to AWS. The company's legacy system runs data processing on online transactions using a complex series of nested Apache Hive queries.
Transactional data is exported from the online system to the reporting system several times a day. Schemas in the files are stable between updates. A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To keep storage costs low, the data analyst decides to store the data in Amazon S3. It is vital that the data from the reports and associated analytics is completely up to date based on the data in Amazon S3. Which solution meets these requirements?
A. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an AWS Glue crawler over Amazon S3 that runs when data is refreshed to ensure that data changes are updated. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
B. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an Amazon EMR cluster with consistent view enabled. Run emrfs sync before each analytics step to ensure data changes are updated. Create an EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
C. Create an Amazon Athena table with CREATE TABLE AS SELECT (CTAS) to ensure data is refreshed from underlying queries against the raw dataset. Create an AWS Glue Data Catalog to manage the Hive metadata over the CTAS table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
D. Use an S3 Select query to ensure that the data is properly updated. Create an AWS Glue Data Catalog to manage the Hive metadata over the S3 Select table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
Answer: A

QUESTION 99
A media company is using Amazon QuickSight dashboards to visualize its national sales data.
The dashboard is using a dataset with these fields: ID, date, time_zone, city, state, country, longitude, latitude, sales_volume, and number_of_items. To modify ongoing campaigns, the company wants an interactive and intuitive visualization of which states across the country recorded a significantly lower sales volume compared to the national average. Which addition to the company's QuickSight dashboard will meet this requirement?
A. A geospatial color-coded chart of sales volume data across the country.
B. A pivot table of sales volume data summed up at the state level.
C. A drill-down layer for state-level sales volume data.
D. A drill through to other dashboards containing state-level sales volume data.
Answer: B

QUESTION 100
A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company's business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics. The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team's goals with the least operational overhead. Which solution meets these requirements?
A. Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process. Use the AWS Glue crawler to catalog the data in Amazon S3. Use an AWS Glue job to enrich and store the result in a separate S3 bucket in Apache Parquet format. Use Amazon Athena to query the data.
B. Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RDS. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3. Use Amazon Athena to query the data.
C. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Create an Amazon Redshift cluster and use Amazon Redshift Spectrum to query the data.
D. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Use Amazon Athena to query the data.
Answer: B

QUESTION 101
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds. Which architecture meets these requirements?
A. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
B. Use an AWS Lambda function to read from the Kinesis data stream to calculate the average per second and send the alarm to Amazon SNS.
C. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
D. Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.
Answer: C

QUESTION 102
An IoT company wants to release a new device that will collect data to track sleep overnight on an intelligent mattress.
Sensors will send data that will be uploaded to an Amazon S3 bucket. About 2 MB of data is generated each night for each bed. Data must be processed and summarized for each user, and the results need to be available as soon as possible. Part of the process consists of time windowing and other functions. Based on tests with a Python script, every run will require about 1 GB of memory and will complete within a couple of minutes. Which solution will run the script in the MOST cost-effective way?
A. AWS Lambda with a Python script
B. AWS Glue with a Scala job
C. Amazon EMR with an Apache Spark script
D. AWS Glue with a PySpark job
Answer: A

QUESTION 103
A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift. The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when a load spike occurs, locks can exist and data can be missed. Currently, the AWS Glue job is configured to run without retries, with a timeout of 5 minutes and concurrency of 1. How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?
A. Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
B. Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
C. Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
D. Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.
Answer: B

QUESTION 104
A retail company leverages Amazon Athena for ad-hoc queries against an AWS Glue Data Catalog.
The data analytics team manages the data catalog and data access for the company. The data analytics team wants to separate queries and manage the cost of running those queries by different workloads and teams. Ideally, the data analysts want to group the queries run by different users within a team, store the query results in individual Amazon S3 buckets specific to each team, and enforce cost constraints on the queries run against the Data Catalog. Which solution meets these requirements?
A. Create IAM groups and resource tags for each team within the company. Set up IAM policies that control user access and actions on the Data Catalog resources.
B. Create Athena resource groups for each team within the company and assign users to these groups. Add S3 bucket names and other query configurations to the properties list for the resource groups.
C. Create Athena workgroups for each team within the company. Set up IAM workgroup policies that control user access and actions on the workgroup resources.
D. Create Athena query groups for each team within the company and assign users to the groups.
Answer: A

QUESTION 105
A manufacturing company uses Amazon S3 to store its data. The company wants to use AWS Lake Formation to provide granular-level security on those data assets. The data is in Apache Parquet format. The company has set a deadline for a consultant to build a data lake. How should the consultant create the MOST cost-effective solution that meets these requirements?
A. Run Lake Formation blueprints to move the data to Lake Formation. Once Lake Formation has the data, apply permissions on Lake Formation.
B. To create the data catalog, run an AWS Glue crawler on the existing Parquet data. Register the Amazon S3 path and then apply permissions through Lake Formation to provide granular-level security.
C. Install Apache Ranger on an Amazon EC2 instance and integrate it with Amazon EMR.
Using Ranger policies, create role-based access control for the existing data assets in Amazon S3.
D. Create multiple IAM roles for different users and groups. Assign IAM roles to different data assets in Amazon S3 to create table-based and column-based access controls.
Answer: C

QUESTION 106
A company has an application that uses the Amazon Kinesis Client Library (KCL) to read records from a Kinesis data stream. After a successful marketing campaign, the application experienced a significant increase in usage. As a result, a data analyst had to split some shards in the data stream. When the shards were split, the application started throwing ExpiredIteratorException errors sporadically. What should the data analyst do to resolve this?
A. Increase the number of threads that process the stream records.
B. Increase the provisioned read capacity units assigned to the stream's Amazon DynamoDB table.
C. Increase the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.
D. Decrease the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.
Answer: C

QUESTION 107
A company is building a service to monitor fleets of vehicles. The company collects IoT data from a device in each vehicle and loads the data into Amazon Redshift in near-real time. Fleet owners upload .csv files containing vehicle reference data into Amazon S3 at different times throughout the day. A nightly process loads the vehicle reference data from Amazon S3 into Amazon Redshift. The company joins the IoT data from the device and the vehicle reference data to power reporting and dashboards. Fleet owners are frustrated by waiting a day for the dashboards to update. Which solution would provide the SHORTEST delay between uploading reference data to Amazon S3 and the change showing up in the owners' dashboards?
A. Use S3 event notifications to trigger an AWS Lambda function to copy the vehicle reference data into Amazon Redshift immediately when the reference data is uploaded to Amazon S3.
B. Create and schedule an AWS Glue Spark job to run every 5 minutes. The job inserts reference data into Amazon Redshift.
C. Send reference data to Amazon Kinesis Data Streams. Configure the Kinesis data stream to directly load the reference data into Amazon Redshift in real time.
D. Send the reference data to an Amazon Kinesis Data Firehose delivery stream. Configure Kinesis with a buffer interval of 60 seconds and to directly load the data into Amazon Redshift.
Answer: A

QUESTION 108
A company is migrating from an on-premises Apache Hadoop cluster to an Amazon EMR cluster. The cluster runs only during business hours. Due to a company requirement to avoid intraday cluster failures, the EMR cluster must be highly available. When the cluster is terminated at the end of each business day, the data must persist. Which configurations would enable the EMR cluster to meet these requirements? (Choose three.)
A. EMR File System (EMRFS) for storage
B. Hadoop Distributed File System (HDFS) for storage
C. AWS Glue Data Catalog as the metastore for Apache Hive
D. MySQL database on the master node as the metastore for Apache Hive
E. Multiple master nodes in a single Availability Zone
F. Multiple master nodes in multiple Availability Zones
Answer: BCF

QUESTION 109
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users. The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying.
The total size of the uncompressed data that the dashboards query from at any point is 200 GB. Which configuration will provide the MOST cost-effective solution that meets these requirements? A.Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000 reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option. B.Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source with a direct query option. C.Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source and import the data into SPICE. Automatically refresh every 24 hours. D.Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source and import the data into SPICE. Automatically refresh every 24 hours. Answer: C QUESTION 110 A central government organization is collecting events from various internal applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to separate the data. For security reasons, the Kafka cluster has been configured to only allow TLS encrypted data and it encrypts the data at rest. A recent application update showed that one of the applications was configured incorrectly, resulting in writing data to a Kafka topic that belongs to another application. This resulted in multiple errors in the analytics pipeline as data from different applications appeared on the same topic. After this incident, the organization wants to prevent applications from writing to a topic different than the one they should write to. Which solution meets these requirements with the least amount of effort? A.Create a different Amazon EC2 security group for each application. 
Configure each security group to have access to a specific topic in the Amazon MSK cluster. Attach the security group to each application based on the topic that the applications should read and write to. B.Install Kafka Connect on each application instance and configure each Kafka Connect instance to write to a specific topic only. C.Use Kafka ACLs and configure read and write permissions for each topic. Use the distinguished name of the clients' TLS certificates as the principal of the ACL. D.Create a different Amazon EC2 security group for each application. Create an Amazon MSK cluster and Kafka topic for each application. Configure each security group to have access to the specific cluster. Answer: B QUESTION 111 A company wants to collect and process events data from different departments in near-real time. Before storing the data in Amazon S3, the company needs to clean the data by standardizing the format of the address and timestamp columns. The data varies in size based on the overall load at each particular point in time. A single data record can be 100 KB-10 MB. How should a data analytics specialist design the solution for data ingestion? A.Use Amazon Kinesis Data Streams. Configure a stream for the raw data. Use a Kinesis Agent to write data to the stream. Create an Amazon Kinesis Data Analytics application that reads data from the raw stream, cleanses it, and stores the output to Amazon S3. B.Use Amazon Kinesis Data Firehose. Configure a Firehose delivery stream with a preprocessing AWS Lambda function for data cleansing. Use a Kinesis Agent to write data to the delivery stream. Configure Kinesis Data Firehose to deliver the data to Amazon S3. C.Use Amazon Managed Streaming for Apache Kafka. Configure a topic for the raw data. Use a Kafka producer to write data to the topic. Create an application on Amazon EC2 that reads data from the topic by using the Apache Kafka consumer API, cleanses the data, and writes to Amazon S3. 
D.Use Amazon Simple Queue Service (Amazon SQS). Configure an AWS Lambda function to read events from the SQS queue and upload the events to Amazon S3. Answer: B QUESTION 112 An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations. Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: "Command Failed with Exit Code 1." Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs crosses the safe threshold of 50% usage quickly and reaches 90-95% soon after. The average memory usage across all executors continues to be less than 4%. The data engineer also notices the following error while examining the related Amazon CloudWatch Logs. What should the data engineer do to solve the failure in the MOST cost-effective way? A.Change the worker type from Standard to G.2X. B.Modify the AWS Glue ETL code to use the 'groupFiles': 'inPartition' feature. C.Increase the fetch size setting by using AWS Glue dynamic frames. D.Modify maximum capacity to increase the total maximum data processing units (DPUs) used. Answer: D QUESTION 113 A transport company wants to track vehicular movements by capturing geolocation records. The records are 10 B in size and up to 10,000 records are captured each second. Data transmission delays of a few minutes are acceptable, considering unreliable network conditions. The transport company decided to use Amazon Kinesis Data Streams to ingest the data. The company is looking for a reliable mechanism to send data to Kinesis Data Streams while maximizing the throughput efficiency of the Kinesis shards. Which solution will meet the company's requirements?
A.Kinesis Agent B.Kinesis Producer Library (KPL) C.Kinesis Data Firehose D.Kinesis SDK Answer: B 2021 Latest Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps Free Share: https://drive.google.com/drive/folders/1WbSRm3ZlrRzjwyqX7auaqgEhLLzmD-2w?usp=sharing
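A quick back-of-the-envelope check of why the Kinesis Producer Library is the right answer to Question 113 above: without aggregation, the per-shard limit of 1,000 records per second dominates for tiny 10 B records, while KPL aggregation packs many user records into larger Kinesis records so only the byte limit matters. The sketch below assumes the published per-shard ingest quotas (1 MiB/s and 1,000 records/s); the 500-record batch size is an illustrative assumption, not a KPL default.

```python
import math

# Workload from Question 113 and assumed per-shard Kinesis quotas.
RECORD_SIZE_B = 10
RECORDS_PER_SEC = 10_000
SHARD_BYTES_PER_SEC = 1024 * 1024   # 1 MiB/s ingest per shard
SHARD_RECORDS_PER_SEC = 1_000       # 1,000 records/s ingest per shard

def shards_without_aggregation():
    # Each tiny record counts individually against the records/s limit.
    by_records = math.ceil(RECORDS_PER_SEC / SHARD_RECORDS_PER_SEC)
    by_bytes = math.ceil(RECORD_SIZE_B * RECORDS_PER_SEC / SHARD_BYTES_PER_SEC)
    return max(by_records, by_bytes)

def shards_with_kpl_aggregation(batch_records=500):
    # The KPL packs many user records into one Kinesis record,
    # so the records/s limit applies to the aggregated PUTs only.
    puts_per_sec = math.ceil(RECORDS_PER_SEC / batch_records)
    by_records = math.ceil(puts_per_sec / SHARD_RECORDS_PER_SEC)
    by_bytes = math.ceil(RECORD_SIZE_B * RECORDS_PER_SEC / SHARD_BYTES_PER_SEC)
    return max(by_records, by_bytes)

print(shards_without_aggregation(), shards_with_kpl_aggregation())
```

Under these assumptions the raw workload would need 10 shards purely because of the record-count limit, while the aggregated workload (only 100 KB/s of data) fits in a single shard, which is the throughput efficiency the question is after.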
(April-2021)Braindump2go 350-401 PDF and 350-401 VCE Dumps(Q409-Q433)
QUESTION 409 A customer has 20 stores located throughout a city. Each store has a single Cisco AP managed by a central WLC. The customer wants to gather analytics for users in each store. Which technique supports these requirements? A.angle of arrival B.presence C.hyperlocation D.trilateration Answer: D QUESTION 410 A customer has a pair of Cisco 5520 WLCs set up in an SSO cluster to manage all APs. Guest traffic is anchored to a Cisco 3504 WLC located in a DMZ. Which action is needed to ensure that the EoIP tunnel remains in an UP state in the event of failover on the SSO cluster? A.Use the mobility MAC when the mobility peer is configured B.Use the same mobility domain on all WLCs C.Enable default gateway reachability check D.Configure back-to-back connectivity on the RP ports Answer: B QUESTION 411 Refer to the exhibit. A network administrator configured RSPAN to troubleshoot an issue between switch1 and switch2. The switches are connected using interface GigabitEthernet 1/1. An external packet capture device is connected to switch2 interface GigabitEthernet1/2. Which two commands must be added to complete this configuration? (Choose two) A.Option A B.Option B C.Option C D.Option D Answer: BD QUESTION 412 Refer to the exhibit. Which Python code snippet prints the descriptions of disabled interfaces only? A.Option A B.Option B C.Option C D.Option D Answer: B QUESTION 413 Refer to the exhibit. Which outcome is achieved with this Python code? A.displays the output of the show command in an unformatted way B.displays the output of the show command in a formatted way C.connects to a Cisco device using Telnet and exports the routing table information D.connects to a Cisco device using SSH and exports the routing table information Answer: B QUESTION 414 Which resource is able to be shared among virtual machines deployed on the same physical server? A.disk B.operating system C.VM configuration file D.applications Answer: A QUESTION 415 Refer to the exhibit.
An engineer must deny HTTP traffic from host A to host B while allowing all other communication between the hosts. Which command set accomplishes this task? A.Option A B.Option B C.Option C D.Option D Answer: A QUESTION 416 Refer to the exhibit. An engineer must create a script that appends the output of the show process cpu sorted command to a file. Which action completes the configuration? A.action 4.0 syslog command "show process cpu sorted | append flash:high-cpu-file" B.action 4.0 cli command "show process cpu sorted | append flash:high-cpu-file" C.action 4.0 ens-event "show process cpu sorted | append flash:high-cpu-file" D.action 4.0 publish-event "show process cpu sorted | append flash:high-cpu-file" Answer: B QUESTION 417 Refer to the exhibit. Which action completes the configuration to achieve a dynamic continuous mapped NAT for all users? A.Configure a match-host type NAT pool B.Reconfigure the pool to use the 192.168.1.0 address range C.Increase the NAT pool size to support 254 usable addresses D.Configure a one-to-one type NAT pool Answer: C QUESTION 418 Which function is handled by vManage in the Cisco SD-WAN fabric? A.Establishes BFD sessions to test liveliness of links and nodes B.Distributes policies that govern data forwarding C.Performs remote software upgrades for WAN Edge, vSmart, and vBond D.Establishes IPsec tunnels with nodes. Answer: B QUESTION 419 Refer to the exhibit. An engineer is configuring an EtherChannel between Switch1 and Switch2 and notices the console message on Switch2. Based on the output, which action resolves this issue? A.Configure fewer member ports on Switch2. B.Configure the same port channel interface number on both switches C.Configure the same EtherChannel protocol on both switches D.Configure more member ports on Switch1. Answer: B QUESTION 420 How do cloud deployments differ from on-prem deployments?
A.Cloud deployments require longer implementation times than on-premises deployments B.Cloud deployments are more customizable than on-premises deployments. C.Cloud deployments require less frequent upgrades than on-premises deployments. D.Cloud deployments have lower upfront costs than on-premises deployments. Answer: B QUESTION 421 Refer to the exhibit. Extended access-list 100 is configured on interface GigabitEthernet 0/0 in an inbound direction, but it does not have the expected behavior of allowing only packets to or from 192.168.0.0/16. Which command set properly configures the access list? A.Option A B.Option B C.Option C D.Option D Answer: D QUESTION 422 An engineer is concerned with the deployment of a new application that is sensitive to inter-packet delay variance. Which command configures the router to be the destination of jitter measurements? A.Router(config)# ip sla responder udp-connect 172.29.139.134 5000 B.Router(config)# ip sla responder tcp-connect 172.29.139.134 5000 C.Router(config)# ip sla responder udp-echo 172.29.139.134 5000 D.Router(config)# ip sla responder tcp-echo 172.29.139.134 5000 Answer: C QUESTION 423 What is a characteristic of a WLC that is in master controller mode? A.All new APs that join the WLAN are assigned to the master controller. B.The master controller is responsible for load balancing all connecting clients to other controllers. C.All wireless LAN controllers are managed by the master controller. D.Configuration on the master controller is executed on all wireless LAN controllers. Answer: A QUESTION 424 Refer to the exhibit. The connection between SW1 and SW2 is not operational. Which two actions resolve the issue? (Choose two.)
A.configure switchport mode access on SW2 B.configure switchport nonegotiate on SW2 C.configure switchport mode trunk on SW2 D.configure switchport nonegotiate on SW1 E.configure switchport mode dynamic desirable on SW2 Answer: CE QUESTION 425 An engineer must create an EEM applet that sends a syslog message in the event a change happens in the network due to trouble with an OSPF process. Which action should the engineer use? A.action 1 syslog msg "OSPF ROUTING ERROR" B.action 1 syslog send "OSPF ROUTING ERROR" C.action 1 syslog pattern "OSPF ROUTING ERROR" D.action 1 syslog write "OSPF ROUTING ERROR" Answer: C QUESTION 426 An engineer runs the sample code, and the terminal returns this output. Which change to the sample code corrects this issue? A.Change the JSON method from load() to loads(). B.Enclose null in the test_json string in double quotes C.Use a single set of double quotes and condense test_json to a single line D.Call the read() method explicitly on the test_json string Answer: D QUESTION 427 In a Cisco DNA Center Plug and Play environment, why would a device be labeled unclaimed? A.The device has not been assigned a workflow. B.The device could not be added to the fabric. C.The device had an error and could not be provisioned. D.The device is from a third-party vendor. Answer: A QUESTION 428 Which of the following statements regarding BFD are correct? (Select 2 choices.) A.BFD is supported by OSPF, EIGRP, BGP, and IS-IS. B.BFD detects link failures in less than one second. C.BFD can bypass a failed peer without relying on a routing protocol. D.BFD creates one session per routing protocol per interface. E.BFD is supported only on physical interfaces. F.BFD consumes more CPU resources than routing protocol timers do. Answer: AB QUESTION 429 An engineer measures the Wi-Fi coverage at a customer site. The RSSI values are recorded as follows: Which two statements does the engineer use to explain these values to the customer?
(Choose two) A.The signal strength at location B is 10 dB better than location C. B.Location D has the strongest RF signal strength. C.The signal strength at location C is too weak to support web surfing. D.The RF signal strength at location B is 50% weaker than location A. E.The RF signal strength at location C is 10 times stronger than location B. Answer: DE QUESTION 430 What is an advantage of using BFD? A.It detects local link failure at layer 1 and updates the routing table B.It detects local link failure at layer 3 and updates routing protocols C.It has sub-second failure detection for layer 1 and layer 3 problems. D.It has sub-second failure detection for layer 1 and layer 2 problems. Answer: C QUESTION 431 Which three resources must the hypervisor make available to the virtual machines? (Choose three) A.memory B.bandwidth C.IP address D.processor E.storage F.secure access Answer: ABE QUESTION 432 What is the function of vBond in a Cisco SD-WAN deployment? A.initiating connections with SD-WAN routers automatically B.pushing of configuration toward SD-WAN routers C.onboarding of SD-WAN routers into the SD-WAN overlay D.gathering telemetry data from SD-WAN routers Answer: A QUESTION 433 Which three methods does Cisco DNA Center use to discover devices? (Choose three.) A.CDP B.SNMP C.LLDP D.Ping E.NETCONF F.specified range of IP addresses Answer: ACF 2021 Latest Braindump2go 350-401 PDF and 350-401 VCE Dumps Free Share: https://drive.google.com/drive/folders/1EIsykNTrKvqjDVs9JMySv052qbrCpe8V?usp=sharing
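The RSSI comparison question above (Question 429) rests on simple decibel arithmetic: a 3 dB drop corresponds to roughly half the power, and a 10 dB difference is exactly a tenfold power ratio. A minimal sketch of the conversion:

```python
def db_to_power_ratio(delta_db):
    # A difference of delta_db decibels corresponds to a
    # power ratio of 10 ** (delta_db / 10).
    return 10 ** (delta_db / 10)

# -3 dB is roughly half the power ("50% weaker"),
# +10 dB is exactly ten times the power.
half = db_to_power_ratio(-3)
tenfold = db_to_power_ratio(10)
print(half, tenfold)
```

So an RSSI reading 3 dB below another location means about 50% of the received power, and one 10 dB above means 10 times the power, which is exactly the reasoning behind answers D and E.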
Top 20 Fastest Double Centuries in Test Cricket.
Test cricket is considered a slightly slower game than the ODI and T20 cricket formats, but over the last few decades this format has seen significant changes. In the present environment of Test cricket, 300 to 350 runs can quite comfortably be scored in a single day's play. Many batsmen around the world have also scored double centuries off relatively few balls in this format. In this article, we will explain the top 20 fastest double centuries in Test cricket. Top 20 cricketers who scored the fastest double centuries in Test cricket. 01. Nathan Astle 02. Ben Stokes 03. Virender Sehwag 04. Virender Sehwag 05. Brendon McCullum 06. Virender Sehwag 07. Herschelle Gibbs 08. Adam Gilchrist 09. Ross Taylor 10. Ian Botham 11. Chris Gayle 12. Virender Sehwag 13. Virender Sehwag 14. Aravinda de Silva 15. Jason Holder 16. M. S. Dhoni 17. Graham Thorpe 18. Gordon Greenidge 19. Mohammad Yousuf 20. Victor Trumper 20. Victor Trumper: Full name... Victor Thomas Trumper. Date of birth... 2 November 1877. Date of death... 28 June 1915. Born territory... Darlinghurst, New South Wales, Australia. Height... Na. Batting genre... Right-handed. Bowling genre... Right arm medium. Main Role in the team... As a Batsman. Victor Trumper scored 214 runs off 247 balls in the first innings of the third Test match, which was played at Adelaide Oval, Australia, from 07 January to 13 January 1911, during the South Africa tour of Australia. In this Test match, he batted for 242 minutes at the crease with the help of 26 fours and zero sixes. The following are the details of his performance in this test match. 19. Mohammad Yousuf: Full name... Mohammad Yousuf. Date of birth... 27 August 1974. Born territory... Lahore, Punjab, Pakistan. Height... 1.78 m. Batting genre... Right-handed. Bowling genre... Right arm medium. Main Role in the team... As a batsman.
Mohammad Yousuf scored 204 runs off 243 balls in the first innings of the second Test match, which was played at MA Aziz Stadium, Chattogram, Bangladesh, from 16 January to 18 January 2002, during the Pakistan tour of Bangladesh. In this Test match, he batted for 325 minutes at the crease with the help of 34 fours and two sixes. The following are the details of his performance in this test match. 18. Gordon Greenidge: Full name... Cuthbert Gordon Greenidge. Date of birth... 01 May 1951. Born territory... Black Bess, St Peter, Barbados. Height... Na. Batting genre... Right-handed. Bowling genre... Right-arm medium. Main Role in the team... As an opening batsman. Gordon Greenidge scored 200 runs off 232 balls in the second innings of the second Test match, which was played at Lord's cricket ground, London, England, from 28 June to 03 July 1984, during the West Indies tour of England. In this Test match, he scored a total of 214 runs off 242 balls, batting for 302 minutes at the crease with the help of 29 fours and two sixes. The following are the details of his performance in this test match. 17. Graham Thorpe: Full name... Graham Paul Thorpe. Date of birth... 01 August 1969. Born territory... Farnham, Surrey, England. Height... Batting genre... Left-handed. Bowling genre... Right arm medium. Main Role in the team... As a batsman. Graham Paul Thorpe scored 200 runs off 231 balls in the 2nd innings of the first Test match, which was played at AMI Stadium, Christchurch, New Zealand, from 13 March to 16 March 2002, during the England tour of New Zealand. 16. MS Dhoni: Full name... Mahendra Singh Dhoni. Date of birth... 07 July 1981. Born territory... Ranchi, Jharkhand, India. Height... 1.75 m. Batting genre... Right-handed. Bowling genre...
Na. Main Role in the team... Wicket-keeper & batsman. Mahendra Singh Dhoni scored 200 runs off 231 balls in the first innings of the first Test match, which was played at MA Chidambaram Stadium, Chepauk, Chennai, India, from 22 February to 26 February 2013, during the Australia tour of India. In this Test match, he scored a total of 224 runs off 265 balls, batting for 365 minutes at the crease with the help of 24 fours and six sixes. The following are the details of his performance in this test match. 15. Jason Holder: Full name... Jason Omar Holder. Date of birth... 05 November 1991. Born territory... Bridgetown, Barbados. Height... 2.01 m. Batting genre... Right-handed. Bowling genre... Right-arm fast-medium. Main Role in the team... Bowler & all-rounder. Jason Omar Holder scored 202 runs off 229 balls in the 2nd innings of the first Test match, which was played at Kensington Oval, Bridgetown, Barbados, from 23 January to 26 January 2019, during the England tour of the West Indies. In this Test match, he batted for 384 minutes at the crease with the help of 23 fours and eight sixes. The following are the details of his performance in this test match. 14. Aravinda de Silva: Full name... Pinnaduwage Aravinda de Silva. Date of birth... 17 October 1965. Born territory... Colombo, Sri Lanka. Height... 1.65 m. Batting genre... Right-handed. Bowling genre... Right arm off-spin. Main Role in the team... As a batsman. Aravinda de Silva scored 200 runs off 229 balls in the first innings of the first Test match, which was played at P Sara Oval, Colombo, Sri Lanka, from 21 July to 23 July 2002, during the Bangladesh tour of Sri Lanka. In this Test match, he scored a total of 206 runs off 234 balls, batting 318 minutes at the crease with the help of 28 fours and one six. The following are the details of his performance in this test match. 13. Virender Sehwag: Full name... Virender Sehwag.
Date of birth... 20 October 1978. Born territory... Najafgarh, Delhi, India. Height... 1.73 m. Batting genre... Right-handed. Bowling genre... Right arm off break. Main Role in the team... As an opening batsman. Virender Sehwag scored 200 runs off 227 balls in the first innings of the second Test match, which was played at Galle International Stadium, Sri Lanka, from 31 July to 04 August 2008, during the India tour of Sri Lanka. In this Test match, he scored a total of 201 runs off 231 balls, batting 348 minutes at the crease with the help of 22 fours and four sixes. The following are the details of his performance in this test match. Kindly click here to continue reading this article. https://www.theindia24.com/2021/04/fastest-double-centuries-in-test.html
If life is a game, you must be a top gamer
If you are a professional gamer with high-end requirements, or a casual gamer or streamer, this PC configuration will make sure you put your money to the best use. When you are spending a large amount of money, there are numerous options to choose from, and we will help you make the selections.
Best Gaming Laptops
The components we have selected for this gaming PC will not only give you the best frame rates with great graphics in today's games but will also stay competitive in the future. For the CPU we have gone in favor of the blue team. The i5 9400F is an ideal mid-range gaming processor. Although it is a very solid choice, there are worthy options from the red team as well. The AMD Ryzen 5 2600 is also available in a similar price category, a touch more expensive. The reason we have chosen the i5 9400F over the Ryzen counterpart is its high single-core performance. The Core i5 pulls ahead in single-core workloads, which makes it better for gaming. However, Ryzen CPUs are known to perform better in multicore situations, like video editing or rendering. If you are a content creator, you can take advantage of the 6 cores and 12 threads on the Ryzen 5 2600 versus the 6 cores and 6 threads on the i5 9400F. Spending a little more money will benefit you if you can exploit the extra threads. As this PC is focused on gaming, we will go with the gaming king, Intel.
Acer Predator Helios 300
New Inspiron 15 7501 By Dell
ASUS ROG Zephyrus G14
Lenovo Legion Y7000 SE Laptop
Acer Nitro 5
HP Gaming Pavilion 15
Asus TUF Gaming A17
MSI GF65
M1 Macbook Air
Acer Predator Triton 300
WhatsApp vs GroupMe | Which one is the Best?
WhatsApp vs GroupMe
WhatsApp
WhatsApp Messenger, or simply WhatsApp, is an American freeware, cross-platform centralized messaging and voice-over-IP service owned by Facebook, Inc. It allows users to send text messages and voice messages, make voice and video calls, and share images, documents, user locations, and other content. WhatsApp uses your phone's cellular or Wi-Fi connection to facilitate messaging and voice calling to just about anybody on the planet, alone or in a group, and is very good for families and small collaborative workgroups. The app enables you to make calls and send and receive messages, documents, photos, and videos.
How does WhatsApp work?
WhatsApp is free, with no fees or subscriptions, because it uses your phone's 5G, 4G, 3G, 2G, EDGE, or Wi-Fi connection instead of your cell plan's voice minutes or text plan.
GroupMe
GroupMe is a mobile group messaging app owned by Microsoft. It was launched in May 2010 by the private company GroupMe. In August 2011, GroupMe delivered over 100 million messages every month, and by June 2012, that number jumped to 550 million. In 2013, GroupMe had over 12 million registered users. GroupMe brings group text messaging to every cell phone. Group message with the people in your life who are important to you.
How does it work?
Users sign up with their Facebook credentials, Microsoft/Skype logins, phone numbers, or e-mail addresses, and they are then able to send private or group messages to other people. GroupMe messaging works across platforms; all you need is a device (iPhone, Android, computer, or tablet) and Wi-Fi or data to get connected.
To Continue to Click Here
(April-2021)Braindump2go 300-425 PDF and 300-425 VCE Dumps(Q181-Q201)
QUESTION 181 An engineer is trying to determine the most cost-effective way to deploy high availability for a campus enterprise wireless network that currently leverages three wireless LAN controllers. Which architecture should the engineer deploy? A.N+1 solution without SSO B.N+1 with SSO C.N+N solution without SSO D.N+N with SSO Answer: B QUESTION 182 During a post deployment site survey, issues are found with non-Wi-Fi interference. What should the engineer use to identify the source of the interference? A.Cisco Spectrum Expert B.wireless intrusion prevention C.Wireshark D.network analysis module Answer: A QUESTION 183 Refer to the exhibit. An enterprise is using wireless as the main network connectivity for clients. To ensure service continuity, a pair of controllers will be installed in a datacenter. An engineer is designing SSO on the pair of controllers. What needs to be included in the design to avoid having the secondary controller go into maintenance mode? A.The keepalive timer is too low, which causes synchronization problems. B.The connection between the redundancy ports is missing. C.The redundancy port must be in the same subnet as the redundancy mgmt. D.The Global Configuration of SSO is set to Disabled on the controller. Answer: A QUESTION 184 Campus users report a poor wireless experience. An engineer investigating the issue notices that in high-density areas the wireless clients fail to switch from the AP to which they automatically connected. This sticky client behavior is causing roaming issues. Which feature must the engineer configure? A.load balancing and band select B.optimized roaming C.Layer 3 roaming D.Layer 2 roaming Answer: B QUESTION 185 An engineer changed the TPC Power Threshold for a wireless deployment from the default value to -65 dBm. The engineer conducts a new post deployment survey to validate the results. What is the expected outcome?
A.increased received sensitivity B.decreased channel overlap C.decreased client signal strength D.increased cell size Answer: C QUESTION 186 A customer is looking for a network design with Cisco Hyperlocation using AP4800 for location tracking via a custom mobile app. Issues appeared in the past with refresh rates for location updates. What needs to be implemented to meet these requirements? A.Cisco FastLocate technology B.redundant CMX and fetch location in round-robin fashion C.device Bluetooth via the app D.Cisco CMX SDK in the location app Answer: A QUESTION 187 What is the attenuation value of a human body on a wireless signal? A.3 dB B.4 dB C.6 dB D.12 dB Answer: B QUESTION 188 Why is 802.11a connectivity reduced in an X-ray room? A.X-rays impact the 802.11a UNII-2 channels and cause access points to dynamically change channels. B.X-ray rooms exhibit increased signal attenuation C.X-rays within these rooms cause multipath issues. D.X-rays create significant non-Wi-Fi interference on the 802.11a band Answer: B QUESTION 189 A medium-sized hospitality company with 50 hotels needs to upgrade the existing WLAN in each hotel to 802.11n. During the site surveys for each hotel, what needs to be taken into consideration when determining the locations for each AP? A.Selecting AP locations where power is already available B.Selecting APs that can be hidden in ceiling panels to provide a secure and clean aesthetic look. C.Selecting locations that make visual assessment of the AP operation easy D.Selecting locations that are easily accessed so maintenance and upgrades can be performed quickly Answer: A QUESTION 190 A network engineer needs to create a wireless design to bridge wired IP surveillance cameras in the parking lot through a mesh AP. To which operating mode of the AP should the cameras connect?
A.RAP B.local C.FlexConnect D.MAP Answer: D QUESTION 191 An engineer at a global enterprise organization must ensure that a mesh deployment has the highest number of channels available to the backhaul regardless of region deployed. Which design meets this requirement? A.one controller per country code B.redundant controllers in the least restrictive regulatory domain C.redundant controllers in the most restrictive regulatory domain D.one controller per continent Answer: C QUESTION 192 An enterprise is using two wireless controllers to support the wireless network. The data center is located in the head office. Each controller has a corporate WLAN configured with the name CoprNET390595865WLC-1 and Copr-NET6837l638WLC-2. The APs are installed using a round-robin approach to load balance the traffic. What should be changed on the configuration to optimize roaming? A.Move all access points to one controller and use the other as N-1 HA. B.Use the same WLAN name for the corporate network on both controllers C.Move the controllers to an external data center with higher internet speeds D.Place the access points per floor on the same controller. Answer: D QUESTION 193 An engineer is conducting a Layer 2 site survey. Which type of client must the engineer match to the survey? A.best client available B.phone client C.normal client D.worst client available Answer: C QUESTION 194 A wireless engineer is using Ekahau Site Survey to validate that an existing wireless network is operating as expected. Which type of survey should be used to identify the end-to-end network performance? A.spectrum analysis B.passive C.GPS assisted D.active ping Answer: A QUESTION 195 The wireless team must configure a new voice SSID for optimized roaming across multiple WLCs with Cisco 8821 phones. Which two WLC settings accomplish this goal? (Choose two) A.Configure mobility groups between WLCs B.Use Cisco Centralized Key Management for authentication. C.Configure AP groups between WLCs.
D.Configure AVC profiles on the SSID E.Use AVC to tag voice traffic as best effort. Answer: BE QUESTION 196 An engineer is designing an outdoor mesh network to cover several sports fields. The core of the network is located in a building at the entrance of a sports complex. Which type of antenna should be used with the RAP for backhaul connectivity? A.5 GHz, 8-dBi omnidirectional antenna B.2.4 GHz, 8-dBi patch antenna C.2.4 GHz, 14-dBi omnidirectional antenna D.5 GHz, 14-dBi patch antenna Answer: A QUESTION 197 A customer has restricted the AP and antenna combinations for a design to be limited to one model integrated antenna AP for carpeted spaces and one model external antenna AP with high gain antennas for industrial, maintenance, or storage areas. When moving from a carpeted area to an industrial area, the engineer forgets to change survey devices and surveys several APs. Which strategy will reduce the negative impact of the design? A.Resurvey and adjust the design B.Deploy unsurveyed access points to the design C.Deploy the specified access points per area type D.Increase the Tx power on incorrectly surveyed access points Answer: A QUESTION 198 An engineer is designing a wireless network to support high availability. The network will need to support the total number of APs and client SSO. Live services should continue to work without interruption during the failover. Which two requirements need to be incorporated into the design to meet these needs? (Choose two) A.redundant vWLC B.controller high availability pair with one of the WLCs having a valid AP count license C.10 sec RTT D.back-to-back direct connection between WLCs E.WLC 7.5 code or more recent Answer: BD QUESTION 199 Refer to the exhibit. During a post mesh deployment survey, an engineer notices that frame collisions occur when MAP-1 and MAP-3 talk to RAP-2. Which type of issue does the engineer need to address in the design?
A.co-channel interference B.backhaul latency C.hidden node D.exposed node Answer: A QUESTION 200 An enterprise is using the wireless network as the main network connection for corporate users and guests. To ensure wireless network availability, two standalone controllers are installed in the head office. APs are connected to the controllers using a round-robin approach to load balance the traffic. After a power cut, the wireless clients disconnect while roaming. An engineer tried to ping from the controller but failed. Which protocol needs to be allowed between the networks where the controllers are installed? A.IP Protocol 67 B.IP Protocol 77 C.IP Protocol 87 D.IP Protocol 97 Answer: D QUESTION 201 An engineer must perform a pre-deployment site survey for a new building in a high-security area. The design must provide a primary signal RSSI of -65 dBm for the clients. Which two requirements complete this design? (Choose two) A.site access B.AP model C.WLC model D.HVAC access E.number of clients Answer: BE 2021 Latest Braindump2go 300-425 PDF and 300-425 VCE Dumps Free Share: https://drive.google.com/drive/folders/116pgsScHZoMX_x10f-SEvzUZ9ec2kgWd?usp=sharing
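Several questions in this section (the -65 dBm primary signal design target and the human-body attenuation value) come down to simple link-budget arithmetic. Below is a minimal free-space sketch: it uses the standard free-space path loss formula and the 4 dB per-body figure from the attenuation question; the example transmit power is an assumption for illustration, and real designs must also account for walls, multipath, and antenna gains.

```python
import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss in dB:
    # FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return (20 * math.log10(distance_m / 1000)
            + 20 * math.log10(freq_mhz)
            + 32.44)

def estimated_rssi(tx_dbm, distance_m, freq_mhz, bodies=0, body_loss_db=4):
    # Subtract free-space loss plus 4 dB per human body in the path
    # (the attenuation figure cited in this question set).
    return tx_dbm - fspl_db(distance_m, freq_mhz) - bodies * body_loss_db

# Example (assumed numbers): 20 dBm EIRP at 10 m on 2.4 GHz
# gives roughly 60 dB of free-space loss.
print(fspl_db(10, 2400))
print(estimated_rssi(20, 10, 2400, bodies=1))
```

With these assumed numbers the free-space loss at 10 m on 2.4 GHz is about 60 dB, so a 20 dBm transmitter would be received around -40 dBm in open air, dropping another 4 dB for each person in the path; a survey then checks whether the result still clears the -65 dBm design target.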