
(April 2021) Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps (Q29-Q49)

QUESTION 29
A user wants to create a super metric and apply it to a custom group to capture the total CPU Demand (MHz) of the virtual machines that are children of the custom group.

Which super metric function would be used to accomplish this?

A.Average
B.Max
C.Sum
D.Count

Answer: C
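
For intuition, here is a minimal Python sketch (with hypothetical CPU Demand values for three child VMs, not data from any real environment) of why the Sum function, rather than Average or Max, yields the group total:

# Hypothetical CPU Demand (MHz) samples for the child VMs of a custom group.
child_vm_cpu_demand_mhz = [1200.0, 850.0, 2300.0]

total = sum(child_vm_cpu_demand_mhz)            # Sum -> 4350.0 MHz, the group's total demand
average = total / len(child_vm_cpu_demand_mhz)  # Average -> 1450.0 MHz, a per-VM mean, not a total
peak = max(child_vm_cpu_demand_mhz)             # Max -> 2300.0 MHz, only the busiest VM

print(total, average, peak)

Only Sum aggregates the children into the single total the question asks for; Average and Max answer different questions.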

QUESTION 30
Review the exhibit. When the Cluster Metric Load or Cluster Object Load exceeds 100%, what is the next step a vRealize Operations administrator should take?

A.Reduce the vRealize Operations data retention time.
B.Add an additional vRealize Operations data node.
C.Increase vRealize Operations polling time.
D.Remove a vCenter from the vSphere management pack.

Answer: B

QUESTION 31
Which object attributes are used in vRealize Operations Compliance analysis?

A.tags
C.user access lists
D.host profiles

Answer: B

QUESTION 32
Based on the highlighted HIPAA compliance template above, how many hosts are in a compliant state?

A.5
B.24
C.29
D.31

Answer: A

QUESTION 33
How can vRealize Operations tags be used?

A.to be dynamically assigned to objects
B.to group virtual machines in vCenter
C.to set object access controls
D.to filter objects within dashboard widgets

Answer: B

QUESTION 34
The default collection cycle is set.
When changing the Cluster Time Remaining settings, how long will it take before time remaining and risk level are recalculated?

A.5 minutes
B.1 hour
C.12 hours
D.24 hours

Answer: A

QUESTION 35
What is a prerequisite for using Business Intent?

A.DRS clusters
B.storage policies
C.vSphere 6.7
D.vCenter tags

Answer: D

QUESTION 36
What can be configured within a policy?

A.alert notifications
B.symptom definition threshold overrides
C.custom group membership criteria
D.symptom definition operator overrides

Answer: B

QUESTION 37
Which organizational construct within vRealize Operations has user-configured dynamic membership criteria?

A.Resource Pool
B.Tags
C.Custom group
D.Custom Datacenter

Answer: C

QUESTION 38
How should a remote collector be added to a vRealize Operations installation?

A.Log in as Admin on a master node and enable High Availability.
B.Open the Setup Wizard from the login page.
C.Navigate to a newly deployed node and click Expand an Existing Installation.
D.Navigate to the Admin interface of a data node.

Answer: C

QUESTION 39
Refer to the exhibit. How is vSphere Usable Capacity calculated?

A.Demand plus Reservation
B.Total Capacity minus High Availability
C.Total Capacity minus Overhead
D.Demand plus High Availability

Answer: B
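
As a rough numeric illustration of that relationship (hypothetical cluster figures only, not the full vRealize Operations capacity model):

# Hypothetical cluster capacity figures, in GHz.
total_capacity = 100.0   # raw capacity of the cluster
ha_reservation = 25.0    # capacity set aside for High Availability failover

usable_capacity = total_capacity - ha_reservation  # 75.0 GHz of usable capacity
print(usable_capacity)

Usable Capacity is what remains of Total Capacity after the High Availability reservation is subtracted.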

QUESTION 40
A view is created in vRealize Operations to track virtual machine maximum and average contention for the past thirty days.
Which method is used to enhance the view to easily spot VMs with high contention values?

A.Set a tag on virtual machines and filter on the tag.
B.Edit the view and set filters for the transformation value maximum and average contention.
C.Create a custom group to dynamically track virtual machines.
D.Configure Metric Coloring in the Advanced Settings of the view.

Answer: C

QUESTION 41
Refer to the exhibit. A user has installed and configured the Telegraf agent on a Windows domain controller. No application data is being collected.
Which two actions should the user take to see the application data? (Choose two.)

A.Verify the vCenter adapter collection status.
B.Re-configure the agent on the Windows virtual machine manually.
C.Verify Active Directory Service status.
D.Configure ICMP Remote Check.
E.Validate time synchronization between vRealize Application Remote Collector and vRealize Operations.

Answer: AE

QUESTION 42
Which dashboard widget provides a two-dimensional relationship?

A.Heat Map
B.Object Selector
C.Scoreboard

Answer: A

QUESTION 43
What must an administrator do to use the Troubleshoot with Logs Dashboard in vRealize Operations?

A.Configure the vRealize Log Insight agent.
B.Enable Log Forwarding within vRealize Operations.
C.Configure vRealize Operations within vRealize Log Insight.
D.Configure symptoms and alerts within vRealize Operations.

Answer: C

QUESTION 44
vRealize Operations places a tagless virtual machine on a tagged host.
Which setting causes this behavior?

A.Host-Based Business Intent
B.Consolidated Operational Intent
C.Balanced Operational Intent
D.Cluster-Based Business Intent

Answer: A

QUESTION 45
The default collection cycle is set.
How often are cost calculations run?

A.every 5 minutes
B.daily
C.weekly
D.monthly

Answer: B

QUESTION 46
vRealize Operations is actively collecting data from vCenter and the entire inventory is licensed.
Why would backup VMDKs of an active virtual machine in vCenter appear in Orphaned Disks?

A.They are related to the VM.
B.They are named the same as the VM.
C.They are not in vCenter inventory.
D.They are not actively being utilized.

Answer: C

QUESTION 47
In which two locations should all nodes be when deploying an analytics node? (Choose two.)

A.same data center
B.same vCenter
C.remote data center
D.same subnet
E.different subnet

Answer: AD

QUESTION 48
Which type of view provides tabular data about specific objects?

A.Distribution
B.Text
C.List
D.Trend

Answer: C

QUESTION 49
Which Operational Intent setting drives maximum application performance by avoiding resource spikes?

A.Moderate
B.Consolidate
C.Over provision
D.Balance

Answer: B
2021 Latest Braindump2go 5V0-34.19 PDF and 5V0-34.19 VCE Dumps Free Share:
Comment
Suggested
Recent
Cards you may also be interested in
(April-2021)Braindump2go 1Y0-231 PDF and 1Y0-231 VCE Dumps(Q21-Q41)
Question: 21 Scenario: A Citrix Administrator needs to test a SAML authentication deployment to be used by internal users while accessing several externally hosted applications. During testing, the administrator notices that after successfully accessing any partner application, subsequent applications seem to launch without any explicit authentication request. Which statement is true regarding the behavior described above? A.It is expected if the Citrix ADC appliance is the common SAML identity provider (IdP) for all partners. B.It is expected due to SAML authentication successfully logging on to all internal applications. C.It is expected if all partner organizations use a common SAML service provider (SP). D.It indicates the SAML authentication has failed and the next available protocol was used. Answer: B Question: 22 Scenario: A Citrix Administrator configured SNMP to send traps to an external SNMP system. When reviewing the messages, the administrator notices several entity UP and entity DOWN messages. To what are these messages related? A.Load-balancing virtual servers B.SSL certificate C.VLAN D.High availability nodes Answer: A Question: 23 Scenario: A Citrix Administrator configured a new router that requires some incoming and outgoing traffic to take different paths through it. The administrator notices that this is failing and runs a network trace. After a short monitoring period, the administrator notices that the packets are still NOT getting to the new router from the Citrix ADC. Which mode should the administrator disable on the Citrix ADC to facilitate the successful routing of the packets? A.Layer3 B.USNIP C.MAC-based forwarding (MBF) D.USIP Answer: C Question: 24 A Citrix Administrator needs to configure a Citrix ADC high availability (HA) pair with each Citrix ADC in a different subnet. What does the administrator need to do for HA to work in different subnets? A.Configure SyncVLAN B.Turn on Independent Network Configuration (INC) mode C.Turn on HA monitoring on all Interfaces D.Turn on fail-safe mode Answer: B Question: 25 Scenario: A Citrix Administrator is managing a Citrix Gateway with a standard platform license and remote employees in the environment. The administrator wants to increase access by 3,000 users through the Citrix Gateway using VPN access. Which license should the administrator recommend purchasing? A.Citrix Gateway Express B.Citrix ADC Upgrade C.Citrix Gateway Universal D.Citrix ADC Burst Pack Answer: C Reference: https://support.citrix.com/content/dam/supportWS/kA560000000TNDvCAO/XD_and_XA_7.x_Licens ing_FAQ.pdf Question: 26 Which four steps should a Citrix Administrator take to configure SmartAccess? (Choose four.) A.Execute “set-BrokerSite -TrustRequestsSentToTheXMLServicePort $True” on any Citrix Delivery Controller in the Site. B.Enable Citrix Workspace control within StoreFront. C.Ensure that the SmartAccess filter name on the Delivery Group matches the name of the Citrix Gateway virtual server. D.Ensure that the SmartAccess filter name on the Delivery Group matches the name of the Citrix Gateway policy. E.Ensure that ICA Only is unchecked on the Citrix Gateway virtual server. F.Ensure that the Callback URL is defined in the Citrix Gateway configuration within Store Front. G.Ensure that ICA Only is checked on the Citrix Gateway virtual server. Answer: ACEF Reference: https://support.citrix.com/article/CTX227055 Question: 27 Which three Citrix Gateway elements can be configured by the Citrix Gateway Wizard? (Choose three.) 
A.The rewrite policy for HTTP to HTTPS redirect B.The responder policy for HTTP to HTTPS redirect C.The Citrix Gateway primary virtual server D.The bind SSL server certificate for the Citrix Gateway virtual server E.The primary and optional secondary authentications Answer: CDE Reference: https://docs.citrix.com/en-us/citrix-gateway/12-1/citrix-gateway-12.1.pdf (333) Question: 28 Scenario: A Citrix Administrator configures an access control list (ACL) to block traffic from the IP address 10.102.29.5: add simpleacl rule1 DENY -srcIP 10.102.29.5 A week later, the administrator discovers that the ACL is no longer present on the Citrix ADC. What could be the reason for this? A.The administrator did NOT run the apply ACL command. B.The simple ACLs remain active for only 600 seconds. C.The simple ACLs remain active for only 60 seconds. D.The Citrix ADC has been restarted without saving the configurations. Answer: A Question: 29 While applying a new Citrix ADC device, a Citrix Administrator notices an issue with the time on the appliance. Which two steps can the administrator perform to automatically adjust the time? (Choose two.) A.Add an SNMP manager. B.Add an SNMP trap. C.Enable NTP synchronization. D.Add an NTP server. E.Configure an NTP monitor. Answer: CE Question: 30 A Citrix Network Engineer informs a Citrix Administrator that a data interface used by Citrix ADC SDX is being saturated. Which action could the administrator take to address this bandwidth concern? A.Add a second interface to each Citrix ADC VPX instance. B.Configure LACP on the SDX for management interface. C.Configure LACP on the SDX for the data interface. D.Configure a failover interface set on each Citrix ADC VPX instance. Answer: C Reference: https://training.citrix.com/public/Exam+Prep+Guides/241/1Y0- 241_Exam_Preparation_Guide_v01.pdf (22) Question: 31 Scenario: Users are attempting to logon through Citrix Gateway. They successfully pass the Endpoint Analysis (EPA) scan, but are NOT able to see the client choices at logon. What can a Citrix Administrator disable to allow users to see the client choices at logon? A.Quarantine groups B.Client choices globally C.Split tunneling D.nFactor authentication Answer: A Reference: https://www.carlstalhood.com/category/netscaler/netscaler-12/netscaler-gateway-12/ Question: 32 Scenario: To meet the security requirements of the organization, a Citrix Administrator needs to configure a Citrix Gateway virtual server with time-outs for user sessions triggered by the behaviors below: Inactivity for at least 15 minutes. No keyboard or mouse activity for at least 15 minutes Which set of time-out settings can the administrator configure to meet the requirements? A.Session time-out and client idle time-out set to 15 B.Session time-out and forced time-out set to 15 C.Client idle time-out and forced time-out set to 15 D.Client idle time-out and forced time-out set to 900 Answer: A Reference: https://docs.citrix.com/en-us/citrix-gateway/current-release/vpn-user-config/configure- pluginconnections/configure-time-out-settings.html Question: 33 A Citrix Administrator needs to configure a Citrix Gateway virtual IP to allow incoming connections initiated exclusively from web browser sessions. Which advanced policy will accomplish this? 
A.REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver B.REQ.HTTP.HEADER User-Agent CONTAINS Chrome/78.0.3904.108 Safari/537.36 C.HTTP.REQ.HEADER(“User-Agent”).CONTAINS(“Mozilla”) D.HTTP.REQ.HEADER(“User-Agent”).CONTAINS(“CitrixReceiver”) Answer: A Reference: https://stalhood2.rssing.com/chan-58610415/all_p2.html Question: 34 Scenario: A Citrix Administrator currently manages a Citrix ADC environment for a growing retail company that may soon double its business volume. A Citrix ADC MPX 5901 is currently handling web and SSL transactions, but is close to full capacity. Due to the forecasted growth, the administrator needs to find a costeffective solution. Which cost-effective recommendation can the administrator provide to management to handle the growth? A.A license upgrade to a Citrix ADC MPX 5905 B.The addition of another MPX 5901 appliance C.A hardware upgrade to a Citrix ADC MPX 8905 D.A hardware upgrade to a Citrix ADC SDX 15020 Answer: A Question: 35 What can a Citrix Administrator configure to access RDP shortcuts? A.Split tunneling B.Bookmarks C.Next hop server D.Intranet applications Answer: B Reference: https://docs.citrix.com/en-us/citrix-gateway/current-release/rdp-proxy.html Question: 36 If a user device does NOT comply with a company’s security requirements, which type of policy can a Citrix Administrator apply to a Citrix Gateway virtual server to limit access to Citrix Virtual Apps and Desktops resources? A.Session B.Responder C.Authorization D.Traffic Answer: A Reference:https://www.citrix.com/content/dam/citrix/en_us/documents/products- solutions/creating-andenforcing-advanced-access-policies-with-xenapp.pdf Question: 37 A Citrix Administrator has received a low disk space alert for /var on the Citrix ADC. Which type of files should the administrator archive to free up space? A.Syslog B.Nslog C.DNScache D.Nsconfig Answer: B Reference: https://support.citrix.com/article/CTX205014?recommended Question: 38 Which license type must be installed to configure Endpoint Analysis scans? A.Citrix Web App Firewall B.Universal C.Platform D.Burst pack Answer: B Reference:https://docs.citrix.com/en-us/citrix-gateway/current-release/citrix-gateway-licensing.html Question: 39 Which two features can a Citrix Administrator use to allow secure external access to a sensitive company web server that is load-balanced by the Citrix ADC? (Choose two.) A.Authentication, authorization, and auditing (AAA) B.Citrix Web App Firewall C.ICA proxy D.AppFlow E.Integrated caching Answer: AB Question: 40 Scenario: A Citrix ADC MPX is using one of four available 10G ports. A Citrix Administrator discovers a traffic bottleneck at the Citrix ADC. What can the administrator do to increase bandwidth on the Citrix ADC? A.Add two more 10G Citrix ADC ports to the network and configure VLAN. B.Add another 10G Citrix ADC port to the switch, and configure link aggregation control protocol (LACP). C.Purchase another Citrix ADC MPX appliance. D.Plug another 10G Citrix ADC port into the router. Answer: A Question: 41 Scenario: Client connections to certain virtual servers are abnormally high. A Citrix Administrator needs to be alerted whenever the connections pass a certain threshold. How can the administrator use Citrix Application Delivery Management (ADM) to accomplish this? A.Configure TCP Insight on the Citrix ADM. B.Configure SMTP reporting on the Citrix ADM by adding the threshold and email address. C.Configure specific alerts for virtual servers using Citrix ADM. 
D.Configure network reporting on the Citrix ADM by setting the threshold and email address. Answer: D 2021 Latest Braindump2go 1Y0-231 PDF and 1Y0-231 VCE Dumps Free Share: https://drive.google.com/drive/folders/1QWBrUQIP4dhwazi-gFooYmyX1m-iWAlw?usp=sharing
156-215.80 Prüfungsfragen, 156-215.80 Prüfungsvorbereitung
www.it-pruefungen.de----Kostenlose 156-215.80 Testvision vor dem Kauf herunterladen Unsere 156-215.80 Prüfungsmaterialien werden in vielen Ländern als die besten Lernmaterialien in der IT-Branche betrachtet. Zögern Sie noch mit der Qualität, würden wir Sie gerne bitten, die kostenlose 156-215.80 Testvision auf unserer Webseite herunterzuladen, damit Sie einen allgemeinen Überblick über unsere Produkte erhalten, bevor Sie eine vernünftige Entscheidung treffen. Ich bin mir sicher, dass Sie mit unseren 156-215.80 Prüfung Dump ganz zufrieden würden sein. Zögern Sie nicht und handeln Sie sofort, die Training Demo von 156-215.80 Prüfung auszuprobieren. Während dem ganzen Prozess brauchen Sie nur den Knopf „Download kostenlos" klicken und dann wählen Sie eine von den drei Arten Visionen, die Ihnen am besten Passt. Hier gibt es drei Visionen, die Ihnen zur Verfügung stehen, sie sind nämlich PDF, PC Test Engine sowie Testengine. CheckPoint 156-215.80 Prüfungsfragen Prüfungsunterlagen Info zu dieser Prüfungsvorbereitung 156-215.80 Prüfungsnummer:156-215.80 Prüfungsname:Check Point Certified Security Administrator (CCSA) R80 Version:V19.99 Anzahl:533 Prüfungsfragen mit Lösungen Schnelle, einfache und sichere Zahlung per Credit Card Um die Sicherheit der Zahlung zu sichern, haben wir eine strategische Kooperation mit Credit Card etabliert, dem zuverlässigsten Bezahlungssystem der Welt. Credit Card ist ein führender Online-Zahlungsdienstleister, der einen schnellen, einfachen und sicheren Zahlungsprozess anbietet, was ermöglicht, dass jedem sofort eine E-Mail-Adresse gesendet wird, ohne dabei sensible finanzielle Informationen preiszugeben. Mit dieser Zahlungsplattform brauchen Sie sich dann beim Kaufen der 156-215.80 Prüfung Unterlagen und Materialien nichts zu sorgen. Und wir werden unermüdlich große Anstrengungen machen, um Ihre Interessen vor jeglicher Gefahr zu schützen. Genießen Sie die schnelle Lieferung von 156-215.80 Prüfung Unterlagen und Materialien Kein Wunder, dass jeder seine bestellten Waren so schnell wie möglich erhalten möchte, vor allem diejenigen, die sich auf die Prüfung 156-215.80 vorbereiten wollen. Wie wir alle wissen, dass nichts kostbarer ist als die Zeit. Da unsere 156-215.80 Prüfung Unterlagen und Materialien elektronische Produkte sind, können wir Ihnen schnelle Zulieferung sicherstellen. Unser Betriebssystem schickt Ihnen automatisch per E-Mail die 156-215.80 Prüfung Unterlagen und Materialien in 5-10 Minuten nach Ihrer Zahlung. Und wir können Ihnen versprechen, dass dies sicherlich die schnellste Lieferung in dieser Branche ist. Verschwenden Sie Ihre Zeit nicht, Kaufen Sie unsere Produkt sofort und Sie werden die nützlichste Check Point Certified Security Administrator (CCSA) R80 Prüfung Unterlagen und Materialien nur nach 5-10 Minuten erhalten. www.it-pruefungen.de---Gültigkeit von 156-215.80 Fragen Alle unsere CheckPoint 156-215.80 Prüfungsfragen werden von den Kandidaten gesammelt, die die CheckPoint 156-215.80 Prüfung vor kurzem abgelegt haben, und werden von zertifizierten Experten und Fachleuten beantwortet, die die neuesten und auf dem gesamten Markt gültigen sind. Mit unseren CheckPoint 156-215.80 Prüfungsfragen www.it-pruefungen.de können Sie alle damit verbundenen Prüfungen üben und testen.
(April-2021)Braindump2go OG0-091 PDF and OG0-091 VCE Dumps(Q249-Q270)
QUESTION 249 What ADM phase defines the scope for the architecture development initiative and identifies the stakeholders? A.Requirements Management B.Phase D: Technology Architecture C.Preliminary Phase D.Phase A: Architecture Vision E.Phase B: Business Architecture Answer: D QUESTION 250 What is the first step in the architecture development Phases B, C, and D? A.Develop Baseline Architecture Description B.Select reference models, viewpoints and tools C.Perform gap analysis D.Resolve impacts across the Architecture Landscape E.Conduct formal stakeholder review Answer: B QUESTION 251 TOGAF Part VII describes how the ADM can be used to establish an Architecture Capability in an organization. Which architecture would describe the architecture processes and organization structure? A.Technology Architecture B.Data Architecture C.Business Architecture D.Transition Architecture E.Application Architecture Answer: C QUESTION 252 Which one of the following is considered a relevant architecture resource in ADM Phase D? A.Existing application models B.Generic business models relevant to the organization's industry sector C.Existing IT services D.Generic data models relevant to the organization's industry sector Answer: C QUESTION 253 Which of the following terms is defined as the key interests that are crucially important to the stakeholders in a system? A.Viewpoints B.Concerns C.Requirements D.Principles E.Views Answer: B QUESTION 254 What are the four architecture domains that the TOGAF standard deals with? A.Capability, Segment, Enterprise, Federated B.Business, Data, Application, Technology C.Application, Data, Information, Knowledge D.Process, Organization, Strategic, Requirements E.Baseline, Candidate, Transition, Target Answer: B QUESTION 255 What ADM phase includes establishing the Architecture Capability and definition of Architecture Principles? A.Phase C: Data Architecture B.Preliminary Phase C.Phase A: Architecture Vision D.Phase F: Migration Planning E.Phase B: Business Architecture Answer: B QUESTION 256 In which part of the ADM cycle do the earliest building block definitions start as abstract entities? A.Phases E and F B.Preliminary Phase C.Phase B, C, and D D.Phase G and H E.Phase A Answer: E QUESTION 257 What is an objective of the ADM Preliminary Phase? A.To create the initial version of the Architecture Roadmap B.To develop a vision of the business value to be delivered by the proposed enterprise architecture C.To select and implement tools to support the Architecture Capability D.To obtain approval for the Statement of Architecture Work E.To document the baseline architecture Answer: C QUESTION 258 What is an objective of ADM Phase G, Implementation Governance? A.To ensure that implementation projects conform with the Target Architecture B.To ensure that the enterprise's Architecture Capability meets current requirements C.To assess the performance of the architecture and make recommendations for change D.To establish the value realization process E.To prioritise the projects through risk validation Answer: A QUESTION 259 What are considered as generic Building Blocks in the Solutions Continuum? A.Common Systems Solutions B.Strategic Solutions C.Industry Solutions D.Organization-Specific Solutions E.Foundation Solutions Answer: E QUESTION 260 What version number does the TOGAF ADM use to indicate that a high-level outline of the architecture is in place? 
A.Version 0.5 B.Version 0.7 C.Version 1.0 D.Version 0.1 E.Version 0.9 Answer: D QUESTION 261 In which ADM Phase is the focus the creation of an Implementation and Migration Plan in co-operation with the portfolio and project managers? A.Phase A B.Phase F C.Phase D D.Phase G E.Phase E Answer: B QUESTION 262 What level of risk is the risk categorization prior to determining and implementing mitigating actions? A.Marginal B.Residual C.Low D.Initial E.Critical Answer: D QUESTION 263 Complete the sentence. The major information areas managed by a governance repository should include __________________. A.Common Systems Solutions, Organization-Specific Solutions and Industry Solutions B.Catalogs, Matrices and Diagrams C.Artifacts, Best Practices and Standards D.Audit Information, Process Status and Reference Data E.Capability, Segment, and Transition Architectures Answer: D QUESTION 264 Which one of the following best describes the purpose of a Change Request? A.To act as a deliverable container for artifacts created during a project B.To ensure that the results of a Compliance Assessment are distributed to the Architecture Board C.To request a dispensation or to kick-start a further cycle of architecture work D.The ensure that information is communicated to the right stakeholders at the right time E.To review project progress and ensure the implementation is inline with the objectives Answer: C QUESTION 265 Complete the sentence. In the TOGAF Architecture Content Framework, a work product that shows the relationship between things is known as a ___________. A.deliverable B.matrix C.building block D.diagram E.catalog Answer: B QUESTION 266 Within the Architecture Repository, what does the class of information known as the Architecture Capability include? A.Parameters, structures, and processes to support governance of the repository B.Specifications to which architectures must conform C.The organization specific architecture framework, including a method for architecture development and a metamodel for architecture content D.A record of the governance activity across the enterprise E.Patterns, templates, and guidelines used to create new architectures Answer: A QUESTION 267 What are the levels of the Architecture Landscape? A.Foundation, Common and Solution Architectures B.Business, Data, Applications and Technology Architectures C.Capability, Segment and Strategic Architectures D.Corporate EA, Project Team and Portfolio Team Architectures E.Baseline, Transition and Target Architectures Answer: C QUESTION 268 In which ADM phase does the value and change management process determine the circumstances under which the Architecture Development Cycle will be initiated to develop a new architecture? A.Phase H B.Phase F C.Phase E D.Preliminary Phase E.Phase G Answer: A QUESTION 269 Which of the following best describes Requirements Management within the TOGAF ADM? A.Reviewing business requirements B.Addressing and prioritizing architecture requirements C.Developing requirements that deliver business value D.Validating requirements between ADM phases E.Managing architecture requirements throughout the ADM cycle Answer: E QUESTION 270 Which of the following best describes the purpose of the Architecture Requirements Specification? 
A.It is sent form the sponsor and triggers the start of an architecture development cycle B.It provides a list of work packages and a schedule for implementation of the target architecture C.It defines the scope and approach to complete an architecture project D.It provides a set of statements that outline what a project must do to comply with the architecture E.It contains an assessment of the current architecture requirements. Answer: D 2021 Latest Braindump2go OG0-091 PDF and OG0-091 VCE Dumps Free Share: https://drive.google.com/drive/folders/1eRiDkUWfbKGtT5lDv9Bc2DEbQP0LDO-8?usp=sharing
MS-700 Prüfungsfragen deutsch Managing Microsoft Teams
Garantie von Examen MS-700 Prüfungsfragen deutsch Managing Microsoft Teams---www.it-pruefungen.de Es wird garantiert, dass Sie die gewüsnchte Prüfung mit unseren Microsoft MS-700 Prüfungsfragen erfolgreich bestehen können. Wenn Sie die Managing Microsoft Teams MS-700 Prüfung mit unserem Produkt nicht bestehen, erhalten Sie volle Rückerstattung von der Zahlungsgebühr mit dem Screenshot Ihres fehlgeschlagenen Ergebnisberichts innerhalb von DREI Monaten. Microsoft MS-700 Prüfungsfragen Prüfungsunterlagen it-pruefungen.ch Info zu dieser Prüfungsvorbereitung MS-700 Prüfungsnummer:MS-700 Prüfungsname:Managing Microsoft Teams Version:V19.99 Anzahl:292 Prüfungsfragen mit Lösungen MS-700 Updateservice Sobald die Microsoft MS-700 Prüfungsfragen Managing Microsoft Teams vom Prüfungszentrum geändert werden, werden wir unsere MS-700 Prüfungsfragen rechtzeitig aktualisieren. Wenn Sie Microsoft MS-700 Prüfungsfragen auf unserer Website erwerben, erhalten Sie kostenloses Update innerhalb von einem Jahr ab Kaufdatum. Wenn Sie feststellen, dass die Anzahl der MS-700Prüfungsfragen abweicht, setzen Sie sich bitte mit uns in Verbindung, um eine aktuelle Version zu erhalten. MS-700Übungsfragen--www.it-pruefungen.de Bevor Sie sich entscheiden, die Microsoft MS-700 Prüfungsfragen bei uns zu kaufen, können Sie unseren kostenlosen Microsoft MS-700 Übungsfragen testen. Sie können Microsoft MS-700 Übungsfragen auf der vorherigen Seite mehrmals testen. Formate von MS-700 Fragen Unsere Microsoft MS-700 Prüfungsfragen werden in zwei Versionen angeboten: PDF und Software-Format. MS-700 Managing Microsoft Teams PDF vesion: Es ist einfach und bequem, alle Fragen und Antworten zu lesen. Sie können auch sie ausdrucken, um alle Fragen und Antworten zu studieren. www.it-pruefungen.de----MS-700 Software version: Sie können alle Fragen und Antworten in einer echten Prüfungsumgebung üben.
156-315.80 Prüfung, 156-315.80 Fragen und antworten deutsch
www.it-pruefungen.ch---wollen Sie Ihr aktuelles Leben verändern? Gewinnen Sie die 156-315.80 Prüfung Zertifizierung, damit können Sie sich mit mehr Wettbewerbsvorteil ausrüsten. Qualifizierung durch die 156-315.80 Zertifizierung zeigt, dass Sie Ihre Fähigkeiten durch strenge Ausbildung und praktische Erfahrung geschliffen haben. In der Job Jagd haben die qualifizierten Menschen mehr Möglichkeit, eine bessere Position zu bekommen. Um mehr Chancen für Optionen zu bekommen, ist es notwendig, die 156-315.80 Prüfung Zertifizierung zu bekommen. Weil Ihr studiertes Wissen nicht ausreicht, um den eigentlichen Test zu bestehen, benötigen Sie also ein nützliches Studienmaterial, z.B. den 156-315.80 it-pruefungen.ch Ausbildung Führer auf unserer Website www.it-pruefungen.ch. CheckPoint 156-315.80 Prüfungsfragen Prüfungsunterlagen Info zu dieser Prüfungsvorbereitung 156-315.80 Prüfungsnummer:156-315.80 Prüfungsname:Check Point Certified Security Expert - R80 Version:V19.99 Anzahl:533 Prüfungsfragen mit Lösungen Pass mit Leichtigkeit mithilfe 156-315.80 it-pruefungen.ch Prüfung pdf Vielleicht haben Sie viel über die 156-315.80 tatsächliche Prüfung gelernt, aber Ihr Wissen ist chaotisch und kann den tatsächlichen Test nicht erfüllen Nun kann CheckPoint 156-315.80 it-pruefungen.ch Lernen Guide Ihnen helfen, die Schwierigkeiten zu überwinden. 156-315.80 it-pruefungen.ch gültige Ausbildung Unterlagen und Materialien werden Ihnen helfen, alle Themen auf dem CheckPoint 156-315.80 tatsächlichen Test zu meistern. Sie finden die ähnlichen Fragen und Test-Tipps, die Ihnen helfen, Bereiche der Schwäche zu identifizieren. Und Sie verbessern sowohl Ihre Grundkenntnisse und praktische Fähigkeiten über 156-315.80 Tatsächliche Prüfung. Außerdem ist die Erklärung hinter jedem 156-315.80 it-pruefungen.ch Fragen & Antworten sehr spezifisch und leicht zu verstehen. Was ist mehr, die Qualität der 156-315.80 Check Point Certified Security Expert - R80Prüfung Überprüfung torrents wird von unseren professionellen Experten mit hoher Trefferquote überprüft und kann Ihnen helfen, Ihren 156-315.80 tatsächlichen Prüfungstest mit Leichtigkeit zu bestehen. Bereiten Sie mit weniger Zeit mithilfer 156-315.80 Soft-Test-Engine vor Sie können sich über die lange Zeit beschweren, um den 156-315.80 it-pruefungen.ch Trainingstest zu überprüfen. Sie brauchen sicher nur noch einige Stunden Zeit, um den Test zu bestehen, und das Ergebnis wird in diesen Tagen sein. Eigentlich haben Sie viel Mühe gemacht, die Vorbereitung für die 156-315.80 tatsächlichen Test zu treffen. Unsere 156-315.80 it-pruefungen.ch Prüfung pdf bringt Ihnen eine hocheffiziente Ausbildung. 156-315.80 Soft-Test-Engine kann den realen Test simulieren; So können Sie im Voraus einen Simulationstest durchführen. Außerdem können Sie die CheckPoint 156-315.80 Soft-Test-Engine auf Ihrem Telefon oder I-Pad installieren, damit kann Ihre Freizeit voll genutzt werden. Sie können Ihr Wissen verbessern, wenn Sie auf der U-Bahn oder auf einen Bus warten. Ich glaube, Sie werden die 156-315.80 tatsächliche Prüfung durch spezifische Studium Plan mit der Hilfe unserer 156-315.80 Prüfung Überprüfung torrents bestehen. www.it-pruefungen.ch---Kostenloses Update innerhalb eines Jahres Wenn Sie andere Aufstellungsorte besuchen oder Kaufabzüge von anderen Anbietern kaufen, finden Sie das freie Update unter einigen eingeschränkten Bedingung. Aber für unsere CheckPoint 156-315.80 it-pruefungen.ch gültige Studium Unterlagen und Materialien gibt es keine anderen komplexen Einschränkungen. 
Sie genießen einjähriges kostenlosen Update nach dem Kauf. Wie erhalten Sie die aktualisierte 156-315.80 Check Point Certified Security Expert - R80it-pruefungen.ch Prüfung Unterlagen und Materialien? Unser System sendet die neuste 156-315.80 it-pruefungen.ch Prüfung Unterlagen und Materialien automatisch an Ihre Zahlungsemail, sobald sie aktualisiert wird. Wenn Sie eine gewünschte Notwendigkeit für die neuesten Unterlagen und Materialien haben, können Sie Ihre Zahlungsemail prüfen. Wenn Sie nichts finden, überprüfen Sie bitte Ihren Spam. Mit den neusten 156-315.80 it-pruefungen.ch Prüfung Unterlagen und Materialien werden Sie das Examen sicher bestehen.
(April-2021)Braindump2go AZ-303 PDF and AZ-303 VCE Dumps(Q223-Q233)
QUESTION 223 Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription. You have an on-premises file server named Server1 that runs Windows Server 2019. You manage Server1 by using Windows Admin Center. You need to ensure that if Server1 fails, you can recover Server1 files from Azure. Solution: You register Windows Admin Center in Azure and configure Azure Backup. Does this meet the goal? A.Yes B.No Answer: B QUESTION 224 You have an application that is hosted across multiple Azure regions. You need to ensure that users connect automatically to their nearest application host based on network latency. What should you implement? A.Azure Application Gateway B.Azure Load Balancer C.Azure Traffic Manager D.Azure Bastion Answer: C QUESTION 225 Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company is deploying an on-premises application named App1. Users will access App1 by using a URL of https://app1.contoso.com. You register App1 in Azure Active Directory (Azure AD) and publish App1 by using the Azure AD Application Proxy. You need to ensure that App1 appears in the My Apps portal for all the users. Solution: You modify User and Groups for App1. Does this meet the goal? A.Yes B.No Answer: A QUESTION 226 You create a social media application that users can use to upload images and other content. Users report that adult content is being posted in an area of the site that is accessible to and intended for young children. You need to automatically detect and flag potentially offensive content. The solution must not require any custom coding other than code to scan and evaluate images. What should you implement? A.Bing Visual Search B.Bing Image Search C.Custom Vision Search D.Computer Vision API Answer: D QUESTION 227 You have an Azure subscription named Subscription1. Subscription1 contains the resource groups in the following table. RG1 has a web app named WebApp1. WebApp1 is located in West Europe. You move WebApp1 to RG2. What is the effect of the move? A.The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1. B.The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1. C.The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1. D.The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1. Answer: D QUESTION 228 You have an Azure App Service API that allows users to upload documents to the cloud with a mobile device. A mobile app connects to the service by using REST API calls. When a new document is uploaded to the service, the service extracts the document metadata. Usage statistics for the app show significant increases in app usage. The extraction process is CPU-intensive. 
You plan to modify the API to use a queue. You need to ensure that the solution scales, handles request spikes, and reduces costs between request spikes. What should you do? A.Configure a CPU Optimized virtual machine (VM) and install the Web App service on the new instance. B.Configure a series of CPU Optimized virtual machine (VM) instances and install extraction logic to process a queue. C.Move the extraction logic into an Azure Function. Create a queue triggered function to process the queue. D.Configure Azure Container Service to retrieve items from a queue and run across a pool of virtual machine (VM) nodes using the extraction logic. Answer: C QUESTION 229 You have an Azure App Service named WebApp1. You plan to add a WebJob named WebJob1 to WebApp1. You need to ensure that WebJob1 is triggered every 15 minutes. What should you do? A.Change the Web.config file to include the 0*/15**** CRON expression B.From the application settings of WebApp1, add a default document named Settings.job. Add the 1-31 1-12 1-7 0*/15* CRON expression to the JOB file C.Add a file named Settings.job to the ZIP file that contains the WebJob script. Add the 0*/15**** CRON expression to the JOB file D.Create an Azure Automation account and add a schedule to the account. Set the recurrence for the schedule Answer: C QUESTION 230 You have an Azure App Service named WebApp1. You plan to add a WebJob named WebJob1 to WebApp1. You need to ensure that WebJob1 is triggered every 15 minutes. What should you do? A.Change the Web.config file to include the 1-31 1-12 1-7 0*/15* CRON expression B.From the properties of WebJob1, change the CRON expression to 0*/15****. C.Add a file named Settings.job to the ZIP file that contains the WebJob script. Add the 1-31 1-12 1-7 0*/15* CRON expression to the JOB file D.Create an Azure Automation account and add a schedule to the account. Set the recurrence for the schedule Answer: B QUESTION 231 You have an on-premises web app named App1 that is behind a firewall. The firewall blocks all incoming network traffic. You need to expose App1 to the internet via Azure. The solution must meet the following requirements: - Ensure that access to App1 requires authentication by using Azure. - Avoid deploying additional services and servers to the on-premises network. What should you use? A.Azure Application Gateway B.Azure Relay C.Azure Front Door Service D.Azure Active Directory (Azure AD) Application Proxy Answer: D QUESTION 232 Your company is developing an e-commerce Azure App Service Web App to support hundreds of restaurant locations around the world. You are designing the messaging solution architecture to support the e-commerce transactions and messages. The solution will include the following features: You need to design a solution for the Inventory Distribution feature. Which Azure service should you use? A.Azure Service Bus B.Azure Relay C.Azure Event Grid D.Azure Event Hub Answer: A QUESTION 233 You are responsible for mobile app development for a company. The company develops apps on IOS, and Android. You plan to integrate push notifications into every app. You need to be able to send users alerts from a backend server. Which two options can you use to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. 
A.Azure Web App B.Azure Mobile App Service C.Azure SQL Database D.Azure Notification Hubs E.a virtual machine Answer: BD QUESTION 234 Hotspot Question You need to design an authentication solution that will integrate on-premises Active Directory and Azure Active Directory (Azure AD). The solution must meet the following requirements: - Active Directory users must not be able to sign in to Azure AD-integrated apps outside of the sign-in hours configured in the Active Directory user accounts. - Active Directory users must authenticate by using multi-factor authentication (MFA) when they sign in to Azure AD-integrated apps. - Administrators must be able to obtain Azure AD-generated reports that list the Active Directory users who have leaked credentials. - The infrastructure required to implement and maintain the solution must be minimized. What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer: QUESTION 235 Hotspot Question You have an Azure subscription that contains the resources shown in the following table. You plan to deploy an Azure virtual machine that will have the following configurations: - Name: VM1 - Azure region: Central US - Image: Ubuntu Server 18.04 LTS - Operating system disk size: 1 TB - Virtual machine generation: Gen 2 - Operating system disk type: Standard SSD You need to protect VM1 by using Azure Disk Encryption and Azure Backup. On VM1, which configurations should you change? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer: 2021 Latest Braindump2go AZ-303 PDF and AZ-303 VCE Dumps Free Share: https://drive.google.com/drive/folders/1l4-Ncx3vdn9Ra2pN5d9Lnjv3pxbJpxZB?usp=sharing
(April-2021)Braindump2go SAA-C02 PDF and DP-201 VCE Dumps(Q616-Q636)
QUESTION 616 What should a solutions architect do to optimize utilization MOST oosl-elfectively? A.Enable auto scaling on the original Aurora Database B.Convert the original Aurora Database to Aurora parallel query C.Convert the original Aurora Database to Aurora global database D.Convert the original Aurora Database to Aurora Aurora serverless Answer: D QUESTION 617 A company's website handles millions of requests each day. and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency. When retrieving product details from the Amazon DynamoDB table? A.Set up a DynamoOB Accelerator (DAX) cluster. Route all read requests through DAX. B.Set up Amazon ElasliCache (or Redis between the DynamoOB table and the web application. Route all read requests through Redis. C.Set up Amazon ElasliCache for Memcached between the DynamoOB table and the web application. Route all read requests through Memcached. D.Set up Amazon DynamoOB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElasliCache. Answer: A QUESTION 618 A company needs to retain application log files for a critical application for 10years. The application team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month. Which storage option meets these requirements MOST cost-effectively? A.Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive B.Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive. C.Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive. D.Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive. Answer: B QUESTION 619 A gaming company is using Amazon DynamoDB to run a high-score leaderboard and record the game progress for users. The company is launching a new game that is expected to be active for years. The database activity at launch cannot be predicted; but it is expected to stabilize after 4 weeks. Currently, the company is using on-demand capacity mode for processing reads and writes on all DynamoDB tables. What is the MOST cost-effective way for the company to control the DynamoDB capacity during the new game launch? A.Use on-demand mode and purchase DynamoDB reserved capacity for the first 4 weeks of the game launch B.Use provisioned capacity mode, and purchase DynamoDB reserved capacity for the first 4 weeks of the game launch C.Use provisioned capacity mode for the game launch, switch back to on-demand mode after 4 weeks, and then purchase DynamoDB reserved capacity D.Use on-demand mode for the game launch, switch to provisioned capacity mode after 4 weeks and then purchase DynamoDB reserved capacity Answer: D QUESTION 620 A solutions architect is reviewing the cost of a company's scheduled nightly maintenance. The solutions architect notices that three Amazon EC2 instances are being run to perform nine scripted tasks that take less than 5 minutes each to complete. The scripts are all written in Python. Which action should the company take to optimize costs of the nightly maintenance? 
A.Consolidate the scripts from the three EC2 instances to run on one EC2 instance. B.Convert the scripts to AWS Lambda functions and schedule them with Amazon EventBridge (Amazon CloudWatch Events). C.Purchase a Compute Savings Plan for the running EC2 instances. D.Create a Spot Fleet to replace the running EC2 instances for executing the scripts. Answer: B QUESTION 621 A company runs an online media site, hosted on-premises. An employee posted a product review that contained videos and pictures. The review went viral and the company needs to handle the resulting spike in website traffic. What action would provide an immediate solution? A.Redesign the website to use Amazon API Gateway, and use AWS Lambda to deliver content B.Add server instances using Amazon EC2 and use Amazon Route 53 with a failover routing policy C.Serve the images and videos using an Amazon CloudFront distribution created using the news site as the origin D.Use Amazon ElasbCache for Redis for caching and reducing the load requests from the origin Answer: C QUESTION 622 A solutions architect is designing an elastic application that will have between 10 and 50 Amazon EC2 concurrent instances running depending on the load. Each instance must mount storage that will read and write to the same 50 GB folder. Which storage type meets the requirements? A.Amazon S3 B.Amazon Elastic File System (Amazon EFS) C.Amazon Amazon Elastic Block Store (Amazon EBS) volumes D.Amazon EC2 instance store Answer: B QUESTION 623 A company is using Amazon RDS for MySQL. The company disaster recovery requirements state that a near real time replica of the database must be maintained on premises. The company wants the data to be encrypted in transit/ Which solution meets these requirements? A.Use AWS Database Migration Service (AWS DMS) and AWS Direct Connect to migrate the data from AWS to on premises. B.Use MySQL replication to replicate from AWS to on premises over an IPsec VPN on top of an AWS Direct Connect Connection. C.Use AWS Data Pipeline to replicate from AWS to on premises over an IPsec VPN on top of an AWS Direct Connect Connection. D.Use the Amazon RDS Multi-Az Feature. Choose on premises as the failover availability zone over an IPsec vpn on top of an AWS Direct Connect Connection Answer: D QUESTION 624 A company stops a cluster of Amazon EC2 instances over a weekend. The costs decrease, but they do not drop to zero. Which resources could still be generating costs? A.Elastic IP address B.Data transfer out C.Regional data transfers D.Amazon Elastic Block Store (Amazon EBS) volumes E.AWS Auto Scaling Answer: AD QUESTION 625 A city has deployed a web application running on AmazonEC2 instances behind an Application Load Balancer (ALB). The Application's users have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The City needs a solution that requires minimal configuration changes and provides an audit trail for the DDoS source. Which solution meets these requirements? A.Enable an AWS WAF web ACL on the ALB and configure rules to block traffic from unknown sources. B.Subscribe to Amazon inspector. Engage the AWS DDoS Resource Team (DRT) to integrate migrating controls into the service. C.Subscribe to AWS shield advanced. Engage the AWS DDoS Response team (DRT) to integrate migrating controls into the service. D.Create an Amazon CloudFront distribution for the application and set the ALB as the origin. 
Enable an AWS WAF web ACL on the distribution and configure rules to block traffic from unknown sources. Answer: C QUESTION 626 A solutions architect is designing a new workload in which an AWS Lambda function will access an Amazon DynamoDB table. What is the MOST secure means of granting the Lambda function access to the DynamoDB labia? A.Create an IAM role with the necessary permissions to access the DynamoDB table. Assign the role to the Lambda function. B.Create a DynamoDB user name and password and give them to the developer to use in the Lambda function. C.Create an IAM user, and create access and secret keys for the user. Give the user the necessary permissions to access the DynarnoOB table. Have the developer use these keys to access the resources. D.Create an IAM role allowing access from AWS Lambda. Assign the role to the DynamoDB table Answer: A QUESTION 627 A company expects its user base to increase five times over one year. Its application is hosted in one region and uses an Amazon RDS for MySQL database, an Application Load Balance Amazon Elastic Container Service (Amazon ECS) to host the website and its microservices. Which design changes should a solutions architect recommend to support the expected growth? (Select TWO.) A.Move static files from Amazon ECS to Amazon S3 B.Use an Amazon Route 53 geolocation routing policy. C.Scale the environment based on real-time AWS CloudTrail logs. D.Create a dedicated Elastic Load Balancer for each microservice. E.Create RDS lead replicas and change the application to use these replicas. Answer: AE QUESTION 628 A company wants to run a static website served through Amazon CloudFront. What is an advantage of storing the website content in an Amazon S3 bucket instead of an Amazon Elastic Block Store (Amazon EBS) volume? A.S3 buckets are replicated globally, allowing for large scalability. EBS volumes are replicated only within an AWS Region. B.S3 is an origin for CloudFront. EBS volumes would need EC2 instances behind an Elastic Load Balancing load balancer to be an origin C.S3 buckets can be encrypted, allowing for secure storage of the web files. EBS volumes cannot be encrypted. D.S3 buckets support object-level read throttling, preventing abuse. EBS volumes do not provide object-level throttling. Answer: B QUESTION 629 A company has an application running on Amazon EC2 On-Demand Instances. The application does not scale, and the Instances run In one AWS Region. The company wants the flexibility to change the operating system from Windows to AWS Linux in the future. The company needs to reduce the cost of the instances without creating additional operational overhead or changes to the application. What should the company purchase lo meet these requirements MOST cost-effectively? A.Dedicated Hosts for the Instance type being used B.A Compute Savings Plan for the instance type being used C.An EC2 Instance Savings Plan (or the instance type being used D.Convertible Reserved Instances tor the instance type being used Answer: D QUESTION 630 A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances that are deployed across Availability Zones. What should a solution architect do to meet this requirement? A.Configure AWS Storage gateway in volume gateway mode. Mount the volume to each Windows instance. B.Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance. 
C.Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance. D.Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each instance to the volume. Mount the file system within the volume to each Windows instance. Answer: C QUESTION 631 A company has an application running as a service in Amazon Elastic Container Service (Amazon EC2) using the Amazon launch type. The application code makes AWS API calls to publish messages to Amazon Simple Queue Service (Amazon SQS). What is the MOST secure method of giving the application permission to publish messages to Amazon SQS? A.Use AWS identity and Access Management (IAM) to grant SQS permissions to the role used by the launch configuration for the Auto Scaling group of the ECS cluster. B.Create a new IAM user with SQS permissions. The update the task definition to declare the access key ID and secret access key as environment variables. C.Create a new IAM role with SQS permissions. The update the task definition to use this role for the task role setting. D.Update the security group used by the ECS cluster to allow access to Amazon SQS Answer: B QUESTION 632 53 latency -based routing to route requests to its UDP-based application tor users around the world the application is hosted on redundant servers in the company's on-premises data centers in the United States Asia, and Europe The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application. What should a solutions architect do to meet these requirements? A.Configure throe Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAML that points to the accelerator DNS. B.Configure three Application Load Balancers (ALGs) in the three AWS Regions to wireless the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAK1L that points to the accelerator UNS C.Configure three Network Load Balancers (NLOs) in the three AWS Regions to address the on-prernises endpoints in Route 53. Create latency-based record that points to the three NLBs. and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAML that points to the CloudFront DNS. D.Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on premises endpoint. in Route 53. Create a latency based record that points to the three ALUs and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAMF that points to the CloudFront DNS. Answer: C QUESTION 633 A company wants to launch a new application using Amazon Route 53, an Application Load Balancer (ALB), and an Amazon EC2 Auto Scaling group. The company is preparing to perform user experience testing and has a limited budget for this phase of the project. Although the company plans to do a load test in the future, it wants to prevent users from load testing at this time because it wants to limit unnecessary EC2 automatic scaling. What should a solutions architect do to minimize costs of the user experience testing? 
A.Configure AWS Shield's client request threshold to 100 connections per client. B.Deploy AWS WAF on the ALB with a rate-based rule configured to limit the number of requests each client can make. C.Configure the ALB with an advanced request routing policy to throttle the client connections being sent to the Auto Scaling group. D.Deploy Amazon Simple Queue Service (Amazon SQS) between the ALB and Auto Scaling group to queue client requests and change the Auto Scaling group maximum size to one. Answer: B QUESTION 634 An application launched on Amazon EC2 instances needs to publish personally identifiable information (PH) about customers using Amazon Simple Notification Service (Amazon SNS). The application is launched in private subnets within an Amazon VPC. What is the MOST secure way to allow the application to access service endpoints in the same AWS Region? A.Use an internet gateway B.Use AWS PrivateLink C.Use a NAT gateway. D.Use a proxy instance Answer: B QUESTION 635 A company fails an AWS security review conducted by a third party. The review finds that some of the company's methods to access the Amazon EMR API are not secure. Developers are using AWS Cloud9, and access keys are connecting to the Amazon EMR API through the public internet. Which combination of steps should the company take to MOST improve its security? (Select TWO) A.Set up a VPC peering connection to the Amazon EMR API B.Set up VPC endpoints to connect to the Amazon EMR API C.Set up a NAT gateway to connect to the Amazon EMR API. D.Set up IAM roles to be used to connect to the Amazon EMR API E.Set up each developer with AWS Secrets Manager to store access keys Answer: BD QUESTION 636 A company is rolling out a new web service, but is unsure how many customers the service will attract. However, the company is unwilling to accept any downtime. What could a solutions architect recommend to the company to keep? A.Amazon EC2 B.Amazon RDS C.AWS CtoudTrail D.Amazon DynamoDB Answer: B 2021 Latest Braindump2go SAA-C02 PDF and SAA-C02 VCE Dumps Free Share: https://drive.google.com/drive/folders/1_5IK3H_eM74C6AKwU7sKaLn1rrn8xTfm?usp=sharing
Last-Minute Gifting Ideas
Brainstorming special gifts for your loved ones can be difficult, especially at the very last minute! Many occasions call for gifting our loved ones or our partners, but we run out of ideas or there is very little time left. This article might prove useful and give you some inspiration for picking out last-minute gifts for your loved ones.

1) Perfume
I know it is a very common gift, but it is a no-brainer and actually very useful. Make sure it is a fragrance suitable for all occasions and most places they go, and that it lasts at least 8 to 10 hours. You can select from the wide range of perfume choices at this amazing alternative to eBay.

2) Personalized Cuff Links
Something personalized adds a personal touch that makes the gift all the more meaningful. Various websites design personalized cuff links with initials on them. Getting something personalized may not be possible at the last minute, but you can always look for other designs across the web too.

3) Electronic Gadgets
Gifting gadgets is a really cool option as well. If you know specifically what the other person wants, go for that. If they are into PlayStation or video games, you could gift them a PS5 if your budget allows. It would be a really cool gifting option.

4) AirPods or Headphones
If your loved ones are really into music, or love to have some music playing while working out or doing chores, this would be the best gift for them. There are various color options and models, so make sure you choose the best one!

5) Kindle
If your loved ones read a lot, this is the perfect option for them. Book lovers often run short of space to store their books; a Kindle takes up minimal space and stores a huge number of books.

6) Coffee Machine
So many people are immense coffee lovers, but it gets expensive to buy coffee every time, and time-consuming to brew it by hand. Gift them a coffee machine and they will be the happiest people on the planet.

7) Chocolates and Flowers
I know it is a very easy option, but at the last minute it can do wonders, and you can never go wrong with it. Ultimately it is the thought and feeling behind a gift that counts, isn't it? Pick out their favorite chocolates and an elegant flower arrangement and you're good to go.

8) Watch
I personally really like watches. If your loved one is a watch person, or if they don't own any watches yet, you can help them start a watch collection. You can find a wide variety of watches online as well as in nearby shops.

These are some last-minute options that never go wrong. I hope this helps you find a perfect gift for your loved ones at the last minute. So, get some amazing gifts for them!
(April-2021)Braindump2go AWS-Developer-Associate PDF and AWS-Developer-Associate VCE Dumps(Q680-Q693)
QUESTION 680
A developer is building an application that will run on Amazon EC2 instances. The application needs to connect to an Amazon DynamoDB table to read and write records. The security team must periodically rotate access keys.
Which approach will satisfy these requirements?

A.Create an IAM role with read and write access to the DynamoDB table. Generate access keys for the user and store the access keys in the application as environment variables.
B.Create an IAM user with read and write access to the DynamoDB table. Store the user name and password in the application and generate access keys using an AWS SDK.
C.Create an IAM role, configure read and write access for the DynamoDB table, and attach it to the EC2 instances.
D.Create an IAM user with read and write access to the DynamoDB table. Generate access keys for the user and store the access keys in the application as a credentials file.

Answer: C

QUESTION 681
A developer is monitoring an application running on an Amazon EC2 instance. The application accesses an Amazon DynamoDB table, and the developer has configured a custom Amazon CloudWatch metric with data granularity of 1 second. If there are any issues, the developer wants to be notified within 30 seconds using Amazon SNS.
Which CloudWatch mechanism will satisfy this requirement?

A.Configure a high-resolution CloudWatch alarm.
B.Set up a custom AWS Lambda CloudWatch log.
C.Use a CloudWatch stream.
D.Change to a default CloudWatch metric.

Answer: A

QUESTION 682
A developer is implementing authentication and authorization for an application. The developer needs to ensure that the user credentials are never exposed.
Which approach should the developer take to meet this requirement?

A.Store the user credentials in Amazon DynamoDB. Build an AWS Lambda function to validate the credentials and authorize users.
B.Deploy a custom authentication and authorization API on an Amazon EC2 instance. Store the user credentials in Amazon S3 and encrypt the credentials using Amazon S3 server-side encryption.
C.Use Amazon Cognito to configure a user pool, and use the Cognito API to authenticate and authorize the user.
D.Store the user credentials in Amazon RDS. Enable the encryption option for the Amazon RDS DB instances. Build an API using AWS Lambda to validate the credentials and authorize users.

Answer: C
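Question 681's high-resolution alarm is easy to demonstrate. A minimal sketch in Python (boto3); the namespace, metric name, threshold, and topic ARN are invented for illustration. The metric must be published with StorageResolution=1 for a 10-second alarm period to be meaningful.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the custom metric at 1-second granularity (high resolution).
cloudwatch.put_metric_data(
    Namespace="App/Orders",  # placeholder namespace
    MetricData=[{
        "MetricName": "ErrorCount",
        "Value": 1.0,
        "StorageResolution": 1,  # 1 = high-resolution metric
    }],
)

# High-resolution alarms accept periods of 10 or 30 seconds. A single
# 10-second evaluation period keeps notification well under 30 seconds.
cloudwatch.put_metric_alarm(
    AlarmName="orders-errors-high",
    Namespace="App/Orders",
    MetricName="ErrorCount",
    Statistic="Sum",
    Period=10,
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)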
QUESTION 683
A developer is building a new complex application on AWS. The application consists of multiple microservices hosted on Amazon EC2. The developer wants to determine which microservice adds the most latency while handling a request.
Which method should the developer use to make this determination?

A.Instrument each microservice request using the AWS X-Ray SDK. Examine the annotations associated with the requests.
B.Instrument each microservice request using the AWS X-Ray SDK. Examine the subsegments associated with the requests.
C.Instrument each microservice request using the AWS X-Ray SDK. Examine the Amazon CloudWatch EC2 instance metrics associated with the requests.
D.Instrument each microservice request using the Amazon CloudWatch SDK. Examine the CloudWatch EC2 instance metrics associated with the requests.

Answer: B

QUESTION 684
A company has a two-tier application running on an Amazon EC2 server that handles all of its AWS based e-commerce activity. During peak times, the backend servers that process orders are overloaded with requests. This results in some orders failing to process. A developer needs to create a solution that will re-factor the application.
Which steps will allow for more flexibility during peak times, while still remaining cost-effective? (Choose two.)

A.Increase the backend T2 EC2 instance sizes to x1 to handle the largest possible load throughout the year.
B.Implement an Amazon SQS queue to decouple the front-end and backend servers.
C.Use an Amazon SNS queue to decouple the front-end and backend servers.
D.Migrate the backend servers to on-premises and pull from an Amazon SNS queue.
E.Modify the backend servers to pull from an Amazon SQS queue.

Answer: BE

QUESTION 685
A developer is asked to integrate Amazon CloudWatch into an on-premises application.
How should the application access CloudWatch, according to AWS security best practices?

A.Configure AWS credentials in the application server with an AWS SDK.
B.Implement and proxy API calls through an EC2 instance.
C.Store IAM credentials in the source code to enable access.
D.Add the application server SSH key to AWS.

Answer: A

QUESTION 686
A company's new mobile app uses Amazon API Gateway. As the development team completes a new release of its APIs, a developer must safely and transparently roll out the API change.
What is the SIMPLEST solution for the developer to use for rolling out the new API version to a limited number of users through API Gateway?

A.Create a new API in API Gateway. Direct a portion of the traffic to the new API using an Amazon Route 53 weighted routing policy.
B.Validate the new API version and promote it to production during the window of lowest expected utilization.
C.Implement an Amazon CloudWatch alarm to trigger a rollback if the observed HTTP 500 status code rate exceeds a predetermined threshold.
D.Use the canary release deployment option in API Gateway. Direct a percentage of the API traffic using the canarySettings setting.

Answer: D

QUESTION 687
A developer must modify an Alexa skill backed by an AWS Lambda function to access an Amazon DynamoDB table in a second account. A role in the second account has been created with permissions to access the table.
How should the table be accessed?

A.Modify the Lambda function execution role's permissions to include the new role.
B.Change the Lambda function execution role to be the new role.
C.Assume the new role in the Lambda function when accessing the table.
D.Store the access key and the secret key for the new role and use them when accessing the table.

Answer: C

QUESTION 688
A developer is creating a new application that will be accessed by users through an API created using Amazon API Gateway. The users need to be authenticated by a third-party Security Assertion Markup Language (SAML) identity provider. Once authenticated, users will need access to other AWS services, such as Amazon S3 and Amazon DynamoDB.
How can these requirements be met?

A.Use an Amazon Cognito user pool with SAML as the resource server.
B.Use Amazon Cognito identity pools with a SAML identity provider as one of the authentication providers.
C.Use the AWS IAM service to provide the sign-up and sign-in functionality.
D.Use Amazon CloudFront signed URLs to connect with the SAML identity provider.

Answer: B
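The canarySettings mechanism from Question 686 looks like this in practice. A hedged sketch in Python (boto3); the API ID, stage name, and traffic percentage are placeholders.

import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version as a canary: 10% of traffic goes to the
# new deployment while 90% stays on the stage's current deployment.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    canarySettings={
        "percentTraffic": 10.0,
        "useStageCache": False,
    },
)

# Once the canary looks healthy, promote it by copying the canary
# deployment ID onto the stage, then remove the canary settings.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "copy", "from": "/canarySettings/deploymentId", "path": "/deploymentId"},
        {"op": "remove", "path": "/canarySettings"},
    ],
)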
QUESTION 689
A company processes incoming documents from an Amazon S3 bucket. Users upload documents to an S3 bucket using a web user interface. Upon receiving files in S3, an AWS Lambda function is invoked to process the files, but the Lambda function times out intermittently.
If the Lambda function is configured with the default settings, what will happen to the S3 event when there is a timeout exception?

A.Notification of a failed S3 event is sent as an email through Amazon SNS.
B.The S3 event is sent to the default Dead Letter Queue.
C.The S3 event is processed until it is successful.
D.The S3 event is discarded after the event is retried twice.

Answer: D

QUESTION 690
A developer has designed a customer-facing application that is running on an Amazon EC2 instance. The application logs every request made to it. The application usually runs seamlessly, but a spike in traffic generates several logs that cause the disk to fill up and eventually run out of space. Company policy requires old logs to be centralized for analysis.
Which long-term solution should the developer employ to prevent the issue from reoccurring?

A.Set up log rotation to rotate the file every day. Also set up log rotation to rotate after every 100 MB and compress the file.
B.Install the Amazon CloudWatch agent on the instance to send the logs to CloudWatch. Delete the logs from the instance once they are sent to CloudWatch.
C.Enable AWS Auto Scaling on Amazon Elastic Block Store (Amazon EBS) to automatically add volumes to the instance when it reaches a specified threshold.
D.Create an Amazon EventBridge (Amazon CloudWatch Events) rule to pull the logs from the instance. Configure the rule to delete the logs after they have been pulled.

Answer: B

QUESTION 691
A developer is creating a serverless web application and maintains different branches of code. The developer wants to avoid updating the Amazon API Gateway target endpoint each time a new code push is performed.
What solution would allow the developer to perform a code push efficiently, without the need to update the API Gateway?

A.Associate different AWS Lambda functions to an API Gateway target endpoint.
B.Create different stages in API Gateway, then associate API Gateway with AWS Lambda.
C.Create aliases and versions in AWS Lambda.
D.Tag the AWS Lambda functions with different names.

Answer: C

QUESTION 692
A developer wants to secure sensitive configuration data such as passwords, database strings, and application license codes. Access to this sensitive information must be tracked for future audit purposes.
Where should the sensitive information be stored, adhering to security best practices and operational requirements?

A.In an encrypted file on the source code bundle; grant the application access with Amazon IAM.
B.In the Amazon EC2 Systems Manager Parameter Store; grant the application access with IAM.
C.On an Amazon EBS encrypted volume; attach the volume to an Amazon EC2 instance to access the data.
D.As an object in an Amazon S3 bucket; grant an Amazon EC2 instance access with an IAM role.

Answer: B
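Question 692's Parameter Store pattern in code, as a minimal sketch (Python, boto3). The parameter name and value are placeholders; SecureString values are encrypted with the account's default AWS managed key unless a KeyId is given, and API access is recorded by AWS CloudTrail for auditing.

import boto3

ssm = boto3.client("ssm")

# Store the secret once, encrypted at rest.
ssm.put_parameter(
    Name="/myapp/prod/db-connection-string",  # placeholder name
    Value="postgresql://user:secret@host:5432/db",  # placeholder value
    Type="SecureString",
    Overwrite=True,
)

# The application reads it back at runtime with an IAM-scoped call.
param = ssm.get_parameter(
    Name="/myapp/prod/db-connection-string",
    WithDecryption=True,
)
connection_string = param["Parameter"]["Value"]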
QUESTION 693
A developer has built an application using Amazon Cognito for authentication and authorization. After a user is successfully logged in to the application, the application creates a user record in an Amazon DynamoDB table.
What is the correct flow to authenticate the user and create a record in the DynamoDB table?

A.Authenticate and get a token from an Amazon Cognito user pool. Use the token to access DynamoDB.
B.Authenticate and get a token from an Amazon Cognito identity pool. Use the token to access DynamoDB.
C.Authenticate and get a token from an Amazon Cognito user pool. Exchange the token for AWS credentials with an Amazon Cognito identity pool. Use the credentials to access DynamoDB.
D.Authenticate and get a token from an Amazon Cognito identity pool. Exchange the token for AWS credentials with an Amazon Cognito user pool. Use the credentials to access DynamoDB.

Answer: C

2021 Latest Braindump2go AWS-Developer-Associate PDF and VCE Dumps Free Share: https://drive.google.com/drive/folders/1dvoSqn8UfssZYMvGJJdAPW320Fvfpph3?usp=sharing
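To close out the set, Question 693's token-exchange flow can be traced end to end in Python (boto3). This is a hedged sketch: the pool IDs, client ID, user name, password, and table name are invented placeholders, and the user pool app client must allow the USER_PASSWORD_AUTH flow.

import boto3

REGION = "us-east-1"                         # placeholder
USER_POOL_ID = "us-east-1_EXAMPLE"           # placeholder
APP_CLIENT_ID = "exampleclientid123"         # placeholder
IDENTITY_POOL_ID = "us-east-1:example-guid"  # placeholder

idp = boto3.client("cognito-idp", region_name=REGION)
identity = boto3.client("cognito-identity", region_name=REGION)

# Step 1: authenticate against the user pool and receive tokens.
auth = idp.initiate_auth(
    ClientId=APP_CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "alice", "PASSWORD": "correct-horse-battery"},
)
id_token = auth["AuthenticationResult"]["IdToken"]

# Step 2: exchange the user pool token for AWS credentials via the identity pool.
provider = f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
identity_id = identity.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={provider: id_token},
)["IdentityId"]
creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={provider: id_token},
)["Credentials"]

# Step 3: use the temporary credentials to create the user record.
dynamodb = boto3.client(
    "dynamodb",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
dynamodb.put_item(
    TableName="Users",  # placeholder table
    Item={"user_id": {"S": identity_id}},
)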
If Life Is a Game, You Must Be a Top Gamer
Whether you are a professional gamer with high-end requirements, a casual gamer, or a streamer, this computer configuration will make sure you put your money to the best possible use. When you are spending that much money, there are numerous options to choose from, and we will help you make the selections.

Best Gaming Laptops

The components we have selected for this gaming computer will not only give you the best frame rates with remarkable graphics in today's games but will also stay competitive in the future. For the CPU, we have gone in favor of the blue team: the Intel Core i5 9400F is an ideal mid-range gaming processor. Although it is a very solid choice, there are worthy options from the red team as well. The AMD Ryzen 5 2600 is available in a similar price category, a touch more expensive. The reason we have chosen the i5 9400F over its Ryzen counterpart is its higher single-core performance: the Core i5 pulls ahead in single-core workloads, which makes it better for gaming. However, Ryzen CPUs are known to perform better in multicore situations, like video editing or rendering. If you are a content creator, you can take advantage of the 6 cores and 12 threads on the Ryzen 5 2600 versus the 6 cores and 6 threads on the i5 9400F; spending a little more money will benefit you if you can exploit the extra threads. As this PC is focused on gaming, we will go with the gaming king, Intel.

Acer Predator Helios 300
New Inspiron 15 7501 by Dell
ASUS ROG Zephyrus G14
Lenovo Legion Y7000 SE Laptop
Acer Nitro 5
HP Gaming Pavilion 15
Asus TUF Gaming A17
MSI GF65
M1 MacBook Air
Acer Predator Triton 300
(April-2021)Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps(Q88-Q113)
QUESTION 88
An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code.
A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application.
How should the data analyst meet this requirement while minimizing costs?

A.Store the contents of the mapping file in an Amazon DynamoDB table. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Change the SQL query in the application to include the new field in the SELECT statement.
B.Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the file's S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.
C.Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.
D.Store the contents of the mapping file in an Amazon DynamoDB table. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Forward the record from the Lambda function to the original application destination.

Answer: C

QUESTION 89
A company has collected more than 100 TB of log files in the last 24 months. The files are stored as raw text in a dedicated Amazon S3 bucket. Each object has a key of the form year-month-day_log_HHmmss.txt, where HHmmss represents the time the log file was initially created. A table was created in Amazon Athena that points to the S3 bucket. One-time queries are run against a subset of columns in the table several times an hour.
A data analyst must make changes to reduce the cost of running these queries. Management wants a solution with minimal maintenance overhead.
Which combination of steps should the data analyst take to meet these requirements? (Choose three.)

A.Convert the log files to Apache Avro format.
B.Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
C.Convert the log files to Apache Parquet format.
D.Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
E.Drop and recreate the table with the PARTITIONED BY clause. Run the ALTER TABLE ADD PARTITION statement.
F.Drop and recreate the table with the PARTITIONED BY clause. Run the MSCK REPAIR TABLE statement.

Answer: BCF
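Question 89's combination (Parquet, date= prefixes, MSCK REPAIR TABLE) can be exercised from Python (boto3). A hedged sketch; the database, table, and bucket names are placeholders, and the partition column is backquoted because date is a reserved word in the DDL.

import boto3

athena = boto3.client("athena")

def run(query: str) -> str:
    """Submit a query and return its execution ID."""
    return athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "logs_db"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},  # placeholder
    )["QueryExecutionId"]

# Recreate the table as partitioned Parquet over the re-prefixed objects.
run("""
CREATE EXTERNAL TABLE IF NOT EXISTS logs (
  message string
)
PARTITIONED BY (`date` string)
STORED AS PARQUET
LOCATION 's3://logs-bucket/'
""")

# Discovers every date=YYYY-MM-DD/ prefix and registers it as a partition.
run("MSCK REPAIR TABLE logs")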
QUESTION 90
A company has an application that ingests streaming data. The company needs to analyze this stream over a 5-minute timeframe to evaluate the stream for anomalies with Random Cut Forest (RCF) and summarize the current count of status codes. The source and summarized data should be persisted for future use.
Which approach would enable the desired outcome while keeping data persistence costs low?

A.Ingest the data stream with Amazon Kinesis Data Streams. Have an AWS Lambda consumer evaluate the stream, collect the number of status codes, and evaluate the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
B.Ingest the data stream with Amazon Kinesis Data Streams. Have a Kinesis Data Analytics application evaluate the stream over a 5-minute window using the RCF function and summarize the count of status codes. Persist the source and results to Amazon S3 through output delivery to Kinesis Data Firehose.
C.Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 1 minute or 1 MB in Amazon S3. Ensure Amazon S3 triggers an event to invoke an AWS Lambda consumer that evaluates the batch data, collects the number of status codes, and evaluates the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
D.Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 5 minutes or 1 MB into Amazon S3. Have a Kinesis Data Analytics application evaluate the stream over a 1-minute window using the RCF function and summarize the count of status codes. Persist the results to Amazon S3 through a Kinesis Data Analytics output to an AWS Lambda integration.

Answer: B

QUESTION 91
An online retailer needs to deploy a product sales reporting solution. The source data is exported from an external online transaction processing (OLTP) system for reporting. Roll-up data is calculated each day for the previous day's activities. The reporting system has the following requirements:
- Have the daily roll-up data readily available for 1 year.
- After 1 year, archive the daily roll-up data for occasional but immediate access.
- The source data exports stored in the reporting system must be retained for 5 years. Query access will be needed only for re-evaluation, which may occur within the first 90 days.
Which combination of actions will meet these requirements while keeping storage costs to a minimum? (Choose two.)

A.Store the source data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
B.Store the source data initially in the Amazon S3 Glacier storage class. Apply a lifecycle configuration that changes the storage class from Amazon S3 Glacier to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
C.Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 1 year after data creation.
D.Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) 1 year after data creation.
E.Store the daily roll-up data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier 1 year after data creation.

Answer: AD
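Answer A from Question 91 translates into a single lifecycle configuration. A minimal sketch (Python, boto3); the bucket and prefix are placeholders. The initial Standard-IA placement happens at upload time (StorageClass='STANDARD_IA' on the PUT), and the lifecycle rule handles the rest.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="reporting-source-data",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "source-data-retention",
            "Filter": {"Prefix": "exports/"},  # placeholder prefix
            "Status": "Enabled",
            "Transitions": [
                # After the 90-day query window closes, deep-archive the exports.
                {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # Delete 5 years after creation.
            "Expiration": {"Days": 1825},
        }],
    },
)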
QUESTION 92
A company needs to store objects containing log data in JSON format. The objects are generated by eight applications running in AWS. Six of the applications generate a total of 500 KiB of data per second, and two of the applications can generate up to 2 MiB of data per second.
A data engineer wants to implement a scalable solution to capture and store usage data in an Amazon S3 bucket. The usage data objects need to be reformatted, converted to .csv format, and then compressed before they are stored in Amazon S3. The company requires the solution to include the least custom code possible and has authorized the data engineer to request a service quota increase if needed.
Which solution meets these requirements?

A.Configure an Amazon Kinesis Data Firehose delivery stream for each application. Write AWS Lambda functions to read log data objects from the stream for each application. Have the function perform reformatting and .csv conversion. Enable compression on all the delivery streams.
B.Configure an Amazon Kinesis data stream with one shard per application. Write an AWS Lambda function to read usage data objects from the shards. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
C.Configure an Amazon Kinesis data stream for each application. Write an AWS Lambda function to read usage data objects from the stream for each application. Have the function perform .csv conversion, reformatting, and compression of the data. Have the function store the output in Amazon S3.
D.Store usage data objects in an Amazon DynamoDB table. Configure a DynamoDB stream to copy the objects to an S3 bucket. Configure an AWS Lambda function to be triggered when objects are written to the S3 bucket. Have the function convert the objects into .csv format.

Answer: A

QUESTION 93
A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing.
Which AWS Glue feature should the data analytics specialist use to meet this requirement?

A.Workflows
B.Triggers
C.Job bookmarks
D.Classifiers

Answer: C
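Enabling Question 93's job bookmarks is a one-line argument on the job definition. A hedged sketch (Python, boto3); the job name, role ARN, and script location are placeholders.

import boto3

glue = boto3.client("glue")

# With bookmarks enabled, each run processes only the S3 objects that
# arrived since the previous successful run.
glue.create_job(
    Name="incremental-ingest",  # placeholder name
    Role="arn:aws:iam::123456789012:role/GlueEtlRole",  # placeholder role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://etl-scripts/ingest.py",  # placeholder script
        "PythonVersion": "3",
    },
    DefaultArguments={
        "--job-bookmark-option": "job-bookmark-enable",
    },
    GlueVersion="2.0",
)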
QUESTION 94
A telecommunications company is looking for an anomaly-detection solution to identify fraudulent calls. The company currently uses Amazon Kinesis to stream voice call records in a JSON format from its on-premises database to Amazon S3. The existing dataset contains voice call records with 200 columns. To detect fraudulent calls, the solution would need to look at 5 of these columns only.
The company is interested in a cost-effective solution using AWS that requires minimal effort and experience in anomaly-detection algorithms.
Which solution meets these requirements?

A.Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon Athena to create a table with a subset of columns. Use Amazon QuickSight to visualize the data and then use Amazon QuickSight machine learning-powered anomaly detection.
B.Use Kinesis Data Firehose to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls and store the output in Amazon RDS. Use Amazon Athena to build a dataset and Amazon QuickSight to visualize the results.
C.Use an AWS Glue job to transform the data from JSON to Apache Parquet. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog. Use Amazon SageMaker to build an anomaly detection model that can detect fraudulent calls by ingesting data from Amazon S3.
D.Use Kinesis Data Analytics to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls. Connect Amazon QuickSight to Kinesis Data Analytics to visualize the anomaly scores.

Answer: A

QUESTION 95
An online retailer is rebuilding its inventory management system and inventory reordering system to automatically reorder products by using Amazon Kinesis Data Streams. The inventory management system uses the Kinesis Producer Library (KPL) to publish data to a stream. The inventory reordering system uses the Kinesis Client Library (KCL) to consume data from the stream. The stream has been configured to scale as needed. Just before production deployment, the retailer discovers that the inventory reordering system is receiving duplicated data.
Which factors could be causing the duplicated data? (Choose two.)

A.The producer has a network-related timeout.
B.The stream's value for the IteratorAgeMilliseconds metric is too high.
C.There was a change in the number of shards, record processors, or both.
D.The AggregationEnabled configuration property was set to true.
E.The max_records configuration property was set to a number that is too high.

Answer: AC

QUESTION 96
A large retailer has successfully migrated to an Amazon S3 data lake architecture. The company's marketing team is using Amazon Redshift and Amazon QuickSight to analyze data, and derive and visualize insights. To ensure the marketing team has the most up-to-date actionable information, a data analyst implements nightly refreshes of Amazon Redshift using terabytes of updates from the previous day.
After the first nightly refresh, users report that half of the most popular dashboards that had been running correctly before the refresh are now running much slower. Amazon CloudWatch does not show any alerts.
What is the MOST likely cause for the performance degradation?

A.The dashboards are suffering from inefficient SQL queries.
B.The cluster is undersized for the queries being run by the dashboards.
C.The nightly data refreshes are causing a lingering transaction that cannot be automatically closed by Amazon Redshift due to ongoing user workloads.
D.The nightly data refreshes left the dashboard tables in need of a vacuum operation that could not be automatically performed by Amazon Redshift due to ongoing user workloads.

Answer: D

QUESTION 97
A marketing company is storing its campaign response data in Amazon S3. A consistent set of sources has generated the data for each campaign. The data is saved into Amazon S3 as .csv files. A business analyst will use Amazon Athena to analyze each campaign's data. The company needs the cost of ongoing data analysis with Athena to be minimized.
Which combination of actions should a data analytics specialist take to meet these requirements? (Choose two.)

A.Convert the .csv files to Apache Parquet.
B.Convert the .csv files to Apache Avro.
C.Partition the data by campaign.
D.Partition the data by source.
E.Compress the .csv files.

Answer: AC
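The vacuum remedy behind Question 96 can be scripted against the cluster with the Redshift Data API, which avoids managing JDBC connections. A hedged sketch (Python, boto3); the cluster, database, user, and table names are placeholders.

import boto3

redshift_data = boto3.client("redshift-data")

common = {
    "ClusterIdentifier": "reporting-cluster",  # placeholder
    "Database": "analytics",                   # placeholder
    "DbUser": "admin",                         # placeholder; SecretArn also works
}

# Re-sort rows and reclaim the space left behind by the bulk refresh,
# then refresh the planner statistics.
redshift_data.execute_statement(Sql="VACUUM FULL sales_fact;", **common)
redshift_data.execute_statement(Sql="ANALYZE sales_fact;", **common)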
QUESTION 98
An online retail company is migrating its reporting system to AWS. The company's legacy system runs data processing on online transactions using a complex series of nested Apache Hive queries. Transactional data is exported from the online system to the reporting system several times a day. Schemas in the files are stable between updates.
A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To keep storage costs low, the data analyst decides to store the data in Amazon S3. It is vital that the data from the reports and associated analytics is completely up to date based on the data in Amazon S3.
Which solution meets these requirements?

A.Create an AWS Glue Data Catalog to manage the Hive metadata. Create an AWS Glue crawler over Amazon S3 that runs when data is refreshed to ensure that data changes are updated. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
B.Create an AWS Glue Data Catalog to manage the Hive metadata. Create an Amazon EMR cluster with consistent view enabled. Run emrfs sync before each analytics step to ensure data changes are updated. Create an EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
C.Create an Amazon Athena table with CREATE TABLE AS SELECT (CTAS) to ensure data is refreshed from underlying queries against the raw dataset. Create an AWS Glue Data Catalog to manage the Hive metadata over the CTAS table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
D.Use an S3 Select query to ensure that the data is properly updated. Create an AWS Glue Data Catalog to manage the Hive metadata over the S3 Select table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.

Answer: A

QUESTION 99
A media company is using Amazon QuickSight dashboards to visualize its national sales data. The dashboard is using a dataset with these fields: ID, date, time_zone, city, state, country, longitude, latitude, sales_volume, and number_of_items.
To modify ongoing campaigns, the company wants an interactive and intuitive visualization of which states across the country recorded a significantly lower sales volume compared to the national average.
Which addition to the company's QuickSight dashboard will meet this requirement?

A.A geospatial color-coded chart of sales volume data across the country.
B.A pivot table of sales volume data summed up at the state level.
C.A drill-down layer for state-level sales volume data.
D.A drill through to other dashboards containing state-level sales volume data.

Answer: A
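Answer A from Question 98 hinges on a crawler keeping the Glue Data Catalog (acting as the Hive metastore) in sync with Amazon S3. A minimal sketch (Python, boto3); the names, role, and schedule are illustrative assumptions.

import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="reporting-exports-crawler",  # placeholder name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="reporting",
    Targets={"S3Targets": [{"Path": "s3://reporting-exports/"}]},  # placeholder path
    Schedule="cron(0 */6 * * ? *)",  # or run it on demand after each export
)

# Run immediately after a data refresh so Hive queries in EMR see the new files.
glue.start_crawler(Name="reporting-exports-crawler")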
QUESTION 100
A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company's business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics.
The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team's goals with the least operational overhead.
Which solution meets these requirements?

A.Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process. Use the AWS Glue crawler to catalog the data in Amazon S3. Use an AWS Glue job to enrich and store the result in a separate S3 bucket in Apache Parquet format. Use Amazon Athena to query the data.
B.Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RDS. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3. Use Amazon Athena to query the data.
C.Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Create an Amazon Redshift cluster and use Amazon Redshift Spectrum to query the data.
D.Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database. Use an AWS Glue job to enrich the data and save the result to Amazon S3 in Apache Parquet format. Use Amazon Athena to query the data.

Answer: D

QUESTION 101
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds.
Which architecture meets these requirements?

A.Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
B.Use an AWS Lambda function to read from the Kinesis data stream to calculate the average per second and send the alarm to Amazon SNS.
C.Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
D.Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.

Answer: D

QUESTION 102
An IoT company wants to release a new device that will collect data to track sleep overnight on an intelligent mattress. Sensors will send data that will be uploaded to an Amazon S3 bucket. About 2 MB of data is generated each night for each bed. Data must be processed and summarized for each user, and the results need to be available as soon as possible. Part of the process consists of time windowing and other functions. Based on tests with a Python script, every run will require about 1 GB of memory and will complete within a couple of minutes.
Which solution will run the script in the MOST cost-effective way?

A.AWS Lambda with a Python script
B.AWS Glue with a Scala job
C.Amazon EMR with an Apache Spark script
D.AWS Glue with a PySpark job

Answer: A
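To make Question 102 concrete, the nightly 2 MB per-bed object fits comfortably in a Lambda invocation. A hedged sketch of such a handler in Python; the record layout, field names, and output prefix are invented for illustration.

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by the S3 upload event for each night's sensor file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        readings = json.loads(body)  # ~2 MB per bed per night

        # Invented summarization: average movement across the night.
        total = sum(r["movement"] for r in readings)
        summary = {
            "user_id": readings[0]["user_id"],
            "avg_movement": total / len(readings),
        }
        s3.put_object(
            Bucket=bucket,
            Key=f"summaries/{key}",
            Body=json.dumps(summary).encode(),
        )

At roughly 1 GB of memory for a couple of minutes per run, this stays inside Lambda's limits and is billed only for the runtime, which is what makes option A the cheapest choice.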
QUESTION 103
A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift.
The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when load spikes occur, locks can exist and data can be missed. Currently, the AWS Glue job is configured to run without retries, with a timeout of 5 minutes, and with concurrency of 1.
How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?

A.Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
B.Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
C.Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
D.Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.

Answer: A

QUESTION 104
A retail company leverages Amazon Athena for ad-hoc queries against an AWS Glue Data Catalog. The data analytics team manages the data catalog and data access for the company. The data analytics team wants to separate queries and manage the cost of running those queries by different workloads and teams. Ideally, the data analysts want to group the queries run by different users within a team, store the query results in individual Amazon S3 buckets specific to each team, and enforce cost constraints on the queries run against the Data Catalog.
Which solution meets these requirements?

A.Create IAM groups and resource tags for each team within the company. Set up IAM policies that control user access and actions on the Data Catalog resources.
B.Create Athena resource groups for each team within the company and assign users to these groups. Add S3 bucket names and other query configurations to the properties list for the resource groups.
C.Create Athena workgroups for each team within the company. Set up IAM workgroup policies that control user access and actions on the workgroup resources.
D.Create Athena query groups for each team within the company and assign users to the groups.

Answer: C

QUESTION 105
A manufacturing company uses Amazon S3 to store its data. The company wants to use AWS Lake Formation to provide granular-level security on those data assets. The data is in Apache Parquet format. The company has set a deadline for a consultant to build a data lake.
How should the consultant create the MOST cost-effective solution that meets these requirements?

A.Run Lake Formation blueprints to move the data to Lake Formation. Once Lake Formation has the data, apply permissions on Lake Formation.
B.To create the data catalog, run an AWS Glue crawler on the existing Parquet data. Register the Amazon S3 path and then apply permissions through Lake Formation to provide granular-level security.
C.Install Apache Ranger on an Amazon EC2 instance and integrate with Amazon EMR. Using Ranger policies, create role-based access control for the existing data assets in Amazon S3.
D.Create multiple IAM roles for different users and groups. Assign IAM roles to different data assets in Amazon S3 to create table-based and column-based access controls.

Answer: B

QUESTION 106
A company has an application that uses the Amazon Kinesis Client Library (KCL) to read records from a Kinesis data stream.
After a successful marketing campaign, the application experienced a significant increase in usage. As a result, a data analyst had to split some shards in the data stream. When the shards were split, the application started throwing ExpiredIteratorException errors sporadically.
What should the data analyst do to resolve this?

A.Increase the number of threads that process the stream records.
B.Increase the provisioned read capacity units assigned to the stream's Amazon DynamoDB table.
C.Increase the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.
D.Decrease the provisioned write capacity units assigned to the stream's Amazon DynamoDB table.

Answer: C
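Question 104's workgroups carry exactly the three controls the team wants: a per-team result bucket, enforced settings, and a per-query scan cap. A hedged sketch (Python, boto3); the workgroup name, bucket, and cutoff are illustrative.

import boto3

athena = boto3.client("athena")

athena.create_work_group(
    Name="marketing-team",  # placeholder name
    Configuration={
        "ResultConfiguration": {
            "OutputLocation": "s3://marketing-athena-results/",  # per-team bucket (placeholder)
        },
        "EnforceWorkGroupConfiguration": True,    # users cannot override these settings
        "PublishCloudWatchMetricsEnabled": True,  # per-workgroup usage metrics
        "BytesScannedCutoffPerQuery": 1099511627776,  # cancel queries that scan over 1 TiB
    },
    Description="Ad-hoc Athena queries for the marketing team",
)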
QUESTION 107
A company is building a service to monitor fleets of vehicles. The company collects IoT data from a device in each vehicle and loads the data into Amazon Redshift in near-real time. Fleet owners upload .csv files containing vehicle reference data into Amazon S3 at different times throughout the day. A nightly process loads the vehicle reference data from Amazon S3 into Amazon Redshift. The company joins the IoT data from the device and the vehicle reference data to power reporting and dashboards. Fleet owners are frustrated by waiting a day for the dashboards to update.
Which solution would provide the SHORTEST delay between uploading reference data to Amazon S3 and the change showing up in the owners' dashboards?

A.Use S3 event notifications to trigger an AWS Lambda function to copy the vehicle reference data into Amazon Redshift immediately when the reference data is uploaded to Amazon S3.
B.Create and schedule an AWS Glue Spark job to run every 5 minutes. The job inserts reference data into Amazon Redshift.
C.Send reference data to Amazon Kinesis Data Streams. Configure the Kinesis data stream to directly load the reference data into Amazon Redshift in real time.
D.Send the reference data to an Amazon Kinesis Data Firehose delivery stream. Configure Kinesis with a buffer interval of 60 seconds and to directly load the data into Amazon Redshift.

Answer: A

QUESTION 108
A company is migrating from an on-premises Apache Hadoop cluster to an Amazon EMR cluster. The cluster runs only during business hours. Due to a company requirement to avoid intraday cluster failures, the EMR cluster must be highly available. When the cluster is terminated at the end of each business day, the data must persist.
Which configurations would enable the EMR cluster to meet these requirements? (Choose three.)

A.EMR File System (EMRFS) for storage
B.Hadoop Distributed File System (HDFS) for storage
C.AWS Glue Data Catalog as the metastore for Apache Hive
D.MySQL database on the master node as the metastore for Apache Hive
E.Multiple master nodes in a single Availability Zone
F.Multiple master nodes in multiple Availability Zones

Answer: ACE
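Wiring up answer A from Question 107 starts with an S3 event notification. A minimal sketch (Python, boto3); the bucket, prefix, and function ARN are placeholders, and the Lambda function itself would issue the Redshift COPY for each new file.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="fleet-reference-data",  # placeholder bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:load-reference-data",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "reference/"},
                        {"Name": "suffix", "Value": ".csv"},
                    ],
                },
            },
        }],
    },
)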
QUESTION 109
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users.
The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query from at any point is 200 GB.
Which configuration will provide the MOST cost-effective solution that meets these requirements?

A.Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000 reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option.
B.Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source with a direct query option.
C.Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source and import the data into SPICE. Automatically refresh every 24 hours.
D.Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source and import the data into SPICE. Automatically refresh every 24 hours.

Answer: C

QUESTION 110
A central government organization is collecting events from various internal applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to separate the data. For security reasons, the Kafka cluster has been configured to only allow TLS encrypted data, and it encrypts the data at rest.
A recent application update showed that one of the applications was configured incorrectly, resulting in writing data to a Kafka topic that belongs to another application. This resulted in multiple errors in the analytics pipeline as data from different applications appeared on the same topic. After this incident, the organization wants to prevent applications from writing to a topic different than the one they should write to.
Which solution meets these requirements with the least amount of effort?

A.Create a different Amazon EC2 security group for each application. Configure each security group to have access to a specific topic in the Amazon MSK cluster. Attach the security group to each application based on the topic that the applications should read and write to.
B.Install Kafka Connect on each application instance and configure each Kafka Connect instance to write to a specific topic only.
C.Use Kafka ACLs and configure read and write permissions for each topic. Use the distinguished name of the clients' TLS certificates as the principal of the ACL.
D.Create a different Amazon EC2 security group for each application. Create an Amazon MSK cluster and Kafka topic for each application. Configure each security group to have access to the specific cluster.

Answer: C

QUESTION 111
A company wants to collect and process events data from different departments in near-real time. Before storing the data in Amazon S3, the company needs to clean the data by standardizing the format of the address and timestamp columns. The data varies in size based on the overall load at each particular point in time. A single data record can be 100 KB-10 MB.
How should a data analytics specialist design the solution for data ingestion?

A.Use Amazon Kinesis Data Streams. Configure a stream for the raw data. Use a Kinesis Agent to write data to the stream. Create an Amazon Kinesis Data Analytics application that reads data from the raw stream, cleanses it, and stores the output to Amazon S3.
B.Use Amazon Kinesis Data Firehose. Configure a Firehose delivery stream with a preprocessing AWS Lambda function for data cleansing. Use a Kinesis Agent to write data to the delivery stream. Configure Kinesis Data Firehose to deliver the data to Amazon S3.
C.Use Amazon Managed Streaming for Apache Kafka. Configure a topic for the raw data. Use a Kafka producer to write data to the topic. Create an application on Amazon EC2 that reads data from the topic by using the Apache Kafka consumer API, cleanses the data, and writes to Amazon S3.
D.Use Amazon Simple Queue Service (Amazon SQS). Configure an AWS Lambda function to read events from the SQS queue and upload the events to Amazon S3.

Answer: C
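Option B's preprocessing pattern from Question 111 is a standard Firehose idiom worth knowing, even though the 10 MB record size rules Firehose out here. A transformation Lambda must return records in the recordId/result/data contract; below is a hedged sketch in Python with invented cleansing logic.

import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Invented cleansing: normalize the address and timestamp fields.
        payload["address"] = payload.get("address", "").strip().upper()
        payload["timestamp"] = payload.get("timestamp", "").replace("/", "-")

        output.append({
            "recordId": record["recordId"],  # must echo the incoming ID
            "result": "Ok",                  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode()
            ).decode(),
        })
    return {"records": output}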
QUESTION 112
An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations.
Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: "Command Failed with Exit Code 1."
Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs crosses the safe threshold of 50% usage quickly and reaches 90-95% soon after. The average memory usage across all executors continues to be less than 4%. The data engineer also notices the following error while examining the related Amazon CloudWatch Logs.
What should the data engineer do to solve the failure in the MOST cost-effective way?

A.Change the worker type from Standard to G.2X.
B.Modify the AWS Glue ETL code to use the 'groupFiles': 'inPartition' feature.
C.Increase the fetch size setting by using AWS Glue DynamicFrames.
D.Modify maximum capacity to increase the total maximum data processing units (DPUs) used.

Answer: B

QUESTION 113
A transport company wants to track vehicular movements by capturing geolocation records. The records are 10 bytes in size, and up to 10,000 records are captured each second. Data transmission delays of a few minutes are acceptable, considering unreliable network conditions. The transport company decided to use Amazon Kinesis Data Streams to ingest the data. The company is looking for a reliable mechanism to send data to Kinesis Data Streams while maximizing the throughput efficiency of the Kinesis shards.
Which solution will meet the company's requirements?

A.Kinesis Agent
B.Kinesis Producer Library (KPL)
C.Kinesis Data Firehose
D.Kinesis SDK

Answer: B

2021 Latest Braindump2go DAS-C01 PDF and DAS-C01 VCE Dumps Free Share: https://drive.google.com/drive/folders/1WbSRm3ZlrRzjwyqX7auaqgEhLLzmD-2w?usp=sharing
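Finally, Question 112's fix is a small change in the Glue ETL script itself. A hedged sketch in Python using the Glue PySpark API; the bucket paths and the 128 MB group size are illustrative. Grouping many small files into larger in-partition batches keeps the per-file bookkeeping off the driver's heap, which matches the memory profile described in the question.

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the many small JSON files in grouped batches.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://raw-json-bucket/events/"],  # placeholder path
        "recurse": True,
        "groupFiles": "inPartition",
        "groupSize": "134217728",  # ~128 MB per group (bytes, as a string)
    },
    format="json",
)

# Write straight out as Parquet; no major transformations needed.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://curated-parquet-bucket/events/"},  # placeholder path
    format="parquet",
)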