The latest updated Microsoft DP-200 exam dumps and free DP-200 exam practice questions and answers! Lead4Pass has updated its Microsoft DP-200 dumps PDF and DP-200 dumps VCE: the DP-200 exam questions are updated and the answers have been corrected!
Get the full Microsoft DP-200 dumps from https://www.leads4pass.com/dp-200.html (VCE&PDF)

Latest DP-200 PDF for free

Share the Microsoft DP-200 dumps PDF for free: part of the Lead4Pass DP-200 dumps, collected on Google Drive and shared by Lead4Pass
https://drive.google.com/file/d/1yPwGMOV41sUNiuQQzX7YNc528kZSuoYk/

Latest Lead4Pass DP-200 YouTube video

Share the latest Microsoft DP-200 exam practice questions and answers for free from the Lead4Pass dumps, available online as a YouTube video

https://youtube.com/watch?v=W3ZDmADQY60

The latest updated Microsoft DP-200 exam practice questions and answers, shared free online as a practice test by Lead4Pass (Q1-Q12)

QUESTION 1
A company uses Microsoft Azure SQL Database to store sensitive company data. You encrypt the data and only allow
access to specified users from specified locations.
You must monitor data usage, and data copied from the system to prevent data leakage.
You need to configure Azure SQL Database to email a specific user when data leakage occurs.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions
to the answer area and arrange them in the correct order.
Select and Place:
[2021.1] lead4pass dp-200 practice test q1

Step 1: Enable Advanced Threat Protection.
Set up threat detection for your database in the Azure portal:
1. Launch the Azure portal at https://portal.azure.com.
2. Navigate to the configuration page of the Azure SQL Database server you want to protect. In the security settings,
select Advanced Data Security.
3. On the Advanced Data Security configuration page:
Enable advanced data security on the server.
In Threat Detection Settings, in the Send alerts to text box, provide the list of emails that should receive security alerts upon detection of anomalous database activities.

[2021.1] lead4pass dp-200 practice test q1-1

Step 2: Configure the service to send email alerts to [email protected]
Step 3: ...of type data exfiltration
The benefits of Advanced Threat Protection for Azure Storage include:
Detection of anomalous access and data exfiltration activities.
Security alerts are triggered when anomalies in activity occur: access from an unusual location, anonymous access,
access by an unusual application, data exfiltration, unexpected delete operations, access permission change, and so
on.
Admins can view these alerts via Azure Security Center and can also choose to be notified of each of them via email.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-threat-detection
https://www.helpnetsecurity.com/2019/04/04/microsoft-azure-security/

 

QUESTION 2
You develop a data ingestion process that will import data to a Microsoft Azure SQL Data Warehouse. The data to be
ingested resides in parquet files stored in an Azure Data Lake Gen 2 storage account.
You need to load the data from the Azure Data Lake Gen 2 storage account into the Azure SQL Data Warehouse.
Solution:
1.
Create an external data source pointing to the Azure storage account
2.
Create a workload group using the Azure storage account name as the pool name
3.
Load the data using the CREATE TABLE AS SELECT statement.
Does the solution meet the goal?
A. Yes
B. No
Correct Answer: B
Creating a workload group is not part of the load process. You need to create an external file format and an external table that uses the external data source, and then load the data by using the CREATE TABLE AS SELECT statement (see the sketch under Question 9).
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store

 

QUESTION 3
You plan to create a dimension table in Azure SQL Data Warehouse that will be less than 1 GB.
You need to create a table to meet the following requirements:
Provide the fastest query time.
Minimize data movement.
Which type of table should you use?
A. hash distributed
B. heap
C. replicated
D. round-robin
Correct Answer: D
Usually, common dimension tables or tables that don't distribute evenly are good candidates for round-robin distributed tables.
Note: Dimension tables or other lookup tables in a schema can usually be stored as round-robin tables. Usually, these tables connect to more than one fact table, and optimizing for one join may not be the best idea. Also, dimension tables are usually smaller, which can leave some distributions empty when hash distributed. Round-robin distribution, by definition, guarantees a uniform data distribution.
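For context, the distribution method is set in the WITH clause when the table is created. A minimal sketch of a round-robin dimension table (the table and column names are hypothetical, not from the exam scenario):

CREATE TABLE dbo.DimProduct
(
    ProductKey   INT            NOT NULL,
    ProductName  NVARCHAR(100)  NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,     -- round-robin distribution for a small dimension table
    CLUSTERED COLUMNSTORE INDEX
);

Swapping ROUND_ROBIN for REPLICATE or HASH(ProductKey) produces the other distribution types listed in the answer choices.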
References: https://blogs.msdn.microsoft.com/sqlcat/2015/08/11/choosing-hash-distributed-table-vs-round-robin-distributed-table-in-azure-sql-dw-service/

 

QUESTION 4
You plan to deploy an Azure Cosmos DB database that supports multi-master replication.
You need to select a consistency level for the database to meet the following requirements:
Provide a recovery point objective (RPO) of less than 15 minutes.
Provide a recovery time objective (RTO) of zero minutes.
What are three possible consistency levels that you can select? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Strong
B. Bounded Staleness
C. Eventual
D. Session
E. Consistent Prefix
Correct Answer: CDE
[2021.1] lead4pass dp-200 practice test q4

References: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-choosing

 

QUESTION 5
You have an Azure SQL data warehouse.
Using PolyBase, you create a table named [Ext].[Items] to query Parquet files stored in Azure Data Lake Storage Gen2
without importing the data to the data warehouse.
The external table has three columns.
You discover that the Parquet files have a fourth column named ItemID.
Which command should you run to add the ItemID column to the external table?
[2021.1] lead4pass dp-200 practice test q5

A. Option A
B. Option B
C. Option C
D. Option D
Correct Answer: A
Incorrect Answers:
B, D: Only these Data Definition Language (DDL) statements are allowed on external tables:
CREATE TABLE and DROP TABLE
CREATE STATISTICS and DROP STATISTICS
CREATE VIEW and DROP VIEW
Because ALTER TABLE is not allowed on an external table, the new column can only be exposed by dropping and re-creating the table, as sketched below.
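A rough sketch of that drop-and-recreate pattern; everything except the ItemID column name is a hypothetical placeholder that would come from the existing table definition:

DROP EXTERNAL TABLE [Ext].[Items];

CREATE EXTERNAL TABLE [Ext].[Items]
(
    -- the three original columns (hypothetical names)
    [ItemName]        NVARCHAR(50),
    [ItemType]        NVARCHAR(20),
    [ItemDescription] NVARCHAR(250),
    -- the newly discovered Parquet column
    [ItemID]          INT
)
WITH
(
    LOCATION = '/Items/',             -- hypothetical folder path in the data lake
    DATA_SOURCE = MyDataLakeSource,   -- hypothetical external data source
    FILE_FORMAT = MyParquetFormat     -- hypothetical Parquet file format
);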
References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql

 

QUESTION 6
You manage a financial computation data analysis process. Microsoft Azure virtual machines (VMs) run the process in
daily jobs, and store the results in virtual hard drives (VHDs).
The VMs produce results using data from the previous day and store the results in a snapshot of the VHD. When a new month begins, a process creates a new VHD.
You must implement the following data retention requirements:
Daily results must be kept for 90 days
Data for the current year must be available for weekly reports
Data from the previous 10 years must be stored for auditing purposes
Data required for an audit must be produced within 10 days of a request.
You need to enforce the data retention requirements while minimizing cost.
How should you configure the lifecycle policy? To answer, drag the appropriate JSON segments to the correct locations.
Each JSON segment may be used once, more than once, or not at all. You may need to drag the split bar between
panes or scroll to view content. NOTE: Each correct selection is worth one point.
Select and Place:
[2021.1] lead4pass dp-200 practice test q6

Correct Answer:

[2021.1] lead4pass dp-200 practice test q6-1

The Set-AzStorageAccountManagementPolicy cmdlet creates or modifies the management policy of an Azure Storage
account.
Example: Create or update the management policy of a Storage account with ManagementPolicy rule objects.

[2021.1] lead4pass dp-200 practice test q6-2

# Build a set of lifecycle actions: delete base blobs after 100 days, tier them to Archive
# after 50 days, tier them to Cool after 30 days, and delete snapshots after 100 days.
$action1 = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -daysAfterModificationGreaterThan 100
$action1 = Add-AzStorageAccountManagementPolicyAction -InputObject $action1 -BaseBlobAction TierToArchive -daysAfterModificationGreaterThan 50
$action1 = Add-AzStorageAccountManagementPolicyAction -InputObject $action1 -BaseBlobAction TierToCool -daysAfterModificationGreaterThan 30
$action1 = Add-AzStorageAccountManagementPolicyAction -InputObject $action1 -SnapshotAction Delete -daysAfterCreationGreaterThan 100
$filter1 = New-AzStorageAccountManagementPolicyFilter -PrefixMatch ab,cd
$rule1 = New-AzStorageAccountManagementPolicyRule -Name Test -Action $action1 -Filter $filter1

# A second action set that deletes base blobs after 100 days, with an empty filter.
# The resulting rules are then applied with the Set-AzStorageAccountManagementPolicy cmdlet.
$action2 = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -daysAfterModificationGreaterThan 100
$filter2 = New-AzStorageAccountManagementPolicyFilter
References:
https://docs.microsoft.com/en-us/powershell/module/az.storage/set-azstorageaccountmanagementpolicy

 

QUESTION 7
You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will contain the following
three workloads:
A workload for data engineers who will use Python and SQL
A workload for jobs that will run notebooks that use Python, Spark, Scala, and SQL
A workload that data scientists will use to perform ad hoc analysis in Scala and R
The enterprise architecture team at your company identifies the following standards for Databricks environments:
The data engineers must share a cluster.
The job cluster will be managed by using a request process whereby data scientists and data engineers provide
packaged notebooks for deployment to the cluster.
All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity.
Currently, there are three data scientists.
You need to create the Databricks clusters for the workloads.
Solution: You create a Standard cluster for each data scientist, a High Concurrency cluster for the data engineers, and a
High Concurrency cluster for the jobs.
Does this meet the goal?
A. Yes
B. No
Correct Answer: A
We need a High Concurrency cluster for the data engineers and the jobs.
Note:
Standard clusters are recommended for a single user. Standard can run workloads developed in any language: Python,
R, Scala, and SQL.
A high concurrency cluster is a managed cloud resource. The key benefits of high concurrency clusters are that they
provide Apache Spark-native fine-grained sharing for maximum resource utilization and minimum query latencies.
References:
https://docs.azuredatabricks.net/clusters/configure.html

 

QUESTION 8
You are the data engineer for your company. An application uses a NoSQL database to store data. The database uses the key-value and wide-column NoSQL database types.
Developers need to access data in the database using an API.
You need to determine which API to use for the database model and type.
Which two APIs should you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Table API
B. MongoDB API
C. Gremlin API
D. SQL API
E. Cassandra API
Correct Answer: BE
B: Azure Cosmos DB is the globally distributed, multimodel database service from Microsoft for mission-critical
applications. It is a multimodel database and supports document, key-value, graph, and columnar data models.
E: Wide-column stores store data together as columns instead of rows and are optimized for queries over large
datasets. The most popular are Cassandra and HBase.
References: https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction
https://www.mongodb.com/scale/types-of-nosql-databases

 

QUESTION 9
You develop a data ingestion process that will import data to a Microsoft Azure SQL Data Warehouse. The data to be
ingested resides in parquet files stored in an Azure Data Lake Gen 2 storage account.
You need to load the data from the Azure Data Lake Gen 2 storage account into the Azure SQL Data Warehouse.
Solution:
1.
Create an external data source pointing to the Azure storage account
2.
Create a workload group using the Azure storage account name as the pool name
3.
Load the data using the INSERT…SELECT statement.
Does the solution meet the goal?
A. Yes
B. No
Correct Answer: B
You need to create an external file format and external table using the external data source. You then load the data
using the CREATE TABLE AS SELECT statement.
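As a rough sketch of that sequence on a dedicated SQL pool, where every object name, the storage URI, and the credential are hypothetical placeholders:

-- 1. External data source pointing at the Data Lake Storage Gen2 account
--    (assumes a database scoped credential named MyStorageCredential already exists).
CREATE EXTERNAL DATA SOURCE MyDataLake
WITH
(
    TYPE = HADOOP,
    LOCATION = 'abfss://data@mystorageaccount.dfs.core.windows.net',
    CREDENTIAL = MyStorageCredential
);

-- 2. External file format describing the Parquet files.
CREATE EXTERNAL FILE FORMAT MyParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- 3. External table over the files, then load it into the warehouse with CTAS.
CREATE EXTERNAL TABLE dbo.SalesExternal
(
    SaleID INT,
    Amount DECIMAL(18, 2)
)
WITH
(
    LOCATION = '/sales/',
    DATA_SOURCE = MyDataLake,
    FILE_FORMAT = MyParquetFormat
);

CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM dbo.SalesExternal;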
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store

 

QUESTION 10
You are a data engineer. You are designing a Hadoop Distributed File System (HDFS) architecture. You plan to use
Microsoft Azure Data Lake as a data storage repository.
You must provision the repository with a resilient data schema. You need to ensure the resiliency of the Azure Data
Lake Storage. What should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
[2021.1] lead4pass dp-200 practice test q10

Correct Answer:

[2021.1] lead4pass dp-200 practice test q10-1

Explanation/Reference:
Box 1: NameNode
An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and
regulates access to files by clients.
Box 2: DataNode
The DataNodes are responsible for serving read and write requests from the file system's clients.

 

QUESTION 11
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains
a unique solution that might meet the stated goals. Some question sets might have more than one correct solution,
while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not
appear on the review screen.
You need to implement diagnostic logging for Data Warehouse monitoring.
Which log should you use?
A. RequestSteps
B. DmsWorkers
C. SqlRequests
D. ExecRequests
Correct Answer: C
Scenario:
The Azure SQL Data Warehouse cache must be monitored when the database is being used.
[2021.1] lead4pass dp-200 practice test q11
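For reference, the SqlRequests log corresponds to the sys.dm_pdw_sql_requests dynamic management view covered in the reference below; a minimal sketch of inspecting it directly (the column list is left as * so no specific schema is assumed):

-- Show the most recently assigned request steps recorded by the DMV.
SELECT TOP (10) *
FROM sys.dm_pdw_sql_requests
ORDER BY request_id DESC;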

References: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql

 

QUESTION 12
What should you include in the Data Factory pipeline for Race Central?
A. a copy activity that uses a stored procedure as a source
B. a copy activity that contains schema mappings
C. a delete activity that has logging enabled
D. a filter activity that has a condition
Correct Answer: B
Scenario:
An Azure Data Factory pipeline must be used to move data from Cosmos DB to SQL Database for Race Central. If the
data load takes longer than 20 minutes, configuration changes must be made to Data Factory.
The telemetry data is sent to a MongoDB database. A custom application then moves the data to databases in SQL
Server 2017. The telemetry data in MongoDB has more than 500 attributes. The application changes the attribute
names
when the data is moved to SQL Server 2017.
You can copy data to or from Azure Cosmos DB (SQL API) by using the Azure Data Factory pipeline.
Column mapping applies when copying data from source to sink. By default, the copy activity maps source data to the sink by column names. You can specify an explicit mapping to customize the column mapping based on your needs. More specifically, the copy activity:
Reads the data from the source and determines the source schema.
Uses the default column mapping to map columns by name, or applies the explicit column mapping if specified.
Writes the data to the sink.
References:
https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping


Fulldumps shares the latest updated Microsoft DP-200 exam practice questions, DP-200 dumps PDF, and YouTube video learning for free.
All exam questions and answers come from the shared part of the Lead4Pass exam dumps! Lead4Pass updates throughout the year and shares a portion of the exam questions for free to help you understand the exam content and enhance your exam experience!
Get the full Microsoft DP-200 exam dumps questions at https://www.leads4pass.com/dp-200.html (PDF & VCE)

PS.
Get free Microsoft DP-200 dumps PDF online: https://drive.google.com/file/d/1yPwGMOV41sUNiuQQzX7YNc528kZSuoYk/
