Wednesday, December 23, 2020

Pass Microsoft 70-767 Exam with 100% Passing Assurance | Realexamdumps.com


 Question #:1


Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have a Microsoft SQL Server data warehouse instance that supports several client applications. The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.

All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.

You have the following requirements:


- Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.

- Partition the Fact.Order table and retain a total of seven years of data.

- Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.

- Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.

- Incrementally load all tables in the database and ensure that all incremental changes are processed.

- Maximize the performance during the data loading process for the Fact.Order partition.

- Ensure that historical data remains online and available for querying.

- Reduce ongoing storage costs while maintaining query performance for current data.
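The monthly sliding window required for Fact.Ticket can be sketched in Python (the helper names and the choice of an 84-month window including the upcoming month are illustrative assumptions, not part of the exam scenario): at each month end, one boundary is added for the upcoming month and the month that falls out of the seven-year window is the one to archive and remove.

```python
from datetime import date

RETENTION_MONTHS = 7 * 12  # seven years of monthly partitions

def month_add(d: date, months: int) -> date:
    """Return the first day of the month `months` after d's month."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

def sliding_window(current_month: date):
    """At the end of current_month, return:
    - the new boundary to add for the upcoming month, and
    - the oldest month, now outside the 84-month window, to archive and remove.
    """
    new_boundary = month_add(current_month, 1)
    archive_month = month_add(current_month, 1 - RETENTION_MONTHS)
    return new_boundary, archive_month
```

For example, at the end of December 2020 the window adds a boundary for January 2021 and archives January 2014, keeping exactly 84 monthly partitions online.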

You are not permitted to make changes to the client applications.

You need to implement the data partitioning strategy.

How should you partition the Fact.Order table?


A. Create 17,520 partitions.

B. Use a granularity of two days.

C. Create 2,557 partitions.

D. Create 730 partitions.


Answer: C


Explanation


We create one partition for each day. Seven years times 365 days is 2,555; add two more partitions for the leap days that fall in the seven-year window, giving 2,557.

From scenario: Partition the Fact.Order table and retain a total of seven years of data.

Maximize the performance during the data loading process for the Fact.Order partition.
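The arithmetic behind answer C can be checked with a short script. The start date below is an assumption for illustration; any seven-year window that spans two leap days yields the same count.

```python
from datetime import date

def daily_partition_count(start: date, years: int) -> int:
    """Number of one-day partitions needed to cover `years` years from `start`."""
    end = date(start.year + years, start.month, start.day)
    return (end - start).days

# A seven-year window starting 2014-01-01 contains two leap days
# (2016-02-29 and 2020-02-29), so 7 * 365 + 2 = 2,557 daily partitions.
count = daily_partition_count(date(2014, 1, 1), 7)
```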


Question #:2


Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You are designing a data warehouse and the load process for the data warehouse.

You have a source system that contains two tables named Table1 and Table2. All the rows in each table have a corresponding row in the other table.

The primary key for Table1 is named Key1. The primary key for Table2 is named Key2.

You need to combine both tables into a single table named Table3 in the data warehouse. The solution must ensure that all the nonkey columns in Table1 and Table2 exist in Table3.

Which component should you use to load the data to the data warehouse?


A. the Slowly Changing Dimension transformation

B. the Conditional Split transformation

C. the Merge transformation

D. the Data Conversion transformation

E. an Execute SQL task

F. the Aggregate transformation

G. the Lookup transformation


Answer: G


Explanation


The Lookup transformation performs lookups by joining data in input columns with columns in a reference dataset. You use the lookup to access additional information in a related table that is based on values in common columns.

You can configure the Lookup transformation in the following ways:

- Specify joins between the input and the reference dataset.

- Add columns from the reference dataset to the Lookup transformation output.

Etc.
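The effect of a lookup-style join can be sketched in plain Python. The column names below are invented for illustration, and the sketch assumes Key1 values match Key2 values, which follows from the scenario's statement that every row in each table has a corresponding row in the other.

```python
# Minimal sketch of what the Lookup transformation does: each input row from
# Table1 is matched to its Table2 row through an index on the reference
# dataset, and the nonkey columns of both tables land in Table3.

table1 = [
    {"Key1": 1, "CustomerName": "Contoso"},
    {"Key1": 2, "CustomerName": "Fabrikam"},
]
table2 = [
    {"Key2": 1, "Region": "West"},
    {"Key2": 2, "Region": "East"},
]

# Build an index on the reference dataset's join key.
reference = {row["Key2"]: row for row in table2}

# For each input row, look up the matching reference row and
# combine the nonkey columns from both rows.
table3 = []
for row in table1:
    match = reference[row["Key1"]]
    combined = {k: v for k, v in row.items() if k != "Key1"}
    combined.update({k: v for k, v in match.items() if k != "Key2"})
    table3.append(combined)
```

Every nonkey column from both source tables appears in each Table3 row, which is exactly the requirement the Lookup transformation satisfies here.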

