Cognitive Class: DataOps Methodology Exam Answers

Are you looking for the DataOps Methodology exam answers from Cognitive Class? If so, this article lists all the questions asked in the Cognitive Class DataOps Methodology exam along with their answers. You can follow it to work through every question on the exam.

In the DataOps Methodology course, you will learn best practices for defining a repeatable, business-oriented framework for delivering trusted data. This course is part of the Data Engineering Specialization, which gives learners the foundational skills required to become a Data Engineer.

Trainer: Elaine Hanley
Organization: Cognitive Class
Eligibility: Anyone who wants to learn about DataOps
Level: Beginner
Language: English
Price: Free
Certificate: Yes
Course: DataOps Methodology (Cognitive Class)
  • Minimum pass mark: 70%
  • Review questions: 50% of the grade
  • Final exam: 50% of the grade
  • True/False questions: 1 attempt
  • Other questions: 2 attempts

Note: Use CTRL + F or Find In Page to find questions & answers.

Cognitive Class – DataOps Methodology Answers

Lesson 2 – Establish Data Strategy

1. Before we can put together a data strategy, we need to have a good understanding of the data available and how it is used in the organization.

  1. True
  2. False

2. What is a data strategy?

  1. An architecture and actionable roadmap along with an action plan
  2. A competitive publication to show that our organization is modern
  3. A plan to move all legacy data systems to the cloud

3. Implementing a data strategy should always result in cost savings in the year the plan is realized.

  1. True
  2. False  

4. Which of the following statements about Data Strategy are true?

  1. Whatever the type of data, it should only include internally produced data
  2. All types of data – both structured and unstructured – need to be considered
  3. Volumes of data have increased hugely, but are now starting to stabilize
  4. Only business executives should be consulted in putting together a strategy

5. Data Governance is a key part of executing a data strategy.

  1. True
  2. False

Lesson 3 – Establish Team

1. A DataOps team consists of members mostly from IT departments.

  1. True
  2. False

2. Which of the following roles are active team members of any DataOps team?

  1. Chief Technology Officer
  2. Chief Data Officer
  3. Data Engineer
  4. Database Administrator
  5. Data Steward
  6. Data Architect
  7. Data Scientist

3. Creating and maintaining business terms is a major responsibility of which of the following roles?

  1. Data Engineer
  2. Data Quality Analyst
  3. Data Steward
  4. Data Scientist

4. Only the Chief Data Officer can update the KPIs for a data sprint.

  1. True
  2. False

5. DataOps relies heavily on the use of automation, so that communication among team members is not necessary.

  1. True
  2. False

Lesson 1 – Establish Toolchain

1. DataOps toolchain helps you deliver quality data slowly.

  1. True
  2. False  

2. DataOps Toolchain and DevOps are the same thing.

  1. True
  2. False

3. DataOps Toolchain can work without DataOps API(s).

  1. True
  2. False

2. What are the key components of the DataOps Toolchain?

  1. Continuous Deployment
  2. Communication
  3. Source Control
  4. All of the above

5. Who is responsible for creating the DataOps Toolchain? (Choose all that apply)

  1. Data Scientist
  2. Administrator
  3. DBA
  4. Data Engineer

Lesson 2 – Establish Baseline

1. Data Management is the same as Information Governance.

  1. True
  2. False

2. What is the most costly result of an external influence on an organization?

  1. Data Breach Fines and Penalties
  2. Insurance Policy Payout
  3. Claim Settlement
  4. None of these

3. Reference data is defined as data used as a permissible value within a data field.

  1. True
  2. False

Lesson 3 – Establish Business Priorities

1. Business Priority should be the primary focus when deciding what the DataOps team should do.

  1. True
  2. False

2. What is a data backlog?

  1. A bottleneck in the data pipeline
  2. A list of all data sources
  3. A prioritized set of requirements expressed as data tasks
  4. A plan to move all data into a catalog

3. A prioritized data backlog will reduce the time taken to start the next DataOps iteration.

  1. True
  2. False

4. A Data Task should be prioritized by considering:

  1. The cost of providing the data
  2. The career advancement possibilities of solving business challenges
  3. The impact to sales from implementing the data pipeline
  4. All of the above

5. KPIs are used to determine the progress and throughput of a DataOps data sprint.

  1. True
  2. False

Lesson 1 – Discover

1. You will need someone on your team with detailed knowledge of the business processes you’re going to analyze, so that the selected data elements are appropriate for reaching your objectives.

  1. True
  2. False

2. What should you do if you identify gaps or mismatches in the data required for the analysis?

  1. Rethink how you will do the analysis with different data
  2. Create the missing data
  3. Find a new source for the missing or mismatched data
  4. All of the above

3. You should trace the lineage of data elements to be used for analysis to make sure they come from a trusted source.

  1. True
  2. False

4. What is the primary objective of the Discover phase?

  1. Decide what the analytics team wants to have for lunch
  2. Identify and locate the specific data elements required to accomplish an analysis
  3. Uncover the meaning of data column headers and how they relate to the underlying data
  4. Gain an understanding of the business goals and KPIs of an analysis effort

5. A Data Engineer who thoroughly understands where specific data resides, including the specific databases and files containing each identified data element, should be involved in the Data Discovery process.

  1. True
  2. False

Lesson 2 – Classify

1. Classification of each data element will make it easier going forward for users to distinguish the meaning and applicability of the data for their purposes.

  1. True
  2. False

2. Which description best defines taxonomy?

  1. Organizing data elements into meaningful structures
  2. An IBM network protocol which reduces network latency
  3. The art of preparing, stuffing, and mounting the skins of animals with lifelike effect

3. A single data element can be placed into an unlimited number of data domains.

  1. True
  2. False

4. Which of the following is the objective of classification?

  1. To bring out points of similarity and dissimilarity among various groups
  2. To present data in a simple, logical and understandable form
  3. To condense the mass of data
  4. All of the above

5. You should design workflows which are specific to the classification tool you are using.

  1. True
  2. False

Lesson 1 – Manage Qualities & Entities

1. Data quality is data accuracy.

  1. True
  2. False

2. All data across the enterprise should have the same data quality.

  1. True
  2. False

3. A data quality framework consists of which of the following 4 phases:

  1. Profile
  2. Define
  3. Remediate
  4. Monitor
  5. Assess
  6. Deploy 
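
For context on the data quality framework question above, here is a minimal, purely illustrative Python sketch of how activities such as profiling, defining rules, remediating, and monitoring can fit together in a data quality loop. The sample records, field names, and thresholds are invented for this example and are not part of the course material.

```python
# Illustrative only: a toy data quality loop in plain Python.
# The records, field names, and completeness thresholds are invented
# for this example and are not part of the course material.

records = [
    {"customer_id": "C001", "email": "a@example.com"},
    {"customer_id": "", "email": "b@example.com"},
    {"customer_id": "C003", "email": None},
]

# Profile: measure how complete each field is.
def completeness(rows, field):
    return sum(1 for r in rows if r.get(field)) / len(rows)

# Define: express quality rules as completeness thresholds.
RULES = {"customer_id": 1.0, "email": 0.9}

# Remediate: here, quarantine any row that breaks a rule.
def remediate(rows):
    good, quarantined = [], []
    for r in rows:
        (good if all(r.get(f) for f in RULES) else quarantined).append(r)
    return good, quarantined

# Monitor: re-check the profile against the defined thresholds.
def monitor(rows):
    return {f: completeness(rows, f) >= t for f, t in RULES.items()}

clean, bad = remediate(records)
print("quarantined rows:", bad)
print("rules satisfied after remediation:", monitor(clean))
```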

4. When assessing data quality, you only need the data set containing the data; metadata is optional.

  1. True
  2. False

Lesson 2 – Manage Policies

1. How does data classification affect defining policies?

  1. Inheritance, retention and probabilities
  2. Protection, reporting and inheritance
  3. Protection, accessibility and retention
  4. Retention, deletion and storage

2. What impact does a highly sensitive classification have on a policy definition?

  1. Require data anonymization, de-identification, and masking
  2. Limit access to the data and/or require data masking
  3. Limit access to the data and make it unprintable
  4. No impact
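
To illustrate how a sensitivity classification can drive a policy such as access restriction or data masking, here is a small, hypothetical Python sketch. The field names, masking style, and access check are assumptions made for illustration; real implementations enforce this through data protection rules in the governance tooling.

```python
# Illustrative only: restrict or mask fields classified as highly sensitive
# before a record is handed to a consumer. Field names, classifications,
# and the access check are invented for this example.

SENSITIVE_FIELDS = {"ssn", "credit_card"}

def mask(value: str) -> str:
    """Mask all but the last four characters of a value."""
    if len(value) <= 4:
        return "*" * len(value)
    return "*" * (len(value) - 4) + value[-4:]

def apply_policy(record: dict, has_business_need: bool) -> dict:
    if has_business_need:
        return record  # access permitted: return the record unchanged
    return {k: mask(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

row = {"name": "Ada", "ssn": "123-45-6789", "credit_card": "4111111111111111"}
print(apply_policy(row, has_business_need=False))
# {'name': 'Ada', 'ssn': '*******6789', 'credit_card': '************1111'}
```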

3. What are the most common state, country or regional regulations affecting personal information?

  1. SIN, SSN and BAN
  2. FDIC, BCBS and SOX
  3. CCPA, GDPR and LGPD
  4. PCI, PII and PHI

4. Once policies affecting the data have been defined, rules must be enforced to act on them.

  1. True
  2. False

Lesson 1 – Self Service

1. Self Service of data is only possible when any data movement and transformation required to join multiple data assets have been performed.

  1. True
  2. False

2. Self Service can use the following governance artefacts to refine a search in a catalog. (Choose all that apply)

  1. Data Protection Rules
  2. Business Terms
  3. Tags

3. A data consumer should not be able to access data that has been identified as sensitive, where there is not a business need to do so.

  1. True
  2. False

4. Which of the following statements about Self Service are true?

  1. Data consumers typically do not know how to manipulate the data
  2. Data Protection rules prevent a data consumer from inadvertently seeing data that is sensitive
  3. Creating multiple catalogs can partition data assets by their content and anticipated audience
  4. A data consumer needs to know SQL to join multiple data assets

5. Data Consumers provide valuable input to data scientists by clarifying the combination of data assets and how they need to be transformed, prior to data movement being designed and implemented.

  1. True
  2. False

Lesson 2 – Manage Movement & Integration

1. You should define the use case at the outset of a Data Movement and Integration project to support a “Build It and They Will Come” strategy.

  1. True
  2. False

2. Which of the following does not represent a data integration pattern?

  1. Data virtualization
  2. Data replication
  3. Data lineage
  4. Message-oriented movement
  5. Bulk/batch

3. Which of the following is not a Data Movement and Integration Job Design consideration?

  1. Design for reusability
  2. Deployment models (e.g. Containers, Kubernetes Orchestration, OpenShift)
  3. Design for parallel processing
  4. Everything should be programmed in Python
  5. Design for job portability (build once and run anywhere)

4. Hand coding generally provides a 10X productivity gain over commercial data integration software tooling.

  1. True
  2. False

5. Which of the following is not an example of a message queuing system?

  1. Kafka
  2. VSAM
  3. Microsoft Azure Queues
  4. GCP PubSub
  5. AWS Simple Queue Service
  6. MQ
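
As background for the question above: VSAM is an IBM mainframe file access method rather than a message queuing system, while Kafka, Azure Queues, GCP Pub/Sub, AWS SQS, and MQ all expose a producer/consumer messaging pattern. Below is a minimal sketch of that pattern using Python's standard-library queue module, purely for illustration; a real DataOps pipeline would use one of the brokers listed, and the message contents here are invented.

```python
# Illustrative only: the producer/consumer pattern offered by message
# queuing systems, shown here with Python's in-process queue module.
# The message contents are invented for this example.
import queue
import threading

q = queue.Queue()

def producer():
    for i in range(3):
        q.put({"event": "rows_loaded", "batch": i})  # publish a message
    q.put(None)  # sentinel: tells the consumer there are no more messages

def consumer():
    while True:
        msg = q.get()
        if msg is None:
            break
        print("processing", msg)

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```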

Lesson 3 – Improve/Complete

1. DataOps is a completely new methodology and doesn’t learn anything from Agile and DevOps.

  1. True
  2. False

2. Data consumers can first start to provide feedback to the current data sprint in the stakeholder review meeting.

  1. True
  2. False

3. Which of the following assets or artifacts could be found in a catalog?

  1. Code
  2. Business terms
  3. Data rules
  4. Source data
  5. Data lineage

4. All issues need to be remediated before moving on to the next data sprint.

  1. True
  2. False

5. Completing a data sprint involves publishing governed artifacts and data assets to a production environment.

  1. True
  2. False

Review and Refine DataOps

1. DataOps is a fixed process which should not be changed once defined.

  1. True
  2. False

2. Improvements to the DataOps process could involve changes to

  1. Technology used in DataOps
  2. DataOps team roles and responsibilities
  3. Processes for ETL
  4. All of the above  

3. Reviewing the Data classification phase involves reviewing how accurate the data mappings to the business terms are.

  1. True
  2. False

4. Reviewing the Establish Baseline Process should include reviewing how effective the processes are for establishing a baseline for –

  1. External Regulatory requirements
  2. Organization maturity and Readiness
  3. Governance and Oversight
  4. All of the above

5. KPIs are key in determining the effectiveness of all parts of the DataOps process.

  1. True
  2. False

DataOps Methodology Final Exam Answers

1. What is a data strategy?

  1. An architecture and actionable roadmap along with an action plan
  2. A competitive publication to show that our organization is modern
  3. A plan to move all legacy data systems to the cloud

2. Which of the following statements about Data Strategy are true?

  1. Whatever the type of data, it should only include internally produced data
  2. All types of data – both structured and unstructured – need to be considered
  3. Volumes of data have increased hugely, but are now starting to stabilize
  4. Only business executives should be consulted in putting together a strategy

3. Which of the following roles are active team members of any DataOps team?

  1. Chief Technology Officer
  2. Chief Data Officer
  3. Data Engineer
  4. Database Administrator
  5. Data Steward
  6. Data Architect
  7. Data Scientist

4. Creating and maintaining business terms is a major responsibility of which of the following roles?

  1. Data Engineer
  2. Data Quality Analyst
  3. Data Steward
  4. Data Scientist

5. Business Priority should be the primary focus when deciding what the DataOps team should do.

  1. True
  2. False

6. What is a data backlog?

  1. A bottleneck in the data pipeline
  2. A list of all data sources
  3. A prioritized set of requirements expressed as data tasks
  4. A plan to move all data into a catalog

7. A Data Task should be prioritized by considering:

  1. The cost of providing the data
  2. The career advancement possibilities of solving business challenges
  3. The impact to sales from implementing the data pipeline
  4. All of the above

8. KPIs are used to determine the progress and throughput of a DataOps data sprint.

  1. True
  2. False

9. What are the key components of the DataOps Toolchain?

  1. Continuous Deployment
  2. Communication
  3. Source Control
  4. All of the above

10. Who is responsible for creating the DataOps Toolchain? (Choose all that apply)

  1. Data Scientist
  2. Administrator
  3. DBA
  4. Data Engineer

11. What is the primary objective of the Discover phase?

  1. Decide what the analytics team wants to have for lunch.
  2. Identify and locate the specific data elements required to accomplish an analysis
  3. Uncover the meaning of data column headers and how they relate to the underlying data.
  4. Gain an understanding of the business goals and KPIs of an analysis effort.

12. Which description best defines taxonomy?

  1. Organizing data elements into meaningful structures.
  2. An IBM network protocol which reduces network latency.
  3. The art of preparing, stuffing, and mounting the skins of animals with lifelike effect.

13. Which of the following is the objective of classification?

  1. To bring out points of similarity and dissimilarity among various groups.
  2. To present data in a simple, logical and understandable form.
  3. To condense the mass of data.
  4. All of the above

14. A data quality framework consists of which of the following 4 phases:

  1. Profile
  2. Define
  3. Remediate
  4. Monitor
  5. Assess
  6. Deploy

15. How does data classification affect defining policies?

  1. Inheritance, retention and probabilities
  2. Protection, reporting and inheritance
  3. Protection, accessibility and retention
  4. Retention, deletion and storage

16. What impact does a highly sensitive classification have on a policy definition?

  1. Require data anonymization, de-identification, and masking
  2. Limit access to the data and/or require data masking
  3. Limit access to the data and make it unprintable
  4. No impact

17. Self Service can use the following governance artefacts to refine a search in a catalog. (Choose all that apply)

  1. Data Protection Rules
  2. Business Terms
  3. Tags

18. Which of the following statements about Self Service are true?

  1. A data consumer needs to know SQL to join multiple data assets
  2. Data Protection rules prevent a data consumer from inadvertently seeing data that is sensitive
  3. Creating multiple catalogs can partition data assets by their content and anticipated audience
  4. Data consumers typically do not know how to manipulate the data

19. Which of the following does not represent a data integration pattern?

  1. Data virtualization
  2. Data replication
  3. Data lineage
  4. Message-oriented movement
  5. Bulk/batch

20. Which of the following is not a Data Movement and Integration Job Design consideration?

  1. Design for reusability
  2. Deployment models (e.g. containers, Kubernetes orchestration, OpenShift)
  3. Design for parallel processing
  4. Everything should be programmed in Python
  5. Design for job portability (build once and run anywhere)

21. Data consumers can first start to provide feedback to the current data sprint in the stakeholder review meeting.

  1. True
  2. False

22. Which of the following could be found in a catalog?

  1. Code
  2. Business terms
  3. Data rules
  4. Source data
  5. Data lineage

23. All issues need to be remediated before moving on to the next data sprint.

  1. True
  2. False

24. Improvements to the DataOps process could involve changes to

  1. Technology used in DataOps
  2. DataOps team roles and responsibilities
  3. Processes for ETL
  4. All of the above

25. Reviewing the Establish Baseline Process should include reviewing how effective the processes are for establishing a baseline for –

  1. External Regulatory requirements
  2. Organization maturity and Readiness
  3. Governance and Oversight
  4. All of the above

Wrap Up

I hope this article helped you find all the “Cognitive Class: DataOps Methodology Quiz Answers.” If it helped you learn something new for free, share it on social media so others can find it, and check out the other free courses we have shared here.
