Before FEAF was introduced, government agencies performed their functions in disparate ways and stored and processed data in all kinds of formats. This made data sharing difficult and hampered coordination whenever agencies' paths crossed or quick cooperation was required between them.

FEAF aims to facilitate the shared development of common processes and information among government agencies working within the US federal structure.

In this post, you will learn about the Data Reference Model (DRM), as specified by the Federal Enterprise Architecture Framework (FEAF).


What is the Federal Enterprise Architecture Framework?

FEAF can be further partitioned into six reference models:

  • Business Reference Model (BRM): Answers questions like who does what, and how, when, and why.
  • Components Reference Model (CRM): The software/hardware components used to execute the BRM.
  • Technical Reference Model (TRM): The computer/communications technology and standards that support the layers above.
  • Data Reference Model (DRM): Defines standard ways to describe all the information/data that a government agency uses to execute its duties and programs.
  • Performance Reference Model (PRM): Defines standard ways to measure the value delivered by the above processes/architectures.
  • Security Reference Model (SRM): Defines how security should be established and maintained across organizations, both while data is stored and while it is in transit (shared).


Understanding the Data Reference Model

The Data Reference Model falls in the data architecture partition of FEAF.

A detailed technical discussion of all the aspects of the Data Reference Model would be quite lengthy and out of scope for this article, but you will learn its salient features and why they matter.

The Data Reference Model organizes and standardizes business information with a view to achieving uniformity, easing sharing, and avoiding ambiguity. It describes artifacts that can be generated from the data architectures of government agencies, and it promotes uniform data management practices.

The Data Reference Model aims to make data easy to describe, categorize, and share. Accordingly, it is divided into three standardization areas.

Data Reference Model: Data Description

Data Description provides a means to uniformly describe data, with the aim of easing its discovery and sharing. The goal is to ensure minimal or no discrepancies between data with similar context, that data has proper syntax and semantics, and that each data resource is meaningfully identified. It further ensures that one can quickly and accurately identify desired data, reuse data whenever needed, and harmonize data artifacts for easy comparison. Data Description also aims to eradicate ambiguities in data interpretation, ensure that data schemas are accompanied by their respective data dictionaries, and keep incremental changes consistent with existing standards.
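As a rough illustration (not part of the DRM specification), the sketch below pairs a hypothetical schema with a data dictionary, so that every field carries an agreed name, type, and definition; the field and class names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class FieldDescription:
    """One data-dictionary entry: syntax (type) plus semantics (definition)."""
    name: str
    data_type: str
    definition: str

# Hypothetical data dictionary accompanying a "CitizenFeedback" schema.
CITIZEN_FEEDBACK_DICTIONARY = [
    FieldDescription("feedback_id", "string", "Unique identifier of one submitted form"),
    FieldDescription("product_code", "string", "Agreed code of the product being reviewed"),
    FieldDescription("submitted_on", "date (ISO 8601)", "Date the form was submitted"),
    FieldDescription("rating", "integer 1-5", "Overall satisfaction score"),
]

def undefined_fields(record: dict) -> list[str]:
    """Flag fields not defined in the dictionary, so two agencies exchanging
    this data cannot silently diverge on field names or meanings."""
    known = {f.name for f in CITIZEN_FEEDBACK_DICTIONARY}
    return [key for key in record if key not in known]

if __name__ == "__main__":
    print(undefined_fields({"feedback_id": "F-1", "score": 4}))  # ['score'] is undefined
```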

Data Reference Model: Data Context

Data Context adds context and classification to data and data assets to facilitate their discovery. It uses well-defined taxonomies to enable the clear identification of data assets that pertain to a community of interest.
This layer defines some key concepts to execute its purpose, among them:

  • Data Asset: A managed container for data, e.g., a database or document repository.
  • Data Steward: A person responsible for managing a Data Asset.
  • Relationship: As the name suggests, a clearly defined association between two Topics.
  • Topic: A category within the taxonomy, often synonymous with "node". A Topic can further categorize a Data Asset, a Digital Data Resource, or a Query Point.

As an example, consider a stack of feedback forms filled in by users for different products.

The Data Asset is the stack of forms and the responses recorded for each question.

The CIO or a database administrator could be the Data Steward.

If a user can fill in multiple forms, there is a many-to-one relationship between feedback forms and users.

The Topics here could be the individual products or the feedback exercise as a whole. A minimal code sketch of these concepts follows.
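To make the vocabulary concrete, here is a minimal, hypothetical model of the Data Context concepts; the class and field names are invented for illustration and are not mandated by the DRM.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """A managed container for data, e.g. a database or document repository."""
    name: str
    steward: str  # the Data Steward responsible for this asset

@dataclass
class Topic:
    """A node in the taxonomy; it can categorize Data Assets or other Topics."""
    label: str
    assets: list[DataAsset] = field(default_factory=list)
    subtopics: list["Topic"] = field(default_factory=list)

# Hypothetical taxonomy for the feedback example above.
feedback_db = DataAsset("Product feedback database", steward="Agency CIO")
taxonomy = Topic("Customer Feedback", subtopics=[
    Topic("Product A", assets=[feedback_db]),
    Topic("Product B", assets=[feedback_db]),
])

def find_assets(topic: Topic, label: str) -> list[DataAsset]:
    """Discover assets by walking the taxonomy, the way Data Context
    is meant to support discovery by a community of interest."""
    if topic.label == label:
        return topic.assets
    return [a for t in topic.subtopics for a in find_assets(t, label)]

print([a.name for a in find_assets(taxonomy, "Product A")])
```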

Data Reference Model: Data Sharing

Once data has been properly described and contextualized, a well-defined protocol is needed to request data, grant access, and exchange data through fixed or recurring transactions between parties.

Without data sharing, no cooperation or communication is possible between agencies; this layer exists precisely for that function. From the Data Description layer it takes uniform definitions of Exchange Packages and Query Points, and from the Data Context layer it borrows the categorization of Exchange Packages and the discovery of Query Points.

On top of these, it provides architectural patterns and mechanisms to share data securely.

An important construct defined in this layer is the Data Supplier-to-Consumer Matrix, shown in the figure below; a rough sketch of such an exchange follows it.

[Figure: Data Supplier-to-Consumer Matrix]
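As a hedged illustration of the request/grant/exchange flow described above (not the DRM's normative protocol), the sketch below models a supplier answering a consumer's request through a named Query Point and returning an Exchange Package; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExchangePackage:
    """A unit of shared data: the payload plus the Query Point it answers."""
    query_point: str
    records: list[dict]

class Supplier:
    """A data supplier exposing one or more Query Points to consumers."""
    def __init__(self, query_points: dict[str, list[dict]], authorized: set[str]):
        self.query_points = query_points
        self.authorized = authorized  # consumers granted access

    def request(self, consumer: str, query_point: str) -> ExchangePackage:
        if consumer not in self.authorized:
            raise PermissionError(f"{consumer} is not authorized")  # access control
        return ExchangePackage(query_point, self.query_points[query_point])

# Hypothetical recurring transaction between two agencies.
supplier = Supplier({"feedback.summary": [{"product": "A", "rating": 4}]},
                    authorized={"agency-b"})
package = supplier.request("agency-b", "feedback.summary")
print(package.records)
```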


Guidelines, Goals, and Impacts of the Data Reference Model

The Data Reference Model prescribes guidelines for Extract, Transform, Load (ETL) processing, data publication, entity/relationship extraction (turning unstructured documents into structured documents or structured data objects), document translation, contextual and structural awareness, content search and discovery services, retrieval services, subscription and notification services, and more.
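As a small, hypothetical illustration of one of these guideline areas, the sketch below performs a naive entity extraction, turning a line of unstructured text into a structured data object; the pattern and field names are invented for the example and are far simpler than a real extraction service.

```python
import re
from dataclasses import dataclass

@dataclass
class GrantRecord:
    """Structured object extracted from an unstructured sentence."""
    agency: str
    amount_usd: int

# Naive pattern for sentences like "EPA awarded a grant of $50000".
PATTERN = re.compile(r"(?P<agency>[A-Z]{2,5}) awarded a grant of \$(?P<amount>\d+)")

def extract(text: str) -> list[GrantRecord]:
    return [GrantRecord(m["agency"], int(m["amount"]))
            for m in PATTERN.finditer(text)]

print(extract("In 2021 EPA awarded a grant of $50000 to a state lab."))
# [GrantRecord(agency='EPA', amount_usd=50000)]
```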

Further, the goals of the Data Reference Model are:

  • Increase the visibility and availability of data and related data artifacts.
  • Enhance and promote information sharing.
  • Facilitate harmonization within and across Communities of Interest (COIs) to form common data entities that support sharing.
  • Increase the relevance and reuse of data and data artifacts via consistent taxonomical classification.

Before you move ahead, note that the Data Reference Model is an abstract framework: it provides foundational principles for how things should be done and does not force any particular methodology, technology, or implementation on its followers.

Agencies that implement the Data Reference Model can choose among multiple implementation approaches, methodologies, and technologies based on their budgets and circumstances; that is allowed as long as they remain consistent with the Data Reference Model's foundational principles.

For example, for data exchange and transmission, agencies can choose to implement their part of the picture via SOAP (Simple Object Access Protocol, an XML-based messaging protocol) or plain XML (eXtensible Markup Language, standardized by the World Wide Web Consortium); as long as both sides stay consistent with the specs of the Data Reference Model, the information exchange and interplay can work smoothly.
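As a hedged example of the XML option, the snippet below serializes the hypothetical feedback record from earlier into a small XML exchange package using Python's standard library; the element names are invented, not mandated by the DRM.

```python
import xml.etree.ElementTree as ET

def to_exchange_package(query_point: str, records: list[dict]) -> str:
    """Serialize records into a hypothetical XML exchange package."""
    root = ET.Element("ExchangePackage", queryPoint=query_point)
    for record in records:
        rec_el = ET.SubElement(root, "Record")
        for key, value in record.items():
            ET.SubElement(rec_el, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(to_exchange_package("feedback.summary", [{"product": "A", "rating": 4}]))
# <ExchangePackage queryPoint="feedback.summary"><Record>...</Record></ExchangePackage>
```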

[Figure: The Data Reference Model]

Below is how the three layers of the Data Reference Model influence the processes within an agency.

[Figure: The Data Reference Model within an agency's data strategy]

Security and Privacy in the Data Reference Model

All three functional areas of the Data Reference Model are subject to security and privacy considerations.

Security here defines the methods that protect data from unauthorized access, disclosure, modification, deletion, and so on; it applies to data at rest as well as data in transit.

Privacy here means divulging only the minimum necessary and permitted information when data is collected, transmitted, or stored. The Data Reference Model addresses not only individual privacy but also moral and ethical concerns.

Different standards and pieces of legislation are prescribed by the federal and state governments for different types of data and usage. The Data Reference Model complies with the following policies and legislation:

  • National Institute of Standards and Technology (NIST) FIPS: FIPS 199 provides standards for the security categorization of federal information and information systems (a rough sketch of this categorization follows this list).
  • Federal Information Security Management Act (FISMA): Part of the e-Governance act, FISMA  provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources 
  • OMB Circular A-11 (Section 31-8): This Circular specifies the management improvement initiatives and policies to be implemented by agencies to ensure security and privacy. 
  • E-Government Act of 2002: Among other things, this act mandates that OMB issue guidance to agencies on implementing the privacy provisions of the E-Government Act 
  • NIST SP 800-60: Guides the mapping of types of information and information systems to the security categories defined in FIPS 199.
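To illustrate the FIPS 199 idea referenced above, the sketch below computes a system's overall security category as the high water mark of the impact levels assigned to confidentiality, integrity, and availability; the example system and its ratings are hypothetical.

```python
# FIPS 199 assigns each security objective (confidentiality, integrity,
# availability) an impact level; the system's overall category is the
# highest of the three ("high water mark").
IMPACT_ORDER = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def security_category(confidentiality: str, integrity: str, availability: str) -> str:
    return max((confidentiality, integrity, availability),
               key=lambda level: IMPACT_ORDER[level])

# Hypothetical agency system: sensitive contents, modest availability needs.
print(security_category("HIGH", "MODERATE", "LOW"))  # HIGH
```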

Conclusion

You have learned what the Data Reference Model (DRM), a part of FEAF, is, along with some of its constituents and salient features, and how it enables different government agencies to cooperate and share relevant data, speeding up processes and saving taxpayers' money.

You have also seen how it interoperates with other federal policies and standards to ensure uniform data storage and sharing, while keeping security and privacy as paramount concerns.

However, as a Developer, extracting complex data from a diverse set of data sources like Databases, CRMs, Project Management Tools, Streaming Services, and Marketing Platforms into your Database can seem quite challenging. If you are from a non-technical background or are new to the game of data warehousing and analytics, Hevo Data can help!

Visit our Website to Explore Hevo

Hevo Data will automate your data transfer process, allowing you to focus on other aspects of your business like Analytics, Customer Management, etc. The platform allows you to transfer data from 100+ sources to Cloud-based Data Warehouses like Snowflake, Google BigQuery, and Amazon Redshift. It will provide you with a hassle-free experience and make your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Pratik Dwivedi
Freelance Technical Content Writer, Hevo Data

Pratik writes about various topics related to the data industry and loves creating engaging content on data analytics, machine learning, AI, big data, and business intelligence.
