The volume of data being generated has grown significantly, along with the multitude of sources contributing to it: IoT devices, social media, sales systems and internal business data.

This, together with the rise in computing power, the growing need for big data analytics and the spread of cloud computing, has rendered traditional data management practices inefficient.

Data warehousing has long been the standard approach to business analytics.

The upsurge in business data in recent years has made it imperative for organizations to move toward a more modern data architecture alongside the data warehouse.

Adopting a data system that not only stores and retrieves data more efficiently but also surfaces valuable insights faster has become a necessity, giving rise to data lake architecture.

What is a data lake?

A data lake is a centralized repository that can store both structured (processed) and unstructured (raw) data at any scale required.

As data flows in from multiple sources, a data lake provides centralized storage and prevents it from becoming siloed.

The organization can then apply various analytics techniques to derive data-driven insights and shape business practices.

Since data ownership and access are not limited to a select few within the business, a data lake promotes a culture of innovation and collaboration.

How is data lake architecture different from data warehousing?

A data warehouse stores structured business data in its processed form. This approach requires fairly rigid schemas for well-understood types of data.

While data warehouses are an important tool for enterprises to manage critical business data as a source for business intelligence, they don’t work well with unstructured data.

Data lakes allow the storage of raw data, both relational and non-relational, intended for use by data scientists and developers as well as business analysts.

They take data out of silos and make it accessible to all business users, promoting the centralization of data.

Here’s how a data lake differs from a data warehouse:

| Attribute | Data warehouse | Data lake |
| --- | --- | --- |
| Type of data | Structured data from sources like transactional systems and operational databases | Raw data from varied sources like websites, mobile apps, IoT devices and social media channels |
| Schema | Schema-on-write | Schema-on-read |
| Intended users | Primarily business analysts | Data scientists, developers and business analysts |
| Type of analytics | Business intelligence, visualization and batch reporting | Machine learning, predictive analytics, profiling and data discovery |
| Agility | Fixed configuration, less agile | Highly agile; can be configured and reconfigured as requirements change |
| Cost | Faster query results using higher-cost storage | Query results getting faster using low-cost storage |
| Security | More mature, secure storage | Higher accessibility makes ensuring security a challenge |
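
To make the schema distinction in the table concrete, here is a minimal Python sketch contrasting the two approaches; the record shapes and field names are invented for illustration.

```python
import json

# Schema-on-write (warehouse style): the schema is enforced when data is
# loaded, so non-conforming records are rejected up front.
WAREHOUSE_SCHEMA = {"order_id": int, "amount": float}

def load_into_warehouse(record: dict) -> dict:
    # Raises KeyError/ValueError if the record doesn't match the schema.
    return {col: typ(record[col]) for col, typ in WAREHOUSE_SCHEMA.items()}

# Schema-on-read (lake style): raw events are stored exactly as they arrive,
# and a schema is applied only when a consumer reads them.
raw_events = ['{"order_id": "17", "amount": "9.99", "referrer": "ad-campaign"}']

def read_with_schema(line: str) -> dict:
    rec = json.loads(line)
    return {"order_id": int(rec["order_id"]), "amount": float(rec["amount"])}

print(load_into_warehouse({"order_id": 17, "amount": 9.99}))
print(read_with_schema(raw_events[0]))  # extra fields remain in the raw store
```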

Benefits of moving to a data lake

1. Self-service

With a traditional data warehouse, the time lag between a request for data access and the delivery of reports is considerable.

Implementing data lake architecture revamps accessibility, allowing the various stakeholders to access the data store in real time and arrive at timely insights.

A well-designed data lake architecture and a well-trained business community act as facilitators of self-service business intelligence.

2. Computation speed

In a data lake, centralized storage is aimed at supporting use cases that are not yet known. Because there is no controlling structure, and none of the silos and rigid schemas that come with one, supporting new use cases is a straightforward task.

Data lakes often follow the lambda architecture, which splits processing into two paths: a batch layer and a speed layer. Raw data is stored in the batch layer, while the speed layer processes incoming data in real time, as sketched below.
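
A minimal, in-memory sketch of the two lambda paths; real deployments would use object storage for the batch layer and a stream processor for the speed layer.

```python
from collections import defaultdict

batch_store = []                    # batch layer: append-only raw events
realtime_view = defaultdict(float)  # speed layer: incrementally updated view

def ingest(event: dict) -> None:
    """Each event takes both paths: archived raw, and applied in real time."""
    batch_store.append(event)                        # batch path (raw history)
    realtime_view[event["user"]] += event["amount"]  # speed path (low latency)

def rebuild_batch_view() -> dict:
    """A periodic batch job recomputes the authoritative view from raw data."""
    view = defaultdict(float)
    for event in batch_store:
        view[event["user"]] += event["amount"]
    return view

ingest({"user": "alice", "amount": 3.0})
ingest({"user": "alice", "amount": 2.5})
print(realtime_view["alice"])         # 5.5, available immediately
print(rebuild_batch_view()["alice"])  # 5.5, recomputed from the raw history
```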

Data lake architecture allows new data consumption requests to be fulfilled faster owing to the centralization of enterprise data.

3. Innovation

Departmental data silos act as a barrier to organization-wide innovation. Data lake solutions open the door to data mining and exploratory analysis, paving the way toward enterprise innovation.

One of the innovations of the data lake is early ingestion and late processing: data is integrated and made available as soon as possible for reporting, analytics and operations, while transformation is deferred until a use case demands it.

A key component of the data lake is the analytical sandbox: an area of exploration where data scientists develop and test new hypotheses, explore data to form new use cases and build rapid prototypes, extracting maximum value from existing business data and opening new avenues for innovation.

4. Advanced analytics

Since data lakes contain all types of data, structured, semi-structured and unstructured, advanced analytics can readily be applied to them.

By applying techniques such as big data analytics, machine learning and predictive analytics, the organization can draw relevant inferences and make data-driven decisions.

Data lake architecture makes use of metadata, both business and technical, to determine data characteristics and support data-driven decisions.

This allows businesses to generate insights and reports on historical data, and to build machine learning models that forecast likely outcomes and prescribe actions for achieving the best results.

5. Cost-effective cataloging of data

Data lake architecture can be hosted on-premise or in the cloud. Switching to cloud services typically results in significant long-term cost savings for the organization.

Data lakes allow the storage of both relational and non-relational data. Relational data comes from business applications and operational databases, while non-relational data is derived from social media, web pages, mobile apps and IoT devices.

Data lake architecture delivers data-driven insights by crawling, cataloging and indexing data in a cost-effective manner.

While you can implement data lake architecture for your business with your internal IT teams, you can also hire a custom software development company to help.

How to implement data lake architecture for your business?

1. Design the data architecture

Designing the data lake architecture is critical to laying down a strong data foundation.

While this type of architecture aims to store as much data as possible in its raw form for an extended period of time, a lack of design planning can turn the lake into a data swamp.

The transformations in the data lake pattern need to be dynamic and scalable, and should evolve quickly to keep up with the demands of analytics consumers.

The underlying core storage needs to be free of a fixed schema and should decouple storage from compute, enabling each to scale independently.

Adopting cloud-based object storage for data lakes has significant advantages over legacy big data storage on Hadoop HDFS.

Major cloud providers, including Google Cloud, AWS and Azure, all offer cloud-based object storage capabilities. Cloud computing has proved to be of immense value in sectors such as healthcare, retail, finance and manufacturing.

The data in the raw layer also needs to be neatly organized to ensure fast data transactions. Placing metadata in the name of each object in the data lake, encoding important details about the data, is a recognized best practice for data storage, as sketched below.
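
As an illustration, here is a hedged sketch of this naming practice, assuming an S3-style object store accessed with boto3; the bucket name and key scheme are hypothetical.

```python
from datetime import date
import boto3

def build_key(source: str, dataset: str, day: date) -> str:
    """Encode source and date partitions directly in the object key so the
    raw layer stays browsable and consumers can prune by prefix."""
    return (f"raw/source={source}/dataset={dataset}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/"
            f"{dataset}_{day:%Y%m%d}.json")

s3 = boto3.client("s3")
key = build_key("pos", "sales", date(2020, 6, 15))
s3.put_object(Bucket="example-data-lake", Key=key, Body=b'{"sale": 42}')
```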

2. Choose the file format for data

Data lake architecture offers a huge amount of control over the specifics of data storage. Data lake engineers decide on an array of elements such as file sizes, block sizes, type of storage, indexing, schemas and degree of compression.

Hadoop ecosystem tools work well with large files that are an even multiple of the block size. A common file format for such data is Apache ORC, whose columnar layout allows queries to selectively read and decompress only the columns they need, letting organizations store petabytes of data economically. A small example follows.
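
A small sketch of writing and selectively reading ORC with pyarrow, assuming pyarrow is installed with ORC support; the file and column names are illustrative.

```python
import pyarrow as pa
import pyarrow.orc as orc

table = pa.table({
    "order_id": [1, 2, 3],
    "amount": [9.99, 24.50, 3.75],
    "region": ["east", "west", "east"],
})

orc.write_table(table, "orders.orc")  # columnar, compressed on disk

# The columnar layout lets a reader pull only the columns a query touches,
# skipping the rest of the file entirely.
amounts_only = orc.read_table("orders.orc", columns=["amount"])
print(amounts_only)
```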

If the same storage structure does not suit two different workloads, the low cost of data lake storage enables businesses to keep two copies of the same data in different formats.

3. Plan for security of the data

The increased accessibility of data in a data lake comes with a downside as well: increased susceptibility to threats.

This is why security planning for data stored within the data lake is of crucial importance. Ensuring the security of data requires three primary components: data encryption, network-level security and access control.

All major cloud providers offer basic encryption for storage, but the management of encryption keys needs careful consideration.

Encryption keys can either be created and managed by the cloud provider or generated by the customer on-premise. Data in transit also needs to be encrypted, which is readily handled through TLS/SSL certificates.
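
For instance, here is a minimal sketch of encrypting data at rest with a provider-managed key, assuming AWS S3 via boto3; the bucket, key path and key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Server-side encryption with a KMS key for data at rest.
s3.put_object(
    Bucket="example-data-lake",
    Key="raw/events/2020/06/15/events.json",
    Body=b'{"event": "example"}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/data-lake-key",  # or a customer-managed key ARN
)
# Data in transit is protected separately: the client talks to the store
# over HTTPS (TLS) by default, covering the transport leg.
```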

Inappropriate access paths at the network level need to be walled off using ACLs and CIDR block restrictions. Authentication and authorization of users also need to happen at the network level to enforce access control over the data.

Mapping the corporate identity infrastructure onto the permissions infrastructure enables fine-grained control over authorized operations, as in the example below.
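
A hedged example combining the network-level and identity-level controls just described, assuming an S3 bucket policy; the account ID, role, bucket and CIDR range are placeholders.

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AnalystsFromCorpNetworkOnly",
        "Effect": "Allow",
        # Identity mapping: only the corporate analyst role may read.
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/analyst"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-lake/curated/*",
        # Network restriction: requests must originate from the corporate CIDR.
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-data-lake", Policy=json.dumps(policy)
)
```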

4. Lay down the policies for data governance

Governance of the enterprise data lake needs to be consistent with the organization's policies and practices at large. Managing the usability, availability and security of the data relies on business policies as well as technical practices.

Data governance policies should ensure that all data entering the lake has associated metadata to facilitate cataloging and search.

Automating metadata creation for data across all storage levels is the key to consistent data storage that is free of human error; one approach is sketched below.
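
An illustrative sketch of automated cataloging, assuming AWS Glue serves as the metadata catalog; the crawler name, IAM role and paths are placeholders.

```python
import boto3

glue = boto3.client("glue")

# A crawler that scans the raw zone nightly and infers table schemas,
# so metadata is generated without manual entry.
glue.create_crawler(
    Name="raw-zone-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",
    DatabaseName="data_lake_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake/raw/"}]},
    Schedule="cron(0 2 * * ? *)",
)
glue.start_crawler(Name="raw-zone-crawler")
```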

Stringent data quality requirements covering the completeness, accuracy, consistency and standardization of data need to be in place to guide organizational decision making with data-driven insights. A simple automated check is sketched below.
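
A small sketch of an automated quality gate, assuming tabular data checked with pandas; the columns, rules and thresholds are invented for illustration.

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Score a batch against simple completeness, consistency and
    uniqueness rules before it is promoted past the raw zone."""
    return {
        "completeness": 1.0 - df["amount"].isna().mean(),   # non-null share
        "consistency": (df["amount"].fillna(0) >= 0).mean(), # domain rule
        "uniqueness": 1.0 - df["order_id"].duplicated().mean(),
    }

df = pd.DataFrame({"order_id": [1, 2, 2], "amount": [9.99, None, -1.0]})
print(quality_report(df))  # flag batches that fall below agreed thresholds
```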

5. Integration with existing data warehouse

Choosing between a data warehouse and a data lake isn’t really an either/or decision. Many organizations have already invested heavily in setting up a data warehouse.

Abandoning that investment to move to a data lake architecture is rarely financially viable. The good news is, you don’t have to.

Data lake architecture can integrate with existing data warehouses. Using tools such as Google BigQuery, Azure SQL Data Warehouse and Amazon Redshift, you can ingest a portion of the data from the lake into a column-store platform, as sketched below.
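
For example, here is a hedged sketch of loading a curated slice of the lake into BigQuery using its Python client; the URIs and table ID are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load only the curated slice of the lake that analysts query interactively.
load_job = client.load_table_from_uri(
    "gs://example-data-lake/curated/sales/*.parquet",
    "my-project.analytics.sales",
    job_config=job_config,
)
load_job.result()  # block until the load completes
```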

Data lakes and warehouses complement each other nicely. With a modern data architecture, organizations can continue to leverage their existing investments, adopt innovative data analytics techniques, and ultimately enable analysts and data scientists to obtain insights faster.



Author: Dr Vinati Kamani
Dr Vinati Kamani writes about emerging technology and its application across industries for Arkenea. Dr Kamani is a medical professional and has worked as a dental practitioner in her earlier roles. She is an avid reader and self-proclaimed bibliophile. When Vinati is not at her desk penning down articles or reading up on the recent trends, she can be found travelling to remote places and soaking up different cultural experiences.