Build your big data platform with Galeo's data platform solution

Driven by the growing demand for cloud data platforms, Galeo has designed a solution built on Azure services that is easy to manage and scalable, complemented by open-source software for capabilities that are not widespread among cloud providers. The architecture is designed to support future challenges and use cases.

In this post we will detail the different components of the architecture, their functionalities and the technologies used.

1. Extraction from different sources

Within a data platform, the sources can be very diverse depending on each use case.

We can find events, which can be ingested in streaming from physical devices or from our own databases through change data capture (CDC). An event-driven architecture is based on the publish-subscribe principle: messages are sent once to a topic, and N subscribers can consume the same message. Producers and consumers are totally decoupled.
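
The publish-subscribe decoupling described above can be illustrated with a minimal pure-Python sketch (a toy in-memory topic, not the Azure Event Hub SDK; names are ours):

```python
class Topic:
    """Toy in-memory topic: one publish fans out to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # The producer does not know (or care) who consumes the message.
        for handler in self.subscribers:
            handler(message)

# Two decoupled consumers of the same topic receive the same event.
received_a, received_b = [], []
topic = Topic()
topic.subscribe(received_a.append)
topic.subscribe(received_b.append)
topic.publish({"device": "sensor-1", "temp": 21.5})
```

Either consumer can be added or removed without the producer changing at all, which is the core of the decoupling.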

We can also have data sources that belong more to the batch or micro-batch world, such as a massive extraction from a database, a CRM, SAP, etc… or any type of periodic API query.

2. Batch and streaming ingestion

2.1 Streaming Ingestion

The approach here depends mainly on the data source and the specific use case. For example, we can have streaming ingests from a CDC tool such as Debezium, an open-source project that lets us detect changes in a database and transmit them as events, as well as any other connector developed by Galeo or another provider.
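
As an illustration, a Debezium source connector is registered through a Kafka Connect configuration. A minimal, hypothetical sketch for a PostgreSQL source might look like the following (hostnames, credentials and table names are placeholders, and exact property names vary with the Debezium version):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db.internal.example",
    "database.port": "5432",
    "database.user": "cdc_user",
    "database.password": "********",
    "database.dbname": "inventory",
    "topic.prefix": "inventory",
    "table.include.list": "public.orders"
  }
}
```

Each change in `public.orders` is then published as an event to a topic, where any number of downstream consumers can pick it up.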

Azure Kubernetes Service (AKS) is chosen for this, as it provides us with:

  • Scalability
  • Application isolation
  • Deployment facilities
  • High availability
  • Ability to test new technologies outside the Azure environment

For events produced by IoT devices, our starting point would be Azure IoT Hub, thanks to its easy integration with other Azure resources such as Azure Event Hub, Azure Event Grid or Azure Logic Apps.

Once the initial streaming ingest is done, our events are passed to a topic-based message broker such as Azure Event Hub, which is:

  • Scalable, depending on performance and usage requirements.
  • Secure, as it protects data in real time.
  • Able to partition messages by key, so related events land in the same partition.
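
The idea behind partitioning by key can be sketched in a few lines of pure Python: a stable hash of the partition key maps every event with the same key to the same partition, preserving per-key ordering. (This is only an illustration; Event Hub uses its own internal hashing, and the partition count here is an arbitrary example.)

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative; real hubs are provisioned per use case

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a partition key to a stable partition index."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# All events for the same device land in the same partition,
# so per-device ordering is preserved.
p1 = partition_for("device-42")
p2 = partition_for("device-42")
```

Consumers can then process each partition independently and in order, which is what makes the model scale horizontally.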

2.2 Batch Ingestion

For the batch world, Databricks has been chosen for the vast majority of cases, motivated by:

  • The ease with which it can be managed
  • Acceleration of Spark development thanks to notebooks as a support tool
  • Its SQL engine, together with the possibility of working with Delta tables, solves the eternal problem of updating data within partitions in data lakes.
  • Scheduled workloads (Jobs) that run on dedicated clusters, which have a much lower cost than the interactive clusters used for development.
  • Reuse of the resource for different profiles (Data Engineers, SQL Analysts, ML Engineers).
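
Such scheduled workloads can be defined declaratively. A hedged sketch of a Databricks Jobs API payload might look like the following (the job name, notebook path, node type and runtime version are illustrative and depend on your workspace):

```json
{
  "name": "daily-batch-ingest",
  "schedule": {
    "quartz_cron_expression": "0 0 2 * * ?",
    "timezone_id": "Europe/Madrid"
  },
  "tasks": [
    {
      "task_key": "ingest",
      "notebook_task": { "notebook_path": "/Repos/ingest/daily_load" },
      "new_cluster": {
        "spark_version": "11.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2
      }
    }
  ]
}
```

The key cost point is `new_cluster`: the job cluster is created for the run and terminated afterwards, instead of keeping an interactive cluster up.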

There are cases, such as native Office 365 extractions or one-off data copy activities, where Azure Data Factory could also be used.

3. Lake House

It is the cornerstone of a data platform: this is where all the information converges, and where our ingestion processes deposit raw data to be read, transformed and processed into a final solution.

The Lake House allows us to store the following information:

  • Structured data (relational, SQL).
  • Semi-structured data (NoSQL).
  • Unstructured or binary data (documents, video, images).

Our choice within the Azure ecosystem has been the Azure Data Lake Storage Gen2 service. This resource provides us with many functionalities:

  • Hierarchical namespace: this difference compared to classic Blob Storage allows us to significantly optimize big data workloads in tools such as Hive, Spark, etc… In turn, the lower latency translates into lower cost.
  • Hadoop-compatible access: data can be accessed as if it were stored in a Hadoop Distributed File System (HDFS).
  • POSIX permissions: gives us the possibility to define a security model compatible with ACL and POSIX permissions at directory or file level.
  • Volume: it is capable of storing and processing large amounts of data, with low latency.

Azure Data Lake Storage Gen2, together with Databricks, gives us the possibility of having our Lake House with tables in Delta format. What advantages does this give us?

  • ACID (Atomic, Consistent, Isolated, Durable) transactions: guarantees consistency when multiple parties read or write data at the same time.
  • Schema Evolution: supports schema evolution and enforcement, including DW schema architectures such as star/snowflake.
  • Time Travel: versioning of tables. Like a code repository, users can access a version of their data each time the dataset changes.
  • Support for a variety of data types, from unstructured to structured data.
  • Support for BI tools: allows us to use tools directly on the source data.
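
The time-travel idea can be illustrated conceptually: each write commits a new immutable table version, and older versions remain queryable. A toy pure-Python sketch of the concept (not the Delta implementation):

```python
class VersionedTable:
    """Toy illustration of Delta-style time travel:
    every write creates a new immutable version of the table."""
    def __init__(self):
        self._versions = []

    def write(self, rows):
        # Commit a snapshot; earlier versions are never mutated.
        self._versions.append(list(rows))

    def read(self, version=None):
        # Default: latest version; otherwise "travel" to an older one.
        idx = -1 if version is None else version
        return self._versions[idx]

t = VersionedTable()
t.write([{"id": 1, "kpi": 10}])
t.write([{"id": 1, "kpi": 10}, {"id": 2, "kpi": 7}])
latest = t.read()
v0 = t.read(version=0)
```

On a real Delta table the equivalent query would be something like `SELECT * FROM my_table VERSION AS OF 0`.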

Since we will not need a shared metastore, we will use Databricks’ own Hive Metastore to persist the metadata from the Delta tables.

We will divide our data into three layers:

  • Bronze zone: raw data from both the streaming and batch parts converge here.
  • Silver zone: the tables are cleaned and processed to make them searchable (normalization process).
  • Gold zone: here we deposit the aggregated tables containing the calculation of the KPIs for each use case.
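
The bronze-silver-gold flow can be sketched with plain Python on a handful of toy records (in production this would be Spark over Delta tables; the field names and the KPI here are invented for illustration):

```python
# Bronze: raw events exactly as ingested, including bad records.
bronze = [
    {"device": "s1", "temp": "21.5"},
    {"device": "s1", "temp": "22.5"},
    {"device": "s2", "temp": None},      # unusable record
]

# Silver: clean and normalize types, dropping rows that cannot be used.
silver = [
    {"device": r["device"], "temp": float(r["temp"])}
    for r in bronze
    if r["temp"] is not None
]

# Gold: aggregated KPI per device (here, average temperature).
grouped = {}
for row in silver:
    grouped.setdefault(row["device"], []).append(row["temp"])
gold = {device: sum(v) / len(v) for device, v in grouped.items()}
```

Each layer is derived from the previous one, so any layer can be rebuilt from the raw bronze data if the logic changes.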

In addition to this, we will use the DBT tool, deployed on our AKS. DBT is a tool for organizing and documenting the transformations performed on our Lake House. The way it operates is as follows:

  • Each query is a model. For example, “SELECT A, B FROM LAKE_DATA”.
  • This model can be materialized in a table or a view.
  • Each table or view has an associated documentation block with the description of the query, a description of each field, and tests on the data.

In DBT, with Spark, we can have incremental models on the tables, either inserting new records (append) or overwriting them (insert_overwrite), on a single partition or on the entire table.

Using Delta tables, we have one more type of model, which can update old records and insert new ones at the same time (merge), based on a unique key (unique_key).
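
A sketch of what such an incremental merge model could look like in DBT (the source, table and column names are invented for illustration):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='event_id'
) }}

select event_id, device, temp, ingested_at
from {{ source('lake', 'bronze_events') }}
{% if is_incremental() %}
  -- only process rows newer than what is already in the target table
  where ingested_at > (select max(ingested_at) from {{ this }})
{% endif %}
```

On each run, rows whose `event_id` already exists in the target are updated, and new ones are inserted.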

4. Data warehouse

A data warehouse is a unified repository for data collected by a company’s various systems.

In our architecture we will use Snowflake as an auxiliary tool to the Lake House to load the data to be read from Power BI.

Databricks has a native Power BI connector for Delta tables, but it is not operational enough unless the cluster is constantly running: every data refresh has to wait for the cluster start-up time, and you must set a low auto-termination time so as not to generate overhead on every refresh.

There is also the possibility of setting up a SQL Endpoint, which has the same problem: we would still need a serverless data warehouse or keep the endpoint cluster always on, unless a serverless version of this feature is developed in Azure.

Snowflake, on the other hand, has the advantage of being serverless and data from our Delta tables can be easily transformed and loaded from Databricks.

5. Data storage

5.1 Low Latency & operational

For operations that require very low latency, with a large number of queries on specific data in real time, we use NoSQL databases.
Depending on the use case, we can use Redis Cache or CosmosDB.

What is Azure Cosmos DB?

It is a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scalability, global distribution and ease of development.

What is Redis?

It is an in-memory database that persists to disk. It is an advanced open-source key-value store, often referred to as a data structure server, since keys may contain strings, hashes, lists and other structures.

As mentioned, it depends on the specific use case. In terms of cost, Redis is usually cheaper than CosmosDB, especially when there are many transactions, since Redis bills by cache size and number of nodes, while CosmosDB bills by throughput (Request Units per second).

6. Visualization (Web App)

Depending on our use case, we can have a web service and a reporting tool such as Power BI. These two resources can coexist perfectly well: for example, a web service accessible only to certain users, where they can perform activities on other Azure resources through API calls, with a section for viewing the reports that come out of Power BI.

Our solution deploys the web part on our AKS, taking advantage of:

  • Hyper Scalability.
  • Full control over virtual machines.
  • Use of tools such as Apache Kudu, an open-source storage engine that provides low-latency reads along with efficient analytical access patterns on structured data; it can also be used in conjunction with Apache Spark to access data.
  • Low cost compared to Azure App Service.

To store the metadata and tokens of any of our applications, whether the web or any other resource that needs it, we will use a small PostgreSQL instance.

7. Catalog and data quality

It is becoming increasingly important to have a data catalog within a large platform. Data offers increasing opportunities for business strategies. Often, however, poor knowledge of the data and its lack of availability prevent users from taking full advantage of its value. A data catalog is intended to fill this gap.
We have chosen DataHub for this because of:

  • End-to-end search: ability to integrate with databases, datalakes, BI platforms, ETLs, etc…
  • Easy understanding of the data path from one end to the other thanks to its dashboards.
  • It provides dataset profiles to understand how that dataset has evolved over time.
  • Data Governance and access controls.
  • Platform usage analytics.

In addition, we will use Great Expectations to validate, document and profile our data. Great Expectations lets us automate tests, essentially unit tests for data, to detect data problems quickly. We can also create data documentation and quality reports based on these expectations. It is quite useful for monitoring ETLs that ingest data into a data lake or data warehouse.
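
The "unit tests for data" idea can be sketched in pure Python. The function names below echo Great Expectations' expectation names but this is a stdlib illustration of the concept, not the Great Expectations API, and the sample rows are invented:

```python
# A small batch of records to validate.
rows = [
    {"device": "s1", "temp": 21.5},
    {"device": "s2", "temp": 19.0},
]

def expect_column_values_not_null(rows, column):
    """Every record must have a value in the given column."""
    return all(r[column] is not None for r in rows)

def expect_column_values_between(rows, column, low, high):
    """Every value in the column must fall inside [low, high]."""
    return all(low <= r[column] <= high for r in rows)

ok_nulls = expect_column_values_not_null(rows, "device")
ok_range = expect_column_values_between(rows, "temp", -40, 60)
```

Running such checks on every ingest, and failing the pipeline when an expectation is not met, is what catches data problems before they reach the gold layer.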

These tools would be deployed in our AKS along with the ingest connectors, DBT, the PostgreSQL instance for metadata, the web application and any other open-source tools that need to be deployed in the future.

If you would like to know more about the solution or would like to see a demo, our experts will be happy to assist you.

Call us on +34 665 22 35 67