Simplifying Data Ingestion with Databricks Ingest

Webinar date: June 24, 2020


As part of our online learning series, our session on 24 June will walk through an introduction to Databricks Ingest, an easy and efficient way to bring all your data together from different sources into Delta Lake.

Data teams are looking for the most complete and recent data possible for data science, machine learning, and business analytics, but it can be difficult to reliably load this data from hundreds of different sources into a centralised data lake. Delta Lake is quickly becoming the open-source standard for building fast and reliable data lakes at scale.

To make it easier for Databricks users to access all their data in Delta Lake and to ingest data from third-party sources, Databricks has introduced a new feature, Auto Loader, and has partnered with a set of data ingestion products. This network of data ingestion partners has built native integrations with Databricks to ingest and store data in Delta Lake directly in your cloud storage.

Join us and learn how Databricks Ingest makes it easy to load data into Delta Lake from a variety of sources: applications like Salesforce, Marketo, Zendesk, SAP, and Google Analytics; databases and streaming systems like Kafka, Cassandra, Oracle, MySQL, and MongoDB; and file storage like Amazon S3, Azure Data Lake Storage, and Google Cloud Storage.

Capabilities include:

Data Ingestion Network: Leverage an ecosystem of partners like Fivetran, Qlik, Infoworks, StreamSets, and Syncsort to easily ingest data into Delta Lake from an easy-to-use partner gallery

Auto Loader: Continuously ingest data into your data lake from cloud storage like Amazon S3 or Azure Data Lake Storage, ensuring data recency without having to manually set up job triggers or schedules
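To give a flavour of what Auto Loader looks like in practice, here is a minimal PySpark sketch. It assumes you are running in a Databricks notebook (where `spark` is predefined); the bucket path, schema, and Delta table location are hypothetical placeholders, not part of the webinar content.

```python
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical schema for incoming JSON event files.
schema = StructType([
    StructField("id", StringType()),
    StructField("amount", DoubleType()),
])

# "cloudFiles" is Auto Loader's streaming source: it incrementally
# discovers new files in cloud storage, so no manual job triggers
# or polling schedules are needed.
events = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .schema(schema)
          .load("s3://my-bucket/events/"))  # placeholder source path

# Write the stream into a Delta table in your cloud storage.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/delta/events/_checkpoints")
       .start("/delta/events"))             # placeholder Delta path
```

The checkpoint location lets the stream resume exactly where it left off, so files are ingested once even across restarts.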


Space is limited!

RSVP now to save your spot.

