Best Practices for Building a Fast and Reliable IoT Data Pipeline
The amount of data produced by IoT is expected to reach 4.4 zettabytes by 2020, up from just 0.1 zettabytes in 2013. But the fundamental promise of IoT is swift, data-driven decision-making, and all of this data is only valuable if it can be analyzed. Enterprises need to collect data from many IoT devices and store it in a data lake, with the ultimate goal of analyzing it and gaining insights from it. Sounds simple, right?
Unfortunately, setting up a fast and reliable data pipeline that enables enterprises to obtain value from their IoT data can be overwhelmingly complex and costly. Join us to learn from subject matter experts from Microsoft, Software AG and Dremio as we explore these challenges and best practices for addressing them.
Chris Furlong, Director of Product Management & Strategy, Software AG
Jeff King, Sr. Program Manager, Microsoft
Ryan Murray, Principal Consulting Engineer, Dremio
What you will learn:
- Strategies for building a scalable and cost-effective data lake architected for large-scale analytics
- Best practices for storing data emitted from IoT devices in a highly efficient format that's suitable for analytical queries
- How to run ad-hoc queries as well as more sophisticated analytical queries directly against IoT data stored in the data lake
- How to build a data pipeline that empowers data scientists to aggregate and analyze IoT and business data from multiple sources for maximum insight
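To make the last two points concrete, here is a minimal sketch of running an ad-hoc analytical query against IoT telemetry. The sample events are hypothetical, and SQLite stands in for a data-lake query engine such as Dremio; the pattern of landing raw device events in a queryable store and aggregating with SQL is what the webinar covers at scale.

```python
import json
import sqlite3

# Hypothetical raw telemetry events (JSON lines), as they might
# arrive from devices and land in a data lake.
raw_events = [
    '{"device_id": "sensor-01", "ts": "2020-01-01T00:00:00Z", "temp_c": 21.4}',
    '{"device_id": "sensor-01", "ts": "2020-01-01T00:05:00Z", "temp_c": 22.0}',
    '{"device_id": "sensor-02", "ts": "2020-01-01T00:00:05Z", "temp_c": 19.8}',
]

# Load the events into a queryable table. SQLite is used here only
# as a stand-in for a data-lake query engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (device_id TEXT, ts TEXT, temp_c REAL)")
rows = [
    (e["device_id"], e["ts"], e["temp_c"])
    for e in (json.loads(line) for line in raw_events)
]
conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", rows)

# An ad-hoc analytical query: average temperature per device.
avg_temps = dict(conn.execute(
    "SELECT device_id, AVG(temp_c) FROM telemetry GROUP BY device_id"
))
```

In a production pipeline the raw events would typically be converted into a columnar format such as Parquet before querying, so that analytical scans read only the columns they need.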