Using Fluo to incrementally process data in Accumulo




APIs / Frameworks


Fluo provides a framework to incrementally process large datasets stored in Accumulo. Using Fluo, developers can write applications that maintain a large-scale computation through a series of small transactional updates. Compared to batch processing frameworks, Fluo enables lower-latency, continuous analysis of data at the cost of some throughput. This talk will provide an overview of the Fluo project, touching on its design, use cases, and API. The talk will show how developers can write Fluo applications to solve problems in a new way. It will highlight the benefits of using Fluo as well as cover the trade-offs and potential problems developers may face when writing Fluo applications. The talk will end with a discussion of the current status and future direction of the Fluo project.
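To make the idea of "maintaining a large-scale computation through small transactional updates" concrete, here is a toy sketch in Python. It is not Fluo's actual API (which is Java-based and built around observers and transactions over Accumulo); the class and method names below are invented for illustration. The point is the pattern: each arriving record triggers one small update to a continuously maintained result, rather than a periodic batch recomputation over the whole dataset.

```python
# Toy illustration of incremental processing, NOT Fluo's actual API.
# The names IncrementalWordCount and process_document are invented for
# this sketch; Fluo applications express similar per-update logic inside
# transactions so concurrent updates to the same rows stay consistent.

from collections import Counter


class IncrementalWordCount:
    """Maintains word counts; each new document applies a small update."""

    def __init__(self):
        self.counts = Counter()

    def process_document(self, text: str) -> None:
        # In Fluo, an update like this would run as a transaction,
        # typically triggered by an observer watching for new data.
        self.counts.update(text.lower().split())

    def count(self, word: str) -> int:
        return self.counts[word]


index = IncrementalWordCount()
index.process_document("Fluo updates Accumulo")
index.process_document("Fluo transactions update incrementally")
print(index.count("fluo"))  # result stays current as documents arrive
```

A batch framework would recompute all counts from scratch on each run; the incremental approach trades peak throughput for a result that is always up to date shortly after each new document arrives, which is the latency/throughput trade-off the abstract describes.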


Michael Walch
Software Engineer, Peterson Technologies
Mike is a software engineer and a committer on the Fluo project. He has a background in distributed systems and data science. He holds a Master's in Computer Science from Johns Hopkins University and a B.S. in Electrical & Computer Engineering from Carnegie Mellon University.