Product tour

Discover the typical workflow of Digazu. In a few steps you will have a fully operational data science environment, ready for you to explore, play and deploy.

Connect your data

Register data sources of any kind with our connectors in a couple of clicks. After a simple configuration, Digazu starts collecting your data and makes it available in your data lake in a standardized, managed way.
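
As a purely illustrative sketch, registering a database source could boil down to a handful of settings like the ones below; the connector type, field names and values are assumptions, not Digazu's actual configuration schema.

# Hypothetical connector configuration, for illustration only:
# the field names and values do not reflect Digazu's actual schema.
postgres_source = {
    "connector": "postgresql",
    "host": "db.internal.example.com",
    "port": 5432,
    "database": "sales",
    "tables": ["orders", "customers"],
    "mode": "change-data-capture",  # stream changes into the data lake
}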

Explore with Dremio

Explore your data lake to find and select the data you need. Run SQL queries directly on your stored data, whatever the technology of the underlying sources. You can even query past states of your data thanks to automatic historization.
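
As an illustration, an exploratory query over historized order data could look like the statement below; the table names and the _ingested_at column are hypothetical, and the exact date functions depend on your Dremio version.

# Sketch of an exploratory query on the data lake; the table names and
# the _ingested_at historization column are made up for illustration.
recent_orders_sql = """
    SELECT o.order_id, o.amount, c.country
    FROM lake.shop.orders AS o
    JOIN lake.crm.customers AS c
      ON c.customer_id = o.customer_id
    WHERE o._ingested_at >= CURRENT_DATE - INTERVAL '7' DAY
"""

The same statement works whether the underlying sources are relational databases, files or event streams.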

Build your real-time dataflows

Visually build your data pipelines directly in Digazu, or use streaming SQL for complex ones. Combine hot and cold data without the burden of manually creating and managing stateful jobs.
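
For example, enriching a hot event stream with cold reference data can be expressed as a single join; the sketch below uses made-up table names, and the exact streaming SQL dialect depends on the engine, so treat it as an illustration rather than Digazu's syntax.

# Sketch of a streaming enrichment joining hot and cold data; the engine
# keeps the join state for you, so there is no stateful job to manage by hand.
enriched_clicks_sql = """
    INSERT INTO enriched_clicks
    SELECT c.click_id, c.url, c.event_time, u.segment
    FROM clicks_stream AS c
    JOIN customers AS u
      ON u.user_id = c.user_id
"""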

Access your data from a Jupyter notebook

In just one line of code, you can load data into Digazu's Jupyter notebooks and start working on your data science model. Be productive and deliver value from day one, instead of struggling to get access to the data you need.
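
As a sketch of what that one line could look like: the digazu package, the load() function and the dataflow path below are assumptions for illustration, not Digazu's documented client API.

# Hypothetical illustration of the "one line of code" idea; the digazu
# package, the load() function and the dataflow path are assumptions.
import digazu

df = digazu.load("myproject/my_dataflow")  # e.g. straight into a pandas DataFrame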

Train, deploy and scale with Kubeflow

Quickly move from a Jupyter notebook to a scalable Docker container running your data science models in production, thanks to Kubeflow. Digazu automates the packaging and deployment of your model, including regular automatic retraining.
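
For reference, a Kubeflow pipeline is plain Python along these lines; this is a generic Kubeflow Pipelines (kfp v2) sketch with a placeholder training step, not the pipeline Digazu generates for you.

from kfp import dsl, compiler

# Generic Kubeflow Pipelines sketch; the component body is a placeholder.
@dsl.component(base_image="python:3.11")
def train(epochs: int) -> str:
    return f"trained for {epochs} epochs"

@dsl.pipeline(name="retrain-my-model")
def retrain_pipeline(epochs: int = 10):
    train(epochs=epochs)

# Compile to a YAML definition that can be submitted to a Kubeflow cluster.
compiler.Compiler().compile(retrain_pipeline, "retrain_pipeline.yaml")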

await fetch('https://api.digazu.com/myproject/my_dataflow/get')

When we say production ready, we really mean it.

Digazu is built to let you deploy as fast as possible, in an environment as secure as possible. That is why you can either expose a production-ready API endpoint or connect to any other output you need. Either way, it only takes minutes.
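
From Python, the same endpoint can be consumed with any HTTP client; in the sketch below, the bearer-token header is an assumption about authentication, not a documented requirement.

import requests

# Call the dataflow endpoint shown above; the Authorization header is an
# assumed authentication scheme, shown for illustration only.
resp = requests.get(
    "https://api.digazu.com/myproject/my_dataflow/get",
    headers={"Authorization": "Bearer <your-api-token>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())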

Want to see it in action?
Book a demo
