We help people automate data workflows on the web, processing and transforming data at any scale.
Use Web Scraper to extract information from websites with a visual point-and-click toolkit.
We automate dynamic web content download using a Headless Chrome browser in the cloud.
Use the visual point-and-click toolkit to crawl any website and extract structured data.
Don't spend your time on server setup and maintenance. Let us do the work!
Collect search results (SERP data) from Google, Bing, DuckDuckGo, Baidu, and Yandex.
Extract organic results, ads, news, and images from the most popular search engines.
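As an illustration, the sketch below assembles a request body for collecting SERP data; the endpoint path and field names are assumptions for demonstration, not the documented API:

```python
import json

# Hypothetical endpoint -- consult the real API reference for the actual path.
SERP_ENDPOINT = "https://api.dataflowkit.com/v1/serp"

def build_serp_request(engine, query, country="US", num_results=10):
    """Assemble a JSON body for a SERP scraping request (field names assumed)."""
    return {
        "engine": engine,    # e.g. "google", "bing", "duckduckgo"
        "query": query,      # search phrase to collect results for
        "country": country,  # geo-target for localized results
        "num": num_results,  # how many results to return
    }

payload = build_serp_request("google", "web scraping")
print(json.dumps(payload, indent=2))
```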
Just send a request specifying a URL and parameters to save web page content to a PDF file.
Turn web pages into PDF with a single click.
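Such a request might look like the following sketch; the endpoint path and parameter names are illustrative assumptions rather than the documented API:

```python
import json

URL2PDF_ENDPOINT = "https://api.dataflowkit.com/v1/url2pdf"  # assumed path

def build_pdf_request(page_url, page_size="A4", landscape=False):
    """Build the JSON body asking the service to render a page as PDF."""
    return {
        "url": page_url,         # web page whose content will be saved to PDF
        "pageSize": page_size,   # paper size (parameter name assumed)
        "landscape": landscape,  # page orientation
    }

print(json.dumps(build_pdf_request("https://example.com")))
```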
Use Dataflow Kit's powerful and highly customizable Screenshot API to take snapshots of websites.
Convert a URL to a screenshot online, right in your application.
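A screenshot request could be built along these lines; the endpoint and field names below are assumptions, so check the actual API reference before using them:

```python
import json

SCREENSHOT_ENDPOINT = "https://api.dataflowkit.com/v1/url2screenshot"  # assumed path

def build_screenshot_request(page_url, width=1280, height=800, full_page=True):
    """Build the JSON body for a website snapshot request (field names assumed)."""
    return {
        "url": page_url,        # page to capture
        "width": width,         # viewport width in pixels
        "height": height,       # viewport height in pixels
        "fullPage": full_page,  # capture the whole scrollable page
    }

print(json.dumps(build_screenshot_request("https://example.com")))
```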
The most popular solution nowadays is to use the Headless Chrome browser, which renders websites the same way a real browser does.
Besides, Chrome is equipped with its own tools for saving HTML to PDF and generating screenshots.
We offer a service for rendering JavaScript-driven web pages to static HTML in our cloud.
Nowadays, many popular websites, including Google and other search engines, serve different, personalised content depending on the user's IP address or geolocation.
Sometimes websites restrict access for users from other countries.
This is where our worldwide proxy network comes into play. We offer the Dataflow Kit Proxies service to get around content download restrictions on certain websites, or to proxify requests and obtain country-specific versions of target websites.
Just specify a target country from 100+ supported global locations when sending your web/SERP scraping API requests, or select "country-any" to use random geo-targets.
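To make the geo-targeting option concrete, here is a minimal sketch; the "country" field name and the sample location codes are assumptions, and only "country-any" comes from the text above:

```python
import random

# Illustrative subset of geo-targets; the service supports 100+ locations.
SUPPORTED_COUNTRIES = ["US", "DE", "FR", "JP", "BR"]

def pick_geo_target(country="country-any"):
    """Resolve the country parameter for a scraping request.

    "country-any" picks a random geo-target, mirroring the option described
    above; any other value is passed through unchanged.
    """
    if country == "country-any":
        return random.choice(SUPPORTED_COUNTRIES)
    return country

request_params = {"url": "https://example.com", "country": pick_geo_target("DE")}
print(request_params)
```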
Of course, in many cases it is not enough to just scrape web pages; you also need to perform tasks on them.
Actions simulate real-world human interaction with the page. The scraper performs them upon visiting a web page, bringing you closer to the desired data.
Here is the list of available actions:
| Action | Description |
|---|---|
| Input | Performs search queries or fills in forms. |
| Click | Clicks on an element on a web page. |
| Wait | Waits for the specific DOM elements you want to manipulate next. |
| Scroll | Automatically scrolls a page down to load more content. |
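Put together, a chain of such actions could be declared in a request body like the sketch below; the action and field names follow the descriptions above, but their exact JSON shape is an assumption:

```python
import json

# Hypothetical chain of page actions, executed in order on page visit.
actions = [
    {"input":  {"selector": "#search", "value": "dataflow kit"}},  # fill a form field
    {"click":  {"selector": "button[type=submit]"}},               # submit the form
    {"wait":   {"selector": ".results"}},                          # wait for results in the DOM
    {"scroll": {"times": 3}},                                      # scroll down to load more content
]

payload = {"url": "https://example.com", "actions": actions}
print(json.dumps(payload, indent=2))
```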
Just send an API request specifying the desired web page and some parameters. Easily integrate the DFK API with your applications using your favourite framework or language, including:
It only takes a few minutes to start using our API at scale with the available code generators. Generate ready-to-run code for your favourite language in no time.
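As a minimal integration sketch in Python, the following prepares such an API request; the endpoint path, authorization scheme, and body fields are assumptions rather than generated client code:

```python
import json
import urllib.request

def make_scrape_request(api_key, target_url):
    """Prepare (without sending) a POST request to a hypothetical fetch endpoint."""
    body = json.dumps({"url": target_url, "format": "json"}).encode("utf-8")
    return urllib.request.Request(
        "https://api.dataflowkit.com/v1/fetch",  # assumed endpoint path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # auth scheme assumed
        },
        method="POST",
    )

req = make_scrape_request("YOUR_API_KEY", "https://example.com")
print(req.get_method(), req.full_url)
```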
Save scraped data in one of the data formats listed below.
| Format | Description |
|---|---|
| JSON | Structured JSON is the industry's most advanced data format, ready to integrate with your apps. |
| JSON Lines | The JSON Lines format is useful for storing huge volumes of data. Read our article about the JSON Lines format on Hackernoon. |
| Excel | Microsoft Excel is a well-known spreadsheet application that is familiar to many users. |
| CSV | CSV is a simple, human-readable data format intended for easy integration into existing tools or for spreadsheet analysis. |
| XML | XML is a file format that both humans and machines can read. Tags in an XML document define its data structure. |
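To illustrate why JSON Lines suits huge exports, the sketch below writes and reads records one JSON object per line (plain Python; no service-specific API is assumed):

```python
import io
import json

records = [{"id": 1, "title": "first"}, {"id": 2, "title": "second"}]

# Write: one JSON object per line -- appendable and streamable at any size.
buf = io.StringIO()
for rec in records:
    buf.write(json.dumps(rec) + "\n")
jsonl_text = buf.getvalue()

# Read back line by line, without parsing everything as one big document.
parsed = [json.loads(line) for line in jsonl_text.splitlines()]
print(parsed)
```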
Internally, we save scraped data to S3-compatible storage, giving you high availability and scalability. Store from a few records to a few hundred million, with the same low latency and high reliability.
Besides, you can upload your data directly to the following cloud storage services: