Scrape.do

Scrape.do is a web scraping API offering rotating residential, data-center, and mobile proxies with headless browser support and session management to bypass anti-bot protections (e.g., Cloudflare, Akamai) and extract data at scale in formats like JSON and HTML.

Install MCP Server

Paste and run this command in your terminal to set up Cursor with MCP

npx @composio/cli add cursor --app scrape_do

After running the command, restart Cursor to start using the MCP Server.

Available Tools

Get Account Information

Retrieves account information and usage statistics from Scrape.do. This action makes a GET request to the Scrape.do info endpoint to fetch:
- Subscription status
- Concurrent request limits and usage
- Monthly request limits and remaining requests
- Real-time usage statistics
Rate limit: maximum 10 requests per minute.
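
A minimal sketch of the equivalent direct request; the endpoint path https://api.scrape.do/info and the token query parameter are assumptions, not confirmed here:

import os
import requests

# Fetch account/usage statistics from Scrape.do.
# Assumption: the info endpoint is https://api.scrape.do/info and the API
# token is passed as the `token` query parameter.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]  # your Scrape.do API token
resp = requests.get("https://api.scrape.do/info", params={"token": TOKEN}, timeout=30)
resp.raise_for_status()
print(resp.json())  # subscription status, concurrent/monthly limits, remaining requests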

Get Rendered Page Content

Scrapes web pages with JavaScript rendering enabled. This is particularly useful for dynamic websites where content is loaded through JavaScript: the tool waits for the JavaScript to execute and returns the fully rendered HTML content.
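
A hedged sketch of the equivalent direct call, assuming the standard https://api.scrape.do endpoint with token, url, and render query parameters:

import os
import requests

# Render a JavaScript-heavy page before returning its HTML.
# Assumption: api.scrape.do accepts `token`, `url`, and `render=true`.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
target = "https://example.com/dynamic-page"  # placeholder target URL
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": target, "render": "true"},
    timeout=120,
)
print(resp.text)  # fully rendered HTML after JavaScript execution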

Scrape Webpage Using Scrape.do

A tool to scrape web pages using Scrape.do's API service. It makes a basic GET request to fetch the content of a target webpage while handling anti-bot protections and proxy rotation automatically.
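
The simplest form of the same request, again under the assumed token/url query-parameter convention on api.scrape.do:

import os
import requests

# Basic scrape: no rendering; proxy rotation and anti-bot handling are
# left to the service. The `token`/`url` parameter names are assumptions.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com"},
    timeout=60,
)
print(resp.status_code)
print(resp.text[:500])  # first part of the returned HTML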

Use Scrape.do Proxy Mode

This tool implements Scrape.do's proxy mode, which routes requests through their proxy server. It provides an alternative way to access the same scraping capabilities: complex JavaScript-rendered pages, geolocation-based routing, device simulation, and built-in anti-bot and retry mechanisms.
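
A sketch of how proxy mode might be used from Python; the proxy host (proxy.scrape.do:8080) and the credential format (API token as username, request options as password) are assumptions here:

import os
import requests

# Route a request through Scrape.do's proxy mode.
# Assumptions: proxy endpoint proxy.scrape.do:8080, API token as the proxy
# username, and options such as render=true encoded as the proxy password.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
proxy_url = f"http://{TOKEN}:render=true@proxy.scrape.do:8080"
proxies = {"http": proxy_url, "https": proxy_url}
# verify=False is often required when a scraping proxy re-signs TLS traffic.
resp = requests.get("https://example.com", proxies=proxies, verify=False, timeout=120)
print(resp.status_code)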

Set Cookies For Scraping

Allows users to set specific cookies for their scraping requests to a target website. This is useful for maintaining session state or authenticating through cookies.
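
A hedged example, assuming cookies are passed via a setCookies query parameter as a semicolon-separated name=value string (the parameter name and format are assumptions):

import os
import requests

# Send specific cookies along with the scraping request.
# Assumption: the parameter is `setCookies` and takes "name=value;name2=value2".
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com/account",
        "setCookies": "sessionid=abc123;theme=dark",  # example cookie values
    },
    timeout=60,
)
print(resp.status_code)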

Set Scrape.do Super Mode

The Scrape.do Set Super Mode tool enables enhanced scraping through residential and mobile proxies, bypassing the blocks and restrictions associated with datacenter IPs. When the 'super' parameter is set to true, requests are routed through a network of residential IP addresses, which is particularly useful for bypassing strict anti-bot measures and for accessing websites that block datacenter IPs.
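
For reference, a minimal sketch that simply adds super=true to the query string; the parameter name comes from the description above, the rest of the call shape is assumed:

import os
import requests

# Enable Super mode (residential/mobile proxies) for a single request.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com", "super": "true"},
    timeout=120,
)
print(resp.status_code)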

Block Specific URLs During Scraping

Allows users to block specific URLs during the scraping process. This is particularly useful for blocking unwanted resources such as analytics scripts, advertisements, or any other URLs that might interfere with the scraping process or slow it down. It gives granular control by letting users specify URL patterns to block, improving scraping performance and preserving privacy.
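
A sketch only: the blockedUrls parameter name below is hypothetical and the comma-separated pattern format is an assumption, shown purely to illustrate blocking resources during a rendered scrape:

import os
import requests

# Block specific resource URLs while scraping.
# NOTE: `blockedUrls` is a hypothetical parameter name used for illustration,
# and the comma-separated pattern format is likewise an assumption.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com",
        "render": "true",
        "blockedUrls": "google-analytics.com,doubleclick.net",
    },
    timeout=120,
)
print(resp.status_code)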

Set Custom Headers For Scrape.do Request

A tool to send custom headers with Scrape.do requests. This allows simulating specific browser behaviors or adding authentication headers by controlling all headers sent to the target website.
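
A hedged sketch, assuming a customHeaders=true flag tells the service to forward the headers attached to the request on to the target site (both details are assumptions):

import os
import requests

# Forward custom headers to the target website.
# Assumption: `customHeaders=true` enables pass-through of the headers below.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com", "customHeaders": "true"},
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Authorization": "Bearer example-target-site-token",  # placeholder value
    },
    timeout=60,
)
print(resp.status_code)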

Set Custom Wait Time

Sets a custom wait time in milliseconds after page load when using the render option in Scrape.do. It is particularly useful for dynamic content, ensuring it has fully loaded before scraping, especially on JavaScript-heavy websites or single-page applications. The action gives fine-tuned control over the rendering wait time and must be used with render=true.
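
A minimal sketch, assuming the wait is expressed as a customWait parameter in milliseconds alongside render=true (the parameter name is an assumption):

import os
import requests

# Wait an extra 5 seconds after page load before the HTML is captured.
# Assumption: the parameter is `customWait` (milliseconds) and needs render=true.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com/spa",
        "render": "true",
        "customWait": "5000",
    },
    timeout=120,
)
print(len(resp.text))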

Set Device Type For Scraping

Sets the device type (desktop, mobile, or tablet) for scraping requests. It is used to emulate different devices, which helps with testing responsive designs or fetching device-specific content.
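
A sketch under the assumption that the device type is passed as a device query parameter:

import os
import requests

# Emulate a mobile device for this request.
# Assumption: the parameter is `device` with values desktop/mobile/tablet.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com", "device": "mobile"},
    timeout=60,
)
print(resp.status_code)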

Set Disable Redirection

Controls the automatic redirection behavior of Scrape.do requests. When enabled (disableRedirection=true), it prevents redirects from being followed automatically during web scraping operations. This allows:
- Inspection of the redirect chain
- Capturing intermediate redirect responses
- Manual control of the redirection flow
- Analysis of the HTTP status codes of redirect responses
The redirect URL will be available in the Scrape.do-Target-Redirected-Location response header.
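
A sketch of inspecting a redirect manually; the disableRedirection parameter and the response-header name come from the description above, the rest of the call shape is assumed:

import os
import requests

# Stop the service from following redirects so the 3xx response can be inspected.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com/old-path",
        "disableRedirection": "true",
    },
    timeout=60,
)
print(resp.status_code)  # e.g. 301/302 returned by the target
print(resp.headers.get("Scrape.do-Target-Redirected-Location"))  # redirect destination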

Set Pure Cookies Mode

Enables returning the original Set-Cookie headers from the target website instead of the processed scrape.do-cookies format. When this parameter is enabled, the response carries the target site's original Set-Cookie headers rather than the default scrape.do-cookies header.
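
A sketch assuming the flag is pureCookies=true (parameter name assumed) and that the original Set-Cookie headers then appear on the response:

import os
import requests

# Ask for the target site's raw Set-Cookie headers.
# Assumption: the flag is `pureCookies=true`.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com", "pureCookies": "true"},
    timeout=60,
)
print(resp.headers.get("Set-Cookie"))  # original cookie header(s) from the target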

Set Regional Geolocation For Scraping

Allows broader geographical targeting by specifying a region code instead of a specific country code. This is useful when you want to scrape content from an entire region rather than a single country. Note that this feature requires Super mode to be enabled and is only available on Business plans or higher.
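
A sketch assuming the region is passed as a regionalGeoCode parameter (name assumed) together with super=true, which the description says is required:

import os
import requests

# Target an entire region rather than one country.
# Assumptions: the parameter is `regionalGeoCode` and Super mode must be on.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com",
        "super": "true",
        "regionalGeoCode": "europe",  # example region code
    },
    timeout=120,
)
print(resp.status_code)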

Set Retry Timeout

Sets the maximum wait time (in milliseconds) before a failed request is retried in Scrape.do. It takes a 'retry timeout' parameter (integer) specifying the maximum time to wait before retrying, with a default of 15000 ms. It is designed to improve the reliability of web scraping operations, especially against unstable or slow-responding websites.
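
A sketch assuming the value maps to a retryTimeout query parameter (spelling assumed; the 15000 ms default comes from the description above):

import os
import requests

# Shorten the retry timeout to fail over faster on a flaky target.
# Assumption: the parameter is `retryTimeout`, in milliseconds.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={"token": TOKEN, "url": "https://example.com", "retryTimeout": "10000"},
    timeout=60,
)
print(resp.status_code)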

Set Screenshot Capture For Scraping

Enables the screenshot functionality of the Scrape.do API, allowing users to capture a visual representation of the scraped webpage. When enabled, the API returns a screenshot of the rendered page along with the regular response. Features:
- Basic screenshot capture
- Full-page screenshot capture
- Capture of a specific area using a CSS selector
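
A hedged sketch: the screenShot, render, and returnJSON parameter names below are assumptions, as is the idea that the image comes back inside a JSON payload:

import os
import requests

# Capture a screenshot of the rendered page.
# Assumptions: `screenShot=true` used with `render=true` and `returnJSON=true`,
# with the image data returned somewhere in the JSON body.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com",
        "render": "true",
        "returnJSON": "true",
        "screenShot": "true",
    },
    timeout=120,
)
data = resp.json()
print(list(data.keys()))  # inspect where the screenshot payload lives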

Set Session ID For Sticky Sessions

Implements Scrape.do's session ID functionality to maintain a sticky session, i.e. the same proxy IP, across multiple requests. It does so by adding a sessionId parameter to the query parameters of a scraping request, which is crucial for session consistency when scraping websites with stringent session requirements.
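
A sketch reusing one sessionId value (named in the description above) across two requests so both go out through the same proxy IP; the rest of the call shape follows the assumed token/url convention:

import os
import requests

# Reuse the same proxy IP for a login page and a follow-up page.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
session_id = "1234567"  # any stable value; keep it identical across requests
for page in ("https://example.com/login", "https://example.com/dashboard"):
    resp = requests.get(
        "https://api.scrape.do",
        params={"token": TOKEN, "url": page, "sessionId": session_id},
        timeout=60,
    )
    print(page, resp.status_code)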

Set Wait For Selector

Sets a CSS selector to wait for before the page load is considered complete. It is particularly useful when scraping JavaScript-heavy pages, to ensure that specific dynamically loaded elements are present.
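
A sketch assuming the selector is passed as a waitSelector parameter (name assumed) and that rendering must be enabled:

import os
import requests

# Wait until a product grid appears before capturing the page.
# Assumptions: parameter `waitSelector`, used together with `render=true`.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com/products",
        "render": "true",
        "waitSelector": ".product-grid",
    },
    timeout=120,
)
print(len(resp.text))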

Set Wait Until Condition

Sets the waitUntil parameter for the Scrape.do API, defining when rendering should consider the page loaded during JavaScript execution. It is particularly useful for handling dynamic websites by specifying conditions such as 'domcontentloaded', 'networkidle0', or 'networkidle2'.
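
A sketch using one of the waitUntil values listed above; pairing it with render=true and the exact parameter spelling are assumptions:

import os
import requests

# Treat the page as loaded only once the network has gone idle.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com",
        "render": "true",
        "waitUntil": "networkidle0",
    },
    timeout=120,
)
print(resp.status_code)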

Monitor WebSocket Requests Using Scrape.do

Provides the ability to view WebSocket requests made by a webpage. It requires the render=true and returnJSON=true parameters along with showWebsocketRequests=true to enable logging of WebSocket requests.
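
A sketch combining the three parameters named above; their exact spelling and the shape of the JSON response are assumptions:

import os
import requests

# Log WebSocket traffic generated while the page renders.
TOKEN = os.environ["SCRAPE_DO_TOKEN"]
resp = requests.get(
    "https://api.scrape.do",
    params={
        "token": TOKEN,
        "url": "https://example.com/live-chart",
        "render": "true",
        "returnJSON": "true",
        "showWebsocketRequests": "true",
    },
    timeout=120,
)
data = resp.json()
print(list(data.keys()))  # inspect where the WebSocket request log lives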

19 actions available