(Published for RefinePro on March 12, 2020)

Data is the new differentiator. It’s what you, a product owner, a marketing strategist, your local journalist, and a multimillionaire who already owns twelve successful companies all need. And web scraping is one way to get that data.

But where to start? Sure, the Internet will give you everything you need to know. Soon, you’ll come across lists of the best tools available, each with a name that will never make the list for best marketing decision of the year: Octoparse, Scrapy, BeautifulSoup, ParseHub, Mozenda … But how to choose? Looking at the description and the ratings is a good system when you’re buying shoes, but not when you’re trying to find the best scraper for your project.

Think about it: if you’re about to send something out onto the web to gather the reliable data you need, you have to make sure that the tool you’re using is the best for your project and your specific goals. A pair of flip-flops, no matter how good the ratings, won’t get you far if you’re visiting Norway in December. And that’s why you need to put a lot of energy into finding the right tools for your project. Over the years, experts like RefinePro have tested many of them, and you can now benefit from their experience. Here’s what we know.



Web scraping is the process of fetching and extracting data from websites and downloading it in a usable format. It’s also referred to as “web harvesting,” “crawling,” “spidering,” and “web data extraction.” The process usually involves a scraper, the tool designed to extract the data from webpages, and a crawler, or spider, whose job is to browse the Internet to index and search for relevant content. Web scraping saves you the trouble of manually searching, downloading, and copying the data you need, and will work regardless of format. And by gathering big sets of data, you can help your organization grow by creating new products and innovating faster.
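To make the “extract” step concrete, here is a toy sketch: pulling structured records out of HTML that a crawler has already fetched. The markup and the `extractProducts` function are made up for illustration, and real scrapers use a proper HTML parser rather than a regular expression.

```javascript
// Toy sketch of the "extract" step of web scraping: turning fetched HTML
// into structured, usable records. Real scrapers use a proper HTML parser;
// a regular expression is enough to illustrate the idea.
function extractProducts(html) {
  const rows = [];
  const re = /<li class="product">([^<]+)<span>\$([\d.]+)<\/span><\/li>/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    rows.push({ name: m[1].trim(), price: Number(m[2]) }); // usable, structured format
  }
  return rows;
}

// Hypothetical page fragment a crawler might have fetched:
const page = '<ul><li class="product">Widget <span>$9.99</span></li>' +
             '<li class="product">Gadget <span>$24.50</span></li></ul>';
console.log(extractProducts(page));
// → [ { name: 'Widget', price: 9.99 }, { name: 'Gadget', price: 24.5 } ]
```

The same idea scales up: fetch many pages, run the extraction over each, and land the records in a database or spreadsheet.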


Web scraping is used across an extensive range of industries, including risk management, retail, finance, sales and marketing, insurance, artificial intelligence, and journalism, to name only a few.

Web scraping can help you:
  • Gather data on your competitors and their products
  • Optimize your customer relationship management
  • Detect fraud
  • Feed a natural language system
  • Train machine learning models
  • Generate more and better leads
  • Perform reliable competitive analysis
  • Monitor your reputation
  • And so much more

Web scraping has opened the door to big data in a world where every market analysis and business strategy relies on it. Think about it: all your strategies, plans, and insights into the future rely on data. And web scraping doesn’t discriminate: the same public data is available to a one-person startup and a multinational alike.

As long as you adhere to the standards in place and use the right system with enough data warehousing capacity, there are no limits to the amount of data you can collect.

But whether you’re trying to figure out how your consumers feel about your product or trying to gather data from hundreds of websites in real-time, you’ll need different tools and capacities. The first step of every web scraping project is to decide whether you’d like to take care of everything internally or with a partner, like us. And that’s only the first question of a long list of things you need to figure out before you decide on a scraper.


When the time comes to choose the right scraper, you should base your decision on specific criteria, not on benchmarks. These criteria are largely defined by your project and capabilities, and by your choice between outsourcing and keeping everything in-house.

Criterion 1: Solution maturity.
You want to make sure you invest in a technology that will still be around in the long run and actively supported. Check when the software was initially released and how often it’s been updated. Make sure documentation and customer support are available. And decide: do you want an open-source or a proprietary solution?

Criterion 2: The development environment.
This means, for example, the choice between Windows, Mac, or Linux, or a browser-based SaaS. (SaaS, “software as a service,” is a model in which the software is licensed on a subscription basis and hosted by a third-party provider.) Also take into account how complex the software is and what skills you need to write a scraper. How fast can you ramp up new employees?

Criterion 3: The execution platform and hosting options.
Once the project is developed, where can you execute it? The options are public, private, or hybrid clouds. The main question here is: do you want to rely on third-party infrastructure to collect critical data, or do you need to keep everything in-house?

Criterion 4: The ability to solve CAPTCHAs.
This can be done through a third-party service like 2Captcha.
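The flow with a solving service is always the same: submit the challenge, then poll until a worker returns the answer. The sketch below captures that flow; the `submit` and `poll` functions are injected stand-ins, not 2Captcha’s actual endpoints, so treat the shapes here as assumptions.

```javascript
// Hedged sketch of delegating a CAPTCHA to a solving service such as 2Captcha.
// `submit` and `poll` are stand-ins for the service's real HTTP API.
async function solveCaptcha(submit, poll, challenge, retries = 10, delayMs = 2000) {
  const jobId = await submit(challenge);            // hand the CAPTCHA to the service
  for (let i = 0; i < retries; i++) {
    const answer = await poll(jobId);               // null until a worker solves it
    if (answer !== null) return answer;             // the solution token for the target site
    await new Promise(r => setTimeout(r, delayMs)); // wait before polling again
  }
  throw new Error('CAPTCHA not solved in time');
}

// Demo with stubbed service calls (no network, zero delay):
let polls = 0;
const submit = async () => 'job-42';
const poll = async () => (++polls < 3 ? null : 'solution-token');
solveCaptcha(submit, poll, { siteKey: 'demo' }, 10, 0)
  .then(token => console.log(token)); // → solution-token
```

A scraper that supports this criterion lets you plug the returned token back into the page instead of stalling on the challenge.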

Criterion 5: The ability to fine-tune proxy management and rotation.
This could, for example, let you select the countries from which requests will come, with support for residential IP addresses. A good web scraper lets you connect with third-party providers like Luminati.
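At its simplest, proxy rotation is just cycling through a pool of exit addresses so consecutive requests leave from different IPs. A provider like Luminati would supply the pool (including residential IPs); the addresses below are made up.

```javascript
// Minimal sketch of proxy rotation: round-robin over a pool of exit addresses.
// Real tools also score proxies and retire the ones that start failing.
function makeProxyRotator(proxies) {
  let i = 0;
  return () => proxies[i++ % proxies.length]; // next proxy, wrapping around
}

const nextProxy = makeProxyRotator([
  'http://us1.proxy.example:8080',
  'http://de1.proxy.example:8080',
  'http://fr1.proxy.example:8080',
]);
console.log(nextProxy()); // → http://us1.proxy.example:8080
console.log(nextProxy()); // → http://de1.proxy.example:8080
```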

Criterion 6: The ability to handle advanced anti-scraping features.
This includes device fingerprint anonymization and fine-tuned browser and profile management.

Criterion 7: The capacity to add custom scripts.
By adding new pieces of code, you can extend the software’s capabilities.

Criterion 8: The API and workflow integration possibilities.
How easily can you integrate your web scraping project into your workflow? Your scraper should let you use an API or external connectors both to configure and run your project and to retrieve the data.
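In practice, workflow integration usually boils down to three calls: start a run with parameters, wait for it to finish, and retrieve the results. The sketch below is hypothetical; `client` stands in for whatever REST client or connector the vendor actually provides.

```javascript
// Hypothetical sketch of driving a scraper from a data pipeline via its API.
async function runScrape(client, projectId, params) {
  const runId = await client.startRun(projectId, params); // orchestrate: start and pass parameters
  let status = await client.getStatus(runId);
  while (status === 'running') {                          // monitor; real code would sleep between polls
    status = await client.getStatus(runId);
  }
  if (status !== 'complete') throw new Error('run failed: ' + status);
  return client.getData(runId);                           // retrieve the extracted data
}

// Demo with a stubbed client (no network):
let checks = 0;
const demoClient = {
  startRun: async () => 'run-1',
  getStatus: async () => (++checks < 2 ? 'running' : 'complete'),
  getData: async (id) => [{ run: id, rows: 3 }],
};
runScrape(demoClient, 'project-1', { country: 'CA' })
  .then(data => console.log(data)); // → [ { run: 'run-1', rows: 3 } ]
```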

Criterion 9: Scheduling, monitoring, and maintenance.
These are crucial to any web scraping project, so your tool should allow you to perform all three according to your needs.

Criterion 10: Pricing.
At the end of the day, we all have a budget to respect.

So, ultimately, it’s your project, and most importantly its schedule, scale, and budget, that will determine which tools you should use.


Once you know more precisely the kind of tool you’re looking for, it’s time to start shopping. You’ll find a plethora of websites listing all the best tools with their pros and cons. The truth is, however, that few of them have tested those tools as much as RefinePro, on projects that differ in goals, scale, and complexity. So, to make it easier for you, we’ve gathered a list of our three favorites, with a short description and a comparative chart.


ParseHub is a point-and-click web scraping software that manages all projects on its own infrastructure. This means you’re completely dependent on that infrastructure, but you benefit from not having to make the usual set-up investment to provision environments. Their plans include the maintenance of web scraping servers and proxy networks, preventing unexpected costs as you scale.

ParseHub is a bit more expensive than its counterparts (it does offer a free version for small and simple projects, though), but it’s a great option for easy projects with high volumes of data.


Content Grabber is a point-and-click web scraping software developed by Sequentum. It provides a robust and scalable solution for collecting data from complex websites, and it offers the advantage of being deployed on-premises, on Windows servers. We can help you manage your project without relying on a third-party vendor. And with full control over the infrastructure, we can meet the most restrictive data privacy and security requirements.


Puppeteer is a Node.js library that drives a headless Chrome or Chromium browser over the DevTools Protocol. We call the browser “headless” because it’s designed to be used by machines, not humans: it has no user interface, and its main purpose is to let programs read and interact with pages. Like Content Grabber and ParseHub, Puppeteer is well suited to large projects, but it also offers complete control over complex websites that use advanced anti-scraping features.
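A typical Puppeteer flow looks like the sketch below: launch a headless browser, open a page, navigate, and extract something. The module is passed in as a parameter so the flow can also be exercised with a stub; in real code you would simply `const puppeteer = require('puppeteer')`.

```javascript
// Sketch of a typical Puppeteer scrape. The puppeteer module is injected so
// the flow can run against a stub as well as the real library.
async function scrapeTitle(puppeteer, url) {
  const browser = await puppeteer.launch({ headless: true }); // no visible window
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle2' }); // wait for the page to settle
    return await page.title();                           // or page.evaluate(...) for custom extraction
  } finally {
    await browser.close();                               // always release the browser
  }
}
```

Everything here is code: navigation, waiting, and extraction are all expressed in JavaScript, which is exactly where Puppeteer’s power and its learning curve come from.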

But complex features also mean a more complex design, so Puppeteer is not for the neophytes. It requires a high level of expertise and you’ll need a trained developer to create the project and maintain it. But as we’ve mentioned many times before, it’s a mistake to think of web scraping as a product: it’s a service. So, if your project requires complex extraction, make sure to ask an expert like us to develop and schedule your project, but also to maintain and monitor it.


Here’s a slightly more complete table that compares the three tools described above.

| Criterion | ParseHub | Content Grabber | Puppeteer |
|---|---|---|---|
| 1. Solution maturity | Mature proprietary software released in 2015. New version released in fall 2019. | Mature proprietary software released in 2015, following Visual Web Ripper, released in the early 2000s. | Mature open source, released in 2017. Large community support with 250+ developers. |
| 2. Development environment | Easy point-and-click software for Mac, Windows, and Linux | Easy point-and-click software for Windows only | Complex; a developer environment on Mac, Windows, and Linux |
| 3. Execution platform & hosting | ParseHub platform (SaaS) | Self-hosted Windows server | Self-hosted Linux server |
| 4. CAPTCHA | Basic resolution included; can connect to a third party | Connect with a third party | Connect with a third party |
| 5. Proxy | Basic proxy included; can connect to a third-party provider | Connect with a third party | Connect with a third party |
| 6. Anti-scraping | Limited | Advanced | Advanced |
| 7. Custom scripts | JavaScript and regular expressions, to select content only | Extends the software with C#, VB, Python 3, and regular expressions | JavaScript environment, but everything is code-based |
| 8. API & workflow integration | Yes: orchestrate (start, stop, and pass parameters) and retrieve data | Yes: orchestrate (start, stop, and pass parameters); supports data export in multiple formats | No: you orchestrate the script and manage your data export yourself |
| 9. Ease of scheduling & monitoring | Easy: everything is supported by ParseHub | Medium: you provide the infrastructure, then everything is managed via the Content Grabber Agent Control Center | Not available: you provide your own |
| 10. Pricing | Self-service; prices are available online | Enterprise solution; contact Sequentum for a quote | Free and open source |


Web scraping can be easy. But to be easy, it requires a plan and a vast amount of technical know-how. Even when choosing the right tool, you should seek the advice of a professional. Our goal here at RefinePro is to make sure your web scraping project meets your long-term needs and scales with you.

It’s easy to make a bad choice with web scraping and send your team on a wild goose chase. Using the wrong tool to gather the wrong kind of data on the wrong website, or using techniques that get you kicked out, are costly mistakes.

So instead of chasing a wild goose, find an expert you can trust, like the team at RefinePro, and tell them about your project. And if you’re still not convinced you need help, go read my article about the medium- and long-term implications of web scraping. You’ll learn that launching a web scraping project is only the beginning of a long story, one that can only end well with ongoing scheduling, monitoring, and maintenance.