Guides on browser automation and reducing repetitive work
XPath is one of the most versatile ways to locate an HTML element on a web page when using Selenium. In this article, we'll learn how to use it, from writing a basic XPath to expressions that handle different conditions.
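To give a flavour of what the article covers, here is a minimal sketch of locating elements by XPath with Selenium in Python; the URL and the XPath expressions are placeholder assumptions, not taken from the article.

```python
# A minimal sketch of locating elements by XPath with Selenium in Python.
# The URL and XPath expressions below are placeholders for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")

# Basic XPath: find a heading anywhere on the page
heading = driver.find_element(By.XPATH, "//h1")

# Conditional XPath: match by attribute value and partial text
link = driver.find_element(
    By.XPATH, "//a[@class='nav-link' and contains(text(), 'Docs')]"
)
print(heading.text, link.get_attribute("href"))

driver.quit()
```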
Puppeteer is built for Node.js, but you can also automate Chrome/Chromium in Python with Pyppeteer. In this article, we'll show you how, with examples like taking screenshots, downloading images, and extracting data from a web page.
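As a quick preview, here is a minimal sketch of taking a full-page screenshot with Pyppeteer; the target URL and output filename are assumptions for illustration.

```python
# A minimal sketch of taking a screenshot with Pyppeteer.
# The URL and output path are placeholders.
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto("https://example.com")
    await page.screenshot({"path": "example.png", "fullPage": True})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```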
When automating browser tasks without code, you often need to format data to fit your needs. Here's how to do it with databases, Zapier, and Browserbear.
In browser automation, there are times when you need credentials, cookies, or proxies to simulate different browsing experiences. Here's what no-coders should know.
Being blocked for seemingly no reason while web scraping can be frustrating. Here are five common reasons you may have been blocked and how to get around them.
Maintaining a great website involves constantly testing for errors. Here are a few ways Browserbear can help you pinpoint issues and promptly address them.
Image alt text is crucial to site accessibility and SEO. This automation will scan new posts for missing alt text and prompt you to take action when necessary.
Gathering large amounts of data from websites manually is inefficient and often infeasible, but the right tools can help. Let's learn how to use Playwright in Python to scrape websites!
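Here is a minimal sketch of what scraping with Playwright's sync API in Python looks like; the URL and CSS selector are hypothetical examples, not from the article.

```python
# A minimal sketch of scraping a page with Playwright's sync API in Python.
# The URL and CSS selector are placeholders for illustration.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    # Collect the text of every element matching the selector
    titles = page.locator("h2.post-title").all_text_contents()
    for title in titles:
        print(title)
    browser.close()
```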
Selenium is often used to automate web applications for testing purposes, but that's not all it can do. In this article, we'll show you how to use Selenium for web scraping.
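For a sense of how Selenium looks when used for scraping rather than testing, here is a short sketch that collects repeated elements into Python data structures; the URL and selectors are assumed for illustration.

```python
# A minimal sketch of using Selenium for web scraping:
# gather data from repeated elements into a list of dicts.
# The URL and CSS selectors are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/products")

rows = driver.find_elements(By.CSS_SELECTOR, ".product-card")
products = [
    {
        "name": row.find_element(By.CSS_SELECTOR, ".name").text,
        "price": row.find_element(By.CSS_SELECTOR, ".price").text,
    }
    for row in rows
]
print(products)

driver.quit()
```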