
How to Write a Headless Web Scraping Bot in Python

In this article, you will learn how to write your own basic headless web scraping "bot" in Python with Beautiful Soup 4 on an Alibaba Cloud instance.

Installing Python 3, PIP3 and Nano from the Terminal Command Line

It's always a good idea to keep everything on an instance up to date. First, let's update all the packages to their latest versions so we won't run into any issues down the road.

sudo yum update

We will be using Python for our basic web scraping "bot". I admire the language for its relative simplicity and of course the wide variety of modules available to play around with. In particular, we will be using the Requests and Beautiful Soup 4 modules.

Python 3 is not installed by default on CentOS 7, so we will install Python 3 and Pip ourselves. First we are going to install IUS, which stands for Inline with Upstream Stable. A community project, IUS provides Red Hat Package Manager (RPM) packages for newer versions of select software. Then we can move on to installing python36u and its pip package.

sudo yum install https://centos7.iuscommunity.org/ius-release.rpm
sudo yum install python36u
sudo yum install python36u-pip

Pip is a package management system used to install and manage software packages, such as those found in the Python Package Index. It is the replacement for the older easy_install tool.

I've run into headaches in the past from installing pip rather than python36u-pip, so be aware that the pip package is for Python 2.7, while python36u-pip is for Python 3.

Nano is a basic text editor that is useful in applications such as this. Let's install Nano.

sudo yum install nano

Installing Our Python Packages through Pip

Now we need to install the Python packages we will be using today, Requests and Beautiful Soup 4. We will install them through Pip.

pip3.6 install requests
pip3.6 install beautifulsoup4

Requests is a Python module that allows us to retrieve a web page's source with its .get method.

Requests lets you send HTTP/1.1 requests programmatically from a Python script. There is no need to manually add query strings to your URLs or to form-encode your POST data, and keep-alive and HTTP connection pooling are 100% automatic. We will focus on the .get method today to grab a web page's source.
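To make this concrete, here is a minimal sketch of a GET request. The URL is only a placeholder for illustration; substitute whatever page you actually want to scrape.

import requests

# Placeholder target page, used here purely for illustration.
url = "https://www.example.com"
response = requests.get(url)

print(response.status_code)  # 200 means the request succeeded
print(response.text[:500])   # the first 500 characters of the raw HTML source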

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree.

We will use Beautiful Soup 4 with Python's standard html.parser to parse and organize the data from the page source we retrieve with Requests. In this tutorial we will use the Beautiful Soup prettify() method to organize the data in a more human-readable way.
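As a quick illustration, here is a small sketch of Beautiful Soup 4 with html.parser. The HTML fragment below is made up purely for this example, standing in for a real page source.

from bs4 import BeautifulSoup

# A made-up HTML fragment standing in for a real page source.
html = "<html><body><h1>Hello</h1><p>Scraped text</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

print(soup.prettify())       # re-indented, human-readable markup
print(soup.find("h1").text)  # navigating the parse tree prints "Hello"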

Let's make a folder called "Python_apps". Then, we will change our present working directory to Python_apps.

mkdir Python_apps
cd Python_apps

Writing Our Headless Scraping Bot in Python

Now comes the fun part! We can write our Python headless scraper bot. We will use Requests to go to a URL and grab the page source, then use Beautiful Soup 4 to parse the HTML source into a semi-readable format. After doing this, we will save the parsed data to a local file on the instance. Let's get to work.

We will use Requests to grab the page source and Beautiful Soup 4 to format the data into a readable state, then use Python's open() and write() to save the page data to our local hard drive. For the full walkthrough, see Headless Web Scraping in Python with Beautiful Soup 4.
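Putting the pieces together, here is a minimal sketch of the whole bot. The target URL and output filename are placeholders; substitute your own.

# scraper.py -- a minimal headless scraping bot sketch.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com"  # placeholder page to scrape
response = requests.get(url)     # grab the raw page source

# Parse the source with the standard html.parser.
soup = BeautifulSoup(response.text, "html.parser")

# Save the prettified parse tree to a local file on the instance.
with open("scraped_page.txt", "w", encoding="utf-8") as f:
    f.write(soup.prettify())

print("Saved " + url + " to scraped_page.txt")

You could save this as scraper.py with nano and run it with python3.6 scraper.py; the prettified page source would end up in scraped_page.txt.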

Related Blog Posts

How to Apply API Gateway for DirectMail in Python

This led me to the idea of using the DirectMail REST API. This way I can use Python and learn how to use the REST API, and in particular how to create the Signature, which is not documented very well. Nothing like picking a difficult challenge to really learn a new service.

In this article I will focus on the DirectMail REST API and include a real working example in Python.

Why You Should Use FlashText Instead of RegEx for Data Analysis

If you have done any text or data analysis, you might already be familiar with Regular Expressions (RegEx). RegEx evolved as a necessary tool for text editing. If you are still using RegEx for text processing, you may run into a problem: when it comes to large texts, the low efficiency of RegEx can make data analysis unacceptably slow.

In this article, we will discuss how you can use FlashText, a Python library that is 100 times faster than RegEx, to perform data analysis.

Related Market Product

Seafile 6.2.3 powered by Websoft9 (Python | CentOS 7.4)

Websoft9 Seafile is a pre-configured, ready-to-run image for running Seafile on Alibaba Cloud.

Seafile is an enterprise file hosting platform with high reliability and performance. Put files on your own server. Sync and share files across different devices, or access all files as a virtual disk. Organize files into libraries; a library can be selectively synced to any device. Reliable and efficient file syncing improves your productivity.

Related Documentation

Elastic Compute Service Python SDK Developer Guide

The example shows how to create an ECS instance by calling the CreateInstance API of the Alibaba Cloud Python SDK.

Elastic Compute Service (ECS) is a type of computing service that features elastic processing capabilities. ECS has a simpler and more efficient management mode than physical servers. You can create instances, change the operating system, and add or release any quantity of ECS instances at any time to fit your business needs.

Use Python SDK

This document introduces how to install and call the Alibaba Cloud Python SDK.

The Alibaba Cloud Python SDK supports Python 2.6.x, 2.7.x, and 3.x. You can install the SDK using two methods.

Related Products

Elastic Compute Service

Alibaba Cloud Elastic Compute Service (ECS) provides fast memory and the latest Intel CPUs to help you to power your cloud applications and achieve faster results with low latency. All ECS instances come with Anti-DDoS protection to safeguard your data and applications from DDoS and Trojan attacks.

Object Storage Service

Alibaba Cloud Object Storage Service (OSS) is an encrypted, secure, cost-effective, and easy-to-use object storage service that enables you to store, back up, and archive large amounts of data in the cloud, with a guaranteed reliability of 99.999999999%. RESTful APIs allow storage and access to OSS anywhere on the Internet. You can elastically scale the capacity and processing capability, and choose from a variety of storage types to optimize the storage cost.

Related Course

Python Structured Data Processing Quick Start

The data set for this course comes from a virtual blog site. We are going to use the data to solve business problems, for example: Which countries do your customers come from? Which day of the week receives the most online traffic? Which region contributes the most clickstream data? Basic functions for data cleaning, data analysis, and visualization will be covered in this course. It is also the foundation for programming on distributed systems like Spark SQL, or with the Alibaba Cloud MaxCompute Python SDK.
