
Building A Rank Tracker using Python & Google Search API


If you’re into SEO or part of a marketing team, you’re likely familiar with rank-tracking tools. You might already be using tools like Semrush, Ahrefs, or something similar, but they can get expensive.

In this tutorial, we will create our own rank-tracking tool with Scrapingdog’s SERP API and Python.

Logic and Requirements

To build a rank-tracking tool, we need access to live organic search results data. We can get this data from the Scrapingdog API: once we have it, finding the rank of any website for any keyword takes a single GET request (there is a minimal sketch right after the checklist below).

  • We will need a trial account on Scrapingdog.
  • Python should be pre-installed on your machine. If it is not, you can download it from python.org.
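Here is a minimal sketch of that GET request, using the same endpoint and query parameters we will use throughout this tutorial (replace your-api-key with your own Scrapingdog API key):

import requests

# Minimal sketch: fetch the top 100 US results for a single keyword.
# "your-api-key" is a placeholder for your Scrapingdog API key.
response = requests.get(
    "https://api.scrapingdog.com/google/",
    params={
        "api_key": "your-api-key",
        "query": "web scraping api",
        "results": 100,
        "country": "us",
        "page": 0,
    },
)
print(response.status_code)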

Setup

Create a folder with any name you like. We will store the scraped data and the Python script in this folder.

mkdir tracker

Install the requests library. This library will be used to make GET requests to Scrapingdog’s Google Search API.

We will also need pandas for this tutorial. I will explain its use case later in this tutorial.

pip install requests
pip install pandas

Now, create a Python file inside this folder. I am naming this file rank.py.

Flow

We will track scrapingdog.com for certain keywords. We will hit the API for every keyword and look for the site in the top 100 results. We will store this data in a list and later, using pandas, save it to a CSV file.

Let’s code this now.

Coding the Rank Tracker

The basic code will look like this.

import requests

# Keywords we want to track, our API key, and the API endpoint.
keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    # Request the top 100 US results for this keyword.
    params = {
        "api_key": api_key,
        "query": keyword,
        "results": 100,
        "country": "us",
        "page": 0
    }

    response = requests.get(url, params=params)

    if response.status_code == 200:
        data = response.json()
        print(data)
    else:
        print(f"Request failed with status code: {response.status_code}")

Here I have selected five sample keywords and stored them in the keywords list. Then we run a for loop to iterate over all of them and, with the help of Scrapingdog’s Google Search API, pull the search results for each keyword.
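For reference, the part of the JSON response we care about is the organic_results array. Judging from the fields we access in the next snippet, each entry carries at least a rank and a link. The sketch below is illustrative of the shape only; the real response contains more fields per result:

# Illustrative shape of the API response (only the fields we use below).
data = {
    "organic_results": [
        {"rank": 1, "link": "https://www.scrapingdog.com/"},
        {"rank": 2, "link": "https://example.com/"},
        # ... up to 100 results
    ]
}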

Now, let’s collect the ranking data for the keywords.

import requests

# List that will hold one record per ranking we find.
l = []

keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    params = {
        "api_key": api_key,
        "query": keyword,
        "results": 100,
        "country": "us",
        "page": 0
    }

    response = requests.get(url, params=params)

    if response.status_code == 200:
        data = response.json()
        # Walk through every organic result and look for our domain.
        for result in data['organic_results']:
            if 'scrapingdog.com' in result['link']:
                l.append({
                    "keyword": keyword,
                    "position": result["rank"],
                    "page": result["link"]
                })
    else:
        print(f"Request failed with status code: {response.status_code}")

The code is straightforward. We first check whether the API responded with status code 200. If it did, we loop over all the results returned by the API (up to 100). While iterating, we check whether scrapingdog.com appears in the link property of each result. If it does, we record the keyword, the position, and the page that is ranking at that position.
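One caveat: the substring check above will also match any URL that merely contains the text scrapingdog.com, for example a page on another site whose path includes it. If you want a stricter match, you could compare hostnames instead; here is a small sketch using Python’s standard urllib.parse:

from urllib.parse import urlparse

def is_our_domain(link, domain="scrapingdog.com"):
    # Match only the domain itself and its subdomains (e.g. www.scrapingdog.com).
    host = urlparse(link).netloc.lower()
    return host == domain or host.endswith("." + domain)

print(is_our_domain("https://www.scrapingdog.com/blog/"))    # True
print(is_our_domain("https://example.com/scrapingdog.com"))  # False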

Finally, we can use pandas to store the data in a CSV file.

import requests
import pandas as pd

# List that will hold one record per ranking we find.
l = []

keywords = ['web scraping api', 'web scraping tool', 'google web scraping', 'best serp api', 'google scraping api']
api_key = "your-api-key"
url = "https://api.scrapingdog.com/google/"

for keyword in keywords:
    params = {
        "api_key": api_key,
        "query": keyword,
        "results": 100,
        "country": "us",
        "page": 0
    }

    response = requests.get(url, params=params)

    if response.status_code == 200:
        data = response.json()
        # Walk through every organic result and look for our domain.
        for result in data['organic_results']:
            if 'scrapingdog.com' in result['link']:
                l.append({
                    "keyword": keyword,
                    "position": result["rank"],
                    "page": result["link"]
                })
    else:
        print(f"Request failed with status code: {response.status_code}")

# Save the collected rankings to a CSV file.
df = pd.DataFrame(l)
df.to_csv('rank.csv', index=False, encoding='utf-8')

print("done")

Once you run this code, you will find a rank.csv file inside your tracker folder.
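To quickly verify the output, you can read the file back with pandas:

import pandas as pd

# Sanity check: load the CSV we just wrote and print it.
df = pd.read_csv('rank.csv')
print(df)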

Going forward, if you’d like to receive a daily update of your ranking status every morning, you can use the schedule library. For emailing the report to yourself, you can use smtplib.
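Here is a rough sketch of how that could look. Note that the SMTP host, port, login credentials, and email addresses below are placeholders you would replace with your own, and schedule needs to be installed first with pip install schedule (smtplib ships with Python):

import smtplib
import time
from email.message import EmailMessage

import schedule


def send_rank_report():
    # In practice you would re-run the rank tracker here first,
    # so that rank.csv holds fresh data before it is emailed.
    msg = EmailMessage()
    msg["Subject"] = "Daily rank report"
    msg["From"] = "you@example.com"  # placeholder sender
    msg["To"] = "you@example.com"    # placeholder recipient

    with open("rank.csv") as f:
        msg.set_content(f.read())

    # Placeholder SMTP server; Gmail, for example, uses smtp.gmail.com:465.
    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
        server.login("you@example.com", "your-password")
        server.send_message(msg)


# Run the report every morning at 8:00 (local time of the machine).
schedule.every().day.at("08:00").do(send_rank_report)

while True:
    schedule.run_pending()
    time.sleep(60)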

Conclusion

Keyword rank tracking is a crucial part of any SEO strategy, enabling businesses to monitor their performance and refine their approaches to stay ahead of the competition.

Using Python and a Google Search API, you can automate the process and efficiently gather accurate ranking data.

This not only saves time but also provides insights that empower data-driven decisions for improving visibility and driving organic traffic.

Whether you want to monitor your own rankings or analyze competitors, this combination provides a scalable and cost-effective solution.
