You're using Jekyll for its simplicity, but you feel limited by its static nature when it comes to data-driven decisions. You check Cloudflare Analytics manually, but wish that data could automatically influence your site's content or layout. The disconnect between your analytics data and your static site prevents you from creating truly responsive, data-informed experiences. What if your Jekyll blog could automatically highlight trending posts or show visitor statistics without manual updates?

Moving Beyond Static Limitations with Data

Jekyll is static by design, but that doesn't mean it has to be disconnected from live data. The key is understanding the Jekyll build process: you can run scripts that fetch external data and generate static files with that data embedded. This approach gives you the best of both worlds: the speed and security of a static site with the intelligence of live data, updated on whatever schedule you choose.

Ruby, as Jekyll's native language, is perfectly suited for this task. You can write Ruby scripts that call the Cloudflare Analytics API, process the JSON responses, and output data files that Jekyll can include during its build. This creates a powerful feedback loop: your site's performance influences its own content strategy automatically. For example, you could have a "Trending This Week" section that updates every time you rebuild your site, based on actual pageview data from Cloudflare.

Setting Up Cloudflare API Access for Ruby

First, you need programmatic access to your Cloudflare analytics data. Navigate to your Cloudflare dashboard, go to "My Profile" → "API Tokens." Create a new token with at least "Zone.Zone.Read" and "Zone.Analytics.Read" permissions. Copy the generated token immediately—it won't be shown again.

In your Jekyll project, store this token securely; the usual approach is environment variables. Create a `.env` file in your project root (and add it to `.gitignore`) containing `CLOUDFLARE_API_TOKEN=your_token_here`, plus your Zone ID (shown on your domain's Overview page in the Cloudflare dashboard) as `CLOUDFLARE_ZONE_ID=your_zone_id_here`. You'll need the Ruby `dotenv` gem to load these variables: add `gem 'dotenv'` to your `Gemfile`, then run `bundle install`. Now your Ruby scripts can read the token without hardcoding sensitive data.


# Gemfile addition
group :development do
  gem 'dotenv'
  gem 'httparty'  # For making HTTP requests
  gem 'json'      # For parsing JSON responses
end

# .env file (ADD TO .gitignore!)
CLOUDFLARE_API_TOKEN=your_actual_token_here
CLOUDFLARE_ZONE_ID=your_zone_id_here
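
Before wiring the token into your build pipeline, it's worth confirming that it actually works. A quick sanity check, assuming the `.env` file above is in place, is to call Cloudflare's token verification endpoint from a short Ruby script (the filename is just a suggestion):


# _scripts/verify_token.rb (optional sanity check for the API token)
require 'dotenv/load'
require 'httparty'

response = HTTParty.get(
  'https://api.cloudflare.com/client/v4/user/tokens/verify',
  headers: { 'Authorization' => "Bearer #{ENV['CLOUDFLARE_API_TOKEN']}" }
)

# A valid, active token returns HTTP 200 with "success": true in the body
puts response.success? ? '✅ Token is valid' : "❌ #{response.code}: #{response.body}"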

Building Ruby Scripts to Fetch Analytics Data

Create a `_scripts` directory in your Jekyll project to keep your data scripts organized. Here's a basic Ruby script that fetches top pages from the Cloudflare Analytics API and writes them where Jekyll can read them. Treat the endpoint and request body as a template: Cloudflare's detailed analytics are now served primarily through its GraphQL Analytics API, so adapt the URL, payload, and response parsing to whichever analytics API you query:


# _scripts/fetch_analytics.rb
require 'dotenv/load'
require 'httparty'
require 'json'
require 'time'   # needed for Time#iso8601
require 'yaml'

# Load environment variables
api_token = ENV['CLOUDFLARE_API_TOKEN']
zone_id = ENV['CLOUDFLARE_ZONE_ID']
abort('Missing CLOUDFLARE_API_TOKEN or CLOUDFLARE_ZONE_ID') if api_token.nil? || zone_id.nil?

# Set up API request headers
headers = {
  'Authorization' => "Bearer #{api_token}",
  'Content-Type' => 'application/json'
}

# Define time range (last 7 days)
end_time = Time.now.utc
start_time = end_time - (7 * 24 * 60 * 60)  # 7 days ago

# Build request body for top pages
request_body = {
  'start' => start_time.iso8601,
  'end' => end_time.iso8601,
  'metrics' => ['pageViews'],
  'dimensions' => ['page'],
  'limit' => 10
}

# Make the API call. The endpoint and payload here are illustrative:
# Cloudflare's detailed analytics are served primarily through its GraphQL
# Analytics API, so adjust the URL, body, and parsing to the API you use.
response = HTTParty.post(
  "https://api.cloudflare.com/client/v4/zones/#{zone_id}/analytics/events/top",
  headers: headers,
  body: request_body.to_json
)

if response.success?
  data = JSON.parse(response.body)

  # Process and structure the data
  top_pages = data['result'].map do |item|
    {
      'url' => item['dimensions'][0],
      'pageViews' => item['metrics'][0]
    }
  end

  # Write to a data file Jekyll can read
  File.write('_data/top_pages.yml', top_pages.to_yaml)

  puts "✅ Successfully fetched and saved top pages data"
else
  puts "❌ API request failed: #{response.code} - #{response.body}"
  exit 1
end
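
When the request succeeds, the script leaves behind a small YAML file. The exact contents depend on your traffic and on the response you parse, but the shape Jekyll sees is roughly this (the values below are purely illustrative):


# _data/top_pages.yml (generated file; example values only)
---
- url: /posts/jekyll-performance-tips/
  pageViews: 412
- url: /posts/cloudflare-cache-rules/
  pageViews: 287
- url: /about/
  pageViews: 150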

Integrating Live Data into Jekyll Build Process

Now that you have a script that creates `_data/top_pages.yml`, Jekyll can use this data automatically. The `_data` directory is a special Jekyll folder for YAML, JSON, or CSV files, and everything in it becomes accessible in templates via `site.data` (the file above, for example, as `site.data.top_pages`). To make the fetch part of every build, create a Rakefile (or extend your existing build script) that runs the analytics script before building:


# Rakefile
task :build do
  puts "Fetching Cloudflare analytics..."
  ruby "_scripts/fetch_analytics.rb"
  
  puts "Building Jekyll site..."
  system("jekyll build")
end

task :deploy do
  Rake::Task['build'].invoke
  puts "Deploying to GitHub Pages..."
  # Add your deployment commands here
end

Now run `rake build` to fetch fresh data and rebuild your site. For GitHub Pages, you can set up GitHub Actions to run this script on a schedule (daily or weekly) and commit the updated data files automatically.

Creating Dynamic Site Components with Analytics

With data flowing into Jekyll, create dynamic components that enhance user experience. Here are three practical implementations:

1. Trending Posts Sidebar
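
A minimal sketch of a sidebar include, assuming the `_data/top_pages.yml` file generated above and assuming the paths Cloudflare reports match your Jekyll permalinks (the include name is made up):


<!-- _includes/trending_posts.html (sketch) -->
<aside class="trending">
  <h3>Trending This Week</h3>
  <ul>
    {% for item in site.data.top_pages %}
      {% assign match = site.posts | where: "url", item.url | first %}
      {% if match %}
        <li><a href="{{ match.url }}">{{ match.title }}</a></li>
      {% endif %}
    {% endfor %}
  </ul>
</aside>

Drop it into your layout with `{% include trending_posts.html %}`.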




2. Analytics Dashboard Page (Private)

Create a private page at an unguessable URL that shows detailed analytics only to you. Use the Cloudflare API to fetch additional metrics and display them in a simple dashboard with Chart.js or a similar library; a sketch of such a page follows.
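
A minimal page sketch, assuming a `default` layout, the trending data file above, and a made-up, hard-to-guess permalink; it only charts data Jekyll already has in `_data`, so no API call happens in the browser:


---
layout: default
permalink: /stats-7f3a9c/   # made-up secret path
sitemap: false              # excluded from the sitemap if you use jekyll-sitemap
---
<canvas id="top-pages-chart"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
  // Chart the trending data Jekyll already loaded from _data/top_pages.yml
  const pages = {{ site.data.top_pages | jsonify }};
  new Chart(document.getElementById('top-pages-chart'), {
    type: 'bar',
    data: {
      labels: pages.map(p => p.url),
      datasets: [{ label: 'Page views (last 7 days)', data: pages.map(p => p.pageViews) }]
    }
  });
</script>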

3. Smart "Related Posts" Algorithm

Enhance Jekyll's typical related posts (based on tags) with actual engagement data: weight related posts higher if they also appear in the trending data from Cloudflare, as in the sketch below.
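
One way to do this is a small Liquid filter plugin; in the sketch below the filter name and file path are made up, and the filter simply moves tag-related posts that also appear in `_data/top_pages.yml` to the front of the list. Note that custom `_plugins` only run when you build the site yourself (for example with the Rakefile above or a CI build step), not in GitHub Pages' default build.


# _plugins/trending_filter.rb (hypothetical sketch)
module Jekyll
  module TrendingFilter
    # Reorders a list of posts so that anything appearing in the
    # Cloudflare trending data (_data/top_pages.yml) comes first.
    def weight_by_trending(posts, top_pages)
      trending_urls = (top_pages || []).map { |entry| entry['url'] }
      boosted, rest = (posts || []).partition { |post| trending_urls.include?(post.url) }
      boosted + rest
    end
  end
end

Liquid::Template.register_filter(Jekyll::TrendingFilter)

In a post layout you could then write `{% assign ranked = site.related_posts | weight_by_trending: site.data.top_pages %}` and loop over `ranked` instead of `site.related_posts`.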

Automating the Entire Data Pipeline

The final step is full automation. Set up a GitHub Actions workflow that runs daily:


# .github/workflows/update-analytics.yml
name: Update Analytics Data
on:
  schedule:
    - cron: '0 2 * * *'  # Run daily at 2 AM UTC
  workflow_dispatch:  # Allow manual trigger

permissions:
  contents: write  # allow the job to push the updated data file back

jobs:
  update-data:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.0'
      - name: Install dependencies
        run: bundle install
      - name: Fetch Cloudflare analytics
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ZONE_ID: ${{ secrets.CLOUDFLARE_ZONE_ID }}
        run: ruby _scripts/fetch_analytics.rb
      - name: Commit and push if changed
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add _data/top_pages.yml
          git diff --quiet && git diff --staged --quiet || git commit -m "Update analytics data"
          git push

This creates a fully automated system where your Jekyll site refreshes its understanding of what's popular every day, without any manual intervention. The site remains static and fast, but its content strategy becomes dynamic and data-driven.

Stop manually checking analytics and wishing your site was smarter. Start by creating the API token and `.env` file. Then implement the basic fetch script and add a simple trending section to your sidebar. This foundation will transform your static Jekyll blog into a data-informed platform that automatically highlights what your audience truly values.