Watch 5 AI Frameworks Evolve: A Stunning Visual History

See how open-source AI gets built, commit by commit

Digital Constellations: Witnessing the birth of AI frameworks, image by author and Midjourney

What if you could watch artificial intelligence evolve before your eyes? Not as lines of code, but as a living, breathing digital organism?

This isn’t just data; it’s a time machine showing how 5 major AI frameworks — PyTorch, TensorFlow, LangChain, Hugging Face, and Scikit-learn — came to life, commit by commit.

I’ve visualized their growth using a technique inspired by the ’80s and ’90s demo scene, complete with retro digital art (powered by Gource, plus a custom Python script for the sound effects) and soundtracks from legendary demo scene musicians.

And if you stick around until the end, I have a special treat for you: a proper old-school demo scene “greetz” section that’ll make any retro coder smile.

So to understand the spirit of collaboration and innovation that drives AI, let’s go back in time to an unexpected source of inspiration: the demo scene.


The demo scene: Pure digital magic

Never heard of it? Get ready. Think ’80s and ’90s. Underground coders. Artists turning computers into their canvas.

They made magic: effects that shouldn’t have been possible on their hardware. It was pure digital artistry.

Don’t believe me? Watch “Second Reality” by Future Crew. Or “State of the Art” by Spaceballs. “Desert Dream” by Kefrens. These aren’t just programs. They’re art. They still blow my mind today.

The sound was just as revolutionary. Those tiny sound chips? Demo scene musicians made them sing, creating the foundation of electronic music we still hear today.

Inspired by the demo scene’s ability to turn code into art, I wanted to bring that same creative energy to the story of AI’s evolution.

Let me show you what happened when I applied this artistic approach to PyTorch — its journey is absolutely fascinating.


PyTorch: The community-driven powerhouse

And what a journey it is — watching a community explosion happen in real time.

PyTorch’s evolution. Music: ‘okta-xy-geen’ by Skaven / Future Crew, Video by author.

The story starts quietly in 2012. Just a handful of dedicated developers laying the groundwork. Nothing dramatic — until 2017 hits.

Then? BOOM.

The visualization erupts like a supernova. Those lights you see? Each one is a developer joining the revolution. Watch how the connections spread like wildfire, forming this incredible collaboration network.

Fun fact: This explosion was so intense, it broke my first sound generator! It just created one massive continuous explosion.

I had to rebuild it to capture this growth spurt's nuances.

What triggered this big bang? Let me break it down:

  • Open Source Revolution: In January 2017, PyTorch went open source. Suddenly, everyone could peek under the hood, tinker with the code, and make it their own.
  • Research Community Love Story: Something magical happened in the research world. PyTorch’s dynamic computation graph and Python-first approach felt like a breath of fresh air. Finally, researchers had a framework that thought like they did.
  • FAIR Goes All In: Facebook’s AI Research (FAIR) didn’t just support PyTorch — they supercharged it. They poured resources into the project and took it even further by connecting it with other frameworks like Caffe2.

This isn’t just a visualization — it’s watching a community find its voice. Every light you see is someone saying, “Yes, this is how AI should work.”

But hold on to your keyboard — because our next visualization tells a completely different story.

If PyTorch’s growth was a digital garden blooming in time-lapse, TensorFlow’s arrival was like a spaceship landing — fully formed and ready to revolutionize AI.


TensorFlow: Google’s open-source behemoth

Check out the difference! No gentle growth phase here: TensorFlow burst onto the scene with the full might of Google behind it.

TensorFlow’s Git history unfolds. Music: ‘the only one left’ by Elwood. Video by author.

This wasn’t a grassroots movement — it was a strategic coup. Google didn’t just release a framework — they dropped a full AI system. See how the visualization spreads out in all directions?

What’s cool is the steady rhythm. While it’s not as explosive as PyTorch in 2017, TensorFlow is consistent and strong. Every beat is another developer or Google engineer pushing the boundaries of what’s possible.

Think of it like the difference between a garden growing naturally and a designed park. Both are beautiful, but TensorFlow’s growth shows what happens when you launch with a clear vision and Google’s resources behind it.

But wait — our visualization story isn’t done yet. What happens when we point our demo scene-inspired tools at the newest kid on the block?

Fast forward to late 2022, when the LLM revolution was in full swing. That’s when LangChain burst onto the scene. And trust me — you’re going to love how this looks.

LangChain: The LLM connector

The visualization captures the moment perfectly. No slow build-up, no gradual evolution — just pure, focused energy as developers rushed to build the future of AI interaction.

The LLM connector, LangChain’s Git history unfolds. Music: ‘Project Genesis’ by Vincenzo. Video by author.

LangChain isn’t about building LLMs from scratch. It’s about connecting to them. It provides a way to interact with powerful language models from providers like OpenAI, Cohere, and others through their APIs.

Think of it as a universal remote for LLMs. It simplifies building applications that use these models, so you can focus on what you want to build instead of how.
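To make the “universal remote” idea concrete, here’s a tiny sketch of the kind of code LangChain enables. It uses the module paths from the early 2023 releases (newer versions have reorganized them), so treat it as illustrative rather than definitive:

# A minimal LangChain sketch (early-2023 API; requires an OpenAI API key)
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)  # the "remote" pointed at OpenAI's API
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in one sentence.",
)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run("the demo scene"))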

The visualization shows this rapid adoption. You see a surge of contributions as people quickly realized the value of LangChain and started building on top of it.

It’s a sign of the growing importance of LLMs and the need for tools to make them more accessible.

Now, get ready for the light show. When I pointed my visualization tools at Hugging Face Transformers, something magical happened.


Hugging Face Transformers: The beauty of open collaboration

Starting in late 2018, Hugging Face Transformers didn’t just grow — it created a constellation of collaboration that’s simply stunning to watch.

The evolution of Hugging Face’s Transformers. Music: “Cosmic Potion” by SunSpire. Video by author.

Hugging Face has created a very collaborative environment, with contributions from researchers and developers all over the world.

Each point of light is a contribution, a piece of code, a refinement. The lines connecting them show the relationships between different parts of the project.

Transformers are the foundation of many of the most advanced language models today. They’ve changed how we approach tasks like text generation, translation and question answering.

Hugging Face’s library makes these powerful models available to everyone.
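To see just how accessible, here’s the library’s well-known pipeline API boiled down to its essence (a minimal sketch; the first run downloads a default pretrained model):

from transformers import pipeline

# One line loads a pretrained model and tokenizer behind a simple API
classifier = pipeline("sentiment-analysis")

print(classifier("Watching these commit histories unfold is mesmerizing!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]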

Time for one last visualization, and it’s a special one. You know how every great demo had that moment of calm beauty between the wild effects?

That’s what I found when I visualized Scikit-learn’s journey. While our previous visualizations exploded with dramatic growth, this 2010 pioneer shows us something different — the quiet elegance of consistent evolution.


Scikit-learn: The foundation of machine learning

Get ready to see why this library earned its place as the bedrock of Python machine learning.

The evolution of Scikit-learn. Music: “Planet Boelex” by Swansong. Video by author.

Scikit-learn’s history is one of steady progress. The graph shows calm, mature growth: no dramatic bumps, just continuous expansion.

That’s what maturity looks like. Scikit-learn has a complete set of tools for classification, regression, clustering and dimensionality reduction. It’s known for its clean API, great documentation and ease of use.
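That clean API is easy to show. Here’s the classic fit/score workflow in a few lines (a quick sketch using the bundled iris toy dataset):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a toy dataset and split it for training and evaluation
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The same fit/score pattern works across nearly every scikit-learn estimator
clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")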

While the pace of new feature additions has slowed down in recent years, the project is not dead. You still see activity focused on stability, performance and algorithm refinement. The overall architecture is stable.

Scikit-learn’s impact is lasting. It’s a foundational library that has enabled an entire generation of data scientists, and the graph tells the story of a project that is now mature, stable, and hugely influential.

Alright, you’ve seen what these visualizations can show us about each framework. But if there’s one thing I learned from the demo scene, it’s that the real fun comes from sharing how the magic happens.

Ready to peek behind the curtain?


Making the magic happen: A developer’s journey

What you’re about to see looks complex at first glance, but I’ll break it down the same way those old-school demo creators would — step by step, building block by building block.

The recipe for code visualization

Here’s everything we’ll be playing with:

  • Git repositories (our time machines into code history)
  • Gource (our visualization wizard)
  • FFmpeg (our video converter)
  • Python (our sound effect generator)
  • Demo scene music (because coding deserves a soundtrack!)
  • ScreenFlow (to weave it all together)

Step 1: Gource — The visual magic

Now comes the fun part — turning that log into something beautiful:

# The main visualization command - let's break it down piece by piece.
# (Bash won't allow comments between backslash-continued lines, so the
# notes live up here and the command itself stays clean below.)
#
# -o ./pytorch.ppm         Output a high-quality PPM stream we'll convert to video later
# --seconds-per-day 0.01   Speed: each day of history gets 0.01 seconds, i.e. 100 days
#                          of development per second. Lower number = faster visualization;
#                          try values from 0.001 (very fast) to 1.0 (very slow)
# --max-files 7000         Limit how many files are shown at once to prevent visual
#                          overload. Adjust for your project size: too high = messy
#                          visualization, too low = missing data
# --title "PyTorch"        Add a title to the visualization
# --hide ...               Clean up the visualization by hiding extra text.
#                          Remove this option if you want to see all the details!
# -1280x720                Set the video resolution (width x height).
#                          Standard HD - increase for 4K displays
# .                        The directory to visualize (replace it with a specific path)

gource \
  -o ./pytorch.ppm \
  --seconds-per-day 0.01 \
  --max-files 7000 \
  --title "PyTorch" \
  --hide filenames,users,dirnames \
  -1280x720 \
  .

A quick tip: I hid the filenames and usernames because PyTorch has so many that they turned my screen into alphabet soup! For smaller projects, try showing them — it adds a personal touch.

Step 2: Converting to video

Gource gives us a PPM file — great for quality but not for sharing. Let’s fix that with FFmpeg:

# Let's convert our visualization into a web-friendly video format.
# (As above, the notes sit up here because bash won't accept comments
# between backslash-continued lines.)
#
# -f image2pipe -vcodec ppm   Tell FFmpeg the input is a raw stream of PPM
#                             frames, which is exactly what Gource produces
# -r 60                       Read the stream at 60 frames per second
# -i pytorch.ppm              Our source file from Gource
# -c:v libx264                H.264 video coding - perfect for web playback;
#                             this is what YouTube and most websites expect
# -b:v 10M                    Set quality with a 10 megabit bitrate. Bigger number =
#                             better quality but larger file size. Try 5M for smaller
#                             files, up to 20M for crystal-clear quality
# -pix_fmt yuv420p            A standard pixel format that works everywhere - the
#                             universal language of video; your browser, phone, and
#                             media player will all understand it
# pytorch.mp4                 Name of the output file. Change 'pytorch' to whatever
#                             project you're visualizing

ffmpeg \
  -f image2pipe -vcodec ppm -r 60 \
  -i pytorch.ppm \
  -c:v libx264 \
  -b:v 10M \
  -pix_fmt yuv420p \
  pytorch.mp4

Now we’ve got a proper video file! But it’s silent… let’s fix that.

Step 3: Capturing git’s story

First, we need Gource to tell us everything that happened in our repository — every commit, file change, and little detail. Here’s how:

gource --output-custom-log ./tensorflow.log .

This creates a log file that looks like this:

1678822668|Harrison Chase|A|/langchain/chains/qa_generation/prompt.py 
1678822668|Harrison Chase|A|/langchain/evaluation/loading.py

Think of it as a diary of your code’s life — when files were born, changed, or said goodbye. We’ll use this to create our sound effects.

Step 4: The sound of code

We take the log from the previous step and turn it into sound effects! I wrote a Python program called GitSymphony that reads the log and generates sound effects for different events.

It transforms your code contributions into an audio track representing your project’s evolution.

The audio pipeline’s four stages (Parsing, Grouping, Mapping, and Generating) turn the source log into a WAV file, image by author

The four parts of creating the sound effects WAV track (shown in the image) break down like this:

1. Parsing
First, we parse the Gource log file, which contains timestamps, users, actions, and file paths:

# Each line in the log looks like:  
# 1585305600|dev1|A|src/main.py

The parser extracts these elements into structured data we can work with. Each event tells us:

  • When it happened (timestamp)
  • Who made the change (user)
  • What they did (A=Added, M=Modified, D=Deleted)
  • Which file they changed
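Here’s a minimal sketch of that parsing step (my own illustration; the actual GitSymphony parser may structure things differently):

def parse_gource_log(path):
    # Each line looks like: 1585305600|dev1|A|src/main.py
    events = []
    with open(path) as f:
        for line in f:
            timestamp, user, action, filepath = line.rstrip("\n").split("|")
            events.append({
                "timestamp": int(timestamp),  # Unix epoch seconds
                "user": user,
                "action": action,             # A=Added, M=Modified, D=Deleted
                "path": filepath,
            })
    return events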

2. Grouping
Next, we group related events that happen close together:

# Group events within 15 seconds from the same user doing the same action 
grouped_events = group_events(events, grouping_window=15.0, min_files=25)

This is where it gets interesting — the code recognizes when you’re in a “coding burst” (making many similar changes in a short time window).

Instead of playing a sound for every file change, it groups them. This prevents the audio from becoming a chaotic mess of overlapping sounds.
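A simplified version of that grouping logic might look like this (my own sketch, not GitSymphony’s actual code; here min_files simply flags which groups count as bursts):

def group_events(events, grouping_window=15.0, min_files=25):
    groups = []
    for event in sorted(events, key=lambda e: e["timestamp"]):
        last = groups[-1] if groups else None
        # Merge into the previous group: same user, same action,
        # and within the grouping window
        if (last and event["user"] == last["user"]
                and event["action"] == last["action"]
                and event["timestamp"] - last["timestamp"] <= grouping_window):
            last["file_count"] += 1
        else:
            groups.append({**event, "file_count": 1})
    # Flag the "coding bursts": groups touching at least min_files files
    for group in groups:
        group["is_burst"] = group["file_count"] >= min_files
    return groups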

3. Mapping
Then comes the fun part — mapping actions to actual sounds:

# Our sound mapping - each action gets its own sound 
mapping_rules = { 
    "actions": [ 
        { 
            "pattern": "A",  # New files get a swoosh 
            "sound_file": "swoosh.wav" 
        }, 
        { 
            "pattern": "M",  # Changes get an explosion 
            "sound_file": "explosion3.wav" 
        }, 
        { 
            "pattern": "D",  # Deletions get a different explosion 
            "sound_file": "explosion1.wav" 
        } 
    ] 
}

I chose sounds that felt right for each action: swoosh sounds for adding new files (like bringing something new into existence) and explosion sounds for modifications and deletions (representing impact and transformation).

4. Generating
Finally, we generate the actual audio file:

process_audio( 
    mapped_events, 
    sound_folder="sounds", 
    output="./output", 
    input_basename="project_history", 
    target_duration_seconds=58,  # Make it fit in about a minute 
    seconds_per_day=0.01         # Control the tempo 
)

This stage places each sound at the right moment in the timeline, scaling days of development into seconds of audio. The magic happens when the code calculates precisely where each sound should be played based on when the original code change occurred.
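At its heart, that placement is a simple proportional scaling from repository time to audio time. Here’s a toy version of the math (illustrative only; the real process_audio() also stretches the result to hit target_duration_seconds):

SECONDS_PER_DAY = 0.01   # the same tempo setting as the Gource visualization
SECONDS_IN_A_DAY = 86_400

# Two toy events, exactly one day apart in the repository's history
mapped_events = [
    {"timestamp": 1_585_305_600, "sound_file": "swoosh.wav"},
    {"timestamp": 1_585_392_000, "sound_file": "explosion3.wav"},
]

first_ts = mapped_events[0]["timestamp"]
for event in mapped_events:
    days_elapsed = (event["timestamp"] - first_ts) / SECONDS_IN_A_DAY
    event["audio_offset"] = days_elapsed * SECONDS_PER_DAY
    print(f"{event['sound_file']} plays at {event['audio_offset']:.3f}s")
# swoosh.wav plays at 0.000s; explosion3.wav plays at 0.010s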

Making it musical
Fun fact: When I first ran this on PyTorch’s 2017 repository, the output was just one long, continuous explosion.

So many changes happened simultaneously that all the sounds blended together. I had to rewrite the code to be more… musical.

I added two crucial features:

  1. A minimum gap between sounds (min_sound_gap_seconds: 2.0) to prevent overwhelming overlaps
  2. Intelligent event prioritization that favors events with more files when several land at the same moment:

# If multiple events happen simultaneously, sort the one with more files
# first, so it wins when nearby duplicates are dropped
mapped_events.sort(key=lambda e: (e["timestamp"], -e["file_count"]))
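Here’s how those two features might work together, as a toy sketch operating on events already mapped onto the audio timeline (my own illustration, not the actual GitSymphony code):

MIN_SOUND_GAP_SECONDS = 2.0

mapped_events = [
    {"audio_offset": 0.0, "file_count": 40},
    {"audio_offset": 0.0, "file_count": 3},   # simultaneous, but fewer files
    {"audio_offset": 0.5, "file_count": 12},  # too close to the last kept sound
    {"audio_offset": 3.0, "file_count": 8},
]

# Sort so that, when offsets collide, the event with more files comes first
mapped_events.sort(key=lambda e: (e["audio_offset"], -e["file_count"]))

# Keep an event only if it leaves enough breathing room after the last kept one
kept, last_offset = [], None
for event in mapped_events:
    if last_offset is None or event["audio_offset"] - last_offset >= MIN_SOUND_GAP_SECONDS:
        kept.append(event)
        last_offset = event["audio_offset"]

print(kept)  # the 40-file event at 0.0s and the event at 3.0s survive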

The result is a soundtrack that captures your project’s rhythm — intense bursts of activity come through as clusters of sounds, while quieter periods create natural pauses.

When played alongside a Gource visualization, it adds a whole extra dimension to understanding how your project evolved.

You don’t just see the changes anymore—you hear them, too! Go to the GitHub repo for more information on running the script.

Step 5: The demo scene touch

Now, we need our soundtrack. I picked some classic demo scene tracks—electronic beats from the ’80s and ’90s that perfectly capture the energy of code evolution.

The pulsing rhythms and synthetic melodies feel right, don’t they? But you can choose any music, of course.

Step 6: The final mix in ScreenFlow

Last step — bringing it all together in ScreenFlow:

  • Drop in our Gource MP4
  • Layer in the demo scene soundtrack
  • Add our custom sound effects
  • Fine-tune everything until it feels just right

And there you have it — code evolution turned into an audiovisual experience!

But hang on — we can’t wrap this up without one last demo scene tradition. Remember how every great demo ended with those scrolling ‘greetz’?

Those weren’t just credits — they were a celebration of community, a reminder that even the coolest effects came from standing on the shoulders of giants.


Greetz: A demo scene tradition

Time to give our AI visualization journey the same treatment. Keep your eyes on the familiar visual style, but this time watch it weave a story of thanks.

Giving credit where it’s due, demo scene style. Music: “Cosmic Potion” by SunSpire, Video by author.

This video continues the Gource visualization style, but this time, it’s all about acknowledging those who helped make this AI exploration possible.

This tradition goes back to the earliest days of the demo scene, and it felt right to include it here.

Speaking of demo scene traditions — there’s one more we haven’t talked about yet: inspiring others to create their own digital art.

I’ve shown you my visualizations, shared all my tools and tricks, and now I can’t wait to see what YOU dream up.


Now it’s your turn

Because the real magic? It happens when you start exploring your own code’s hidden stories. And trust me — every project has a story just waiting to be visualized.

Imagine discovering the hidden rhythms in your team’s codebase or watching your side project bloom into a mesmerizing visual story.

Ready to create your visualization? I’ve compiled everything you need to get started in my GitHub repository. You’ll find:

  • The complete GitSymphony sound generator
  • Ready-to-use configuration files for Gource
  • Step-by-step setup guides for all the tools
  • Sample datasets to practice with

The tools are waiting for you: Gource, FFmpeg, and the Python scripts in my repo.

Start small—maybe visualize a personal project first. Then, once you’ve mastered it, scale up to larger codebases.

Tag me when you create something amazing—I can’t wait to see what you build!
