15 Buttons to AI Chaos: Build a Connect 3 Stream Deck Game

Level Up Your Stream Deck: How I Added AI Smarts to a Classic Game

Your new AI gaming buddy: A lovable robot ready to challenge you on the Stream Deck, image by Midjourney

You know those moments when a random gift sparks an unexpected obsession? That’s exactly what happened when I got my hands on a Stream Deck — one of those fancy button panels streamers use for sound effects and memes.

It was a farewell gift from my incredible colleagues at Hoogendoorn, where I’d spent nine years turning IIVO from an ambitious idea into a market-leading greenhouse automation system. (Yeah, I’m still pinching myself about that one.)

Most people would’ve just loaded it with emojis and called it a day. But something about those 15 buttons, arranged in that perfect little 3x5 grid, got my mind racing.

I didn’t see a stream controller — I saw the world’s tiniest gaming console begging for an AI challenge.

My first thought? Connect Four! But reality hit fast: squeezing a 6x7 game board onto this tiny deck would be like trying to parallel park a bus in a phone booth.

So, I pivoted to its scrappy cousin, Connect Three. It is quick, brutal, and perfectly sized for my miniature battleground.

But here’s where it gets interesting. I didn’t want just any AI opponent — I built three distinct personalities:

Sparky

First up is Sparky, my Q-Learning rookie. Sparky learns through trial and error, like a caffeinated toddler discovering chess. Its moves are so delightfully unpredictable that they’re almost endearing. Think of a puppy chasing its own tail, but make it digital.

Deep Blue Jr

Then there’s Deep Blue Jr., the calculated killer. Powered by Monte Carlo Tree Search — the same tech that helped machines master Go — this one plays like it’s plotting your downfall three moves ahead (because it is).

Socrates

Finally, meet Socrates, the philosopher. This AI, powered by Llama 3.3 through Groq’s API, doesn’t crunch numbers. Instead, it reasons about the game, like a grandmaster giving a lecture. It’s a different approach with equally fascinating results.

What started as a great farewell gift turned into my own personal AI fight club. I’ve watched these algorithms duke it out, celebrated unlikely victories, and been humbled by their increasingly clever strategies.

Before we dive under the hood, let’s get to know our AI challengers. I’ve built three unique personalities, each with its own quirks. We’ll meet them one by one, starting with the scrappiest of the bunch: Sparky.


Meet Sparky: The clueless AI that learns by failing

Remember our rookie AI I mentioned earlier? Let me properly introduce you to the Q-learning agent.

I’d successfully trained Q-learning on Tic-Tac-Toe before and thought Connect Three would be a breeze. Spoiler alert: I was wonderfully wrong.

Think of Q-learning as learning to cook by repeatedly burning toast until you finally figure out how not to set off the smoke alarm. I had the AI play countless games against itself to speed things up.

Picture two enthusiastic but clueless toddlers teaching each other soccer — chaos, entertainment, and the occasional accidental goal.

Initially, the results looked promising — its win rate kept climbing steadily. But those impressive stats were hiding some hilariously bad decisions. One moment, it would brilliantly block your winning move; the next, it would hand you victory on a silver platter.

Here’s what 20,000 training rounds taught us:

  • The AI memorized 7,000 unique game states — impressive memory, questionable application
  • It achieved a 90% win rate against random moves — but still fell for painfully obvious traps

Let me show you what this looks like in practice. Check out these graphs:

Graphs illustrating Q-learning AI training over 20,000 episodes. Charts show fluctuating episode rewards stabilizing, high primary agent win rates, cumulative victories dominating over losses, rapid initial Q-table growth plateauing, declining exploration rates, and rising ELO ratings.
Q-learning AI training progress: Chaotic rewards, rising win rates, and expanding memory. Impressive on paper — until you watch it play. Image by author.

The blue spikes in the rewards chart? That’s our AI having its “eureka” moments, followed by long stretches of “Wait, what was I doing again?” The steadily climbing win rate looks great until you realize it’s mostly winning against copies of itself — like declaring yourself world champion of sock puppet boxing.

But here’s what makes this AI endearing: its unpredictability. Sometimes, it pulls off moves so clever you’d think it was secretly a grandmaster. Other times… well, let’s say it keeps the game interesting in ways we never intended.

Want to see this adorable chaos in action? Here’s a quick video of me playing against our digital rookie:

Animated GIF showing gameplay of DeckDrop’s Connect Three on an Elgato Stream Deck, featuring the Q-learning AI. Tokens drop onto a 3-by-5 grid with moves demonstrating the AI’s playful inconsistency — occasionally strategic, yet humorously prone to mistakes, highlighting its learning process.
Me playing against Sparky, our Q-learning AI agent, image by author.

Notice how it sometimes nails obvious moves — and other times completely misses the plot? It’s like teaching someone to cook using only a microwave; sure, they’ll heat food, but Gordon Ramsay won’t call anytime soon.

Watching Sparky stumble through matches was charming — but I craved something smarter. What if we had an AI that didn’t just learn by failure but could foresee your moves like a chess grandmaster?

Enter Deep Blue Jr., powered by Monte Carlo Tree Search — the ruthless strategist I never knew I needed.


Deep Blue Junior: The unbeatable Connect 3 mastermind

After watching our lovable Q-learning rookie stumble around, it was time to bring in the heavy hitter.

Enter Monte Carlo Tree Search (MCTS) — the technique that helped AlphaGo shock the world by defeating human Go champions.

Here’s what makes MCTS unique: while Q-learning bumbles through games, hoping to learn from its mistakes, MCTS is like having a chess grandmaster who can see every possible future.

For each move, it simulates thousands of potential game outcomes. Imagine playing poker against someone who’s already seen every possible hand you might draw — that’s what it feels like facing this AI.

And here’s where it gets exciting: Connect Three is solvable, unlike Chess or Go. Because the game board is so compact, an algorithm like MCTS can explore every possible sequence of moves. As a result, it doesn’t just play well — it plays perfectly.
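
To make that concrete, here’s a small TypeScript sketch of the idea at the core of the approach: for every legal move, play a pile of random games to the end and keep score. This is a deliberately simplified, flat Monte Carlo version; the real mcts-opponent.ts builds a proper search tree, and the helpers below (legalMoves, applyMove, checkWin) are illustrative names rather than the plugin’s actual API.

type Board = number[][]; // 3 rows x 5 columns; 0 = empty, 1 = player one, 2 = player two

const ROWS = 3;
const COLS = 5;

// A column is playable as long as its top cell is still empty
function legalMoves(board: Board): number[] {
  return [...Array(COLS).keys()].filter((c) => board[0][c] === 0);
}

// Drop a token into the lowest empty cell of the chosen column
function applyMove(board: Board, col: number, player: number): Board {
  const next = board.map((row) => [...row]);
  for (let r = ROWS - 1; r >= 0; r--) {
    if (next[r][col] === 0) {
      next[r][col] = player;
      break;
    }
  }
  return next;
}

// Scan every cell for three in a row horizontally, vertically, or diagonally
function checkWin(board: Board, player: number): boolean {
  const dirs = [[0, 1], [1, 0], [1, 1], [1, -1]];
  for (let r = 0; r < ROWS; r++) {
    for (let c = 0; c < COLS; c++) {
      for (const [dr, dc] of dirs) {
        let count = 0;
        for (let k = 0; k < 3; k++) {
          const rr = r + k * dr;
          const cc = c + k * dc;
          if (rr < 0 || rr >= ROWS || cc < 0 || cc >= COLS || board[rr][cc] !== player) break;
          count++;
        }
        if (count === 3) return true;
      }
    }
  }
  return false;
}

// Flat Monte Carlo: judge each legal move by how often random playouts from it end in a win
function bestMove(board: Board, player: number, playoutsPerMove = 500): number {
  const opponent = player === 1 ? 2 : 1;
  let best = -1;
  let bestWins = -1;

  for (const move of legalMoves(board)) {
    let wins = 0;
    for (let i = 0; i < playoutsPerMove; i++) {
      let state = applyMove(board, move, player);
      if (checkWin(state, player)) {
        wins++;
        continue;
      }
      let current = opponent;
      while (true) {
        const moves = legalMoves(state);
        if (moves.length === 0) break; // board full, draw
        const choice = moves[Math.floor(Math.random() * moves.length)];
        state = applyMove(state, choice, current);
        if (checkWin(state, current)) {
          if (current === player) wins++;
          break;
        }
        current = current === 1 ? 2 : 1;
      }
    }
    if (wins > bestWins) {
      bestWins = wins;
      best = move;
    }
  }
  return best;
}

Even this stripped-down version becomes hard to beat once you give it a few hundred playouts per move; the full tree search simply spends its simulations more wisely.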

When it moves first, it becomes practically unbeatable. Playing against it feels less like a game and more like arguing physics with Einstein: technically, you can try, but you know how it will end.

Don’t believe me? Let me show you a typical game:

Animated GIF showcasing a game of Connect Three on an Elgato Stream Deck against the Monte Carlo Tree Search AI. Gameplay is swift and precise, with tokens strategically placed on a 3-by-5 grid. The AI consistently makes optimal moves, revealing a virtually unbeatable and polished opponent.
Monte Carlo Tree Search in action: the AI moves first, and just like that, the game feels suspiciously rigged, image by author.

See that first move in the center? It’s not just a good move — it’s the beginning of your inevitable defeat. Every response you make has already been calculated, and every counter-strategy anticipated.

Q-learning might have been your goofy friend who occasionally stumbles into brilliant moves. MCTS is the stone-cold strategist who’s already mapped out every path to victory before you even start playing. It’s fascinating, impressive, and just a tiny bit terrifying.

Want to experience what it’s like to play against an AI that never makes mistakes? Then MCTS is your new best frenemy.

Or, if you prefer a more… unpredictable opponent, stick with our beloved Q-learning rookie, Sparky. He’s still learning!

Sure, Deep Blue Jr.’s flawless calculations were impressive, but it felt a bit… robotic. I wanted a player with more personality and more unpredictability — a digital partner capable of surprising me with uniquely human-like insights and even human-like mistakes.

Enter Socrates: a philosophical AI opponent who doesn’t just calculate moves, but reasons about them, sometimes with surprising — and occasionally flawed — results.


Meet Socrates: The AI that thinks about Connect 3

After dancing with our cheerfully confused Q-learning rookie and getting repeatedly crushed by the MCTS grandmaster, I wondered: could we create an AI that plays more… thoughtfully? Less like a calculator, more like your most thoughtful friend pondering their next move?

Enter Groq. They build specialized AI chips, called Language Processing Units (LPUs), that are orders of magnitude faster at running large language models than traditional hardware.

I decided to use their llama-3.3-70b-versatile model, which packs a whopping 70 billion parameters. That’s like strapping a rocket engine to our Connect Three game. And the best part? You can still access Groq’s API for free and experiment with these powerful models. But instead of using it to crunch numbers, I wanted it to think about the game the way humans do.

Here’s the cool part: while our first two AIs see the game as a complex math problem, this one receives the board state as a simple text description:

Current board state: 
. . X . . 
O X O . . 
X O . . .

Then it does something fascinating — it thinks about the game almost like a human would. The prompt we give it is beautifully simple:

You are playing a Connect-3 game on a 3x5 grid.  
 
You need to get 3 in a row horizontally, vertically, or diagonally to win. 
 
Current board state (X = player 1, O = player 2, . = empty): 
. . X . . 
O X O . . 
X O . . . 
 
You are playing as O. The columns are numbered 0-4 from left to right. 
Analyze the board carefully and return ONLY a single digit (0-4)  
representing the column where you want to place your token. 
Choose the best strategic move.  
 
If a column is full,  you cannot place a token there.

We tell it, “You need to get 3 in a row horizontally, vertically, or diagonally to win,” and ask it to analyze the board and choose the best strategic move.
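
For the curious, here’s roughly how that prompt could be assembled from the board array. This is a hedged sketch with made-up helper names (boardToText, buildPrompt), not the literal code from groq-opponent.ts:

// Turn the numeric board (0 = empty, 1 = X, 2 = O) into the text grid the model reads
function boardToText(board: number[][]): string {
  const symbols = ['.', 'X', 'O'];
  return board.map((row) => row.map((cell) => symbols[cell]).join(' ')).join('\n');
}

function buildPrompt(board: number[][]): string {
  return [
    'You are playing a Connect-3 game on a 3x5 grid.',
    '',
    'You need to get 3 in a row horizontally, vertically, or diagonally to win.',
    '',
    'Current board state (X = player 1, O = player 2, . = empty):',
    boardToText(board),
    '',
    'You are playing as O. The columns are numbered 0-4 from left to right.',
    'Analyze the board carefully and return ONLY a single digit (0-4)',
    'representing the column where you want to place your token.',
    'Choose the best strategic move.',
    '',
    'If a column is full, you cannot place a token there.',
  ].join('\n');
}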

Here’s an example of a raw response from the LLM for a different board scenario:

To make the best move, let's analyze the current state of the board: 
 
. . . . .  
. . O . .  
. . X X . 
 
Player 1 (X) has two X's in a row, which means they could potentially win  
if they get one more X in the same row.  
 
However, since we are playing as O, our goal is to prevent X from winning  
and create an opportunity for ourselves to win. 
 
Given the current state, placing an O in column 2 would be a good strategic  
move, as it would block X's potential three-in-a-row and also give us a  
chance to create a diagonal or vertical line. 
 
So, the best move would be to place the O in column 2.  
 
2
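
We asked for ONLY a single digit and got a short essay, with the actual answer tucked in at the end. The plugin has to cope with exactly that kind of chatty reply. Here’s a hedged sketch of what the call and the clean-up could look like, using Groq’s OpenAI-compatible chat completions endpoint; the function name and parsing trick are illustrative, not the plugin’s actual code:

async function askGroqForMove(prompt: string, apiKey: string): Promise<number | null> {
  // Groq exposes an OpenAI-compatible chat completions endpoint
  const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'llama-3.3-70b-versatile',
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const data = await response.json();
  const text: string = data.choices?.[0]?.message?.content ?? '';

  // The model tends to explain itself first, so take the last digit 0-4 it mentions
  const matches = text.match(/[0-4]/g);
  return matches ? parseInt(matches[matches.length - 1], 10) : null;
}

Grabbing the last digit between 0 and 4 is a pragmatic shortcut: the model usually states its conclusion at the very end, exactly as in the response above.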

Here’s what makes this opponent unique:

  1. Natural Language Strategy: Instead of learning through trial and error or calculating move trees, it understands the game through language. It can “see” the whole board and reason about it strategically, just like you would explain your moves to a friend.
  2. Human-like Decision Making: Every move comes from actual strategic thinking rather than pure calculation. Sometimes it plays defensively, other times it sets up subtle traps that might take you a moment to spot.
  3. Surprising Creativity: While our MCTS master might find mathematically perfect moves, this AI occasionally comes up with strategies that feel almost… inspired. It might spot patterns that our other AIs would miss simply because it can “think” about the game in higher-level terms.

Think of it as playing against your brilliant friend who’s had precisely two glasses of wine — thoughtful and strategic but also capable of those “wait, I didn’t see that coming” moments.

Here is an example game of me playing against Socrates.

Animated GIF showing gameplay of DeckDrop’s Connect Three on an Elgato Stream Deck, featuring the LLM AI. Tokens drop onto a 3-by-5 grid as the model reasons about each move, sometimes strategically, sometimes with very human-looking slip-ups.
Me playing against Socrates, the LLM AI Agent, image by author.

The result? An AI opponent that doesn’t just play Connect Three — it feels like it appreciates the game. It won’t always be as relentless as MCTS or as chaotic as Sparky, but it still finds ways to demolish you when you least expect it.


Inside the AI training environment

Behind every AI is a training gym — a digital playground where algorithms flex their muscles, make mistakes, and (hopefully) learn from them.

For Sparky, our Q-learning agent, this took the form of a separate Python-based training environment, distinct from the Node.js/TypeScript codebase of the Stream Deck plugin itself.

You can find this training environment as a separate project within the GitHub repository.

Here’s how I set up the Q-learning training environment:

Flowchart illustrating DeckDrop’s AI training pipeline. Environment.py and Agent.py feed into Self Play Training, producing a Q-Table JSON file. This JSON is converted via convert.py into a Q-Table TypeScript module, integrating directly into the Stream Deck plugin.
Training pipeline simplified: Python self-play generates the Q-table, ready to be plugged straight into TypeScript, image by author.

Two components make up the training process: the environment and the agent.

Environment (environment.py):
This is the game referee. It enforces rules, evaluates the game board, and gives the AI immediate feedback—like rewarding good moves or penalizing illegal ones.

Right at the heart of the environment lies the step() function, controlling how the game reacts each time the AI makes a move. Here's how it looks under the hood:

def step(self, action): 
  # Check if the action is valid 
  if not self._is_valid_action(action): 
      return self._get_state(), -10, self.done, {"invalid_move": True} 
       
  # Drop the piece and record previous state value 
  self.previous_player_state[self.current_player] = self._calculate_state_value(self.current_player) 
  row = self._get_next_open_row(action) 
  self.board[row][action] = self.current_player 
 
  # Check for win/draw and assign rewards 
  if self._check_win(): 
      self.done = True 
      self.winner = self.current_player 
      reward = 1.0 if self.current_player == 1 else -1.0 
      return self._get_state(), reward, self.done, {"winner": self.current_player} 
       
  # Small negative reward for non-terminal move 
  reward = -0.05   
   
  # Add intermediate rewards for good positioning 
  if self.intermediate_rewards: 
      current_state_value = self._calculate_state_value(self.current_player) 
      previous_state_value = self.previous_player_state[self.current_player] or 0 
      state_improvement = current_state_value - previous_state_value 
       
      if state_improvement > 0: 
          reward += 0.1 * state_improvement
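
  # ... (excerpt: the full step() finishes the move and returns state, reward, done, info, just like the branches above)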

Agent (agent.py):
The learner, a digital player experimenting with every possible move, gradually refining its decision-making. Imagine someone who learns chess purely by losing a few thousand games first—painful, yes, but effective.

The magic happens during self-play training. The AI repeatedly battles itself, updating a huge reference table (the “Q-table”) that stores each move’s strategic value. Once training is complete, this Q-table, a massive collection of game states and their learned outcomes, is exported as a JSON file.

To make this Q-table accessible within the TypeScript environment of the Stream Deck plugin, I use a script called convert.py to transform the JSON data into a TypeScript variable.

I initially explored loading the JSON file directly at runtime but encountered difficulties with how the Stream Deck plugin environment handles external files. Embedding the Q-table as a TypeScript variable provided a more reliable and performant solution within the plugin’s constraints.
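
To give you an idea of what comes out of convert.py, the generated module might look something like this (the entries and values are placeholders; only the shape matters):

// q-table.ts - generated by convert.py; keys are 15-character board strings,
// values map each column (0-4) to its learned Q-value (numbers here are placeholders)
export const Q_TABLE: Record<string, Record<number, number>> = {
  '000000000000000': { 0: 0.12, 1: 0.34, 2: 0.51, 3: 0.29, 4: 0.08 },
  '000120000100200': { 0: -0.05, 1: 0.18, 2: 0.44, 3: 0.02, 4: -0.11 },
  // ...thousands more entries
};

// At play time, Sparky looks up the current state and picks the highest-valued legal column
export function bestKnownMove(state: string, validColumns: number[]): number {
  const values = Q_TABLE[state] ?? {};
  return validColumns.reduce(
    (best, col) => ((values[col] ?? 0) > (values[best] ?? 0) ? col : best),
    validColumns[0],
  );
}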

Now, the Q-learning AI is ready to play — and lose hilariously — on your Stream Deck.

Here’s a peek at the code powering the learning process:

def update(self, state, action, reward, next_state, next_valid_actions): 
  # Initialize Q-values if needed 
  if state not in self.q_table: 
      self.q_table[state] = {a: 0.0 for a in range(5)} 
 
  # Get max Q-value for next state 
  max_next_q = 0.0 
  if next_valid_actions: 
      max_next_q = max(self.q_table[next_state].get(a, 0.0) for a in next_valid_actions) 
 
  # Q-learning update formula  
  current_q = self.q_table[state].get(action, 0.0) 
  new_q = current_q + self.learning_rate * (reward + self.discount_factor * max_next_q - current_q) 
  self.q_table[state][action] = new_q

Think of this formula as the AI’s cheat sheet. After each move, the agent reassesses the board, adjusts its predictions, and (hopefully) learns something useful — like realizing it’s probably not great to hand victory to the opponent in one obvious move.

And that’s it: A pipeline transforming chaos into (slightly less chaotic) gameplay.

What we discovered

The code reveals some fascinating insights:

  1. State representation matters enormously — We use a simple string like “000120000100200” where each digit represents a cell state (see the sketch after this list).
  2. Strategic knowledge accelerates learning — By building in pattern recognition, we drastically reduced training time and improved quality.
  3. Self-play creates a moving target — As our agent improved, its opponent (updated every 500 games) also improved, pushing continuous learning.
  4. Balancing exploration vs. exploitation is crucial — Starting with 100% random moves and gradually focusing on learned strategies was key to success.
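
Here’s what that state encoding from point 1 boils down to, assuming a straightforward row-by-row flattening (the exact ordering in the project may differ):

// Flatten the 3x5 board into the 15-character key used for Q-table lookups:
// row by row, each cell becomes one digit (0 = empty, 1 = player 1, 2 = player 2)
function encodeState(board: number[][]): string {
  return board.map((row) => row.join('')).join('');
}

// With that convention, "000120000100200" corresponds to this position
const example = [
  [0, 0, 0, 1, 2],
  [0, 0, 0, 0, 1],
  [0, 0, 2, 0, 0],
];
console.log(encodeState(example)); // -> "000120000100200"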

Want to see the complete code? Check out the full project on GitHub, where you can explore every detail of how all three AI opponents work.


The Stream Deck plugin architecture

Before we dive deeper into the code, let me show you the blueprint of the plugin part of our game. I’ve sketched out how all the pieces work together — think of it as a bird’s eye view of our digital playground!

A diagram illustrating the structure of a Stream Deck plugin. The top-level box labeled “Stream Deck Plugin” contains three primary sections: “AI Components,” “Game Components,” and “Stream Deck Interface.” AI Components include Q-Learning (ai-opponent.ts), MCTS (mcts-opponent.ts), and LLM (groq-opponent.ts), each connected via a Strategy Pattern to an AI Opponent module. The Q-Learning module is linked to an external QTable. The Game Components include Game Logic (game-logic.ts).
DeckDrop’s plugin architecture, image by author.

Pretty cool, right? Let’s break down what you’re looking at, because this diagram tells the whole story of how our game comes to life.

How Stream Deck plugins actually work

First, let’s tackle the basics. Your Stream Deck is basically a canvas of buttons, and each button can run what’s called an “action.” Think of actions as little programs you can assign to any button through the Stream Deck software.

In our case, we’ve created a single action called DeckDrop — it’s our entire game! Note that “DeckDrop” is also the name of the overall project and the GitHub repository, but here, we’re referring specifically to the Stream Deck action.

But here’s the cool part: we want this game to spread across all 15 buttons on your Stream Deck.

That’s where Profiles come in. A Profile is like a preset layout that tells each button what to do. We’ve created one that fills your entire Stream Deck with our game, starting with an empty board ready for action.

Let’s break it down further into bite-sized pieces that actually make sense.

1. Stream Deck interface (top right)

Plugin Entry (plugin.ts): Think of this as our game’s handshake with the Stream Deck. It says, “Hey, we’re a game, and here’s how we work!” In this file, we register our single DeckDropGame action.

import streamDeck, { LogLevel } from "@elgato/streamdeck"; 
 
import { DeckDropGame } from "./actions/deck-drop-game"; 
 
// We can enable "info" logging so that all messages between the Stream Deck, and the plugin are recorded. 
streamDeck.logger.setLevel(LogLevel.INFO); 
 
streamDeck.actions.registerAction(new DeckDropGame()); 
 
// Connect to the Stream Deck. 
streamDeck.connect();  
 
// Export your actions here 
export const actions = [ 
  DeckDropGame 
];

Deck Drop Game (deck-drop-game.ts): The conductor of our orchestra, making sure all the pieces play together nicely. It handles button presses, game state updates, player interactions, and AI move coordination.

There are a couple of interesting things going on here, and they all stem from the fact that we have one game but multiple instances of the same action.

The first is the onWillAppear event, which fires whenever an action appears on the Stream Deck. On our 3x5 board, that means it gets called 15 times. Each time, we store a reference to the action and its coordinates; we use this information later to work out which column the user is dropping a token into when they press a button.

This is also where we switch to the DeckDrop profile after you drop the DeckDrop action onto your board. We only do this once.

override async onWillAppear(ev: WillAppearEvent<GameSettings>): Promise<void> {   
  // Store action reference in our lookup map if it has coordinates 
  if (ev.action.coordinates) { 
    const col = ev.action.coordinates.column; 
    const row = ev.action.coordinates.row; 
    const key = this.getCoordinateKey(row, col); 
    this.actionLookup.set(key, ev.action); 
    streamDeck.logger.info(`Stored action at coordinates [${row}, ${col}]`); 
     
    // Set initial image 
    await this.renderer.setButtonImage(row, col, EMPTY_SLOT_IMAGE); 
  } 
   
  // Switch to the DeckDrop profile if we haven't already 
  if (!this.hasProfileSwitched) { 
    try { 
      await streamDeck.profiles.switchToProfile(ev.action.device.id, "DeckDrop"); 
      this.hasProfileSwitched = true; 
      streamDeck.logger.info('Switched to DeckDrop profile'); 
    } catch (error) { 
      streamDeck.logger.error(`Failed to switch profile: ${error}`); 
    } 
  } 
   
  // Render the board 
  await this.renderer.renderBoard(this.gameLogic.getBoard()); 
}

But what happens when a player actually makes a move? That’s where the onKeyDown event comes in — it’s the heart of our game’s interaction.

Let’s break down what happens when you tap a button:

override async onKeyDown(ev: KeyDownEvent<GameSettings>): Promise<void> { 
  // Only proceed if we have coordinates 
  if (ev.action.coordinates) { 
    // If the pressed button is in the top row, drop the token 
    if (ev.action.coordinates.row === 0) { 
      const column = ev.action.coordinates.column; 
      const moveResult = this.gameLogic.makeMove(column, this.renderer.showWinner.bind(this.renderer)); 
       
      // Explicitly render the board after move 
      await this.renderer.renderBoard(this.gameLogic.getBoard()); 
       
      // If the move was successful, check if we need to reset 
      if (moveResult) { 
        this.scheduleGameReset(); 
      } 
    } else { 
      // Button is in a lower row, show valid move indicators 
      streamDeck.logger.info('Button in lower row pressed, showing valid move indicators'); 
      await this.showValidMoveIndicators(); 
    } 
  } else { 
    streamDeck.logger.info('Button has no coordinates, no action taken'); 
  } 
}

This might look like a lot, but it’s actually doing something quite clever. Instead of just handling button presses, we’ve turned the Stream Deck into an intelligent game interface.

When you tap any button, the game instantly knows which column you’re aiming for. Tap the top row to make your move, or tap lower buttons to see hints about where you can play. The board updates instantly, the AI takes its turn, and the game keeps flowing smoothly.

Let’s peek at another crucial piece: how we handle the AI moves themselves.

2. AI components (top left)

Ever heard the phrase “keep your friends close and your AI strategies closer”?

That’s precisely what we’re doing here. This list shows where each personality resides within the project’s codebase. We have three distinct AI implementations:

  • Sparky: Implemented in ai-opponent.ts. This file contains the code that utilizes the pre-trained Q-Table (converted from the Python training environment) to make decisions. This is our Q-learning AI.
  • Deep Blue Jr.: Implemented in mcts-opponent.ts. This file houses the Monte Carlo Tree Search algorithm responsible for Deep Blue Jr.’s strategic gameplay.
  • Socrates: Implemented in groq-opponent.ts. This file contains the logic for interacting with Groq’s API and leveraging the Llama 3.3 model for Socrates’ reasoning-based gameplay.

All three of these plug into our AI Opponent using the Strategy Pattern — a fancy way of saying we can hot-swap between our AIs without the rest of the game even noticing.
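
In code, the Strategy Pattern is nothing more than a small shared contract that all three opponents implement. Here’s a minimal sketch with illustrative names rather than the repo’s exact interfaces:

// Every opponent, no matter how it thinks, honours the same tiny contract
interface AIOpponent {
  // Given the board and which player the AI controls, return a column (0-4)
  getMove(board: number[][], player: number): Promise<number>;
}

class QLearningOpponent implements AIOpponent {   // Sparky, ai-opponent.ts
  async getMove(board: number[][], player: number): Promise<number> {
    return 0; // placeholder: look up the Q-table here
  }
}

class MCTSOpponent implements AIOpponent {        // Deep Blue Jr., mcts-opponent.ts
  async getMove(board: number[][], player: number): Promise<number> {
    return 0; // placeholder: run the tree search here
  }
}

class GroqOpponent implements AIOpponent {        // Socrates, groq-opponent.ts
  async getMove(board: number[][], player: number): Promise<number> {
    return 0; // placeholder: ask the LLM here
  }
}

// The game only ever sees the interface, so swapping opponents is a one-liner
function createOpponent(kind: 'q-learning' | 'mcts' | 'llm'): AIOpponent {
  if (kind === 'mcts') return new MCTSOpponent();
  if (kind === 'llm') return new GroqOpponent();
  return new QLearningOpponent();
}

Because the game only ever talks to that interface, swapping opponents never touches the rest of the code, which is exactly what makes the hot-swapping above possible.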

Want to practice against the lovable rookie? Start there. Ready for a brutal challenge? Switch to the MCTS master. Are you in the mood for some thoughtful gameplay? The LLM awaits.

You can even have them face off against each other!

Screenshot of the DeckDrop plugin configuration screen showing various settings including “Game Mode” set to “AI vs AI,” “Player Selection” as “Player One,” the first AI strategy set to “MCTS (Stronger),” the second AI strategy set to “Groq LLM (Experimental),” and AI difficulty set to “Medium (500 simulations).” An icon depicting a yellow game piece on a blue background is visible on the left.
DeckDrop Plugin Configuration Interface, image by author.

3. Game components (bottom)

This is where the rubber meets the road:

  • Game Logic: The referee of our virtual playground. It:
    – Keeps track of whose turn it is
    – Makes sure nobody’s cheating
    – Coordinates between the AI and the interface
    – Manages the game state (think of it as the game’s memory)
  • Win Checker: Our observer who spots every potential victory. It’s constantly scanning for those three-in-a-row moments that make or break the game.
  • Game Renderer: The artist of our operation. It turns boring numbers and states into those pretty buttons you see on your Stream Deck. Those satisfying animations when you win? That’s all thanks to this component.

Implementation guide

Want to try this out or make your own tweaks? Here’s how to get started!

Getting set up

First, you’ll need:

  • Stream Deck MK.2 hardware or simulator
  • Node.js (version 20 or newer)
  • The Stream Deck app from Elgato
  • A code editor you’re comfortable with (I use VS Code)

Quick start

  1. Grab the code:
git clone https://github.com/PatrickKalkman/deckdrop 
cd deckdrop/deckdrop 
npm install

2. Start the development server:

npm run watch

3. Tell the Stream Deck about your plugin:

  • Open the Stream Deck app and choose “Configure Stream Deck”
  • Drag the DeckDrop action onto the Stream Deck

That’s it! It will ask you to install the profile, and then you should see the game board appear on your Stream Deck.

To use Socrates as an opponent, you need a free API key from Groq. The API key can be entered in the DeckDrop Groq API key field.

Need help?

If something’s not working:

  • Check the Stream Deck logs (they’re in the app’s settings)
  • Make sure your Stream Deck software is up to date

Remember, this is all about having fun. Break things, experiment, and see what you can create!


Conclusion

Who would’ve thought a tiny Connect Three game squeezed onto a Stream Deck could teach us so much? It turns out watching AI stumble through games is both hilarious and enlightening.

Sure, the algorithms look brilliant on paper, but wait until you see some of the moves they make; they’ll have you questioning everything you know about artificial intelligence.

Here’s what I learned: sometimes, the best ideas come from working with weird limitations. And while my AI might brag about winning 90% of its games, don’t be fooled — it still makes moves that’ll make you cringe. It’s a humbling reminder that even impressive statistics don’t tell the whole story.

Curious? Give it a whirl. Test yourself against Sparky’s chaos, challenge Deep Blue Jr.’s perfect precision, or have a thoughtful conversation about strategy with Socrates.

Download it, tweak it, break it — heck, reinvent it!


Resources

Ready to dive in — or bravely plunge headfirst into the AI rabbit hole? Here’s everything you need, neatly sorted:

GitHub Repository: Connect Three AI
– Source code, setup guide, docs, and issue tracker.

Elgato Developer Documentation:
Stream Deck SDK Docs (Essential for plugin newbies)
Plugin Creation Guide (Step-by-step starter)

AI Implementation References:
Monte Carlo Tree Search Guide (Deep dive into strategic mastery)
Q-Learning Tutorial (Beginner-friendly walkthrough)
Game AI Development Resources (Comprehensive overview)

Remember, this project thrives on your brilliant hacks, bold experiments, and hilarious mistakes. Whether you’re squashing bugs, outsmarting my AI, or proudly breaking the game in new and creative ways, every quirky contribution counts.

Let’s build something gloriously imperfect together!
