Viben Ring - Does Vibe Coding Work?

Vibe coded mini game with Windsurf AI, Claude 3.7 Sonnet, and Three.js

I tried building a mini Elden Ring game with AI.

I vibe coded with Windsurf AI, Claude 3.7 Sonnet, Three.js, and React Three Fiber.

I have zero experience in game development.

But I grew up playing Baldur’s Gate, Neverwinter Nights, Dark Age of Camelot, Asheron’s Call, The 4th Coming, etc.

Overall, I was surprised how far I could get vibe coding.

However, it soon became impossible to let AI write 100% of the code 👇️ 

Here’s the playable Viben Ring demo:

Here’s the open source project:

Does Vibe Coding Work?

Setting Things Up

I started by creating a Next.js app from a template. I knew I wanted to use React Three Fiber to make an Elden Ring-like game with a 3rd-person point of view and boss fights.

Initially, I wanted to get the environment, character, and basic movement working.

I knew Mixamo had models and animations, so I downloaded what looked good and had AI implement it. Watch my vibe coding session here. Sadly, this pants-less model was the only quality free model with a sword.

Challenges

I had global state managed with React context, and AI edited it directly.

For example, dodging consumed stamina. But instead of calling setStamina with a functional updater, AI would compute the new value from the current state, which might already be outdated when dodging in quick succession. So the player would dodge, and stamina tracking would break because the code wasn’t working from a reliable previous value.
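Here’s a minimal sketch of the pattern, assuming a plain `useState` hook for stamina (the hook name and `DODGE_COST` value are mine, purely for illustration):

```tsx
import { useState } from "react";

const DODGE_COST = 20; // illustrative value, not the game's actual cost

function usePlayerStamina() {
  const [stamina, setStamina] = useState(100);

  // What AI kept writing: it reads the `stamina` captured by this closure,
  // which is already stale if the player dodges twice before a re-render.
  const dodgeBuggy = () => setStamina(stamina - DODGE_COST);

  // The fix: the functional updater always receives the latest value.
  const dodge = () => setStamina((prev) => Math.max(0, prev - DODGE_COST));

  return { stamina, dodge };
}
```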

But, it wasn’t just stamina.

Collision detection faced similar issues.

At first, I tried Elden Ring-style hit detection. Touching the tip of a sword should register a hit. I attempted bounding boxes on the skeleton.

Super cool in theory. Total pain in practice.

Eventually, I gave up and switched to a simpler system:

  • A cone in front of the boss to represent the attack range

  • A spherical hitbox for the character

Then, at the end of an attack animation, I'd just check whether the two hitboxes overlapped.

You can see this in action by enabling “Debug Mode” when you play the game.
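If you’re curious, the whole check boils down to something like this (a simplified sketch, not the project’s exact code; the names and thresholds are illustrative):

```ts
import { Vector3 } from "three";

interface ConeAttack {
  origin: Vector3;    // boss position
  forward: Vector3;   // normalized facing direction
  range: number;      // how far the attack reaches
  halfAngle: number;  // half the cone's opening angle, in radians
}

interface SphereHitbox {
  center: Vector3;    // player position
  radius: number;
}

// Returns true if the player's sphere falls inside the boss's attack cone.
function coneHitsSphere(attack: ConeAttack, target: SphereHitbox): boolean {
  const toTarget = target.center.clone().sub(attack.origin);
  const distance = toTarget.length();

  // Standing right inside the boss counts as a hit.
  if (distance <= target.radius) return true;

  // Too far away, even counting the sphere's radius.
  if (distance > attack.range + target.radius) return false;

  // Finally, check the angle against the cone's opening.
  return attack.forward.angleTo(toTarget.normalize()) <= attack.halfAngle;
}
```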

For the boss, I told AI to use a "behavior tree" and it actually implemented one with goal sequences. Overkill, but fascinating. I probably cheated a bit by saying "behavior tree," but hey it worked 🤷‍♀️.
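For reference, a behavior tree is just composable nodes that “tick” in order. A stripped-down sketch (the boss actions here are stand-ins, not the generated code):

```ts
type Status = "success" | "failure";

interface BTNode {
  tick(): Status;
}

// Runs children in order; fails as soon as one fails.
class Sequence implements BTNode {
  constructor(private children: BTNode[]) {}
  tick(): Status {
    for (const child of this.children) {
      if (child.tick() === "failure") return "failure";
    }
    return "success";
  }
}

// Tries children in order; succeeds as soon as one succeeds.
class Selector implements BTNode {
  constructor(private children: BTNode[]) {}
  tick(): Status {
    for (const child of this.children) {
      if (child.tick() === "success") return "success";
    }
    return "failure";
  }
}

// Leaf wraps a condition or an action.
class Leaf implements BTNode {
  constructor(private fn: () => Status) {}
  tick(): Status {
    return this.fn();
  }
}

// Stand-in game hooks, purely for illustration.
const playerInRange = (): boolean => false;
const startAttack = (): Status => "success";
const chasePlayer = (): Status => "success";

// Boss brain: attack if the player is in range, otherwise chase.
const bossBrain: BTNode = new Selector([
  new Sequence([
    new Leaf(() => (playerInRange() ? "success" : "failure")),
    new Leaf(startAttack),
  ]),
  new Leaf(chasePlayer),
]);

// Tick the tree once per frame (e.g. inside useFrame).
bossBrain.tick();
```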

Overall, AI was impressive… until it wasn’t.

It would almost correctly suggest edits, but sometimes it'd insert XML tags into a TypeScript file, or place <window> tags outside JSX.

AI also struggled with understanding the size of models. If I placed a hitbox on the character, it didn’t know where the center was. It was easier to manually move the hitbox around until it looked right.

Once I started implementing terrain, boss movement, and other fancy stuff, AI went rogue. Very unpredictable. It started mixing character logic, hitboxes, movement, and attack into one mega-component. As a result…

💣️ I restarted the entire project

…because AI started amalgamating different components into one massive file.

When components grew beyond 200-300 lines, things became unmanageable. In my 2nd attempt, I had to be more specific and guide the architecture to keep things organized.

I couldn’t just say “make me a character and add this animation.” I had to babysit AI to avoid circular logic bugs.

This kinda ruined my “vibe”.

You Still Have to Read the Code

I tried to avoid reading the code. I really did.

I’d sit back, let AI do its thing, read Three.js docs in another tab, or watch Black Mirror.

Eventually, something would break, and I’d have to jump in and fix it. Usually, it was messy.

Sometimes AI would leave behind old placeholder code. Later edits would reference it, thinking it was still there.

One time I updated a player animation, and suddenly the trees in the environment started duplicating and re-rendering. Why? Turns out there was a render loop tied to animation logic, which retriggered environmental renders with new random parameters.
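The shape of the bug, and one plausible fix, looks roughly like this (component names are illustrative, and this is a sketch of the pattern, not my actual code):

```tsx
import { useMemo } from "react";

// Illustrative helper: scatter trees at random positions.
function randomPositions(count: number): [number, number, number][] {
  return Array.from({ length: count }, () => [
    Math.random() * 100 - 50,
    0,
    Math.random() * 100 - 50,
  ]);
}

// Buggy: positions are recomputed on EVERY render, so any state change
// higher up (like an animation update) reshuffles the whole forest.
function ForestBuggy({ count }: { count: number }) {
  const positions = randomPositions(count);
  return (
    <group>
      {positions.map((p, i) => (
        <mesh key={i} position={p} />
      ))}
    </group>
  );
}

// Fixed: memoize the layout so it's only generated when `count` changes.
function Forest({ count }: { count: number }) {
  const positions = useMemo(() => randomPositions(count), [count]);
  return (
    <group>
      {positions.map((p, i) => (
        <mesh key={i} position={p} />
      ))}
    </group>
  );
}
```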

Junior dev mistake, or senior dev on a time crunch.

Either way: garbage.

Second Rewrite

When I started fresh, I structured things better:

  • Each piece was its own isolated component

  • Character had its own hook

  • I guided the architecture to prevent recursive insanity

Still, after 5–10 features, AI started crumbling.

The bigger the project, the more it tripped over itself.

More Challenges

Getting a hitbox to trigger at the right animation frame was hard.

I had to tell AI exactly how to do it: calculate animation duration based on speed and frames, then use a delay before checking collision.

This worked, yay! But only because I dictated the logic like an annoying robot manager.
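Roughly the shape of what I dictated, sketched with three.js’s AnimationAction (the 0.8 cutoff and helper names are illustrative, not the generated code):

```ts
import type { AnimationAction } from "three";

// Delay the hit check until near the end of the swing, when the blade
// is actually extended, instead of testing the hitbox on every frame.
function scheduleHitCheck(
  attackAction: AnimationAction,
  checkAttackCollision: () => void
) {
  // Real playback time = clip length divided by the playback speed.
  const clipDuration = attackAction.getClip().duration; // seconds
  const effectiveDuration = clipDuration / attackAction.timeScale;

  const delayMs = effectiveDuration * 0.8 * 1000;
  return setTimeout(checkAttackCollision, delayMs);
}
```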

Also, AI was generally awful at dealing with useRef, side effects, and delayed state.

Timers? Forget it.

It’d reference objects too early.

Or effects would fire multiple times unexpectedly.

Or some hooks would pull state that was already stale.
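The “referencing objects too early” problem is the easiest one to show. A hedged sketch with illustrative names:

```tsx
import { useEffect, useRef } from "react";
import type { Group } from "three";

function BossModel() {
  const groupRef = useRef<Group>(null);

  // What AI wrote, roughly: assumes the ref is populated immediately.
  // groupRef.current.position.set(0, 0, -10); // 💥 `current` is null during render

  // Safer: touch the ref inside an effect, after mount, and guard it.
  useEffect(() => {
    if (!groupRef.current) return;
    groupRef.current.position.set(0, 0, -10);
  }, []);

  return <group ref={groupRef} />;
}
```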

The Bigger the Project, The Slower AI Gets

The generated code was often messy: spaghetti-like, with side effects triggering unexpected behaviors.

Like I mentioned earlier, updating a dodge roll animation somehow caused trees in the environment to multiply and re-render! What the heck…

Unrelated components kept getting tangled together. Literal spaghetti.

The biggest problem is that you still need to read, review, and test the code, which defeats part of the purpose of using AI to code for you.

Generation times became painfully slow as the project grew:

  • “Generating request… taking longer than expected”

  • “Still generating…”

I had time to watch entire shows while it "fixed" stuff.

Spoiler: it didn’t fix it 🤣

What's worse, the AI would sometimes repeat the same approach that didn't work previously, making the debugging process more frustrating. Too often, a full hour would pass with absolutely zero progress.

Vibe-kill.

Could I Have Avoided a Rewrite?

Maybe.

If I’d architected better from the beginning — character, environment, boss logic, etc. — maybe.

But that defeats the purpose of using AI to do it for me.

The game itself is simple:

  • Player moves, dodges, attacks

  • Boss approaches, attacks

  • One dies, game ends

It’s just a flashy demo, a 3D toy.

Three.js looks awesome, but it’s still basic game logic.
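To put “basic” in perspective, the win/lose logic above fits in a few lines once you strip away the animation and visuals (state names are mine, purely illustrative):

```ts
type GameState = "fighting" | "playerWon" | "playerLost";

interface Combatant {
  health: number;
}

// The entire end condition of the demo.
function resolveState(player: Combatant, boss: Combatant): GameState {
  if (player.health <= 0) return "playerLost";
  if (boss.health <= 0) return "playerWon";
  return "fighting";
}
```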

After implementing about 5-10 features, it became increasingly difficult to track why the AI was changing certain things. The sweet spot seems to be the initial demo with 1-2 features. Beyond that, things get complicated unless you have completely independent components.

AI Makes it Look Cool, But...

People see the animation and say: “Whoa, AI made that?”

Not really.

The 3D models were made by Mixamo. AI just wired things together.

The result looks great, but underneath? Spaghetti code. I spotted an IF statement nested five levels deep somewhere…

Testing

One area AI definitely helped: writing tests.

Edge cases, randomized tests - AI was pretty helpful.

Also great at naming things. I hate naming. Sometimes I’d just sit there and talk to it:

“What do you think of this naming convention?”

Closing Thoughts

In the end, I was left with code that conceptually made sense but was so tangled that I couldn't fully understand it.

You can't just tell AI to make a game without understanding what it's doing.

There's still a level of technical aptitude required that goes beyond just downloading an editor and starting a template project. While the 3D visuals create a "wow" effect, especially when people hear it was "written by AI," they don't see the engineer struggling behind the scenes.

I prefer AI for smaller chunks of code that I can easily digest. UI components without complex logic are perfect for AI generation since they follow standard patterns. For more complex logic, a 70/30 or 80/20 split between human and AI seems ideal.

What’s the real cost of letting AI build stuff for you?

You lose understanding.

Every new feature takes longer because you have no idea what AI changed.

If I turned off AI now, I’d have to reread the whole codebase… and probably rewrite it.

You might object, “You didn’t architect it properly.”

Yeah, that’s the point. If I had to, I wouldn’t need AI.

If you insist on full AI codegen, you better:

  • Architect everything

  • Read code constantly

  • Guide it like a toddler

My recommendation, as above, is to let AI handle roughly 20%–30% of the work.

Let it scaffold UI, generate boilerplate, help you test.

But for core game logic, you (the senior developer) must drive.

It is clear to me:

Senior devs treat AI like junior devs.

Junior devs treat AI like senior devs.

Need More Help? 👋 

1/ If you want to grow on social media and build a business in coaching, consulting, speaking, selling apps, or digital products… check out Blotato

2/ Free AI courses & playbooks here