
title: Vibe Coding For Old Fogeys
category: Programming
draft: true

It isn’t news that AI is having a big impact on coding, but I thought it would still be potentially interesting to describe my experiences.

[I have ridden the mighty vibe worm]

My History

Not because you’re intrinsically fascinated with it, but just so you can calibrate the rest of what I say based on it.

I’m coming up fast on 30 years professionally programming, and some years hobby programming before then. I’m very comfortable in the pre-AI world. Between my hobbies and professions, I’ve roamed all over the programming landscape; front end, back end, numerous languages, GUIs, data processing, network servers, games, all sorts of things.

When AI coding first became practical at a professional level… which really wasn’t all that long ago!… I was in the middle of cleaning up a project I’d been working on for over a year. It was the final polish phase, which always takes longer than you’d like, but my assessment at the time was that AI didn’t really have much to offer me at that point in the dev cycle. The tests were largely written. The architecture done. The bullet points mostly checked off. The bugs were obscure and detailed.

Such experiences as I have had with AI since then have not changed my mind. I already had a finely-tuned guesser for what was wrong; an external one telling me the moral equivalent of “have you turned it off and on?” wouldn’t have helped much. That’s not because AI can’t do that sort of thing, but because I was so far down in the weeds on a high-knowledge project that it wouldn’t have been able to help much.

Since then I’ve dabbled with AI. The most success I’ve had was translating one sort of program into another, because “style transfer” is something LLMs are very good at by the nature of their design. You still need to check that they didn’t just drop a clause or two because it was too “improbable”, but it’s a very powerful porting tool for bits of code. I find it far faster to read and review code side-by-side than to write it from scratch.

But I hadn’t had a great opportunity to get in and really get dirty with it.

And then I realized my wife had tossed me a softball and I needed to take a swing.

The Meme Tablet

A couple of years back I got a fairly large Samsung tablet for what seemed like good and sufficient reasons at the time, but which turned out to be wrong. It ended up just sort of sitting around doing nothing. But it’s a bit too expensive, as such kit goes, to be doing nothing, so I decided to repurpose it to do something I’d wanted to do for a while: The Meme Tablet.

Mount it on the wall, grab some memes, have it randomly display them throughout the day. Walk by, get a quick hit of “heh”, move on with your day.

My basic requirements were:

  • Be updated through Syncthing to get the new memes; I’m not manually adding these things. I want it to interact with something that ultimately backs onto a standard directory.
  • Display them slowly. I don’t want people to sit there staring at the tablet waiting for the next one every 15 seconds, nor do I want them to be sitting there swiping at it for memes. I just want it to be a quick hit as you walk by. I was thinking 5 minutes delay on the rotation.
  • Display them simply. No swooping, panning, zooming, etc.
  • No ads.

You would think this is not a complicated set of requirements, but I burned through quite a lot of apps before finding one that half worked. Many slideshow apps are optimized for the cases you’ve probably seen at funerals, or birthdays for more elderly people, where pictures are displayed at roughly 10 second intervals, with a default swoop/pan/whatever to keep it “interesting”, and music playing underneath. Many apps wouldn’t even let me set more than 15 seconds on the rotate option, well short of my acceptance criterion.

Or they want a subscription.

This is not a subscription situation.

Or they can’t handle thousands of pictures. Or they won’t cycle through automatically, but will only do it once. Or they only load pictures from a cloud account. Or some other major disqualifying issue.

I have not tried all of them. I’ve only got so much free time. Eventually I found one that was at least adequate. However, the family still has some major complaints:

  • The display order is randomized, but the swipe order matches the original order of the pictures. That is, the slideshow may display picture 397, then go to 129, but if you missed the picture and want to go back, swiping left will take you to image 128 in the original order.

    This really annoys the family. If you want to point out the meme to someone else, but the timer has expired, it’s basically just gone now.

  • I’m not entirely sure whether or not the randomness is really pulling from the entire set or if it’s actually pulling from the last 1024 or 2048 or something. It’s hard to prove with an undifferentiated mass of pictures.

  • There’s a couple of other minor annoyances.

  • Finally… and this is by no means a feature I expect a standard app to have… but we’ve noticed that, located where it is, it doubles as a rather nice night light. But it’s unreliable in that role: it can suddenly switch brightness on you when it switches pictures. It occurs to me I’d like a mode where, for a configurable period of time at night, it will display a solid color, and possibly manipulate the brightness of the screen directly.
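That last wish is at least technically straightforward on Android. As a rough Kotlin sketch of the sort of thing I mean (the function name, the warm color, and the brightness value are all my own made-up choices, not anything from a real app), an Activity can override its own window brightness:

```kotlin
import android.app.Activity
import android.graphics.Color

// Hypothetical night-light mode: paint the window a solid warm color and
// pin the window's brightness low, overriding auto-brightness for this
// window only (screenBrightness is 0.0..1.0; -1 means "system default").
fun enterNightLight(activity: Activity, brightness: Float = 0.05f) {
    activity.window.decorView.setBackgroundColor(Color.rgb(255, 140, 60))
    val params = activity.window.attributes
    params.screenBrightness = brightness.coerceIn(0f, 1f)
    activity.window.attributes = params // reassigning applies the change
}
```

Restoring `screenBrightness` to `-1f` on the way out of night mode hands control back to the system.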

My wife asked me a couple of weeks ago whether or not I could fix it. After all, how hard is a slideshow app, right?

Well, my pre-AI answer is that while I know I could do it eventually, the amount of new stuff I’d have to learn was just absurd. Android development has apparently moved to Kotlin; I don’t know Kotlin. (Those of you who are expert in Android development probably already have an accurate sense of where I stand just from that statement!) Which also means I don’t know any of its libraries. What I know about Android qua Android as a programming environment is at least a decade old and wasn’t much to start with. (Technically, I had done more with Palm than Android, and my Palm experience was “a button that popped up ‘Hello World’ and then a series of null pointer dereferences.”) I don’t know the Android Studio IDE.

All of these issues are perfectly surmountable in time, but I’m in an exploit phase of my career, in the explore/exploit sense. That doesn’t mean I never learn anything new; I just put TypeScript under my belt in the last 8 months or so for work. It just means I’m being a bit more selective about what I load into my brain space, and this was not a great cost/benefit tradeoff. I’ve done GUIs. I’ve done system programming. I’ve done new languages. I wasn’t going to learn much new about programming, just learn Android details.

But then I realized, hey, here’s a perfect chance to try out AI coding. Reading Kotlin isn’t too hard with the experience I have (and an AI eager to explain anything on demand, in the local context), but I wouldn’t be writing it. I can’t second-guess library choices or how to drive the app development process. I’ll have the AI and nothing but. Plus, Android Studio includes Gemini for free right now; with an API key you can get the more advanced model, but the free one is still decent. How would it go?

Riding The Mighty Vibe Worm

So I downloaded Android Studio for Linux and got cracking.

One thing I’d recommend is starting out by asking the AI the best way to do something before asking it to actually do it. The AI will happily offer you an older or deprecated way of doing something, even flagging it as the older mechanism; but hey, if the AI is going to write the code, it might as well write the modern code first.

I started by asking it how to store user preferences for the app, and it immediately offered both the old way and the newer way. I had to ask for the newer way, which involves type-safe storage rather than storing strings. I also had to prompt it to deal with the newer media permissions used on Android (if by “newer” you mean “in the past 10 versions or so”), because it started writing code to request media permissions the old way.
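For the curious, the modern shapes of both things look roughly like this. This is a sketch, not the app’s actual code; the store name, the `rotation_minutes` key, and the function names are my own inventions, and it assumes the `androidx.datastore` Preferences artifact is on the classpath:

```kotlin
import android.Manifest
import android.content.Context
import android.os.Build
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.intPreferencesKey
import androidx.datastore.preferences.preferencesDataStore

// Type-safe Preferences DataStore instead of string-based SharedPreferences.
val Context.settingsStore by preferencesDataStore(name = "settings")

// Hypothetical setting: minutes between slide rotations.
val ROTATION_MINUTES = intPreferencesKey("rotation_minutes")

suspend fun saveRotationMinutes(context: Context, minutes: Int) {
    context.settingsStore.edit { prefs -> prefs[ROTATION_MINUTES] = minutes }
}

// On Android 13+ (API 33, "Tiramisu") the blanket READ_EXTERNAL_STORAGE
// permission is replaced by granular media permissions.
fun requiredImagePermission(): String =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU)
        Manifest.permission.READ_MEDIA_IMAGES
    else
        Manifest.permission.READ_EXTERNAL_STORAGE
```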

I still ended up needing some debugging skills, because the AI can get itself into trouble it doesn’t always understand. One bad case was a mismatch of library versions it suggested, causing an error that didn’t give the AI anything to grip on to:

ERROR HERE

I had to hit Google myself and do some old-school interpretation of the results to figure out what the problem was.

But still, on the whole, the experience was quite nice. Within about 4 hours I had a basic slideshow working, and, I believe, without a lot of “cheating”. It gets the images out of a content media query that could easily be adapted to other things.
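A media query of that shape looks something like the following; again a sketch rather than the app’s actual code, assuming the synced memes end up in shared storage where `MediaStore` indexes them:

```kotlin
import android.content.ContentResolver
import android.content.ContentUris
import android.net.Uri
import android.provider.MediaStore

// Query MediaStore for every indexed image, newest first, returning
// content:// URIs the slideshow can hand straight to an image loader.
fun loadImageUris(resolver: ContentResolver): List<Uri> {
    val uris = mutableListOf<Uri>()
    val projection = arrayOf(MediaStore.Images.Media._ID)
    resolver.query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        projection,
        null, null,
        MediaStore.Images.Media.DATE_ADDED + " DESC"
    )?.use { cursor ->
        val idCol = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
        while (cursor.moveToNext()) {
            uris += ContentUris.withAppendedId(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                cursor.getLong(idCol)
            )
        }
    }
    return uris
}
```

Swapping the selection arguments (the two `null`s) would restrict it to a particular folder or album.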

That said, I found that many of the things I’ve learned over the years still came into play, and could easily have flummoxed a non-programmer for hours or even resulted in the project being abandoned. For instance, the AI by default wasn’t adding logging. Once I added logging of when each new image was displayed, it didn’t take long to notice that the slideshow continues even when the app is no longer in the foreground. That’s not good for battery life and such. A simple prompt to suspend it was not hard. And once I saw the code to do it, it isn’t hard to imagine how to extend it.
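The idiomatic Kotlin fix for that kind of foreground-only work is lifecycle-scoped coroutines; a sketch, with `showNextImage()` as a hypothetical stand-in for whatever actually advances the slide:

```kotlin
import androidx.activity.ComponentActivity
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Hypothetical helper that advances to the next image.
fun showNextImage() { /* ... */ }

// The rotation loop only runs while the app is in the foreground:
// repeatOnLifecycle cancels the block when the lifecycle drops below
// RESUMED and restarts it when the user comes back.
fun ComponentActivity.startSlideshow(intervalMs: Long) {
    lifecycleScope.launch {
        repeatOnLifecycle(Lifecycle.State.RESUMED) {
            while (true) {
                showNextImage()
                delay(intervalMs)
            }
        }
    }
}
```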

I’ve got a better algorithm for randomization in mind than just telling the system to “make it random”, and an idea for how to integrate it with “swiping back” that I seriously doubt the AI would produce if I just prompted it. There’s still going to be a market for people who learn what is even possible with AI tooling, because it’s always going to be a skill, and not one that everyone will pick up.
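The scheme I have in mind is simple enough to sketch in plain Kotlin (the class name is my own): shuffle the indices once, then let both the auto-advance timer and the swipe gestures walk the same shuffled order, so “back” always means “the meme you just saw” rather than “the previous file on disk”:

```kotlin
import kotlin.random.Random

// One shuffled ordering shared by the timer and the swipe gestures:
// previous() always returns the previously *displayed* image.
class ShuffledDeck(size: Int, seed: Long = System.nanoTime()) {
    private val order = (0 until size).shuffled(Random(seed))
    private var pos = 0

    fun current(): Int = order[pos]

    fun next(): Int {            // timer tick or swipe forward
        pos = (pos + 1) % order.size
        return order[pos]
    }

    fun previous(): Int {        // swipe back
        pos = (pos - 1 + order.size) % order.size
        return order[pos]
    }
}
```

Because the whole permutation is materialized up front, it also sidesteps the “is it really sampling the whole set?” question: every image appears exactly once per cycle.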

Talking my way through all the UI widgets was helpful. Even if you nominally know the widgets, an LLM is probably useful just for figuring out which ones you need; it’s invaluable if you don’t know them all by name.

The Upshot

If there is one lesson to take away from this story, I would suggest it is this: if you’ve wanted to really get into and grok AI coding, but you’ve had a hard time integrating it into an existing workflow that was never designed for an AI world, I can strongly suggest doing what I did.

Pick up a project in a language you don’t know, in a framework you don’t know, in a context where you are lost. If you haven’t done Android programming, I can certainly recommend that path. When you’re lost, your temptation to just do it yourself is minimal, and the prompt is your primary way of interacting with the code base.