2025 reflection, looking forward to 2026 and beyond
Catching up after a mostly inactive 2025, and looking into 2026 and beyond. As with most things, AI has come for this blog, but don't worry, this is still very much "human powered" (for better or worse)!
2025 reflection
First and foremost, welcome to the 4th iteration of this blog! I'll try to get around to writing up a post on the previous 3 versions at some point, as I find it an interesting journey: each iteration was created with specific goals in mind, reflecting what I was interested in learning at the time. This version is no different, which I'll touch on a little below.
2025 blog activity was just a single post, which I wrote late in December specifically to point toward this new version. Now that you are reading this version, the transition is complete! I dated this post when I wrote it, but odds are it will be a while before it goes "live" along with the 4th iteration.
Because 2025 was so sparse, I want to dedicate a little more time to writing about what I've been up to in terms of development, reading and a few other tidbits. So without further ado, here's how 2025 went on the development front!
2025 was the year of AI tooling madness and some clarity
2025 was the year I started to really look into using AI in depth. I was already using some AI tooling to a degree with early versions of GitHub Copilot, but I made it a goal to leverage these systems now that they have gained some level of maturity.
Without getting too in depth on specifics: after using a few "agentic AI" IDEs, I realized that GitHub Copilot was starting to lag behind, and more aggressive tooling is available. Around this time the MCP standard also started to get implemented in the tools I was using (Copilot included). I started to see the practicality of AI to help not only with development, but to provide a flexible component for automation. Most people talk about and focus on the "intelligence" aspect, but I only see a flexible tool that is "artificially intelligent". When pushed, AI intelligence falls apart pretty quickly, and somewhat obviously in some cases.
For example, if you ask your AI agent what the weather is like, it has no clue what's happening outside without making some tool call or web search. It's just acting as the middleman to synthesize a response. This can help, as it can customize the response to your needs and demands, but without those external tools it's clueless about "what outside is". You can see this with some specialized agents: ask one what time it is and, if it doesn't make a tool call, it will give an older date that aligns with its training material, or it will tell you outright about its own limitations. It isn't magic, it's just how the system works.
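To make that concrete, here's a rough sketch of the tool-call pattern I'm describing. Everything here is hypothetical and stubbed (no real weather service or model SDK); the point is just that the "knowledge" about the outside world lives in the tool, not the model.

```typescript
// Hypothetical sketch of an agent tool call. The model never checks the
// weather itself; the application fetches the data and the model only
// rephrases it for the user.

type WeatherData = { location: string; tempC: number; summary: string };

// Stand-in for whatever external weather service would actually be called.
async function getWeather(location: string): Promise<WeatherData> {
  return { location, tempC: 12, summary: "overcast" }; // stubbed data
}

// Stand-in for a call to an LLM; in reality this would hit a model API.
async function callModel(prompt: string): Promise<string> {
  return "It's about 12°C and overcast right now."; // stubbed reply
}

async function answerWeatherQuestion(location: string): Promise<string> {
  // 1. The agent decides it needs outside information and makes a tool call.
  const weather = await getWeather(location);

  // 2. The tool result is injected into the prompt; the model just synthesizes.
  const prompt = `Using this data: ${JSON.stringify(weather)}, describe the weather to the user.`;
  return callModel(prompt);
}

answerWeatherQuestion("your city").then(console.log);
```

Strip away the tool call and the model is back to guessing from its training data, which is exactly the limitation I'm talking about.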
That said, being an automation guy, I've looked for ways to utilize these new tools to build systems and processes that save people time. Even though all LLM-powered AI systems today are largely built on theory written decades ago, the scale of these models makes them pretty powerful at recognizing patterns, and even without integrating with other systems they're still a useful, flexible part of a systems stack.
It's only once you take an AI agent and integrate it with the outside world that you go from a chatbot trying to mimic intelligence to an autonomous system that can execute tasks on its own, to varying degrees.
There's been a lot of hype around how far AI could actually go, but at the time of this writing it seems like most of that talk has turned out to be closer to sci-fi than reality, with studies such as the MIT review, which found a majority of AI projects fail, or the Apple study that points to the lack of actual "thinking" within these agents.
This brings me to the point of clarity that I'm focusing on for 2026: integrating these systems into positions where their raw flexibility can be leveraged, while avoiding the core limitations and misplaced expectations that largely stem from not understanding how the systems work. This comes down to a few key points, which I'll list below but will probably go into more detail on in future posts.
- LLM-powered AIs are built and defined by their training data, whatever context you give them, and what tools they can integrate with.
- AIs aren't so much intelligent as faking intelligence. This doesn't mean they aren't useful, only that the premise of them being "experts" falls apart beyond a certain point.
- AIs are tools or systems that can be used within an application to solve very specific problems, and that ultimately only work due to their flexibility, not because they are the best option.
- If context is the most important aspect of leveraging AI, then effective validation is the second most important.
After playing with AI, using it, working with it and trying to take it to its limit, I've come to the conclusion that it isn't going anywhere, but it needs focus and work to leverage correctly. Which is one reason I'm back to dedicating time to this blog.
So how do I plan on using AI for this blog in particular?
Who doesn't love a little AI slop? Just kidding, I assume you are only reading this because you don't want to read AI slop. I promise to avoid that and write my own way, for better or worse. That said, I will use AI for a number of "behind the scenes" tasks, along with acting as a broad editor to help fix any grammar or spelling mistakes.
I will not give the AI a prompt to write a full article; I don't believe that is worth anything to anyone.
I will let AI help me handle this blog's files, technical stack and other tasks that "take up brain juice" that aren't directly related to writing.
Personally I think AI as a tool should stay a tool, and it will be used as such. Trying to elevate it beyond this sort of role or use-case is somewhat unethical and disingenuous. Plus I think AI writes too much like a robot; you can tell when something is written by AI. It usually tries way too hard for no reason.
So yea, AI is here, but not where it counts!
So why a new blog version?
This new version is built using Docusaurus, with the main goals being to offload a lot of the random hacks I had previously implemented and to provide a clean "AI first" slate. This version is focused almost entirely on the "blogging" aspect, removing things like "projects", as I feel the writing is what matters most right now. When anyone can create an MVP in a weekend, my weekend projects aren't any more impressive to talk about than those of any random vibe coder burning tokens.
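For the curious, here's a stripped-down sketch of what a "blog only" Docusaurus setup looks like with the classic preset. Treat it as the general idea rather than my exact config file; the title and URL are placeholders.

```typescript
// docusaurus.config.ts — minimal blog-only sketch (classic preset).
import type { Config } from '@docusaurus/types';

const config: Config = {
  title: 'My Blog',                 // placeholder title
  url: 'https://example.com',       // placeholder URL
  baseUrl: '/',
  presets: [
    [
      'classic',
      {
        docs: false,                // drop the docs plugin entirely
        blog: {
          routeBasePath: '/',       // serve the blog at the site root
          blogSidebarCount: 'ALL',  // list every post in the sidebar
        },
      },
    ],
  ],
};

export default config;
```

Everything else (theme tweaks, extra plugins, the AI-assisted maintenance bits) layers on top of that base.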
I also wanted to dedicate myself to more deliberate deep thinking, and part of that is writing, hence the simplification and focus around the blog.
That said, I have a few technical goals for this version beyond just leveraging AI for maintenance and out-of-the-box Docusaurus features. Namely, I want to finally get around to cross-posting with dev.to. I enjoy that platform as one of my main sources of learning and keeping up with trends, so I thought it would be interesting to try writing both there and here. I'll have a blog post about that... if I ever get around to working on it!
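If I do get to it, my rough assumption is that it boils down to a small script hitting the dev.to (Forem) REST API, with the canonical URL pointing back to this site. The sketch below is how I imagine it, not a finished tool; the function and field names are just illustrative.

```typescript
// Rough sketch of cross-posting a markdown post to dev.to via its REST API.
// The API key comes from dev.to account settings; canonical_url tells dev.to
// that this site hosts the original version of the post.
async function crossPostToDevTo(apiKey: string, post: {
  title: string;
  markdown: string;
  canonicalUrl: string;
  tags?: string[];
}): Promise<void> {
  const response = await fetch('https://dev.to/api/articles', {
    method: 'POST',
    headers: {
      'api-key': apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      article: {
        title: post.title,
        body_markdown: post.markdown,
        canonical_url: post.canonicalUrl,
        tags: post.tags ?? [],
        published: false, // start as a draft so it can be reviewed first
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`dev.to API returned ${response.status}`);
  }
}
```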
Finally, the old version is just "old". It hasn't seen any activity, and I thought it would be a good time to leverage AI to build out a new clean slate and help keep it maintained going forward.
Summary and goals for 2026
I've already started on some new books unrelated to sci-fi, which I'll try to find time to write about in the first dedicated post.
At the same time, even though this post has been written, the site/blog itself isn't published yet! I'll hopefully be working on getting that in line at some point in the next few weeks. With that said, here are my goals for 2026:
- get back into open source development, specifically getting up to speed with a few libraries that have been neglected
- write more blog posts, around specific topics worth writing about
- cross post this blog with dev.to (mostly because it would be cool)
- set up a time and place to spend time writing and working on this site; I have a plan to schedule this in, but we will see if I follow it!
And with that, welcome to 2026, keep learning, keep building and keep questioning everything!
