I am pretty sure most of us haven’t heard of “AI Fatigue”. And no, it is not about being tired of AI; it is about the humans working with AI. It is a phenomenon where people experience mental exhaustion, burnout, and emotional drain from constant interaction with AI systems. If someone had told me this two years ago, I would have laughed it off. But it is real, and it is happening.
I came across an amazing conversation between Neil deGrasse Tyson and Geoffrey Hinton on YouTube. The first 30 minutes alone are worth it for any AI practitioner who wants to understand the current state of AI and its potential impact on humanity.
Geoffrey Hinton is one of the pioneers of AI and has been working on it for decades. He is also one of the few people who have been warning about its potential dangers for years. In this talk, he discusses the current state of AI and its potential impact on humanity. Hinton spends the first 30 minutes on a brief history of AI, neural networks, backpropagation, and so on. Then comes the best part: the potential impact on humanity, consciousness, the singularity, and more. This is a must-watch for anybody interested in where AI stands today and where it is heading, and in how the exponential growth of AI capability has the potential to change the world (whether for good or bad is a question yet to be answered).
Context is the lifeline of an LLM. Without context (or with invalid context), an LLM is nothing but a gibberish-generating machine. Context allows an LLM to personalize, reason, and stay coherent and grounded in any response it generates. The context window is the model’s active memory. Claude Opus 4, for instance, has a 200K-token context window. That is huge and suffices for most tasks, but efficient use of the context window is still needed to improve coherence and depth in responses.
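To make the idea of managing the active memory concrete, here is a minimal sketch of trimming a conversation to fit a token budget. It uses a rough 4-characters-per-token heuristic purely for illustration; real models such as Claude use their own tokenizers, so the numbers here are estimates, not the actual API behavior.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    # Real tokenizers (e.g. Claude's) will give different counts.
    return max(1, len(text) // 4)

def trim_to_context(messages, budget_tokens):
    """Keep only the most recent messages that fit within the token budget,
    preserving their original order."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:  # oldest messages fall off first
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Dropping the oldest turns first is the simplest strategy; summarizing them instead is a common refinement when coherence over long conversations matters.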
I got a bunch of Gale units (Google Wifi devices) on an auction site, listed as broken. The owner said all the devices worked but would randomly shut down; to get them working again, they had to be unplugged and replugged. So they were auctioned off as replacement devices at an ultra-low price: $30 for all five (3 AC-1204 and 2 GJ2CQ models). I of course had to get them. Power doesn’t seem to be the issue; it is most likely a software conflict caused by the previous owner’s configuration.
Gale-Gatekeeper is a Telegram-based network access control system for OpenWrt routers that lets you control which devices get onto your Wi-Fi. When a new device connects to the network via DHCP, the system sends a Telegram notification with interactive Approve/Deny buttons. Devices with static DHCP leases are automatically allowed; temporary devices require manual approval and get timeout-based access (30 minutes by default).
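The gating decision described above can be sketched as a small pure function. Everything here is illustrative: the MAC addresses, the `STATIC_LEASES` set, and the `approved_at` map are hypothetical stand-ins, not Gale-Gatekeeper’s actual data structures.

```python
import time

STATIC_LEASES = {"aa:bb:cc:dd:ee:ff"}   # hypothetical MACs with static DHCP leases
APPROVAL_TIMEOUT = 30 * 60              # temporary-access window: 30 minutes

# mac -> epoch time at which the device was approved via Telegram
approved_at = {}

def is_allowed(mac, now=None):
    """Static-lease devices are always allowed; temporary devices are
    allowed only within APPROVAL_TIMEOUT seconds of their approval."""
    now = time.time() if now is None else now
    if mac in STATIC_LEASES:
        return True
    ts = approved_at.get(mac)
    return ts is not None and (now - ts) < APPROVAL_TIMEOUT
```

In the real system, pressing the Approve button would record the approval timestamp, and a periodic job would revoke firewall access once `is_allowed` turns false.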
I was so tired of not being able to control the Wi-Fi devices on my network. My kids used to hop from one device to another, and I also had a host of devices from school that I had no control over. So my kids effectively had internet access almost 24x7 by jumping between devices. I had been trying hard to implement an effort-reward mechanism, but it was not working because I had almost zero control over my kids’ screen time.
Happy New Year and welcome 2026. This is my first article of the year and I have embarked on a very interesting journey. And I am excited to share it with you. Keep reading.
Claude Code is an AI coding tool that operates within a terminal. It allows users to write, debug, and manage code and/or AI tasks more efficiently, without going through an IDE. This typically enhances the AI interaction experience and is also quite lightweight. It has a built-in “Agentic Loop” in which three phases (contextualization, action, and verification) run repeatedly to produce the best possible output without detailed, repetitive prompting. It also supports MCP, subagents, hooks, plugins, and what have you.
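The three-phase loop can be sketched in a few lines. This is a generic illustration of the contextualize → act → verify pattern with placeholder callables, not Claude Code’s actual internals.

```python
def agentic_loop(task, contextualize, act, verify, max_iters=5):
    """Generic sketch of an agentic loop: gather context, take an action,
    verify the result, and feed the verifier's feedback into the next pass."""
    result = None
    for _ in range(max_iters):
        context = contextualize(task, result)  # phase 1: build context
        result = act(context)                  # phase 2: take an action
        ok, feedback = verify(task, result)    # phase 3: check the outcome
        if ok:
            return result
        task = feedback  # fold the feedback into the next iteration
    return result        # give up after max_iters and return the best attempt
```

The key property is that verification failures do not require the user to re-prompt; the loop carries the feedback forward on its own.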
Something amazingly simple turned out to be amazingly weird! I was trying to create a ReAct agent that uses Tavily Search to fetch relevant articles and then uses an OpenAI-compatible model (on a LocalAI server) to generate a response based on those articles. I was following the LangChain docs and the Tavily Search docs to create the agent.
The code is available on GitHub. If I use the TavilyClient directly as a tool in my agent, it works fine. But if I use the TavilySearch tool, it truncates the query in a weird way and sends back results for the truncated query. The LLM (gpt-oss) then goes into an infinite loop trying to get the correct information from the tool; the tool in turn returns invalid responses that do not match the query, and the whole cycle repeats.
LangSmith is a platform that lets you trace, debug, and analyze the performance of your LLM applications. It is available as a Python package installable via pip, and also as a Docker image. But getting started with LangSmith is pretty simple.
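A minimal setup sketch: LangSmith tracing for LangChain apps is typically switched on through environment variables. The variable names below reflect the commonly documented ones, and the key and project name are placeholders; check the LangSmith docs for your version before relying on them.

```python
import os

# Enable LangSmith tracing for subsequent LangChain runs.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"          # optional run grouping
```

With these set, LangChain runs are traced to LangSmith without any changes to the application code itself.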
If you have followed my previous article, Use LocalAI Server with LangChain, you should have a LocalAI server running and a LangChain setup to interact with it. You can download the example code pasted in that article and run it to interact with the LocalAI server.