AIDE
Integrate Tavily Search with Langchain
· ☕ 3 min read · 🤖 Naresh Mehta

Something amazingly simple turned out to be amazingly weird! I was trying to create a ReAct agent that uses Tavily Search to fetch relevant articles and then uses an OpenAI-compatible model (served from a LocalAI server) to generate a response based on those articles. I was following the Langchain docs and the Tavily Search docs to build the agent.

The code is available on GitHub. If I use the TavilyClient directly as a tool in my agent, everything works fine. But if I use the TavilySearch tool, it truncates the query in a weird way and returns results for the mangled query. The LLM (gpt-oss) then goes into an infinite loop trying to coax the correct information out of the tool; the tool keeps returning responses that don't match the query, and the whole cycle repeats.
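For reference, the working setup looks roughly like this. It is a minimal offline sketch: the stub class below stands in for tavily-python's TavilyClient (its behavior is simplified for illustration, and no API key is needed), and the plain function is what would be registered as the agent's tool (e.g. via Langchain's `@tool` decorator), so the agent's query string reaches `search()` verbatim with no wrapper in between that could mangle it:

```python
# Offline sketch: a stub stands in for tavily.TavilyClient so the
# example runs without a key. The point is the shape -- the query
# string the agent produces is passed to search() unmodified.

class StubTavilyClient:
    """Stands in for tavily.TavilyClient (stubbed for illustration)."""

    def search(self, query: str, max_results: int = 3) -> dict:
        # A real client would call the Tavily API here.
        return {"query": query, "results": [{"title": f"Article about {query}"}]}

client = StubTavilyClient()

def tavily_search(query: str) -> dict:
    """Search the web for articles matching the query.

    When registered as an agent tool, this docstring becomes the tool
    description the LLM sees when deciding whether to call it.
    """
    return client.search(query)

result = tavily_search("react agents with Langchain")
```

Because the function forwards the query untouched, whatever the agent asks for is exactly what the search client receives.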


Integrate LangSmith with Langchain
· ☕ 3 min read · 🤖 Naresh Mehta

LangSmith is a platform for tracking and analyzing the performance of your LLM applications, with features such as tracing and debugging. LangSmith is available as a Python package installable via pip, and also as a Docker image. But the simplest way to start using LangSmith requires very little setup.
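That simple start is typically just a couple of environment variables: once they are set, Langchain sends traces to LangSmith automatically, with no code changes. A minimal sketch (the variable names follow recent LangSmith docs; older setups used LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY instead, and the key and project name below are made-up placeholders):

```python
import os

# Setting these before creating any chains/LLMs is enough for
# Langchain to start tracing runs to LangSmith.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGSMITH_PROJECT"] = "my-first-traces"           # optional project name
```

The same variables can of course be exported in your shell instead of set in Python.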

If you have followed my previous article, Use LocalAI server with Langchain, you should have a LocalAI server running and a Langchain setup to interact with it. You can download the example code pasted in that article and run it to interact with the LocalAI server.


Use LocalAI server with Langchain
· ☕ 7 min read · 🤖 Naresh Mehta

You might have used Gemini, ChatGPT, DeepSeek, Claude, etc. for your daily activities or for asking the odd question, and most of these services ask for a subscription. If you are not tech-savvy, you might not be aware that it is very easy to run local AI servers on your own machine. No need for a subscription or to pay for cloud services. The best part is that your normal PC/Mac will work just fine. Of course, depending on your hardware, there will be limits on which models you can use. But running an SLM (Small Language Model) or a smaller LLM (Large Language Model) is a piece of cake.
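Part of what makes this easy is that LocalAI exposes an OpenAI-compatible REST API (by default on port 8080), so any OpenAI-style client can talk to it just by pointing the base URL at localhost. A stdlib-only sketch of the request shape (the port and the "gpt-4" model alias are assumptions from a default LocalAI setup; adjust both to your installation):

```python
import json
import urllib.request

# Assumed defaults: LocalAI listening on localhost:8080, with the
# "gpt-4" alias mapped to a locally installed model.
BASE_URL = "http://localhost:8080/v1"

payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello from my local machine!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Actually sending it requires a running LocalAI server:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint mirrors OpenAI's API, the official openai client (or Langchain's OpenAI integrations) work too; you only change the base URL.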


How to get Started with AI?
· ☕ 6 min read · 🤖 Naresh Mehta

AI has expanded into multiple territories. The pace of expansion has been exponential since 2022, when the first "free" LLMs were exposed to the general public. Suddenly, almost overnight, people across every demographic were using AI for a variety of purposes. The chat interfaces offered by companies such as OpenAI and Google provide the most basic usage, and people from all walks of life, all age groups and all professions use them to tap the power of AI. Natural Language Processing (NLP) is a game changer in that context.

I work in a multi-cultural environment with customers spread across the globe speaking different languages. Just about 5 years ago, a customer email in a local language had to be translated manually, and the quality of translation left a lot to be desired. Fast forward to today, and language barriers seem almost non-existent! Most LinkedIn job postings now list practical AI experience, rather than proficiency in a particular language, as a key skill; language comes as an added skillset. With the development of headsets that do on-the-fly translation, I guess it will be pushed even further down in priority, all thanks to NLP and LLMs.