Sunday, March 1, 2026

Vibe coding with Databricks: A step-by-step guide to setting up your environment

                                        Image source: https://github.com/databricks-solutions/ai-dev-kit

In the era of Agentic AI, vibe coding is gaining popularity, and Databricks Field Engineering has developed a Toolkit for Coding Agents. This blog post provides step-by-step guidelines to help you prepare your environment for vibe coding. 

There are a few prerequisites to install before the Dev Kit itself can be installed. Steps 1 to 4 cover the prerequisites, and Step 5 covers the Dev Kit installation.

Prerequisites 

Step 1: Install the Python package manager uv (from Astral)
 If you are using Windows, use the command below to install uv:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" 
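Once the installer finishes, you can confirm that uv is available (open a new terminal first so the updated PATH is picked up):

```powershell
# Verify the uv installation; prints the installed version
uv --version
```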

If the installation fails, you may need to install Chocolatey first.

Step 2: Install Chocolatey
 Make sure to open PowerShell in Administrator mode, then run:
 Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
 Verify that Chocolatey is installed correctly: choco --version
                                          Fig 1: Validate that choco is installed correctly.
Step 3: Download and Install Databricks CLI 
Use the following command: choco install databricks-cli
                                             Fig 2: download the Databricks CLI
After downloading, install it:
                                         Fig 3: Databricks CLI is installed 
Validate if the CLI is installed properly: databricks -v
                                             Fig 4: Validate the CLI installed properly 

Step 4: Install Git with choco
 Next, install Git using the following command: choco install git -y
                                              Fig 5: Installing git 
Step 5: Install the AI Dev Kit
With all the prerequisites in place, install the Dev Kit as follows:
Step 5A: Run the installer and choose the tools
irm https://raw.githubusercontent.com/databricks-solutions/ai-dev-kit/main/install.ps1 | iex
                                          Fig 6: Dev Kit installation 
The installer will ask for your existing Databricks profile, as shown in Fig 7 below.

Step 5B: Select a Databricks profile
                                              Fig 7: Profile and MCP server location 
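If you do not yet have a Databricks CLI profile, you can create one before running the installer. A sketch using the CLI's auth commands (the workspace URL and profile name here are placeholders for your own values):

```powershell
# Log in to your workspace and save the credentials under a named profile
databricks auth login --host https://<your-workspace-url> --profile DEFAULT

# List the profiles the CLI knows about
databricks auth profiles
```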
Step 5C: Choose the installation path for the MCP server and tools. The chosen tools (Claude, Cursor, Copilot) and the MCP server will be installed.
                                          Fig 8: MCP server configuration path 
The next step is to authenticate:
                                                   Fig 9: Authentication 
After this step, your browser will open so you can authenticate. Once authentication completes, the tools and MCP server will be installed.
                                         Fig 10: install completed 
As Fig 10 shows, the installation is complete. You can now open the project in VS Code and start vibe coding, as shown in Fig 11 😊
                          Fig 11: start using Databricks vibe coding with VS code 

You can find more details at: https://github.com/databricks-solutions/ai-dev-kit

Sunday, January 11, 2026

Agentic AI: How Not to Become Part of the 80% Failure Side!


 

Many research reports cite that 80%–85% of GenAI pilots fail, and Gartner predicts that over 40% of Agentic AI projects will be canceled by the end of 2027 [1].

Organizations have started building AI solutions — especially AI agents — to serve specific purposes, and in many cases, multiple agents are being developed. However, these solutions are often built in silos. While they may work well in proof-of-concept (PoC) stages, they struggle when pushed into production.

The biggest question that emerges at scale is trust.

Trust First: Security, Governance, and Cost Control

Before scaling AI, we must make it safe, compliant, and cost-predictable. That means:

  • PII and sensitive data never leaves our controlled boundary
  • Every query is auditable and attributable
  • Cost per interaction is enforced by design, not monitored after the bill arrives

Why this matters:
Without trust, AI adoption stops at the first incident. To earn trust, we need to guard against hallucination and confabulation, and take concrete steps to monitor and govern our agents. I have been using Databricks to monitor and govern agents, and below I share, step by step, how you can set up AI governance in Databricks.

AI Governance with Databricks
Once you deploy your model as a serving endpoint, you can configure the Databricks AI Gateway, which includes:
• Input and Output guardrails
• Usage monitoring
• Rate limiting
• Inference tables

This setup is illustrated in Figure 1.

Fig 1: Databricks Mosaic AI Gateway
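The same gateway settings can also be applied programmatically. The sketch below uses the Databricks Python SDK's AI Gateway API; the class and field names reflect a recent databricks-sdk release and may differ in yours, and the endpoint, catalog, and schema names are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import (
    AiGatewayGuardrails,
    AiGatewayGuardrailParameters,
    AiGatewayInferenceTableConfig,
    AiGatewayRateLimit,
    AiGatewayRateLimitKey,
    AiGatewayRateLimitRenewalPeriod,
    AiGatewayUsageTrackingConfig,
)

w = WorkspaceClient()  # reads credentials from your environment/profile

w.serving_endpoints.put_ai_gateway(
    name="my-agent-endpoint",  # placeholder endpoint name
    # Guardrails: filter unsafe content on both input and output
    guardrails=AiGatewayGuardrails(
        input=AiGatewayGuardrailParameters(safety=True),
        output=AiGatewayGuardrailParameters(safety=True),
    ),
    # Inference table: log requests/responses for auditability
    inference_table_config=AiGatewayInferenceTableConfig(
        catalog_name="main",
        schema_name="ai_gateway_logs",
        enabled=True,
    ),
    # Rate limit: at most 10 queries per minute per user
    rate_limits=[
        AiGatewayRateLimit(
            calls=10,
            key=AiGatewayRateLimitKey.USER,
            renewal_period=AiGatewayRateLimitRenewalPeriod.MINUTE,
        )
    ],
    # Usage tracking: record who calls the endpoint and how often
    usage_tracking_config=AiGatewayUsageTrackingConfig(enabled=True),
)
```

Each keyword argument maps to one of the four gateway features listed above, so the UI and the SDK route end up with the same configuration.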

AI Guardrails: Filter input & output to prevent unwanted data, such as personal data or unsafe content. Figure 2 demonstrates how guardrails protect both input and output.

Fig 2: AI guardrails to protect Input/Output

Inference tables: Inference tables track the data sent to and returned from models, providing transparency and auditability over model interactions. Refer to Figure 3, which shows where you can assign the inference table.

Fig 3: Inference table

Rate Limiting: Enforce request rate limits to manage traffic for the endpoint. As shown in Fig 4, you can limit the number of queries and the number of tokens for a particular serving endpoint.

Fig 4: Rate limits

In Fig 4 above, QPM is the number of queries the endpoint can process per minute, and TPM is the number of tokens it can process per minute.

Usage Tracking: Databricks provides built-in usage tracking so you can monitor:
• Who is calling the APIs or model endpoints
• How frequently they are used

Fig 5: Usage tracking

Usage data is available in the System table: system.serving.endpoint_usage
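As a starting point, a query like the one below summarizes calls and token consumption per caller over the last week. The column names follow the system table documentation at the time of writing; verify them against the table schema in your workspace:

```sql
-- Who is calling the endpoint, how often, and how many tokens they consume
SELECT
  requester,
  COUNT(*)                AS request_count,
  SUM(input_token_count)  AS input_tokens,
  SUM(output_token_count) AS output_tokens
FROM system.serving.endpoint_usage
WHERE request_time >= current_date() - INTERVAL 7 DAYS
GROUP BY requester
ORDER BY request_count DESC;
```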

In summary, Databricks offers the tools needed to build trusted and governed Agentic AI solutions, helping you land on the 20% success side of AI adoption.

References
[1] https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027
[2] MIT report: 95% of generative AI pilots at companies are failing | Fortune