Calvin UMAP

Similar to the embedding map of my blog posts, I created an embedding map of Calvin & Hobbes. It uses the same process as before. Video

How I use AI to teach

I’ve been using AI in my Tools in Data Science course for over two years: to teach AI, and to use AI to teach. I told GitHub Copilot (prompt) to go through my transcripts, blog posts, code, and things I learned since 2024 to list every experiment of mine in AI education, rating each on importance and novelty. Here is the full list of my experiments.

1. Teach using exams and prompts, not content

⭐ Use exams to teach. The typical student is busy. They want grades, not learning. They’ll write the exams, but not read the content. So, I moved the course material into the questions. If they can answer the question, great. Skip the content.

Use AI to generate the content. I used to write content. Then I linked to the best content online: it’s better than mine. Now, AI drafts comics, interactive explainers, and simulators. My job is to pick good topics and generate in good formats.

Give them prompts directly. Skip the content! I generated it with prompts anyway. Give students the prompts directly. They can use better AI models, revise the prompts, and learn how to learn with AI.

⭐ Add an “Ask AI” button. Make it easy for students to use ChatGPT. Stop pretending that real-world problem solving is closed-book and solo.

⭐ Make test cases teach, not just grade. Automate the testing (with code or AI). Good test cases show students the kinds of mistakes they make, teaching them, not just grading them. That’s great for teachers to analyze, too.

Test first, then teach from the mistakes. Let them solve problems first. Then teach them, focusing on what failed. AI does the work; humans handle what AI can’t. This lets us teach really useful skills based on real mistakes.

2. Make cheating pointless through design, not detection ...

Local context repositories for AI

When people ask me for connections, I share my LinkedIn data and ask them to pick. This week, three people asked for AI ideas. I shared my local content with AI coding agents and asked them to pick.

STEP 1: Give access to content. I use a Dockerfile and script to isolate coding agents. To give access, I run:

```shell
dev.sh -v /home/sanand/code/blog/:/home/sanand/code/blog/:ro \
  -v /home/sanand/code/til:/home/sanand/code/til:ro \
  -v /home/sanand/Dropbox/notes/transcripts:/home/sanand/Dropbox/notes/transcripts:ro
```

This gives read-only access to my blog, things I learned, and transcripts; I can add more. (My transcripts are private, the rest are public.) ...

SearXNG and Vane

While exploring resonant computing tools, I discovered SearXNG, a self-hostable metasearch engine that aggregates results from multiple search engines. It lets you search via API without buying API keys and without being tracked. Pretty useful for research, people discovery, etc. when combined with LLMs. Setting it up for API use seems easy (though Gemini got it wrong twice):

```shell
cat <<EOF > settings.yml
use_default_settings: true
server:
  secret_key: "local_dummy_secret_key_987654321"
search:
  formats:
    - html
    - json
EOF

docker run -d \
  -p 8080:8080 \
  --name searxng \
  -v "$(pwd)/settings.yml:/etc/searxng/settings.yml" \
  -e "SEARXNG_BASE_URL=http://localhost:8080/" \
  -e "SEARXNG_SERVER_LIMITER=false" \
  searxng/searxng
```

Now, you can run: ...
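A minimal sketch of querying the JSON API once the container is up (my illustration, assuming the default port 8080 and the `json` format enabled in settings.yml above; `jq` is optional):

```shell
# Build the query URL for SearXNG's search endpoint.
url="http://localhost:8080/search?q=resonant+computing&format=json"
echo "$url"
# Fetch results once the container is running:
# curl -s "$url" | jq -r '.results[].title'
```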

AI in SDLC at PyConf

I was at a panel on AI in SDLC at PyConf. Here’s the summary of my advice:

Process

- Make AI your entire SDLC loop. Record client calls, feed them to a coding agent to directly build & deploy the solution.
- Record your prompts, run post-mortems, and distill them into SKILLS.md files for reuse.

Prompting

- Ask AI to make output more reviewable. Don’t waste time reviewing unclear output.
- Prefer directional feedback (feeling, emotion, intent) over implementational. Also give AI freedom to do things its way. Learn from that: you’ll be surprised.

Learning ...

Interactive Explainers

Given how easy it is to create interactive explainers with LLMs, we should totally do more of these! For example, I read about “Adversarial Validation” in my Kaggle Notebooks exploration. It was the first time I’d heard of it, and I couldn’t understand it. So, I asked Gemini to create an interactive explainer:

Create an interactive animated explainer to teach what adversarial validation is. Provide sample code only at the end. Keep the bulk of the explainer focused on explaining the concept in simple language. ELI15 ...

Human as an Interface

People often email me questions they could have answered with ChatGPT. I just copy-paste the question, copy-paste the answer. This isn’t new. From 1998-2005, I used to do this with Google searches. Even people who have Google Maps on their phone ask me for directions. I pull out my Google Maps and tell them. They don’t even get the sarcasm. Effectively, I’m the Human-as-an-Interface (HAAI everyone!) But I learnt today that this has historical precedent. Lift operators, doormen, the waiter who recites the menu, the secretary we used to dictate to, … ...

Software Naming Has Power

Software naming has power. I first became aware of this when a friend commented how much he enjoyed starting Windows 3. “Win,” he said. “I just love typing that!”

I felt this again recently with just.

just lint

Can you feel it?

just build

Actually, I just like to say “Just…”

just test
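For anyone who hasn’t tried it: just runs recipes from a justfile, so `just lint` runs whatever the `lint` recipe says. A minimal sketch (the recipe bodies here are my own illustration, not from any real project):

```just
# justfile: run any recipe with `just <name>`
lint:
    ruff check .

build:
    docker build -t app .

test:
    pytest -q
```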

Kick-starting a PyConf Panelist Interview

I was a panelist at the PyConf Hyderabad AI in SDLC - Panel Discussion. After that, one of the volunteers asked for a video interview. “How was the panel discussion?” he asked. Ever since I started using AI actively, my brain doesn’t work without it. So, instead of an eloquent answer, I said, “Good.” He tried again. “Um… how did you feel about it?” he asked. I searched for my feelings. Again, fairly empty in the absence of AI. “Good,” I said again. ...

IIM Bangalore PGP Interview Panel

Yesterday, I was part of an IIM Bangalore interview panel at Hyderabad, along with Professor Subhabrata Das and Debajyoti. Panels typically comprise two faculty and an alumnus, and handle eight interviews in the morning and eight in the evening, though in our case we had nine each. As we arrived, we were given a USB drive with each student’s resume, statement of purpose, and other documents they had submitted, which included employment contracts, declarations, letters of recommendation, etc., depending on the student. Each interview was approximately 20 minutes. Luckily, Dr Das set a timer for 18, so we didn’t go too far beyond. ...

Blog embeddings map

I created an embedding map of my blog posts. Each point is a blog post. Similar posts are closer to each other. They’re colored by category. I’ve been blogging since 1999, and over time my posts have evolved:

- 1999-2005: mostly links. I started by link-blogging
- 2005-2007: mostly quizzes, how I do things, Excel tips, etc.
- 2008-2014: mostly coding, how I do things, and business realities
- 2015-2019: mostly nothing
- 2019-2023: mostly LinkedIn, with some data and how I do things
- 2024-2026: mostly LLMs

… and this transition is entirely visible in the embedding space. ...

AI Palmistry

I shared a photo of my right hand with popular AI agents and asked for a detailed palmistry reading:

Apply all the principles of palmistry and read my hand. Be exhaustive and cross-check against the different schools of palmistry. Tell me what they consistently agree on and what they are differing on.

I was more interested in how much they agree with each other than with reality. So I shared all three readings and asked Claude: ...

Hardening my Dev Container Setup

I run AI coding agents inside a Docker container for safety. The setup is:

- dev.dockerfile: builds the image
- dev.sh: launches the container with the right mounts and env vars
- dev.test.sh: verifies everything works

I wrote them semi-manually, and they had bugs. I had GitHub Copilot + GPT-5.4 High update the tests and actually run the commands to verify the setup. Here’s what I learned from the process.

1. Make it easier to review. The first run took a while. I pressed Ctrl+C and told Copilot to “add colored output, timing, and a live status line”. Then I re-ran. Instead of a bunch of ERROR: lines, I now got color-coded output with timing and a live status line showing what’s running. ...
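A minimal sketch of the kind of colored, timed check output I mean (illustrative only, not the actual dev.test.sh; the check itself, `sh` on PATH, is a placeholder):

```shell
# Green PASS / red FAIL with per-check timing via ANSI escape codes.
pass() { printf '\033[32mPASS\033[0m %s (%ss)\n' "$1" "$2"; }
fail() { printf '\033[31mFAIL\033[0m %s (%ss)\n' "$1" "$2"; }

t0=$(date +%s)
if command -v sh >/dev/null 2>&1; then result=PASS; else result=FAIL; fi
t1=$(date +%s)

if [ "$result" = PASS ]; then pass "sh on PATH" "$((t1 - t0))"; else fail "sh on PATH" "$((t1 - t0))"; fi
```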

Cracking online exams with coding agents

An effective way to solve online exams is to point a coding agent at them. I use that in my Tools in Data Science course in two ways: As a test case of my code. If my agent can solve it, good: I set the question correctly. As a test of student ability. If it can’t, good: it’s a tough question (provided I didn’t make a mistake). For PyConf Hyderabad, my colleague built a Crack the Prompt challenge. Crack it and you get… I don’t know… goodies? A job interview? Leaderboard bragging rights? ...

The Future of Work with AI

I often research how the world will change with AI by asking AI. Today’s session was informative. I asked Claude, roughly:

Economics changes human behavior. As intelligence cost falls to zero, here are some changes in my behavior [I listed these]. Others will have experienced behavioral changes too. Search online and synthesize behavioral changes.

It said this:

- 🟡 People spend time on problem framing & evaluation. AI can execute the middle. (I’m OK at this. Need to do more framing + evaluation.)
- 🟢 People don’t plan, they just build. (I’m prototyping a lot.)
- 🟢 People build personal data & context. (I’m mining my digital exhaust.)
- 🔴 People queue work for agents, delegating into the future. (I’m not. I need to do far more of this.)
- 🟢 People shift from searching to asking for answers. (I do this a lot, e.g. this post.)
- 🟡 People are AI-delegating junior jobs and developing senior-level taste early. (Need to do more.)
- 🟡 People treat unresolved emotions as prompts. (Need to do more.)

Rough legend: 🟢 = Stuff I know. 🟡 = I kind-of know. 🔴 = New learning. ...

Recording screencasts

Since WebM compresses videos very efficiently, I’ve started using videos more often, for example in Prototyping the prototypes and in Using game-playing agents to teach. I use a fish script to compress screencasts like this:

```shell
# Increase quality with lower crf= (55 is default, 45 is better/larger)
# and higher fps= (5 is default, 10 is better/larger).
screencastcompress --crf 45 --fps 10 a.webm b.webm
```

...

To record the screencasts, I prefer slightly automated approaches for ease and quality. ...
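A wrapper like screencastcompress presumably maps those flags onto an ffmpeg command roughly like this (my guess, not the actual script; with VP9, `-crf` plus `-b:v 0` enables constant-quality mode, and the `fps` filter resamples the frame rate):

```shell
crf=45
fps=10
src=a.webm
dst=b.webm
# Build the equivalent ffmpeg invocation, then run it.
cmd="ffmpeg -i $src -vf fps=$fps -c:v libvpx-vp9 -crf $crf -b:v 0 $dst"
echo "$cmd"   # run with: eval "$cmd"
```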

LLM Comic Styles

I maintain an LLM art style gallery: prompts to style any image I generate. Since I generate several comics, I added a comic category page that includes styles like: To generate these, I asked Claude:

Here are some examples of image styles I've explored.

<image-styles>
"2D Animation": "2D flat animation style, clean vector lines, cel-shaded coloring, cartoon proportions"
"3D Animation": "Modern 3D animation render, smooth surfaces, dramatic lighting, Octane render quality, cinematic depth"
...
</image-styles>

In the same vein, I'd like to explore **comic** styles. Create 30 popular comic / cartoon styles, aiming for diverse aesthetics and cultural influences. Name it concisely (1-2 words) based on the source, but the description should not reference the source directly (to avoid copyright issues). Focus on the visual characteristics that define each style. Pick those KEY visual elements that will subliminally evoke the style without explicitly naming it.

… followed by: ...

Prototyping the prototypes

I added a narrative story to my LLM Pricing chart. That makes it easier for me and others to tell the story of AI’s evolution over the last three years. Video

It was vibe-coded over two iterations. In the first version, I prompted it to:

Add a scrollytelling narrative. So, when users first visit the page, they see roughly the same thing as now (but prettier). As they scroll down, the page should smoothly move to the earliest month, and then animate month by month on scroll, and explaining the key events and insights in terms of model quality and pricing. Use the data story skill to do this effectively, narrating like Malcolm Gladwell, with the visual style of The New York Times, using the education progression as a framework for measure of intelligence (read prompts.md for context). Store the narrative text in a separate JSON file and read from it. This should control the entire narrative, including what month to jump to next, what models to highlight, what insights to share, and so on. ...
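A narrative file like that could look something like this (my own sketch of a plausible schema; the field names, months, and model names are illustrative, not from the actual file):

```json
[
  {
    "month": "2024-06",
    "highlight": ["model-a", "model-b"],
    "insight": "Prices drop sharply as mid-tier models catch up on quality."
  },
  {
    "month": "2024-12",
    "highlight": ["model-c"],
    "insight": "Reasoning models open a new price-quality frontier."
  }
]
```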

Directional feedback for AI

People worry that AI atrophies skills, and that junior jobs, and hence learning opportunities, are shrinking. Can AI fill the gap, i.e. help build skills? One approach: do it without AI, then have AI critique it, and learn from the critique. (Several variations work: have the AI do it independently and compare; have multiple AIs do it and compare; have AI do it and you critique, though this is hard.) ...

Using game-playing agents to teach

After an early morning beach walk with a classmate, I realized I hadn’t taken my house keys. My daughter would be sleeping, so I wandered with my phone. This is when I get ideas, often a dangerous time for my students. In this case, the idea was a rambling conversation with Claude that roughly begins with:

As part of my Tools in Data Science course, I plan to create a Cloudflare worker which allows students to play a game using an API. The aim is to help them learn how to build or use AI coding agents to interact with APIs to solve problems. ...