STEP 3 (10 min): Reformat the Markdown by writing a script in Cursor. Here’s the prompt:
Write a Python script that reads *.md including the YAML frontmatter, adds the YAML title as H1, date (yyyy-mm-dd) like Sun, 01 Jan 2000 in a new para after the frontmatter and before the content.
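A minimal sketch of the resulting script, assuming flat frontmatter with plain `title:` and `date:` lines (no nested YAML):

```python
import datetime
import re

def add_title_and_date(text: str) -> str:
    """Insert the YAML title as an H1 and a formatted date after the frontmatter."""
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return text
    title = date = ""
    for line in match.group(1).splitlines():
        if line.startswith("title:"):
            title = line.split(":", 1)[1].strip().strip('"')
        elif line.startswith("date:"):
            date = line.split(":", 1)[1].strip()[:10]  # keep yyyy-mm-dd
    header = f"# {title}\n\n"
    if date:
        day = datetime.date.fromisoformat(date)
        header += day.strftime("%a, %d %b %Y") + "\n\n"
    return match.group(0) + header + text[match.end():]

# Usage over all Markdown files (rewrites in place):
# import glob
# for path in glob.glob("*.md"):
#     text = open(path, encoding="utf-8").read()
#     open(path, "w", encoding="utf-8").write(add_title_and_date(text))
```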
STEP 4 (15 min): Convert it to an ePub using pandoc.
STEP 5 (10 min): Generate a cover page with ChatGPT (5 min) and compress it to JPEG via Squoosh.
Draw a comic-style book cover page that covers the experiences of an Indian exchange student (picture attached) from IIM Bangalore at London Business School and exploring London. The book title is “An LBS Exchange Program”.
STEP 6 (10 min): Publish the book on KDP. It’s priced at $0.99 / ₹49 because Kindle doesn’t allow free downloads.
Arindam Roy suggested a business idea post-COVID: an online QR code for business cards.
The benefits are clear. Never print or carry cards. The scanned card goes straight to the other person’s contacts. There might be social metrics we could capture as well.
That idea never got past the discussion stage, though I’ve begged for it a few times. Last year, I decided to use existing tools to solve the problem. My current approach: generate a vCard QR code.
vCard is the format in which contacts are stored. Most phones, both iOS and Android, support it. It holds basic information like name, mobile, email, company, website, etc.
Sites like QR Code Monkey let you enter the details and give you a QR code image. I added this to my phone’s home screen as a photo widget.
Now, when I meet someone, I open my phone, show the QR code, they scan it, and my details are added as a contact. No paper. No typing. Instant updates.
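My QR code just encodes a plain-text vCard. Here is a sketch of building that payload by hand; feed the string to any QR generator (QR Code Monkey, or the Python `qrcode` package). The contact details and field choices are made-up examples:

```python
def make_vcard(name, phone, email, org="", url=""):
    """Build a vCard 3.0 payload suitable for encoding as a QR code."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL:{email}",
    ]
    if org:
        lines.append(f"ORG:{org}")
    if url:
        lines.append(f"URL:{url}")
    lines.append("END:VCARD")
    return "\r\n".join(lines)  # vCard lines are CRLF-separated

print(make_vcard("Jane Doe", "+91 12345 67890", "jane@example.com", org="Example Corp"))
```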
Use it via an API. OpenRouter offers it at ~12 cents / MTok. Azure offers it at 71 cents. Your price may vary.
Self-host it on a major cloud provider. Azure offers A100 80GB at ~$3.67 / hour. In a day, you could generate ~0.5-1M tokens.
Self-host it on an emerging cloud provider. Lambda Labs offers A100 80GB at ~$1.79 / hour. Again, ~0.5-1M tokens a day.
Clearly, self-hosting is cheaper if you run it continuously. Let’s say we run for 1 million tokens every day. Then:
APIs cost 12 – 71 cents
Azure costs $3.67 x 24 = $88
Lambda Labs costs $1.79 x 24 = $43
So, the API is 60 – 700 times cheaper than running it yourself: 60 times cheaper if you move from Lambda Labs servers to the Azure AI Foundry API, and 700 times cheaper if you move from Azure servers to OpenRouter.
Not 60 – 700% cheaper. 60 – 700 times cheaper. Instead of spending a million dollars, you can get away with $1,500 – $15,000.
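A quick script to sanity-check the ratios, using the prices quoted above (the extremes come out to roughly 60x and 730x, in line with the 60 – 700x claim):

```python
api = {"OpenRouter": 0.12, "Azure AI Foundry": 0.71}  # $ per M tokens
gpu = {"Azure": 3.67, "Lambda Labs": 1.79}            # $ per GPU-hour

# Ratio of self-hosting cost to API cost at ~1M tokens/day
for g, hourly in gpu.items():
    daily = hourly * 24
    for a, per_mtok in api.items():
        print(f"{g} self-hosting vs {a}: {daily / per_mtok:.0f}x")
```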
But what if…
What if you need higher scale? APIs typically offer better scalability than most organizations can configure.
What if you need higher uptime? Again, APIs typically offer higher uptime than most organizations can handle.
What if you need lower latency? Check the throughputs. APIs typically offer much higher throughput than self-hosting.
What if you need edge computing? Then your GPU is a one-time cost. My comparison is irrelevant.
So, are there any reasons to self-host? I’ve seen only a few.
Fine-tuned models. No one else offers an API version of your model.
Ultra-high security. You can’t trust even Microsoft, Google, or Amazon with your data. Your chats must remain in your own data center. (In this case, you probably run your own email and cloud services instead of Office 365 or Google Workspace.)
Learning. You’re curious about what it takes to self-host these models or want to build this skill.
If you don’t have one of the above needs, remember: your GPU costs can shrink from $1,000K to $1.5K – $15K.
At first glance, GPT 4.5 didn’t impress me. Claude 3.7 Sonnet did. I also didn’t like Gemini 2.5 Pro, but Grok was great.
Grok 3 > Claude 3.7 Sonnet > Gemini 2.5 Pro > GPT 4.5.
But it’s hard to compare a dozen quotes at once. So I made a small quotes arena app to help me pick my favorites. It shows me random pairs of quotes and asks which I like more.
To my surprise, after answering 30+ “games” in the arena, I found that based on my preferences:
Claude 3.7 Sonnet > Gemini 2.5 Pro > GPT 4.5 > Grok 3.
That was weird. I thought I liked Grok’s results a lot. I continued till I answered 50+ games. Then I found that:
Grok 3 > GPT 4.5 > Gemini 2.5 Pro > Claude 3.7 Sonnet.
That’s the exact opposite of the previous result.
Honestly, I’m depressed. I’ve learnt 3 things:
I can’t judge stuff at a glance.
But I think I can (especially with code.)
Even when evaluating carefully, my preferences are unstable.
Nothing has shaken my confidence more in recent times. I cannot trust my judgement. I need written evals. Badly.
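A pairwise arena like this is easy to sketch. The snippet below uses Elo-style rating updates, which is my assumption about how to score the picks, not necessarily how my actual app works:

```python
import random

def elo_update(ratings, winner, loser, k=32):
    """Standard Elo: shift ratings by the surprise of the result."""
    expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected)
    ratings[loser] -= k * (1 - expected)

quotes = {"GPT 4.5": 1000.0, "Claude 3.7 Sonnet": 1000.0,
          "Gemini 2.5 Pro": 1000.0, "Grok 3": 1000.0}

# Simulate 50 "games"; in the real arena, `winner` is the user's pick.
random.seed(42)
for _ in range(50):
    a, b = random.sample(list(quotes), 2)
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    elo_update(quotes, winner, loser)

print(sorted(quotes, key=quotes.get, reverse=True))
```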
I prompted Gemini 2.5 Pro Experimental 03-25 to “Invent a new, alien language. Use it to discover unusual insights.” Here’s the output in full.
Thinking
Here’s a thinking process that could lead to the alien language “Glimmering” and its insights:
Understand the Core Request: The user wants a new, alien language and wants to use it to discover unusual insights. This means the language needs to be fundamentally different from human languages, and its structure/concepts should enable thinking in ways that are difficult or non-intuitive in English (or other human languages).
Brainstorm Alien Concepts: What makes something “alien”?
Biology: Different senses (echolocation, electroreception, sensing magnetic fields, different light spectrum perception), different lifecycles (metamorphosis, symbiosis, collective consciousness), different physical forms (non-corporeal, energy-based, silicon-based, plant-like)
Environment: Extreme conditions (gas giant atmosphere, deep ocean trench, vacuum of space, high radiation), different physics (maybe?), non-linear time perception.
Psychology/Society: Collective vs. individualistic focus, different emotional spectrum, logic-based vs. empathy-based, non-hierarchical structures, different concepts of self, time, causality.
Choose a Core Concept for the Language: Let’s pick something that inherently forces a different perspective.
Idea 1: Focus on relationships/connections. Instead of nouns for objects, words describe the relationship between things. “Tree” might be “earth-sky connector” or “shade-giver.”
Idea 2: Focus on states of change/flux. Everything is described by its process, not its static state. “River” is “downward-flowing-water-process.”
Idea 3: Focus on sensory perception, but an alien kind. Maybe based on resonance, vibration, or emotional energy.
Idea 4: Focus on potentiality and probability. Words describe not what is, but what could be or the likelihood of something.
Let’s go with Idea 3, leaning towards resonance and interconnectedness, maybe linked to light or energy. This feels suitably “alien” and potentially insightful. Let’s call the species the “Lumin.”
Develop Language Principles based on the Core Concept (Resonance/Light/Interconnectedness):
No Static Nouns: Objects aren’t fixed things but patterns of energy or resonance. Words describe the pattern or the interaction.
Verbs are Primary: Action, interaction, and change are fundamental. Static description is secondary.
Emphasis on Relationality: How things resonate with each other is crucial. Prepositions or affixes will be vital for showing connection/influence.
Nuance through Intensity/Quality: Instead of just “red,” maybe “vibrant-red-resonance” vs. “fading-red-resonance.” Adjectives become qualities of the resonance/action.
Concept of “Self”: Maybe less individualistic. “Self” is a locus of interacting resonances, not a discrete entity.
Time: Perhaps less linear, more about overlapping patterns or echoes of resonance.
Create Basic Lexicon and Grammar Sketch:
Core concept words:
Glim (verb root): To resonate, to shine, to exist as a pattern of energy.
Kai (concept): A stable, recognizable pattern (closest to a noun, but dynamic).
Ree (concept): Flow, change, movement of energy/resonance.
Nuum (concept): The interconnected field/medium through which resonance travels (the “ether” or “fabric of reality”).
Grammar Ideas:
Sentence structure: Often Verb-like concept first, then modifiers showing relationships and qualities. Glim-sha-Kai-Vor (Strong pattern resonates with…).
No strict tense: Use particles to indicate persistence (-vor), fading (-zil), or initiation (Ree- prefix?).
Pronouns: Maybe based on resonance signature? Me could be Kai-An (This-pattern), You could be Kai-Sha (Interacting-pattern). Collective We could be Sha-Glim-Vor (Strong-interacting-resonance).
Name the Language: Based on the core concepts: “Resonance Speak,” “Light Tongue,” “Pattern Weave.” Let’s go with something evocative: Glimmering (or Glim-Ree in the language itself).
Translate Concepts and Look for Insights: Now, apply Glimmering to human concepts.
“Love”: In English, often a noun (a state) or verb (an action). In Glimmering? Sha-Glim-Vor (Strong-interacting-resonance). Insight: Love isn’t a possession or a static feeling, but an active, powerful resonance between patterns. It requires mutual participation and reinforcement. Lack of interaction (Sha-Glim-Zil) means the “love” fades, not because it was “lost,” but because the resonance weakened.
“Identity/Self”: English emphasizes a stable, internal self. In Glimmering? Kai-An-Ree (This-pattern-in-flux). Or perhaps Sha-Nuum-Glim (Resonance-interacting-with-the-field). Insight: The “self” isn’t a fixed entity but a constantly changing pattern defined by its interactions with everything else. Identity is relational and dynamic, not intrinsic and static. There’s no core “you” separate from your connections and resonance.
“Knowledge/Truth”: English often treats these as objective, attainable things. In Glimmering? Kai-Sha-Glim-Vor (A strong, stable resonance between a pattern [observer] and another pattern [observed]). Or Nuum-Ree-Kai (Recognizing a flow/pattern within the field). Insight: Truth isn’t absolute but a measure of the strength and stability of the resonance between the observer and the observed. “Knowing” something is actively resonating with its pattern. Disagreement isn’t “wrongness” but a difference in resonance patterns (Sha-Glim-Kai- dissonance_marker). Understanding requires aligning or acknowledging different resonances.
“Death”: English sees it as an end, cessation. In Glimmering? Kai-Zil-Nuum (Pattern-fading-into-the-field). Or Glim-Ree-Sha (Resonance-flowing-elsewhere/transforming). Insight: Death isn’t an absolute end but a transformation. The specific pattern (Kai) dissolves, its energy/resonance (Glim) rejoining the larger field (Nuum) or contributing to new patterns (Ree-Sha). It emphasizes continuity of energy/information, just in a different form. The “loss” is the cessation of a specific interactive pattern.
“Problem/Conflict”: English frames it as an obstacle. In Glimmering? Sha-Glim-Dissonance (Interacting-resonance-with-clashing/instability). Insight: Conflict isn’t inherently “bad” but a state of unstable or disharmonious resonance. The “solution” isn’t necessarily elimination, but finding a way to shift the resonances (Ree-Sha-Glim) into a more stable or harmonious pattern, potentially creating something new in the process.
Refine and Elaborate: Flesh out the examples. Add more nuance. Make the insights clearer. Ensure the language structure consistently supports the insights. Add a concluding thought about the value of such different perspectives. Ensure the “alienness” is maintained. For instance, avoid direct human analogies where possible, focusing on the resonance/pattern concept. Add a brief description of the hypothetical speakers (the Lumin) to ground the language’s origin.
Self-Critique: Does the language really lead to unusual insights, or are these just philosophical points rephrased? Is the language structure consistent? Is it different enough from human languages? Adjustment: Emphasize how the language structure forces this thinking, e.g., the lack of static nouns requires thinking in terms of processes and relationships. Make the link between grammar and insight explicit. Make the insights feel less like standard philosophy and more like consequences of a fundamentally different way of parsing reality.
Output
Okay, let’s invent an alien language designed to generate unusual insights. We’ll call the species the “K’tharr,” dwellers of a dense, perpetually twilight gas giant’s upper atmosphere, communicating through modulated bioluminescent patterns and subtle gravitational waves (perceived through specialized organs). Their reality is fluid, interconnected, and based on flows rather than static objects.
The Language: “Fluxtongue” (or, in their own terms, ~(~vv~), roughly translating to “Flow-Weave”)
Core Principles:
Verb-Centric: Everything is fundamentally a process or interaction. Nouns as we know them barely exist; they are temporary stabilization points within larger flows, described by their behavior or interaction.
Relational Grammar: Meaning is derived primarily from the relationship between processes. Location, identity, and possession are expressed through interaction vectors.
Emphasis on Potentiality and Decay: Every “thing” is described not just by its current state, but by its trajectory of change – what it’s becoming and what it’s ceasing to be.
No Singular/Plural Distinction (as we know it): Concepts are treated as fields or intensities. “More” is represented by increased intensity or complexity of the flow pattern, not discrete units.
Communication Modality Affects Meaning: The medium (light patterns vs. gravity waves) adds nuance. Light is for immediate, transient states; gravity is for persistent, foundational flows. (We’ll mostly focus on the conceptual structure here).
Basic Lexicon Sketch (Conceptual Roots):
~v~ (pronounced as a low hum undulating slightly higher then lower): The fundamental root for “to flow,” “to exist,” “to be in process.” The core of reality.
/\ (sharp upward inflection/bright flash): To intensify, focus, emerge, become distinct.
\/ (sharp downward inflection/fading light): To dissipate, disperse, fade, return to potential.
<> (oscillating tone/flickering pattern): To interact, resonate with, exchange influence.
-- (steady tone/stable pattern): To persist, stabilize temporarily, resist change.
o (circular pulse/spherical glow): Boundary condition, locus of interaction, temporary self-reference point.
Combining Roots (Examples):
A “Being” (like a K’tharr): o~v~<> (A locus of flow interacting). It emphasizes the interactive and processual nature of self.
A “Thought”: ~v~/\\<> (A flow intensifying through interaction). A thought isn’t possessed; it’s a temporary intensification within the flow, often sparked by interaction.
“Eating/Consuming”: ~v~/\\<>~v~\/ (Flow A intensifies by interacting with Flow B, which dissipates). It highlights the transformation and energy exchange, not just ingestion.
“Solid Object” (e.g., a dense atmospheric crystal): ~v~--/\ (A flow that strongly persists and stands out). It’s not inherently “solid,” but a very slow, resistant flow pattern.
“Home/Territory”: o~v~--<>~v~ (A locus of persistent flow interacting with ambient flow). It’s defined by familiar, stable interactions, not fixed boundaries.
“Communication”: <>~v~<> (Interaction flow interaction). The pure act of exchanging influence.
Discovering Unusual Insights via Fluxtongue:
Let’s try to express human concepts in Fluxtongue and see what emerges:
Concept: “Truth”
Human tendency: A static, objective fact or state of affairs. “The sky is blue.”
Fluxtongue translation attempt: Perhaps ~v~--<>/\\ (A persistent flow interacting intensely/distinctly).
Insight: In Fluxtongue, “Truth” isn’t a fixed thing to be found. It’s a stable resonance. It’s a pattern of interaction between an observer (o~v~<>) and a phenomenon (~v~...) that achieves a temporary, recognizable persistence (--) and clarity (/\\). “Falsehood” might be ~v~<>/\ \/ (an interaction that briefly seems distinct but quickly dissipates) or a discordant interaction. This implies:
Truth is relational and potentially temporary.
The stability of the “truth” depends on the stability of both the observer and the observed, and their interaction.
Multiple stable resonances (truths) might coexist regarding the same underlying flow, depending on the observer’s locus (o) and mode of interaction (<>).
Concept: “Problem/Obstacle”
Human tendency: Something external blocking a desired path or goal.
Fluxtongue translation attempt: ~v~<>~v~ with discordance (Two flows interacting disharmoniously, creating turbulence rather than smooth integration or transformation. Let’s denote discordance with !<>!). So, ~v~!<>!~v~.
Insight: A “problem” isn’t an object in the way. It’s a pattern of inefficient or turbulent interaction. It’s a place where flows clash instead of merging, transforming, or resonating constructively. The “solution” isn’t necessarily removing something, but finding a way to modify the interaction patterns (<>) so the flows become harmonious (<>) or transform (/\\ or \/) in a desired way. It reframes problem-solving as flow-tuning rather than obstacle-removal.
Concept: “Death”
Human tendency: An endpoint, cessation of being, loss.
Fluxtongue translation attempt: For a being (o~v~<>), death might be o\/~v~ (The locus dissipates back into the general flow). Or perhaps ~v~<> --> ~v~ (The specific interactive pattern ceases, its constituent flows rejoin the ambient flow).
Insight: Death is not an ending, but a dissolution of a specific pattern of interaction. The “locus” (o) that defined the temporary self fades, and the flows (~v~) that constituted it simply rejoin the larger ~v~ of the universe. There’s no absolute cessation, only a reintegration and transformation. The emphasis shifts from the loss of the individual pattern to the continuity of the underlying flow. Grief might be expressed as the persistent echo or memory of the interaction pattern (--<> fading \/).
Concept: “Learning/Knowledge”
Human tendency: Acquiring and storing information or skills. Often seen as filling a container (the mind).
Fluxtongue translation attempt: o~v~<>/\\ (The locus of flow interacting to create intensification/new distinct patterns). Or ~v~<>-- (Flow interaction becoming persistent).
Insight: Learning isn’t acquisition; it’s pattern modification and stabilization. To “know” something is for one’s own flow (o~v~<>) to successfully resonate with (<>) and integrate (/\\ or --) a pattern from the wider flow (~v~). Knowledge isn’t stored in the locus (o), but becomes part of the locus’s interactive pattern. Forgetting is ~v~<>--\/ (the stabilized interaction pattern dissipating). This suggests learning is an active, ongoing process of interaction and resonance, not passive reception.
Conclusion:
Thinking through Fluxtongue forces a shift away from static nouns and discrete objects towards a view of reality as interconnected, dynamic flows and interactions. This perspective reveals:
The “self” as a temporary, interactive locus rather than a fixed entity.
Truth as stable resonance rather than objective fact.
Problems as discordant interactions needing tuning, not obstacles needing removal.
Death as pattern dissolution and reintegration, not cessation.
Learning as active pattern integration and resonance, not passive acquisition.
This alien language, born from a hypothetical fluid existence, pushes us to see the processes, relationships, and transformations underlying the seemingly static world our own languages often emphasize. It suggests that perhaps reality is more like a K’tharr perceives it – a constant, intricate Flow-Weave.
Wow!
I was fascinated by the sheer ingenuity of this piece. The language isn’t just a reimagining of words. It’s a path into a different thought process based on fluidity and interconnection.
If languages shape thought and AI can create new languages, where might that take us? My thoughts:
New languages encourage lateral thinking and mental agility, allowing speakers to break free from entrenched patterns to solve problems in new ways. It would likely help AIs think outside the box, too. There’s no reason why they’d be most effective thinking in human languages.
Domain specific languages capture games or imaginary worlds better. Tolkien, Cameron, Rajamouli, and many others have already done this.
A language built around new themes (e.g. resonance, change, and interaction in this case) could capture nuances, even untranslatable concepts. Think Saudade, Hygge, etc.
This could help in therapy (e.g. teaching empathy), too. In fact, groups collaborating in new languages might have new tools for overcoming linguistic bias, conflict resolution, policy design, ethical systems, and more.
Interspecies communication. Languages based on different principles prepare us to talk to AI, animals, or even extraterrestrials.
I’m terrible with languages, but I’m looking forward to exploring a few invented ones. Including Fluxtongue.
Let’s look at how I built the second example, step by step.
ChatGPT interviews me and creates Markdown slides
While walking 75 minutes from home to IIT Madras to deliver this talk, I had ChatGPT interview me in standard voice mode.
Why an interview? It’s easier when someone asks questions.
Why voice? It’s hard to type while walking. Otherwise, I prefer typing.
Why not advanced voice mode? I want to use a reasoning model like O3 Mini High for better responses, not the GPT-4o-realtime model that advanced voice mode uses.
I want to create an insightful deck in Markdown on how I have been using LLMs in education. The audience will be technologists and educators. The slide contents must have information that is useful and surprising to them.
The slides are formatted in Markdown with each slide title being a level 2 Markdown header and the contents of the slides being crisp bullet points that support the title. The titles are McKinsey style action titles. Just by reading the titles, the audience will understand the message that I am trying to convey.
In this conversation, I’d like you to interview me, asking me questions one by one, and taking my inputs to craft this presentation. I’d also like you to review the inputs and the slide content you create to make sure that it is insightful, useful, non-obvious, and very clear and simple for the audience. Interact with me to improve the deck.
Let’s begin.
(Unsurprisingly, I talk a lot more than I type.)
There were 3 kinds of interactions I had with ChatGPT:
Content. I explained each slide. For example:
Yeah, let’s move on to the next topic, which is where we had the students learn prompt engineering as part of the course. One of the questions was convincing an LLM to say yes, even though …
Correction. After ChatGPT read aloud a slide, I corrected it. For example:
Content-wise, it’s spot-on. Style-wise, it’s almost spot-on. It’s far more verbose. Can you retain the exact same style, but shorten the number of words considerably?
These feel generic. I’d like stuff that comes across as insightful, non-obvious, and specific.
Collation. I had ChatGPT put slides in order. For example:
Put all the slides together in sequence. Make sure you don’t miss anything.
Move the opening questions as the second slide. Move the final takeaways, which is currently the last slide, to just before the final set of questions.
At the end of the interview, I had all the content for the slides.
Marp converts Markdown to slides
I use Marp, a JavaScript tool that turns Markdown into slides.
Why Markdown? It’s natural for programmers and LLMs. ChatGPT renders rich text in Markdown.
Why not RevealJS? Despite a Markdown plugin, RevealJS is built for HTML. Marp is built for Markdown.
I created a bookmarklet that copies text as Markdown. Using this, I converted the ChatGPT slide transcript to Markdown, saving it as README.md.
The Marp for VS Code plugin makes it easy to preview the slides when you add YAML frontmatter like this:
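A typical minimal Marp frontmatter and first slide (theme and options here are illustrative):

```markdown
---
marp: true
theme: default
paginate: true
---

## Slide titles are level-2 headers

- Crisp bullet points support the title
```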
I use ChatGPT or Gemini to create images that support the slides. For example, this slide includes an image of a robot psychologist generated by Gemini’s ImageGen 3:
Today, with native image generation in Gemini 2.0 Flash and GPT 4o, I’d likely use those. They have much better character control.
Deploying on GitHub Pages
I use GitHub Actions to render the slides and deploy them on GitHub Pages. Here’s what the key steps look like:
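A sketch of such a workflow, assuming current action versions and Marp CLI via npx (not necessarily the exact config):

```yaml
name: Deploy slides
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      # Render the Markdown to HTML slides with Marp CLI
      - run: npx @marp-team/marp-cli README.md -o _site/index.html
      - uses: actions/upload-pages-artifact@v3
      - uses: actions/deploy-pages@v4
```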
I calculated the average delay of each browser’s feature releases. For each browser, I looked at how many days after the first release it took to add a feature, averaged that, and published an interactive scrollytelling data story.
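The calculation itself is simple. A sketch with pandas, where the rows, column names, and ship dates are my own illustrative examples, not the actual dataset:

```python
import pandas as pd

# Illustrative rows: when each browser shipped each feature
df = pd.DataFrame([
    {"feature": "css-grid", "browser": "Chrome",  "shipped": "2017-03-09"},
    {"feature": "css-grid", "browser": "Firefox", "shipped": "2017-03-07"},
    {"feature": "css-grid", "browser": "Safari",  "shipped": "2017-03-27"},
    {"feature": "has",      "browser": "Safari",  "shipped": "2022-03-14"},
    {"feature": "has",      "browser": "Chrome",  "shipped": "2022-08-30"},
    {"feature": "has",      "browser": "Firefox", "shipped": "2023-12-19"},
])
df["shipped"] = pd.to_datetime(df["shipped"])

# Delay = days after the first browser shipped that feature
df["delay"] = df["shipped"] - df.groupby("feature")["shipped"].transform("min")
avg_delay = df.groupby("browser")["delay"].mean().dt.days
print(avg_delay.sort_values())
```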
What’s interesting is that I built almost all of this using LLMs in about 4 hours.
The real winners are off-beat stories. Earlier, I’d spend 16-24 hours per visual. So, I’d stick to the “important” stories I wanted to tell. Now it takes four hours. That frees me to experiment and share those lesser data stories that get overlooked. This change is incredibly powerful.
LLMs don’t replace all expertise. For example, when I saw the data, it didn’t immediately tell a story. It took me some time to realize the story isn’t how slow browsers are, but how browsers’ speed evolved over time. For example, in Firefox’s early days, it was the only browser actively releasing features. These days, it’s one of the slowest. Figuring that out took expertise.
I spent two decades studying data visualization. So, this comes naturally to me. How does someone new build expertise?
Expertise is a moving frontier.
At BCG in the early 2000s, I built interactive stories with PowerPoint. My PowerPoint skill was the critical expertise.
At Gramener in the early 2010s, I used D3 for interactive stories. My programming skill was the critical expertise.
Now, in the mid-2020s, LLMs write code with ease. My expertise is in choosing the right visual and shaping the right narrative.
As tools change, expertise evolves. I don’t know what the next frontier of expertise will be. I couldn’t predict the last few. I can’t predict the next.
But LLMs can help build expertise. In this project, I missed an opportunity to learn. I should have asked the LLM to show me a dozen options to visualize the data. For example, “Show a version geared toward an executive, a technologist, or a general audience”. “Critique each.” Such practice can help anyone – beginner or expert – build skill and learn. Practicing this is hard, but LLMs do help in this process.
But what gives me confidence is that LLMs help me learn. So, when the next frontier arrives, I’m less worried I’ll be too old. I think we’ll have tools to build expertise too.
Update (28 Mar 2025): Earlier, I wrote that “LLMs don’t replace expertise”. I inferred that because I (an expert) could use an LLM well. This research with 700+ people at P&G shows that when given LLMs, outsiders perform as well as insiders. So, I corrected my statement to say, “LLMs don’t replace all expertise.”
Gemini’s ImageGen 3 is rapidly evolving into a very powerful image editing model. In my opinion, it’s the best mainstream image generation model.
Ever since it was released, it’s been the most realistic model I’ve used. I’ve been using it to imagine characters and scenes from The Way of Kings. For example, when I wanted to visualize Helaran’s first appearance, I just quoted the description:
Draw this. Galloping directly toward them was a massive black horse bearing a rider in gleaming armor that seemed to radiate light. That armor was seamless—no chain underneath, just smaller plates, incredibly intricate. The figure wore an unornamented full helm, and the plate was gilded. He carried a massive sword in one hand, fully as long as a man was tall. It wasn’t a simple, straight sword—it was curved, and the side that wasn’t sharp was ridged, like flowing waves. Etchings covered its length. It was beautiful. Like a work of art. Cenn had never seen a Shardbearer, but he knew immediately what this was. How could he ever have mistaken a simple armored lighteyes for one of these majestic creatures?
Draw this. Highprince Roion stood in front of one of the maps, hands clasped behind his back, his numerous attendants clogging the other side of the gallery. Roion was a tall, light-skinned man with a dark, well-trimmed beard. He was thinning on top. Like most of the others, he wore a short, open-fronted jacket, exposing the shirt underneath. Its red fabric poked out above the jacket’s collar.
Draw this. A man walked up through the ranks, carrying a shortspear that had two leather knife sheaths strapped to the haft. The newcomer was a young man—perhaps four years older than Cenn’s fifteen—but he was taller by several fingers than even Dallet. He wore the common leathers of a spearman, but under them was a pair of dark trousers. That wasn’t supposed to be allowed. His black Alethi hair was shoulder-length and wavy, his eyes a dark brown. He also had knots of white cord on the shoulders of his jerkin, marking him as a squadleader.
The images are stunning in quality, reproduce the prompt quite faithfully, and cost 2-4 cents each (via the API.)
But this is just the beginning of ImageGen 3’s power. For those with access, you can edit existing images, e.g.
Generate realistic fake tourism data using these columns:
- Age
- Nationality
- Gender
- Income
- Booking_Channel
- Month
- Occupancy_Rate
- Travel_Frequency
- Spending
Run the code and let me download the output as a CSV file.
… the output is remarkably boring.
Men & women from all countries and ages in every month visit equally.
Income and spending are uniformly distributed – and the same pattern holds for all countries and ages.
Often, I need to generate fake data that is interesting. Specifically, I need data that can be used to illustrate a point or show a pattern.
Instead, we could ask ChatGPT for something different:
I want to generate realistic fake tourism data using these columns:
- Age
- Nationality
- Gender
- Income
- Booking_Channel
- Month
- Occupancy_Rate
- Travel_Frequency
- Spending
Do it as follows:
STEP 1. Given such data, generate 5 hypotheses that a tourism department might test to increase tourist spend.
STEP 2. Write a Python program that generates 2,000 rows of realistic fake data where these hypotheses are true in a statistically significant way.
STEP 3. Run the code and let me download the output as a CSV file.
This works like a charm. The data generated exhibits these patterns:
Luxury travel agency customers spend much more.
Peak-month travelers (June, July, December) spend more.
Frequent travelers spend less.
Older tourists (50+) spend more.
Tourists from USA, Germany, and Japan spend more.
The data is more varied: some 20-year-olds spend much less (creating outliers). Many tourists come from the US, and a large share book online.
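To make the idea concrete, here's a minimal sketch of what STEP 2's generated code might look like, baking in just two of the hypotheses (peak months and older tourists). The multipliers and columns are arbitrary choices, not what ChatGPT actually produced:

```python
import csv
import random

random.seed(0)
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
PEAK = {"Jun", "Jul", "Dec"}

rows = []
for _ in range(2000):
    month = random.choice(MONTHS)
    age = random.randint(18, 75)
    spending = random.gauss(1000, 200)
    # Bake the hypotheses in as multiplicative effects
    if month in PEAK:
        spending *= 1.5   # peak-month travelers spend more
    if age >= 50:
        spending *= 1.3   # older tourists spend more
    rows.append({"Month": month, "Age": age, "Spending": round(max(spending, 0), 2)})

with open("tourism.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Month", "Age", "Spending"])
    writer.writeheader()
    writer.writerows(rows)
```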
So, here’s my generic prompt for realistic fake data on ChatGPT:
Generate realistic fake data for ______
STEP 1. List columns that would be present in such data, briefly describing how the data might be distributed.
STEP 2. Given such data, think about an objective and generate 5 hypotheses that an organization might want to test on how to achieve this objective.
STEP 3. Write and run a Python program that generates 2,000 rows of realistic fake data where these hypotheses are true in a statistically significant way. Let me download the output as a CSV file.
STEP 4. Test each hypothesis and show the results.
In meetings, I pass questions on to ChatGPT and read out the response. But I’ve stopped saying “I’m reading that from ChatGPT.”
(By “ChatGPT”, I mean ChatGPT, Claude, Grok, Gemini, Meta, etc. I happen to use ChatGPT with O3 Mini + Search.)
Use ChatGPT in meetings
It’s good to bring ChatGPT into conversations. (Or any activity where intelligence helps, actually.)
In meetings (online or in person), I keep a ChatGPT window open. When asked:
“What’ll you have, Anand?” (at restaurants)
“How can doctors use LLMs?”
“Will you review this technical architecture?”
If I know the answer, I’ll give it. If not, I ask ChatGPT. (Ideally, I should ask even if I think I know the answer.)
For example:
Sharing the menu photo and asking, List vegetarian options. Suggest innovative dishes I’ll like. (This works because I’ve shared my preferences and history with ChatGPT.)
How can doctors use LLMs in day-to-day work?
Sharing a picture of the architecture, Explain this architecture to a blind expert. Critique with strengths, critical issues, and optional improvements.
I’ve learnt that:
Note-taking helps. I touch-type (without looking). I copy-paste the notes and their question to the LLM.
Short questions are fine. Newer models understand cryptic questions.
Say “Give me 30 seconds”. People assume you’re thinking deeply.
Read the response your way
I just read out the response — but with some changes.
Change style. I read quickly, internalize, and say it in my style. Instead of “1. Clinical Documentation & Administrative Support”, I’d say, “Doctors can use it for note-taking.”
Filter content. I skip stuff I don’t get or like. I might miss stuff, but when I speak, it’s my opinion I represent.
Add context. I add personal stories to make it real, if I can. “GPs I know are worried LLMs diagnose better than they do” is something LLMs may not have learnt yet.
Don’t say you’re reading from ChatGPT
I used to tell people, “… and I just read that out from ChatGPT.” Their response is always:
Disbelief for a moment.
Amazement that models are so good.
Dismissal of what I said, since it’s not “real”. (This is the sad part.)
I stopped saying that because:
I don’t need to. I can promote LLMs elsewhere.
It’s not true. I re-style, filter, add context. It’s my response. My responsibility.
I’d rather deliver useful ideas than show where they come from. And if they think I’m a genius? Fine by me.