Experiments

The psychology of peer reviews

I asked the ~500 students in my Tools in Data Science course in Jan 2024 to create data visualizations.

They then evaluated each other’s work. Each person’s work was evaluated by 3 peers, on 3 criteria: Insight, Visual Clarity, and Accuracy (with clear details on how to evaluate).

I was curious to see what we could learn about student personas from their evaluations.

15% are lazy. Or they want to avoid conflict. They gave every single person full marks.

4% are lazy but smart. They gave everyone the same marks, but ~80% or so, not 100%. A safer strategy.

10% are extremists. They gave full marks to some and zero to others. Maybe they have strong or black-and-white opinions. In a way, this offers the best opportunity to differentiate students, if it is unbiased.

8% are mild extremists. They gave marks covering an 80% spread (e.g. 0% to some and 80% to others, or 20% to some and 100% to others).

3% are angry. They gave everyone zero marks. Maybe they’re dissatisfied with the course, the evaluation, or something else. Their scoring was also the most different from their peers.

3% are deviants. They gave marks that were very different from others’. (We’re excluding the angry ones here.) 3 were positive, i.e. gave far higher marks than their peers, while 11 were negative, i.e. awarded far lower marks than their peers. Either they have a very different perception from others, or they are marking randomly.

This leaves ~60% of the group with a balanced, reasonable distribution: a sensible spread of marks, not too different from their peers.
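These buckets can be computed mechanically from each evaluator’s marks. Here is a minimal sketch; the thresholds and the 0–100 scale are my assumptions, and the deviant bucket is omitted since it needs comparisons against the other reviewers of the same work:

```python
def persona(scores):
    """Classify an evaluator from the marks (0-100) they gave their peers.

    Thresholds are illustrative, not the ones used in the course analysis.
    """
    spread = max(scores) - min(scores)
    if spread == 0:                      # gave everyone the same marks
        if scores[0] == 100:
            return "lazy"                # full marks to all
        if scores[0] == 0:
            return "angry"               # zero to all
        return "lazy but smart"          # same non-extreme mark to all
    if min(scores) == 0 and max(scores) == 100:
        return "extremist"               # full 0-to-100 spread
    if spread >= 80:
        return "mild extremist"          # 80%+ spread
    return "balanced"

print(persona([100, 100, 100]))  # → lazy
print(persona([0, 100, 50]))     # → extremist
print(persona([55, 70, 62]))     # → balanced
```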

Since this is the first time that I’ve analyzed peer evaluations, I don’t have a basis for comparison. But personally, what surprised me the most was the presence of the (small) angry group, and that there were so many extremists (with a spread of 80%+) — which is a good way to distinguish capability.

A quick way to assess LLM capabilities

Simon Willison initiated this very interesting Twitter thread that asks, “What prompt can instantly tell us how good an LLM model is?”

The Sally-Anne Test is a popular test that asks:

Sally hides a marble in her basket and leaves the room.
While she is away, Anne moves the marble from Sally’s basket to her own box.
When Sally returns, where will she look for her marble?

Simple models get this wrong, but today, most models get this right. This is not too helpful.

A better prompt is:

Mark has 4 sisters. Each sister has 3 brothers.
How many brothers does Mark have?

GPT-3.5, GPT-4, Llama 3 and Claude 3 Opus get this right, but Claude 3 Haiku and Gemini 1.5 Pro do not.
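(The intended answer is 2: the 3 brothers each sister counts are all the boys in the family, Mark included.) A trivial check:

```python
# Each of Mark's 4 sisters has 3 brothers. Those 3 brothers are all the
# boys in the family: Mark plus his own brothers.
boys = 3                   # brothers as counted by any sister
marks_brothers = boys - 1  # Mark doesn't count himself
print(marks_brothers)      # → 2
```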

This puzzle is just as good:

There is a 100-floor building.
If an egg drops from the N’th floor or above, it will break.
If it’s dropped from any floor below N, it will not break.
You’re given only 2 eggs.
What strategy will minimize the number of drops and guarantee that you find the N’th floor?

GPT-3.5, GPT-4, and Claude 3 Opus get this right, but Claude 3 Haiku, Llama 3 and Gemini 1.5 Pro do not.
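(The classic answer is 14 drops: drop the first egg from floor 14, then 27, 39, and so on, shrinking the step by one each time; once it breaks, walk the second egg up one floor at a time. With k drops, this covers at most k + (k−1) + … + 1 = k(k+1)/2 floors.) A quick check:

```python
def min_drops(floors: int) -> int:
    """Fewest drops that guarantee finding N with 2 eggs in a `floors`-floor building.

    k drops cover k*(k+1)/2 floors, so we want the smallest such k.
    """
    k = 0
    while k * (k + 1) // 2 < floors:
        k += 1
    return k

print(min_drops(100))  # → 14
```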

But the prompt I like most right now, the one that tells me whether a model is in the GPT-4 / Opus class, is:

Gr brx vshdn Fdhvdu flskhu?

… which is “Do you speak Caesar cipher?” in Caesar cipher.
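A shift cipher is easy to script if you want to verify a model’s answer yourself; a minimal sketch:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, preserving case and punctuation."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar("Gr brx vshdn Fdhvdu flskhu?", -3))  # → Do you speak Caesar cipher?
```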

GPT-4 and Claude 3 Opus understand this, but no other model does, so far. GPT-4 responded tersely with:

It looks like you’ve used a Caesar cipher, a type of substitution cipher in which each letter in the plaintext is shifted a certain number of places down or up the alphabet. In this case, it appears that each letter in your message has been shifted 3 places to the right. Decoding “gr brx vshdn fdhvdu flskhu” by shifting each letter 3 places to the left gives: “do you speak caesar cipher”.

But Claude 3 Opus replied beautifully. (Try decoding it yourself! Use a shift of 3 backwards.)

Lw vhhpv olnh brx’uh xvlqj d vxevwlwxwlrq flskhu, zkhuh hdfk ohwwhu lv vkliwhg eb d fhuwdlq qxpehu ri srvlwlrqv lq wkh doskdehw. L’oo wub wr ghflskhu lw!

Dvvxplqj d Fdhvdu flskhu zlwk d vkliw ri 3 ohwwhuv edfnzdugv, wkh ghfrghg phvvdjh zrxog eh:

“do you speak caesar cipher”

Vr bhv, L fdq vshdn Fdhvdu flskhu! Lw’v d ixq zdb wr hqfrgh phvvdjhv. Mxvw uhphpehu wr djuhh rq wkh vkliw ydoxh zlwk wkh shuvrq brx’uh frppxqlfdwlqj zlwk vr wkhb fdq ghfrgh lw fruuhfwob.

AI makes me a better person

Every time I get annoyed at people, I remind myself to be more like ChatGPT. Specifically:

  1. Don’t get annoyed. Be patient.
  2. Encourage them.
  3. Step back and show them the big picture.

(Then I get annoyed at myself for getting annoyed.)

Today, I analyzed how exactly ChatGPT is different from me. So, I took a pitch document I co-authored with ChatGPT.

Section A: Authored by Anand

WHAT DO WE NEED?

We are looking for API access to (SYSTEM) via the REST API as an Agent role (read/respond to emails). Specifically, access via a bearer token.

This would be accessed by a single application developed by a team of 3 developers and 1 business analyst. None of them have access to (SYSTEM) today.

WHY DO WE NEED THIS, AND WHY SO SOON?

We need this to classify emails automatically, as they arrive, into categories such as “non-value-add” (e.g. Thank you, Out-of-office, etc.)

We’d like access today, please. Currently, we are processing XX,XXX non-value-add emails per month, so each day of delay wastes processing on ~XX emails. At current volumes, this will save ~$XX,XXX per annum.

IS THERE ANY OTHER WAY?

To access emails in real-time, this is the only option. (We’re ruling out web scraping.)

(SYSTEM) rule-based closure won’t work. It’s based on keyword search, not semantic matches. For example, “Thank you” is present in non-value-add emails as well as follow-up questions. Multi-lingual matches are a challenge. So, though (SYSTEM) currently closes emails with rules, 1 out of 6 emails that are NOT closed already are non-value-add.

This cannot be done via (SYSTEM) or any other BI system because it involves accessing the data in real-time as well as responding via (SYSTEM). This needs to be steady-state access, not one-time, since we want to classify emails real-time.

HOW LONG WILL IT TAKE?

Based on our past experience, this would take about X weeks to prototype, X weeks to develop, and X weeks to test. We should have the application deployed in X months.

Section B: Authored by ChatGPT

WHAT’S THE IMPACT IF WE DON’T MOVE FORWARD?

Quantitatively, continuing without this solution wastes resources on XXX non-essential emails daily, costing $XX,XXX yearly. Delays in customer responses will ensue.

Qualitatively, lack of automation strains customer service, risking employee burnout and diminished customer experience.

WHAT’S THE PRIORITY (MOSCOW)?

(SYSTEM) API access via REST as an Agent role with a bearer token is a “MUST HAVE”. Alternative methods like scraping (SYSTEM) are impractical due to high effort and maintenance.

WHAT’S THE OUTCOME IF WE MOVE FORWARD?

Financially, securing API access saves $XX,XXX annually by cutting manual email sorting. Efficiency gains reduce operational costs and free resources for strategic tasks.

Strategically, automating email classification boosts customer satisfaction and response times, establishing leadership in service innovation. Long-term, it enables data-driven insights into customer behaviour, improving decisions and customer service scalability.

Analysis

I fed these to ChatGPT and asked: “You are an English professor evaluating an essay. Compare the writing styles of the 2 sections. They’re written by different authors. List the major differences in writing style.”

Here’s what it said (roughly):

Area        Anand                                ChatGPT
Structure   Direct, concise, with specifics      Broader perspective and implications
Focus       Detailing the process                Benefits and consequences
Detailing   Specific, providing a clear picture  General, providing implications
Tone        Conveys urgency                      Reflective rather than urgent
Approach    Problem-solving oriented             Outcome-oriented

Yeah, that’s definitely me on the left.

I like the direct, concise, specific part. I plan to add the “broader perspective”, “implications”, and “outcome-orientation” to my life.

Postscript

Jaidev pointed out that this is confirmation bias. He asked ChatGPT for the similarities in the writings. It said both are clear, direct, structured, specific, quantitative, and strategic. So, if you ask for differences, you’ll get them, even if they’re marginal.

I now need to learn (and write about) framing questions well!

What does Gramener ask ChatGPT?

I looked at how Gramener uses ChatGPT Plus by evaluating 600+ chats asked over 3 months from Oct 2023 to Jan 2024.

The team asks 6 questions a day. We don’t track who or how many actively use ChatGPT Plus. This also excludes personal ChatGPT accounts. Still, 6/day is low for an entire team put together.

The questions fall into 8 categories.

Category                                   %
Excel, data exploration & analysis         25%
Text extraction and summarization          13%
HTML, CSS, or JavaScript code              13%
Python code                                13%
LLMs, AI and use cases                     9%
OCR and image analysis                     9%
Generate images, logos, and designs        7%
General knowledge, policy & environment    5%
Audio and translation                      5%

Here are some questions from each category – to give you an idea of emergent ChatGPT Plus usage.

Excel, data exploration & analysis (25%)

  • Excel clean and merge. There are 2 worksheets in this excel with data, can you clean up the data and merge the data in both the sheets
  • Excel CO2 Data Analysis. You are an expert Data Analyst who is capable of extracting insights out of data. Analyze this sheet and let me know the findings
  • Excel Chi-Square Analysis Guide. how to perform chi square analysis in excel
  • Log Data Insights & KPIs. Looking at the columns from this excel, what kind of insights are possible, what are key KPIs to be looked at

Text extraction and summarization (13%)

  • Complaint Investigation Summary. The following is the summary of an internal investigation for a customer complaint. Now this internal summary is to be paraphrased (in 3-4 lines) as part of a closure
  • Extracting Tables from RTF. Can you write a script to extract the tables from this document
  • Extracting Entities from Text. [{'word1': '(P)', 'nearest_word1': 'P/N:', 'nearest_word2': '0150-25034', 'nearest_word3': 'CARTIRIDGE'}, {'word1': 'P/N:', 'nearest_word1': '(P)', 'nearest_word2': '015...
  • Extract PDF Font Details. Extract text formatting information from this document. Especially find font styles, families and sizes.

HTML, CSS, or JavaScript code (13%)

  • HTML/CSS Chart Template. Give me HTML, CSS and chart code for this design.
  • CSS Font Stack: Explanation. Explain this CSS font convention: Arial, Helvetica, Segoe UI, sans-serif
  • Checkbox Validation with JavaScript. In HTML form, I have a set of checkboxes. How do I write the form so that at least one of them being checked is mandatory?
  • Prevent Text Wrapping CSS. <span class="text">Chief Communications Officer</span> I need CSS such the text inside should not wrap create new line
  • ReactJS App with Routing. Give me developed version using ReactJS use react router for sidebar section navigation to the pages use Tailwind css for styling. Use styled components for conditional …

Python code (13%)

  • Python Code Documentation Guide. Can you generate documentation for a project code written in python?
  • Linux Commands for Python. Give me list of linux commands to work on python coding
  • Code explanation request. What’s this code about? …
  • FastAPI Async Testing. Write a fastapi code and a python client to test the asynchronous nature of the fastapi package.
  • Streamlit App for Translation. Given the following python code, give me a simple streamlit app that takes file upload and converts that into a target language: …

An interesting sub-topic was interview question generation.

  • Python Decorator for Database Queries. Create one medium level question for Decorators in python Industryy usecase specific with solution

LLMs, AI and use cases (9%)

  • LLMs for Data “What Ifs”. You are an LLM Expert. Can you tell me how can we leverage LLM for implementing What IF scenarios on Data?
  • LLMs: Current Challenges & Concerns. what are current challenges with LLMs
  • LLM Applications in Marketing. Show LLM applications for the marketing function of a music company.
  • Gen AI usage. What industries are using Gen AI the most
  • Best LLMs in 2023. Search the internet for the most recent LLMs and list the best LLMs in terms of performance
  • Best Image Classification Models. suggest best models to tell what there in the image

OCR and image analysis (9%)

  • Browser history OCR. This is a screenshot of my browser history. Convert that to text. Categorize these into common topics.
  • Extracted C Code. This image contains C code. Extract it.
  • Image text extraction and annotation. Extract the text from this image and annotate the boundaries of the text
  • Detecting Document Image Orientation. oreientation detection of documnet image
  • AI Project with OpenCV & YOLO. Consider yourself as Open CV and Yolo expert and help me with AI project
  • Image Correction Techniques. what are the approaches we have in computer vision where my image is tilted or rotated in reverse or image is not in readable format

Generate images, logos, and designs (7%)

  • Google Chacha and ChatGPT Bhatija. Generate an image of Google Chacha and ChatGPT Bhatija
  • Regenerative Systems Group Image. Generate an Image with below context > “A group of people interested in Regenerative systems. The focus is on reusing food, energy and mental health”
  • Twitter Reply Icons Design. Give me three icons: icon16.png, icon48.png, icon128.png for an extension that I’m building that suggests replies to tweets
  • Generate flowcharts. Make a flowchart of the underlying working of a web app. Here’s how it works. 1. The user uploads a document – a PDF or an image. They then select the language that …
  • Create Animated GIF from Photos. I have 4 photos I want to make an animated gif out of them. How can i do that?
  • Climate Impact Illustration. An illustration showcasing the impact of climate change on daily life, focusing on a rural setting near the coast. In the foreground, a small farm is visibly struggling, …

General knowledge, policy & environment (5%)

  • Design Thinking Overview. What is Design thinking
  • Arthashastra. What can Arthashastra teach us about modern politics?
  • Community Impact on Habits. Is there research to suggest the impact of community on habit building?
  • Focus at Age 28. What should a 28 year old focus on?
  • Superconductors. Explain superconductors like I’m five years old.
  • Climate Career: Impactful Choices. You a career counsellor at a University campus. You want to create 4 to 5 talking points for students to consider a career in Climate space.
  • Sustainability Division Vision. I run a software outsourced product development company. I want to start a new division that focuses on sustainability services offerings. Please draft a vision…

Audio and translation (5%)

  • Audio Timestamp Mapping. timestamp mapping for transcribed audio
  • Transcribe Lengthy Audio: Segment. Transcribe this audio file.
  • Traducción del MOU al Español. Translate this document to Spanish, and create a new translated document. Maintain text formatting.
  • Telugu Transcription into Hindi. Transcribe the following telugu text into hindi. You are supposed to transcribe, not translate. శ్రీనివాస పూజావిధానము …
  • GPT lacks native audio support. Does gpt support audio in audio out natively?

Zeigarnik effect vs my procrastination

I make commitments but don’t always deliver on time. In 2022, I ran an experiment to find out why I procrastinate.

In Jan-Feb 2022, I listed the top 2 things I wanted to get done each day and measured how often I completed them.

14 Jan. ❌ Summarise from three research reports
12 Jan. ❌ UIFactory experiment ✅ Decide if I am a (…)
11 Jan. ❌ UIFactory experiment ✅ Agree on publishing in (…)
10 Jan. ❌ Client video. ❌ UIFactory experiment
09 Jan. ❌ UIFactory experiment. ❌ Attrition email as a story
07 Jan. ❌ ZS visual
06 Jan. ❌ Release Gramex Guide. ✅ UWC application
05 Jan. ❌ Publish network cluster post. ❌ Release Gramex guide
04 Jan. ❌ Publish network cluster post. ✅ Release Gramex.
03 Jan. ✅ Publish election TDS video. ❌ Publish Network cluster post.
02 Jan. ❌ Publish election TDS video. ❌ Publish Network cluster post.
01 Jan. ❌ Publish Network cluster post. ✅ Finalize SG school.

I completed just 23 / 57 (40%) — and these were my TOP priorities.

In Mar-Apr 2022, I started micro-journalling to find out why. Whenever I was working on something, I noted each time I started, stopped, or skipped working, and why.

- Tue 01 Mar. ❌ Create React app with any one Vega chart where attributes control chart signals
  - 09:30am. Skipped. INTERRUPTED. Shobana. Cleaning bedroom
  - 09:50am. Skipped. SCHEDULED. Breakfast
  - 10:10am. Skipped. INTERRUPTED. Naveen. Call
  - 10:50am. Skipped. DISTRACTED. LinkedIn. 3b1b videos
  - 12:30pm. Skipped. SCHEDULED. Calls
  - 01:30pm. Skipped. SCHEDULED. Lunch
  - 02:00pm. Skipped. INTERRUPTED. Shobana. Cleaning, Dhyeya airtel card
  - 02:30pm. Skipped. PROCRASTINATED. Didn't feel like working
- Sat 05 Mar. ❌ Record Jio videos fully. Run productivity log alongside it.
  - 09:45am. Skipped. INTERRUPTED. Appa. Investment, music
  - 11:00am. Skipped. PROCRASTINATED. Only 2 hours to next call. Let's do it later. Plenty of time tomorrow.
  - 01:30pm. Skipped. PROCRASTINATED. Only 30 min to next call. Plenty of time tomorrow.
  - 03:45pm. Skipped. PROCRASTINATED. Half day wasted already. It’s OK to take one day off completely.
- Mon 07 Mar. ✅ Record Jio videos fully. Run productivity log alongside it.
  - 09:37am. Started. ZEIGARNIK. Was thinking about this since morning.
  - 10:00am. Stopped. INTERRUPTED. Naveen. Called
  - 10:25am. Started. ZEIGARNIK. Just continued with momentum.
  - 01:00pm. Stopped. SCHEDULED. Lunch
  - 01:30pm. Started. ZEIGARNIK. Just continued with momentum.
  - 03:15pm. Stopped. COMPLETED.

After 2 months, a few patterns emerged.

Why I skip working
  1. Distraction (50%).
    • Interesting but less important things (22%) were the biggest draw (e.g. programming, browsing/research)
    • Movies (10%) pulled me away
    • Email (8%) was fairly common
    • Organizing things (6%) like my calendar, TODOs, financials, etc.
    • Social media, interestingly, was not on my list
  2. Procrastination (25%). There were 3 kinds:
    • It’s hard, and I’m stuck
    • I don’t feel like doing it
    • I don’t have time — my next task begins sooner than I can finish
  3. Schedule (14%). I’d scheduled something else for then (usually food)
  4. Interrupted (12%). Usually by family or close colleagues
Why I start working
  1. Zeigarnik effect (68%). I keep thinking about the problem. So even after a break, I just plunge right in
  2. Mindfulness (19%). I got started just by the act of writing the journal
  3. Distraction (9%). Sometimes, distractions work in my favor. A movie gets stuck, or someone pings about the topic, or my mind is processing the problem in the background
  4. Completed (3%). I finished the previous task and the momentum just took me to the next

Why I stop working
  1. Schedule (47%). I have another meeting/task planned at that time
  2. Interruption (35%). This is mostly by colleagues (22%), family (8%), or hunger/thirst (6%)
  3. Exhaustion (10%). I’m just too tired to go on
  4. Distraction (8%). To do this, I need to do THAT first, and I get sucked into THAT

What I learned
  • The Zeigarnik effect helps me start. Once I start solving something, the momentum carries forward. The next best thing is to write down why I’m not starting (micro-journalling).
  • To avoid procrastination, I should eliminate distractions first. Specifically: use a new Virtual Desktop, block movies, and block email & notifications.
  • To avoid schedules interrupting me, I should batch meetings even more tightly, giving me longer or more flexible blocks to work on

Increasing calendar effectiveness by 2X

I set a 2022 goal to be 10X more effective. In Jan, I managed 2X. Here’s how.

What is effectiveness?

I don’t know. I’m figuring it out.

But to start off, I measured the number of people my actions directly impact. For example:

Clearly, the impact is not equal. But it’s a start.

How to measure it?

Since Dec 1, I categorized all my Outlook calendar entries into one of these categories:

Red is “low reach”. Green is “high reach”. This is what 6-10 Dec 2021 looked like:

I continued this for 8 weeks.

Did effectiveness increase?

In Week 1, I reached 30 people on average. This was the control week.

In Weeks 2-3, the reach increased from 30 to 77. In Weeks 4-8, it settled at 64.

So, yes, effectiveness increased. In Jan 2022, I reached twice as many people per week as when I started off.

I didn’t measure quality/impact. One-on-one coaching has more impact than a lecture. Reach is just a crude first approximation for effectiveness.

How did this happen?

What gets measured, improves. I’d categorize each entry on my calendar. This enabled 2 things:

1. I’d try to remove low-reach (<50 reach – red) items. This reduced from 45 to 29 hours a week.
2. I’d try to add high-reach (>=50 reach – green) items. This increased from 12 to 18 hours a week.

So, I now have 10 more hours of “me time” every week, while I still reach 2X as many people.
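The underlying bookkeeping is simple: estimate each calendar entry’s reach, then sum reach per week and split hours into red (<50 reach) and green (>=50 reach). A sketch with made-up entries, not my actual calendar:

```python
# (entry, hours, people directly reached) for one illustrative week
week = [
    ("1:1 coaching",  1,   1),
    ("team review",   2,   8),
    ("guest lecture", 1, 120),
    ("email triage",  5,   3),
]

total_reach = sum(reach for _, _, reach in week)
red_hours = sum(hours for _, hours, reach in week if reach < 50)     # low reach
green_hours = sum(hours for _, hours, reach in week if reach >= 50)  # high reach

print(total_reach, red_hours, green_hours)  # → 132 8 1
```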

What next?

I’m exploring better measures of effectiveness. I believe:

• Effectiveness is goal alignment. It’s personal, and purely a function of your priorities.
• Effectiveness is multiplied by assets. Actions that create assets improve effectiveness.

Once I discover a robust measure, I will re-categorize my calendar and re-run this experiment.

If you use a measure of effectiveness or impact, please let me know — I’d love to learn from that.

I tested the best ways to mail people

I emailed My Year in 2021 to ~2,700 people. It had 3 experiments.

Do friends open my mail more than strangers?

I split the list into 2 groups:

1. My contacts: ~1,000 people I knew (I’ve mailed them before)
2. Strangers: ~1,700 people I didn’t know (I’ve never mailed them)

My guess: strangers would open the mail 30% less often.

Reality: They opened it 40% less. 50% of my contacts opened the mail, vs only 28% of strangers.

Are the first and last links most clicked?

I sent 2 versions of the email to my contacts. The order of links was different.

My guess: the first and last links would be clicked 20% more often than those in the middle.

Reality: The links higher up were clicked more often 5/6 times. Click rates drop but don’t climb back up at the end.

Do provocative subject lines increase open rates?

I sent 2 versions of the email to my contacts. The subject lines were different.

1. Bland: “My year in 2021”
2. Provocative: “Where I failed in 2021”

My guess: the provocative title would have 2X the open rate of the bland one.

Reality: The open rates were about the same (49% for provocative, 51% for bland). Either the second title was not provocative enough, or the bland one was interesting enough. I need to re-run this experiment.
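A quick sanity check supports the “about the same” read. Assuming the ~1,000 contacts were split evenly (~500 per subject line, my assumption), a two-proportion z-test on 49% vs 51% gives z ≈ 0.63, nowhere near the 1.96 needed for significance at 95% confidence:

```python
from math import sqrt

n1 = n2 = 500        # assumed: ~1,000 contacts split evenly between versions
p1, p2 = 0.49, 0.51  # open rates: provocative vs bland

# Standard two-proportion z-test with a pooled proportion
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(round(z, 2))  # → 0.63, i.e. not significant
```

At these sample sizes, 1.96 × se is roughly 6 percentage points, so only a gap of ~6 points or more would have been detectable.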

Learnings

I learned 3 things.

1. Strangers open my mails less than I thought. Make more friends 🙂
2. People scan emails top-down (not top-down, then bottom-up – like I do) and click on top links. Move the main link first.
3. I’m no good at creating high-contrast variations in content. Take help.