Chapter 3: The Presentation --- From Code to Slides at Midnight¶
The Pivot to Slides¶
It was 11 PM on February 13, and we had just finished patching four critical security vulnerabilities in a single session. The audit had started at 8:51 PM, the fixes landed by 9:15 PM, and the documentation update followed by 9:55 PM. The codebase was hardened. The tests were green. The dashboard was deployed and locked behind authentication.
And we had no presentation.
Session transcript, Feb 13 ~23:39
"based on everything that we have built right now, brainstorm on the presentation format"
The GUVI India AI Impact Buildathon required submissions to follow a specific slide template --- a PDF with a fixed 9-slide structure. Nine slides. To explain a system with 3 culturally-authentic personas, 11 evidence extraction types, a self-correcting strategy state machine, cross-session scammer fingerprinting, production security hardening with OIDC verification, a CI/CD pipeline using Workload Identity Federation, and a 9-page real-time Streamlit dashboard. Nine slides.
The pivot from security auditor to presentation designer happened in a single message. One moment we were verifying HMAC-signed cookie validation. The next, we were debating how to convey Sharma Uncle's slow-typing personality in a slide bullet point. The context switch was jarring, but the clock was running. Submission was hours away.
This is where the collaboration model between developer and AI coding partner gets interesting. Claude Code had built the system --- every module, every regex pattern, every prompt template. It knew the codebase intimately. The question was whether that knowledge could be repurposed from writing code to writing a pitch.
The Two-Version Strategy¶
The developer's first instinct was the right one:
Session transcript, Feb 13 ~23:39
"i want to create two slides versions --- one with the original 9 & one with original 9 + important slides"
This was a calculated hackathon move. The required format was 9 slides --- that is what the judges expect. But what happens when a judge leans in and asks "how does your security model work?" or "explain the cross-session learning in more detail"? You either scramble to articulate it from memory, or you jump to slide 10.
The plan:
- Version A (9 slides): The standard submission. Every slide dense, following the template layout exactly, telling the complete story from problem to solution.
- Version B (12 slides): The standard 9 plus 3 bonus slides --- a detailed architecture diagram, the full security hardening stack, and a scoring rubric alignment table.
Version B was Q&A insurance. Hidden slides that no judge would see unless they asked for depth. You lose nothing by having them. You lose everything by not having them when asked.
```text
Version A (9 slides)        Version B (12 slides)
+-----------------------+   +-----------------------+
| 1. Problem            |   | 1. Problem            |
| 2. Solution           |   | 2. Solution           |
| 3. How It Works       |   | 3. How It Works       |
| 4. Architecture       |   | 4. Architecture       |
| 5. Evidence           |   | 5. Evidence           |
| 6. Categories         |   | 6. Categories         |
| 7. Demo               |   | 7. Demo               |
| 8. Limitations        |   | 8. Limitations        |
| 9. Submission         |   | 9. Submission         |
+-----------------------+   | 10. Security Stack    |  <- bonus
                            | 11. Deep Architecture |  <- bonus
                            | 12. Scoring Alignment |  <- bonus
                            +-----------------------+
```
Hackathon presentation strategy
If a buildathon gives you a slide limit, hit the limit exactly. But always have extra slides stacked after the last required one. Judges who want to dig deeper will appreciate the material. Judges who do not care will never see it. You lose nothing.
What Made the Cut¶
The hardest problem was not writing the content. It was compression.
Most hackathon teams write their presentations from memory. They open a blank slide deck, try to remember what they built, and struggle to articulate it under time pressure. Key features get forgotten. Technical depth gets flattened into bullet points that could describe any project.
We had a different asset: an AI that had authored every line of code and could explain the reasoning behind each decision. Instead of writing the presentation from the developer's memory (which, at midnight after a security audit, was not at peak recall), we asked Claude Code to analyze the entire codebase and generate structured presentation content.
The result was a 364-line markdown document (presentation-content.md) that served as the script --- covering everything from India's Rs 1,750+ crore annual scam losses to the specific Verhoeff checksum validation we use for Aadhaar numbers. From there, we had to compress. Here is what survived the cut and what did not:
| Engineering Feature | Made the Cut | Presentation Translation |
|---|---|---|
| 3 personas with family backstories and speech patterns | Yes | "Culturally-authentic AI personas that fool real scammers" |
| 11 evidence types with Indian-specific regex | Yes | "Extracts UPI IDs, bank accounts, Aadhaar --- all Indian financial identifiers" |
| Strategy state machine with self-correction | Yes | "AI adjusts its approach mid-conversation when the scammer resists" |
| Honest limitations (voice, images, novel UPI handles) | Yes | "What we cannot do --- and why that matters" |
| Cross-session scammer fingerprinting | Cut | Lived in the bonus slides |
| Keyword scoring weight algorithm | Cut | Too granular for a 9-slide pitch |
| Firestore schema design | Cut | Implementation detail |
| Circuit breaker pattern for Gemini failures | Cut | Infrastructure detail |
| Rate limiting specifics | Cut | Moved to security bonus slide |
The advantage of AI-generated presentation content
Because Claude Code had written the code, it could pull specific details that a developer writing from memory would miss. The presentation included the exact number of UPI handles in the regex list (30+), the Cloud Tasks scheduling delay for callbacks, the specific EMU coordinates of icon grids. These details signal to judges that the system is real, not vaporware. A human at midnight rounds "30+ UPI handles" down to "supports UPI." The AI gave the exact count.
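To give a flavor of what those Indian-specific patterns look like, here is an illustrative sketch of a UPI ID extractor. The handle list is abridged and the pattern is a simplification for demonstration, not the project's actual regex:

```python
import re

# Abridged, illustrative handle list -- the real pattern covers 30+ handles.
UPI_HANDLES = ["ybl", "oksbi", "okaxis", "okhdfcbank", "okicici", "paytm", "upi", "apl", "ibl"]

# A UPI ID is roughly <name>@<handle>: word chars, dots, or hyphens before the @.
UPI_RE = re.compile(
    r"\b[\w.\-]{2,}@(?:" + "|".join(UPI_HANDLES) + r")\b",
    re.IGNORECASE,
)

def extract_upi_ids(text: str) -> list[str]:
    """Pull every UPI-shaped identifier out of a scammer's message."""
    return [m.group(0) for m in UPI_RE.finditer(text)]
```

The same structure (a compiled pattern plus a thin extraction function) generalizes to the other identifier types: bank account numbers, IFSC codes, phone numbers, and so on.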
The limitations slide (Slide 8) was one of the strongest in the deck. We deliberately included four things we could not handle: no voice or video processing, text-only engagement, novel UPI handles outside our pattern list, and mid-conversation language switching. Hackathon judges have seen hundreds of pitches that claim to solve everything. A slide that says "here is where we stop" signals that you understand the problem space deeply enough to know your own boundaries.
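One of the code-level specifics that survived into the deck was the Aadhaar validation mentioned earlier, which relies on the Verhoeff checksum --- a dihedral-group check that catches every single-digit error and every adjacent transposition. A minimal sketch (the tables are the standard algorithm's; the function name is illustrative):

```python
# Verhoeff checksum: multiplication (D) and permutation (P) tables
# from the standard algorithm.
D = [
    [0,1,2,3,4,5,6,7,8,9], [1,2,3,4,0,6,7,8,9,5],
    [2,3,4,0,1,7,8,9,5,6], [3,4,0,1,2,8,9,5,6,7],
    [4,0,1,2,3,9,5,6,7,8], [5,9,8,7,6,0,4,3,2,1],
    [6,5,9,8,7,1,0,4,3,2], [7,6,5,9,8,2,1,0,4,3],
    [8,7,6,5,9,3,2,1,0,4], [9,8,7,6,5,4,3,2,1,0],
]
P = [
    [0,1,2,3,4,5,6,7,8,9], [1,5,7,6,2,8,3,0,9,4],
    [5,8,0,3,7,9,6,1,4,2], [8,9,1,6,0,4,3,5,2,7],
    [9,4,5,3,1,2,6,8,7,0], [4,2,8,6,5,7,3,9,0,1],
    [2,7,9,3,8,0,6,4,1,5], [7,0,4,6,9,1,3,2,5,8],
]

def verhoeff_valid(number: str) -> bool:
    """True if the digit string passes the Verhoeff check (Aadhaar uses this)."""
    if not number.isdigit():
        return False
    c = 0
    # Process digits right-to-left, cycling through the 8 permutation rows.
    for i, digit in enumerate(reversed(number)):
        c = D[c][P[i % 8][int(digit)]]
    return c == 0
```

Because the check digit is determined by the rest of the number, a candidate Aadhaar that fails this check can be rejected before it ever reaches the evidence store.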
The PPTX Generation Saga¶
Here is where the collaboration hit a wall. Claude Code can generate text, write Python, and analyze code. It cannot drag-and-drop shapes in PowerPoint. The developer explored multiple paths --- could Claude Code generate PPTX files directly? Could it drive Keynote? The answer was: not directly, but there is a workaround that is arguably better.
Claude Code wrote a 709-line Python script (generate_presentations.py) using the python-pptx library. The script read the GUVI-provided template PPTX, iterated through every placeholder shape on every slide, replaced template text with real content, handled tables, icon grids, caption alignment, text formatting, and generated both Version A and Version B from a single run.
```python
# The GUVI template had some interesting placeholder text...
LEFTOVER_PATTERNS = [
    "Lorem Ipsum", "lorem ipsum", "gjhghjgjhg", "<Heading>",
    "<Add your title", "Add Pointer here", "Description for",
    "[ IMAGE ]", "Sample text",
]
```
That "gjhghjgjhg" in the patterns list tells a story. The GUVI template had gibberish placeholders that needed to be caught and replaced. The script verified that no leftover text survived into the final output.
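The verification pass boils down to a simple scan: collect the text of every shape in the generated deck, then flag anything that still contains a placeholder fragment. A sketch of that idea --- the function and variable names here are illustrative, not the script's actual code:

```python
# Abridged, lowercased placeholder fragments; the real list is longer.
PLACEHOLDER_FRAGMENTS = ["lorem ipsum", "gjhghjgjhg", "<heading>", "[ image ]", "sample text"]

def find_leftovers(shape_texts: list[str]) -> list[str]:
    """Return any extracted shape text that still contains template debris.

    In the real script, shape_texts would be gathered by walking every
    slide's shapes with python-pptx and reading each text frame.
    """
    return [t for t in shape_texts if any(p in t.lower() for p in PLACEHOLDER_FRAGMENTS)]
```

An empty return means the deck is clean; anything else fails the build and points straight at the offending text.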
The alignment rabbit hole
The first generated PPTX had misaligned icon captions on slides 4 and 12. The caption text boxes did not line up with the icon grid columns above them. This led to commit c12c644 at 4:50 AM --- sorting caption shapes by their x-position and matching them to the correct icon column. A human would drag and drop in 10 seconds. Programmatically, you are wrangling EMU coordinates and shape trees. Presentation tooling is deceptively fiddly.
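The fix amounted to sorting both shape groups by their x-offset and pairing them positionally. A simplified sketch with stand-in shape objects (python-pptx shapes expose a `.left` attribute in EMUs; 914,400 EMU = 1 inch; the function name is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Shape:
    """Stand-in for a python-pptx shape: just a name and an x-offset in EMUs."""
    name: str
    left: int

def align_captions(icons: list[Shape], captions: list[Shape]) -> list[tuple[Shape, Shape]]:
    """Sort icons and captions left-to-right, pair them positionally,
    and snap each caption's left edge to its icon column."""
    pairs = list(zip(sorted(icons, key=lambda s: s.left),
                     sorted(captions, key=lambda s: s.left)))
    for icon, caption in pairs:
        caption.left = icon.left  # the real script assigns shape.left the same way
    return pairs
```

The key insight is that shape order in the XML tree has nothing to do with visual order on the slide, so positional sorting is the only reliable way to know which caption belongs under which icon.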
The script also handled the team attribution. The initial version had the wrong name format on the submission slide. It was corrected to match the official registration details --- a small detail that matters when judges are reading 50 decks and need to connect a slide to a submission.
Writing 709 lines of Python to generate slides feels absurd when you could open PowerPoint and type. But the programmatic approach is repeatable. Fix the caption alignment? Change the code, regenerate. Update the team name? One string change, regenerate. Every edit is a code change, not a manual operation that might introduce new misalignments. At 4 AM, "change one line and regenerate" beats "open the file and try to remember which text box was which."
The Playground¶
Alongside the presentation, we had built another demo artifact: an interactive HTML playground (scamshield-playground.html). This was a 1,056-line single-file web application --- a dark-themed architecture explorer with a cybersecurity aesthetic that let someone click through the entire ScamShield pipeline interactively. Select a scam type, see which persona gets assigned, watch evidence flow through the extraction pipeline, examine the conversation strategy state transitions.
The playground had been committed back on Feb 11 as part of the feature expansion sprint (commit 97b29c2), but it was designed with the presentation in mind. If we ever got to do a live demo, the playground could show the system's decision-making in a visual, interactive way that slides alone could not convey. It was the kind of artifact you build hoping to use but knowing you might not --- a bet that a live demo slot would materialize.
In the end, the playground did not make it into the primary slide submission. But building it forced us to think about the system from a demo perspective: what is the most compelling way to show someone, in 60 seconds, how a scam message becomes an intelligence report? That framing influenced the presentation slides directly. The pipeline diagram on Slide 3 was essentially the playground's flow, flattened into a static image.
Shipping Features While Building Slides¶
The presentation work did not happen in isolation. The same Feb 13--14 window produced a cascade of other commits that had nothing to do with slides:
```text
dd722c0  06:01 AM  Fix NOT_SCAM handling: strategy, callback, reclassification, scheduling
ded9b10  07:06 AM  Replace Google OAuth with PIN auth, add source tracking, upgrade Gemini
332a1df  07:11 AM  Upgrade to Gemini 3 Flash with automatic fallback to 2.0 Flash
d4bd41d  08:15 AM  Improve Gemini prompts with few-shot examples and structured JSON output
ea26fdf  08:52 AM  Add cookie-based session persistence for dashboard PIN auth
```
While one thread of work was producing the PPTX generator, another was upgrading the LLM from Gemini 2.0 Flash to Gemini 3 Flash, replacing Google OAuth with PIN-based auth on the dashboard, and fixing a critical bug where the system mishandled legitimate (non-scam) messages.
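The cookie persistence from ea26fdf follows a standard pattern: sign a payload with a server-side secret, verify with a constant-time comparison. A hedged sketch --- the names, payload format, and TTL here are assumptions, not the dashboard's actual implementation:

```python
import hmac
import hashlib
import time

SECRET = b"server-side-secret"  # assumption: the real app loads this from env/config

def make_session_cookie(user: str, ttl: int = 3600) -> str:
    """Issue 'user:expiry.signature', signed with HMAC-SHA256."""
    payload = f"{user}:{int(time.time()) + ttl}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_session_cookie(cookie: str) -> bool:
    """Reject malformed, tampered, or expired cookies."""
    try:
        payload, sig = cookie.rsplit(".", 1)
        _user, expiry = payload.rsplit(":", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the signature check.
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)
```

The point of the commit was that a valid signed cookie lets a returning user skip the PIN prompt without the server storing any session state.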
The NOT_SCAM bug found during presentation prep
Commit dd722c0 at 6:01 AM fixed a subtle issue that preparing the demo uncovered: when the system received a legitimate message (classified as NOT_SCAM), it still tried to run the scam engagement strategy, schedule a callback, and send false-positive intelligence to GUVI. The root cause was that strategy, callback, and scheduling logic never checked scam_type --- they used confidence alone, which has inverted semantics for NOT_SCAM (high confidence means "sure it is NOT a scam," not "sure it IS"). The fix touched four modules, added 289 lines of tests, and landed at 6 AM while the presentation script was also being written. Preparing to demo the system forced us to think about edge cases we had skipped during the build sprint. The presentation found the bug.
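The shape of the fix can be sketched in a few lines: gate every downstream action on scam_type first, confidence second. The function name, category labels, and threshold here are illustrative, not the actual module code:

```python
def should_engage(scam_type: str, confidence: float, threshold: float = 0.6) -> bool:
    """Decide whether to run engagement, callbacks, and scheduling for a message.

    For NOT_SCAM, confidence has inverted semantics -- high confidence means
    "sure it is NOT a scam" -- so scam_type must be checked before confidence
    is ever consulted.
    """
    if scam_type == "NOT_SCAM":
        return False
    return confidence >= threshold
```

Before the fix, the equivalent of this guard was missing in four separate modules, each of which read confidence alone and happily engaged with legitimate messages.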
This is the hackathon reality: you never have a clean "now we do slides" phase. Features keep breaking. New model versions drop. Authentication needs rethinking. The presentation gets built in the gaps between firefighting.
The Submission¶
By 7:39 AM on February 14, the presentation cleanup commit landed: the commit of someone who just finished generating PPTX files at dawn and is sweeping up before submitting. The .gitignore additions --- *.pptx, presentation_images/, qa_slides/, generate_images.py --- were the digital equivalent of tidying the workshop floor.
The full timeline of that night:
| Time (IST) | What Happened |
|---|---|
| ~21:15 | Security audit fixes deployed (commit ba3bc09) |
| ~21:55 | Documentation updated post-audit (commit 61ac7b0) |
| ~23:39 | Presentation brainstorm begins --- two-version strategy decided |
| 04:50 | generate_presentations.py committed (709 lines) with caption alignment fixes |
| 06:01 | NOT_SCAM handling bug discovered and fixed (342 lines of code + tests) |
| 07:06 | OAuth replaced with PIN auth, Gemini 3 Flash upgrade (658 lines) |
| 07:11 | Gemini model fallback logic added |
| 07:39 | Presentation artifacts gitignored. Slides generated. Done. |
| 08:52 | Cookie-based session persistence added (the final polish) |
Then came the submission email, and the nervous wait.
The system was deployed. The tests were green. The slides were generated in two versions. The playground was built. The security audit was clean. We had done everything we could think of. The gap between "submitted" and "evaluated" is a strange liminal space --- too late to change anything meaningful, too early to know if any of it mattered. You refresh your email. You re-read the slides. You wonder if you should have included cross-session fingerprinting in the core 9 instead of cutting it to the bonus slides. You wonder if the judges will even look at Slide 8 long enough to notice the honest limitations. You wonder if someone else built something better.
That last thought would become the subject of the next chapter, two days later, at 3 AM.
What We Learned¶
The AI Advantage in Presentation Work
The most surprising lesson was how effective an AI coding partner is at presentation work --- not because it designs beautiful slides, but because it has total recall of the codebase. Every statistic, every feature, every architectural decision was available instantly. A human writing a presentation about their own project at midnight will forget things. They will undercount features, misremember statistics, and leave out components that became second nature during development. Claude Code did not forget. It pulled details from regex_patterns.py, from orchestrator.py, from the persona prompts, from the test files. The presentation was comprehensive because the AI had perfect codebase memory.
Compression Forces Clarity
Fitting a production AI system into 9 slides forced brutal prioritization --- and that prioritization clarified our own thinking. What is ScamShield AI, really? It is not "a Firebase Cloud Function with Gemini and Firestore." It is "an AI that becomes the scammer's perfect victim." The 9-slide constraint forced us to find that sentence. We would not have found it in a 30-slide deck.
Presentation Prep Finds Bugs
The NOT_SCAM handling bug (commit dd722c0) was discovered during presentation prep. Preparing to demo the system forced us to think about edge cases: what happens when the message is not a scam? The answer was "the system treats it like a scam anyway," which was wrong. Never skip the presentation dry-run. It is a free testing session.
Start the Presentation Early
We had the presentation content drafted on Feb 11, two days before the late-night formatting session. The Feb 13 work was about tooling and layout, not content creation from scratch. If the content had not been pre-drafted, the 11 PM brainstorm would have been an 11 PM panic. Start your presentation content the day you have a working prototype, not the night before the deadline.
Previous: Chapter 2 -- Hardening Under Pressure | Next: Chapter 4 -- Scouting the Competition