<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.rshankar.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.rshankar.com/" rel="alternate" type="text/html" /><updated>2026-02-21T12:46:28+00:00</updated><id>https://www.rshankar.com/feed.xml</id><title type="html">Ravi Shankar</title><subtitle>Exploring iOS Development, SwiftUI, and Apple Platforms</subtitle><author><name>Ravi Shankar</name></author><entry><title type="html">I Marketed My App for 18 Months Without Knowing If It Worked</title><link href="https://www.rshankar.com/watch-app-analytics-invisible/" rel="alternate" type="text/html" title="I Marketed My App for 18 Months Without Knowing If It Worked" /><published>2026-02-21T00:00:00+00:00</published><updated>2026-02-21T00:00:00+00:00</updated><id>https://www.rshankar.com/watch-app-analytics-invisible</id><content type="html" xml:base="https://www.rshankar.com/watch-app-analytics-invisible/"><![CDATA[<p>In August 2024, I published ChantFlow on the App Store. It’s a mantra counting app for Apple Watch — you wear it during practice, it counts with haptics, that’s it.</p>

<p>For the next 18 months, my analytics looked like this: sales numbers. That’s it. Some days zero, some days one or two. No idea where they came from. No idea who was downloading it or why.</p>

<!--more-->

<p>I assumed it was just how small apps work. You ship, you hope.</p>

<h2 id="the-marketing-push-i-couldnt-measure">The Marketing Push I Couldn’t Measure</h2>

<p>Last August I decided to actually try. I wrote blog posts, posted on Reddit, shared on Instagram and Twitter. Downloads ticked up a little. But I had no way to connect any of it. Did the Reddit post help? Which country responded? Was anyone even landing on the App Store page?</p>

<p>I just didn’t know. I moved on.</p>

<h2 id="what-i-didnt-realize-about-watch-only-apps">What I Didn’t Realize About Watch-Only Apps</h2>

<p>When I started building an iOS companion for ChantFlow — dashboards, streaks, health data from Apple Watch — I wasn’t thinking about analytics. I just wanted to make the app more useful.</p>

<p>The iOS version got approved a few days ago.</p>

<p>Within 24 hours, my App Store Connect analytics tab came alive. Impressions, page views, conversion rates, sources, territories. All of it suddenly there.</p>

<p>And then I noticed the dates. The data went all the way back to launch day — August 2024.</p>

<p><strong>Apple had been collecting everything. It just wasn’t showing it to me because I didn’t have an iOS app.</strong></p>

<h2 id="looking-back-at-august">Looking Back at August</h2>

<p>I went back to August 2025 — my marketing month — and finally saw what actually happened.</p>

<p>There was a real spike. Product page views jumped — people were landing on the App Store page.</p>

<p><img src="/images/august-spike-discovered.png" alt="Product page views over time showing a clear spike in August 2025" /></p>

<p>And when I looked at impressions by territory, it was almost entirely India.</p>

<p>India is my #1 market by a wide margin — more than double the US. The app is a mantra counter, of course it resonates there. But I had no idea. I was writing Reddit posts without knowing who was actually reading them.</p>

<p><img src="/images/india-dominance.png" alt="Impressions by territory with India leading at 8,644 — more than double the US" /></p>

<p>UAE, Singapore, Malaysia all show up too. The South Asian diaspora found this app and I didn’t even know to lean into it.</p>

<p>The other thing I noticed: nearly half my downloads come from App Store Browse, not Search. People stumble onto it while browsing Health &amp; Fitness. If I’d known that earlier I would have focused more on my screenshots and less on obsessing over keywords.</p>

<h2 id="if-you-have-a-watch-only-app">If You Have a Watch-Only App</h2>

<p>You are probably missing all of this. Not because the data doesn’t exist — Apple is collecting it — but because it won’t surface until you have an iOS app in the store.</p>

<p>It doesn’t have to be complex. Even a simple companion with a settings screen would unlock it.</p>

<p>I wish I’d known this at launch. Eighteen months of marketing in the dark. Though finally seeing it all laid out was a pretty good feeling.</p>

<hr />

<p><em>ChantFlow is the app I built. It’s on the App Store if you’re curious.</em></p>]]></content><author><name>Ravi Shankar</name></author><category term="indie" /><category term="indie-dev" /><category term="app-store" /><category term="watchos" /><category term="analytics" /><category term="marketing" /><summary type="html"><![CDATA[Built a Watch-only app and had zero App Store analytics for 18 months. Adding an iOS companion revealed everything — including backdated data from a marketing push I couldn't measure.]]></summary></entry><entry><title type="html">Fixing SwiftUI Performance Issues with AI-Assisted Debugging</title><link href="https://www.rshankar.com/fixing-swiftui-performance-with-ai/" rel="alternate" type="text/html" title="Fixing SwiftUI Performance Issues with AI-Assisted Debugging" /><published>2026-01-22T00:00:00+00:00</published><updated>2026-01-22T00:00:00+00:00</updated><id>https://www.rshankar.com/fixing-swiftui-performance-with-ai</id><content type="html" xml:base="https://www.rshankar.com/fixing-swiftui-performance-with-ai/"><![CDATA[<p>My app felt sluggish. Scrolling wasn’t smooth, and I could see occasional freezes. When I opened Instruments, the Hangs track told the story: <strong>14 micro-hangs in 38 seconds</strong>. That’s almost one hang every 3 seconds.</p>

<!--more-->

<p>Instead of manually digging through traces, I tried something different. I asked Claude Code to analyze my Instruments trace file directly. What followed was a systematic debugging session that not only fixed the issues but taught me patterns I’ll watch for in every SwiftUI app going forward.</p>

<h2 id="the-starting-point">The Starting Point</h2>

<p>Here’s what Instruments showed before any fixes:</p>

<p><img src="/assets/images/swiftui-perf-before.png" alt="Before - Instruments showing hangs and excessive SwiftUI updates" /></p>

<p>The SwiftUI template in Instruments revealed:</p>
<ul>
  <li><strong>76,645 total SwiftUI updates</strong> in 28 seconds</li>
  <li><strong>1.82 seconds</strong> of total update duration</li>
  <li>Multiple visible “Hang” blocks in the Hangs track</li>
  <li>Red markers indicating “Long View Body Updates”</li>
</ul>

<h2 id="ai-assisted-analysis">AI-Assisted Analysis</h2>

<p>Claude Code can read Instruments trace files programmatically using <code class="language-plaintext highlighter-rouge">xcrun xctrace export</code>. This extracts hang data, SwiftUI update counts, and timing information into XML that can be analyzed without clicking through the Instruments UI.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>xcrun xctrace <span class="nb">export</span> <span class="nt">--input</span> trace.trace <span class="se">\</span>
  <span class="nt">--xpath</span> <span class="s1">'//trace-toc[1]/run[1]/data[1]/table[9]'</span> <span class="se">\</span>
  <span class="nt">--output</span> hangs.xml
</code></pre></div></div>

<p>From the trace analysis, we identified five root causes.</p>

<h2 id="root-cause-1-stateobject-for-shared-singletons">Root Cause 1: @StateObject for Shared Singletons</h2>

<p>Almost every view in my app had this pattern:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// WRONG</span>
<span class="kd">@StateObject</span> <span class="kd">private</span> <span class="k">var</span> <span class="nv">themeManager</span> <span class="o">=</span> <span class="kt">ThemeManager</span><span class="o">.</span><span class="n">shared</span>
</code></pre></div></div>

<p>This was in 30+ views. The problem? <code class="language-plaintext highlighter-rouge">@StateObject</code> is meant for objects the view <strong>owns and creates</strong>. For shared singletons, every view was independently subscribing to changes, causing cascading redraws across the entire view hierarchy.</p>

<p><strong>The fix:</strong></p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// RIGHT</span>
<span class="kd">@ObservedObject</span> <span class="kd">private</span> <span class="k">var</span> <span class="nv">themeManager</span> <span class="o">=</span> <span class="kt">ThemeManager</span><span class="o">.</span><span class="n">shared</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">@ObservedObject</code> tells SwiftUI “I’m observing this, but I don’t own it.” The subscription is shared, not duplicated.</p>

<h2 id="root-cause-2-dateformatter-in-computed-properties">Root Cause 2: DateFormatter in Computed Properties</h2>

<p>My timeline view grouped items by date:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// WRONG - runs on EVERY body evaluation</span>
<span class="kd">private</span> <span class="k">var</span> <span class="nv">groupedItems</span><span class="p">:</span> <span class="p">[</span><span class="kt">String</span><span class="p">:</span> <span class="p">[</span><span class="kt">Item</span><span class="p">]]</span> <span class="p">{</span>
    <span class="k">let</span> <span class="nv">formatter</span> <span class="o">=</span> <span class="kt">DateFormatter</span><span class="p">()</span>  <span class="c1">// Created every render!</span>
    <span class="n">formatter</span><span class="o">.</span><span class="n">dateStyle</span> <span class="o">=</span> <span class="o">.</span><span class="n">medium</span>
    <span class="c1">// ... grouping logic using formatter</span>
<span class="p">}</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">DateFormatter</code> is expensive to create. And computed properties in SwiftUI views run every time <code class="language-plaintext highlighter-rouge">body</code> is evaluated. With frequent updates, this was creating hundreds of formatters per second.</p>

<p><strong>The fix:</strong></p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Cache the result in @State</span>
<span class="kd">@State</span> <span class="kd">private</span> <span class="k">var</span> <span class="nv">cachedSections</span><span class="p">:</span> <span class="p">[</span><span class="kt">TimelineSection</span><span class="p">]</span> <span class="o">=</span> <span class="p">[]</span>

<span class="k">var</span> <span class="nv">body</span><span class="p">:</span> <span class="kd">some</span> <span class="kt">View</span> <span class="p">{</span>
    <span class="c1">// Use cachedSections instead of computed property</span>
<span class="p">}</span>
<span class="o">.</span><span class="nf">task</span><span class="p">(</span><span class="nv">id</span><span class="p">:</span> <span class="n">items</span><span class="o">.</span><span class="n">count</span><span class="p">)</span> <span class="p">{</span>
    <span class="n">cachedSections</span> <span class="o">=</span> <span class="k">Self</span><span class="o">.</span><span class="nf">groupItemsIntoSections</span><span class="p">(</span><span class="n">items</span><span class="p">)</span>
<span class="p">}</span>

<span class="c1">// Static method with formatter created once</span>
<span class="kd">private</span> <span class="kd">static</span> <span class="kd">func</span> <span class="nf">groupItemsIntoSections</span><span class="p">(</span><span class="n">_</span> <span class="nv">items</span><span class="p">:</span> <span class="p">[</span><span class="kt">Item</span><span class="p">])</span> <span class="o">-&gt;</span> <span class="p">[</span><span class="kt">TimelineSection</span><span class="p">]</span> <span class="p">{</span>
    <span class="k">let</span> <span class="nv">formatter</span> <span class="o">=</span> <span class="kt">DateFormatter</span><span class="p">()</span>
    <span class="n">formatter</span><span class="o">.</span><span class="n">dateStyle</span> <span class="o">=</span> <span class="o">.</span><span class="n">medium</span>
    <span class="c1">// ... grouping logic</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="root-cause-3-vstack-instead-of-lazyvstack">Root Cause 3: VStack Instead of LazyVStack</h2>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// WRONG - renders ALL items immediately</span>
<span class="kt">ScrollView</span> <span class="p">{</span>
    <span class="kt">VStack</span><span class="p">(</span><span class="nv">spacing</span><span class="p">:</span> <span class="mi">24</span><span class="p">)</span> <span class="p">{</span>
        <span class="kt">ForEach</span><span class="p">(</span><span class="n">items</span><span class="p">)</span> <span class="p">{</span> <span class="n">item</span> <span class="k">in</span>
            <span class="kt">ItemRow</span><span class="p">(</span><span class="nv">item</span><span class="p">:</span> <span class="n">item</span><span class="p">)</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>With 100+ items, <code class="language-plaintext highlighter-rouge">VStack</code> forces SwiftUI to render every single row upfront, even those far off-screen.</p>

<p><strong>The fix:</strong></p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// RIGHT - only renders visible items</span>
<span class="kt">ScrollView</span> <span class="p">{</span>
    <span class="kt">LazyVStack</span><span class="p">(</span><span class="nv">spacing</span><span class="p">:</span> <span class="mi">24</span><span class="p">)</span> <span class="p">{</span>
        <span class="kt">ForEach</span><span class="p">(</span><span class="n">items</span><span class="p">)</span> <span class="p">{</span> <span class="n">item</span> <span class="k">in</span>
            <span class="kt">ItemRow</span><span class="p">(</span><span class="nv">item</span><span class="p">:</span> <span class="n">item</span><span class="p">)</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="root-cause-4-unstable-foreach-identity">Root Cause 4: Unstable ForEach Identity</h2>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// WRONG - String identity is unstable when array changes</span>
<span class="kt">ForEach</span><span class="p">(</span><span class="n">item</span><span class="o">.</span><span class="n">tags</span><span class="p">,</span> <span class="nv">id</span><span class="p">:</span> <span class="p">\</span><span class="o">.</span><span class="k">self</span><span class="p">)</span> <span class="p">{</span> <span class="n">tag</span> <span class="k">in</span>
    <span class="kt">TagView</span><span class="p">(</span><span class="nv">tag</span><span class="p">:</span> <span class="n">tag</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<p>With <code class="language-plaintext highlighter-rouge">id: \.self</code>, a string’s value <em>is</em> its identity. If the array contains duplicate strings, or elements are inserted or removed, SwiftUI can’t reliably match old rows to new ones. This causes unnecessary view recreation.</p>

<p><strong>The fix:</strong></p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// RIGHT - stable identity using offset</span>
<span class="kt">ForEach</span><span class="p">(</span><span class="kt">Array</span><span class="p">(</span><span class="n">item</span><span class="o">.</span><span class="n">tags</span><span class="o">.</span><span class="nf">enumerated</span><span class="p">()),</span> <span class="nv">id</span><span class="p">:</span> <span class="p">\</span><span class="o">.</span><span class="n">offset</span><span class="p">)</span> <span class="p">{</span> <span class="n">_</span><span class="p">,</span> <span class="n">tag</span> <span class="k">in</span>
    <span class="kt">TagView</span><span class="p">(</span><span class="nv">tag</span><span class="p">:</span> <span class="n">tag</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="root-cause-5-array-mutation-during-scroll">Root Cause 5: Array Mutation During Scroll</h2>

<p>In my Wishlist screen, voting on an item would immediately re-sort the array:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// WRONG - sorting during user interaction</span>
<span class="kd">func</span> <span class="nf">voteFor</span><span class="p">(</span><span class="nv">row</span><span class="p">:</span> <span class="kt">RowData</span><span class="p">)</span> <span class="p">{</span>
    <span class="n">rows</span><span class="p">[</span><span class="n">index</span><span class="p">]</span><span class="o">.</span><span class="n">votes</span> <span class="o">+=</span> <span class="mi">1</span>
    <span class="n">rows</span><span class="o">.</span><span class="n">sort</span> <span class="p">{</span> <span class="nv">$0</span><span class="o">.</span><span class="n">votes</span> <span class="o">&gt;</span> <span class="nv">$1</span><span class="o">.</span><span class="n">votes</span> <span class="p">}</span>  <span class="c1">// Crash risk!</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Mutating an array while SwiftUI is diffing the view hierarchy (during scroll) can cause crashes or visual glitches.</p>

<p><strong>The fix:</strong></p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// RIGHT - defer the sort</span>
<span class="kd">func</span> <span class="nf">voteFor</span><span class="p">(</span><span class="nv">row</span><span class="p">:</span> <span class="kt">RowData</span><span class="p">)</span> <span class="p">{</span>
    <span class="n">rows</span><span class="p">[</span><span class="n">index</span><span class="p">]</span><span class="o">.</span><span class="n">votes</span> <span class="o">+=</span> <span class="mi">1</span>

    <span class="kt">DispatchQueue</span><span class="o">.</span><span class="n">main</span><span class="o">.</span><span class="nf">asyncAfter</span><span class="p">(</span><span class="nv">deadline</span><span class="p">:</span> <span class="o">.</span><span class="nf">now</span><span class="p">()</span> <span class="o">+</span> <span class="mf">0.3</span><span class="p">)</span> <span class="p">{</span> <span class="p">[</span><span class="k">weak</span> <span class="k">self</span><span class="p">]</span> <span class="k">in</span>
        <span class="k">self</span><span class="p">?</span><span class="o">.</span><span class="n">rows</span><span class="o">.</span><span class="n">sort</span> <span class="p">{</span> <span class="nv">$0</span><span class="o">.</span><span class="n">votes</span> <span class="o">&gt;</span> <span class="nv">$1</span><span class="o">.</span><span class="n">votes</span> <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<h2 id="the-result">The Result</h2>

<p>After applying all fixes:</p>

<p><img src="/assets/images/swiftui-perf-after.png" alt="After - Clean Hangs track with reduced SwiftUI updates" /></p>

<table>
  <thead>
    <tr>
      <th>Metric</th>
      <th>Before</th>
      <th>After</th>
      <th>Improvement</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>SwiftUI Updates</td>
      <td>76,645</td>
      <td>54,294</td>
      <td>29% fewer</td>
    </tr>
    <tr>
      <td>Total Duration</td>
      <td>1.82s</td>
      <td>965ms</td>
      <td>47% faster</td>
    </tr>
    <tr>
      <td>Visible Hangs</td>
      <td>Multiple</td>
      <td>Nearly clean</td>
      <td>Significant</td>
    </tr>
  </tbody>
</table>

<p>The Hangs track went from showing multiple orange blocks to being nearly clean. The app feels noticeably smoother.</p>

<h2 id="lessons-learned">Lessons Learned</h2>

<p>Using AI to fix code is fast, but it doesn’t teach you anything unless you ask why. Here are the patterns I’ll now watch for in every SwiftUI project:</p>

<h3 id="quick-checklist">Quick Checklist</h3>

<table>
  <thead>
    <tr>
      <th>Pattern</th>
      <th>Problem</th>
      <th>Fix</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">@StateObject</code> with <code class="language-plaintext highlighter-rouge">.shared</code></td>
      <td>Duplicate subscriptions</td>
      <td>Use <code class="language-plaintext highlighter-rouge">@ObservedObject</code></td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">DateFormatter()</code> in computed property</td>
      <td>Created every render</td>
      <td>Cache in <code class="language-plaintext highlighter-rouge">@State</code>, compute in <code class="language-plaintext highlighter-rouge">.task</code></td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">VStack</code> with <code class="language-plaintext highlighter-rouge">ForEach</code> of many items</td>
      <td>Eager rendering</td>
      <td>Use <code class="language-plaintext highlighter-rouge">LazyVStack</code></td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">ForEach</code> with <code class="language-plaintext highlighter-rouge">id: \.self</code> on strings</td>
      <td>Unstable identity</td>
      <td>Use <code class="language-plaintext highlighter-rouge">.enumerated()</code> with <code class="language-plaintext highlighter-rouge">id: \.offset</code></td>
    </tr>
    <tr>
      <td>Array mutation during scroll</td>
      <td>Crashes, glitches</td>
      <td>Defer with <code class="language-plaintext highlighter-rouge">asyncAfter</code></td>
    </tr>
  </tbody>
</table>

<h3 id="when-to-run-instruments">When to Run Instruments</h3>

<p>Don’t wait until the app feels slow. Run the SwiftUI template in Instruments:</p>
<ul>
  <li>After implementing a new list view</li>
  <li>When adding new data sources</li>
  <li>Before any release</li>
  <li>When you see “View body took too long” warnings in Xcode</li>
</ul>

<h2 id="using-ai-as-a-learning-partner">Using AI as a Learning Partner</h2>

<p>The real value of AI-assisted debugging isn’t the fix—it’s the explanation. Instead of asking “fix this,” ask:</p>

<ul>
  <li>“Why is this causing a hang?”</li>
  <li>“Explain what’s wrong with this pattern”</li>
  <li>“What should I look for in this Instruments trace?”</li>
</ul>

<p>That way you build intuition, not dependency.</p>

<h2 id="summary">Summary</h2>

<p>SwiftUI performance issues often come from a few common patterns: wrong property wrappers, expensive computed properties, eager view loading, and unstable identities. Tools like Instruments show you where the problems are; understanding why they happen helps you avoid them in the first place.</p>

<p>The code that runs on every render must be cheap. Everything else should be cached and computed in the background.</p>]]></content><author><name>Ravi Shankar</name></author><category term="swiftui" /><category term="swiftui" /><category term="performance" /><category term="instruments" /><category term="debugging" /><category term="ai" /><category term="claude-code" /><category term="optimization" /><summary type="html"><![CDATA[How I used Claude Code to identify and fix SwiftUI performance issues, reducing hangs from 14 to 2 in my iOS app. Plus the lessons learned to spot these issues yourself.]]></summary></entry><entry><title type="html">How I Made a Marketing Video for My Mac App Using AI — And What Went Wrong</title><link href="https://www.rshankar.com/ai-marketing-video-mac-app/" rel="alternate" type="text/html" title="How I Made a Marketing Video for My Mac App Using AI — And What Went Wrong" /><published>2026-01-08T00:00:00+00:00</published><updated>2026-01-08T00:00:00+00:00</updated><id>https://www.rshankar.com/ai-marketing-video-mac-app</id><content type="html" xml:base="https://www.rshankar.com/ai-marketing-video-mac-app/"><![CDATA[<p>I built a Mac app called EaseEyes. It reminds you to take eye breaks when you’re not in a meeting. Simple idea, but I needed a way to show people what it does.</p>

<p>I’m not a video editor. I don’t have a budget for agencies or stock footage subscriptions. But I already pay for Claude Code Max ($100/month) for development work. What if I could use it for marketing too?</p>

<p>Here’s what happened.</p>

<!--more-->

<h2 id="table-of-contents">Table of Contents</h2>
<ul>
  <li><a href="#goal">The Goal</a></li>
  <li><a href="#tools">The Tools</a></li>
  <li><a href="#video-clips">Generating Video Clips</a></li>
  <li><a href="#screenshots">Using Real Screenshots</a></li>
  <li><a href="#voiceover">Generating Voiceover</a></li>
  <li><a href="#music">Adding Music</a></li>
  <li><a href="#vertical">Creating Vertical Versions</a></li>
  <li><a href="#end-card">Adding End Card</a></li>
  <li><a href="#result">Final Result</a></li>
  <li><a href="#learnings">Key Learnings</a></li>
  <li><a href="#conclusion">Would I Do It Again?</a></li>
</ul>

<h2 id="goal">The Goal</h2>

<p>A 30-second ad that shows the problem (endless screen time) and the solution (EaseEyes). Something I could post on YouTube, Twitter, and Instagram Reels.</p>

<p>I wrote a simple script:</p>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code>We spend our days in meetings.
Hour after hour.
Our eyes pay the price.

But what if your screen could be... kinder?

EaseEyes.
It knows when you're in a call.
It reminds you when you're not.

Your eyes deserve a break.
</code></pre></div></div>

<p>Now I needed visuals to match.</p>

<h2 id="tools">The Tools</h2>

<p>Here’s what I used:</p>

<table>
  <thead>
    <tr>
      <th>Tool</th>
      <th>Cost</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Google Veo</strong></td>
      <td>Free tier</td>
      <td>AI video generation</td>
    </tr>
    <tr>
      <td><strong>ElevenLabs</strong></td>
      <td>Free tier</td>
      <td>AI voiceover</td>
    </tr>
    <tr>
      <td><strong>Claude Code Max</strong></td>
      <td>$100/mo (already paying)</td>
      <td>ffmpeg orchestration</td>
    </tr>
    <tr>
      <td><strong>My screenshots</strong></td>
      <td>Free</td>
      <td>Actual app UI</td>
    </tr>
  </tbody>
</table>

<p><strong>Total additional cost:</strong> $0</p>

<p>This is the beauty of the free tiers—if you’re strategic, you can create real marketing assets without new expenses.</p>

<h2 id="video-clips">Generating Video Clips</h2>

<p>I used Google Veo to generate 8 short clips:</p>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Shot List:
<span class="p">1.</span> Close-up of tired eyes
<span class="p">2.</span> Person at desk with Zoom calls
<span class="p">3.</span> Clock showing time passing
<span class="p">4.</span> Person rubbing their eyes
<span class="p">5.</span> EaseEyes notification appearing
<span class="p">6.</span> Person looking out window
<span class="p">7.</span> Back to video call, notification pauses
<span class="p">8.</span> Person smiling, refreshed
</code></pre></div></div>

<p>For each one, I wrote a prompt like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Professional home office, person sitting at desk with multiple monitors
showing video call grid, afternoon light through window, cinematic
</code></pre></div></div>

<p>Veo generated 8-second clips for each. But here’s where things got interesting.</p>

<h3 id="what-went-wrong-1-different-people-in-every-shot">What Went Wrong #1: Different People in Every Shot</h3>

<p>Each time Veo generated a clip, it created a different person. My “tired office worker” was a different human in every single shot.</p>

<p><strong>The fix for next time:</strong> Use an image model like Google’s Nano Banana to generate a reference person first, then pass that reference image to each video prompt. This keeps the same “actor” throughout.</p>

<p>For this video, I decided to lean into it—framing it as “different people, same problem.”</p>

<h3 id="what-went-wrong-2-the-clock-didnt-work">What Went Wrong #2: The Clock Didn’t Work</h3>

<p>My prompt said “time-lapse of clock.” Veo gave me a clock where only the second hand moved. The hour and minute hands stayed frozen.</p>

<p><strong>The lesson:</strong> AI prompts need to be specific about what should animate.</p>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code>❌ Bad prompt:
Time-lapse of analog clock on wall

✅ Better prompt:
Time-lapse of analog clock, hour hand and minute hand visibly moving
clockwise showing passage of several hours, light transitioning from
afternoon to evening
</code></pre></div></div>

<h3 id="what-went-wrong-3-veo-watermark">What Went Wrong #3: Veo Watermark</h3>

<p>Every Veo clip had a watermark in the bottom-right corner. Not ideal for a polished ad.</p>

<p><strong>The fix:</strong> I used ffmpeg to crop 60 pixels from the bottom and right edges, then scaled back to 1080p. Watermark gone.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ffmpeg <span class="nt">-i</span> input.mp4 <span class="se">\</span>
  <span class="nt">-vf</span> <span class="s2">"crop=1860:1020:0:0,scale=1920:1080"</span> <span class="se">\</span>
  output.mp4
</code></pre></div></div>

<h2 id="screenshots">Using Real Screenshots</h2>

<p>For shots 5 and 7 (showing the actual EaseEyes notification), I didn’t use AI. I took real screenshots of my app.</p>

<p><strong>Why?</strong> Because AI mockups of your own UI never look right. Real screenshots are more authentic and show exactly what users will get.</p>

<p>I used a Ken Burns effect (slow zoom) to add motion:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ffmpeg <span class="nt">-loop</span> 1 <span class="nt">-i</span> screenshot.png <span class="se">\</span>
  <span class="nt">-vf</span> <span class="s2">"zoompan=z='1+0.15*on/100':x='iw-iw/zoom':y='0':d=100:s=1920x1080"</span> <span class="se">\</span>
  <span class="nt">-t</span> 4 output.mp4
</code></pre></div></div>

<p>This zooms slowly into the top-right corner where the notification appears.</p>

<p><strong>Recommendation:</strong> Always use real screenshots for your actual product UI. Let AI handle the conceptual/emotional shots.</p>

<h2 id="voiceover">Generating Voiceover</h2>

<p>I used ElevenLabs to generate the narration. But here’s a key learning:</p>

<p><strong>Don’t generate one long voiceover clip.</strong></p>

<p>Instead, generate each line separately:</p>
<ul>
  <li>“We spend our days in meetings.”</li>
  <li>“Hour after hour.”</li>
  <li>“Our eyes pay the price.”</li>
  <li>etc.</li>
</ul>

<p>This lets you place each line at exactly the right moment to sync with the visuals.</p>
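<p>For reference, here is roughly how those separately generated lines can be dropped onto the timeline with ffmpeg (the filenames and millisecond offsets below are made up for illustration):</p>

```shell
# Hypothetical clip names and start times. adelay takes milliseconds
# (one value per channel); duration=first keeps the mix as long as
# the music bed rather than the longest voiceover clip.
ffmpeg -i music.mp3 -i line1.mp3 -i line2.mp3 \
  -filter_complex "\
[1:a]adelay=2000|2000[v1];\
[2:a]adelay=6500|6500[v2];\
[0:a][v1][v2]amix=inputs=3:duration=first[mix]" \
  -map "[mix]" narration-mix.m4a
```

Each line gets its own <code class="language-plaintext highlighter-rouge">adelay</code>, so nudging a clip to hit its shot is just editing one number.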

<h3 id="what-went-wrong-4-voiceover-timing">What Went Wrong #4: Voiceover Timing</h3>

<p>My first attempt had the voiceover starting at shot boundaries. “Hour after hour” played during the home office shot, not the clock.</p>

<p><strong>The fix:</strong> Map voiceover to visual meaning, not timing.</p>

<table>
  <thead>
    <tr>
      <th>Shot</th>
      <th>Visual</th>
      <th>Voiceover</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td>Tired eyes</td>
      <td><em>(music only — visual hook)</em></td>
    </tr>
    <tr>
      <td>2</td>
      <td>Home office</td>
      <td>“We spend our days in meetings.”</td>
    </tr>
    <tr>
      <td>3</td>
      <td>Clock</td>
      <td>“Hour after hour.”</td>
    </tr>
    <tr>
      <td>4</td>
      <td>Rubbing eyes</td>
      <td>“Our eyes pay the price.”</td>
    </tr>
  </tbody>
</table>

<p>Now “hour after hour” plays when you see the clock. Much better.</p>

<h2 id="music">Adding Music</h2>

<p>I found a relaxing ambient track and mixed it at about 12% volume—loud enough to set the mood, quiet enough not to compete with the voiceover.</p>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Key settings:
<span class="p">-</span> Fade in over 2 seconds at the start
<span class="p">-</span> Fade out over 2 seconds at the end
<span class="p">-</span> Keep it low when voiceover is playing
<span class="p">-</span> Use royalty-free music (YouTube Audio Library, Epidemic Sound)
</code></pre></div></div>
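<p>In ffmpeg terms, those settings boil down to one filter chain on the music track. A sketch — the fade-out start time (33s) assumes the 35-second cut, so adjust it to your video length:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># volume=0.12 is the ~12% level; afade times are seconds on the music track.
MUSIC_FILTER="volume=0.12,afade=t=in:d=2,afade=t=out:st=33:d=2"
echo "$MUSIC_FILTER"

# ffmpeg -i voiced.mp4 -i music.mp3 \
#   -filter_complex "[1:a]$MUSIC_FILTER[bg];[0:a][bg]amix=inputs=2:duration=first[aout]" \
#   -map 0:v -map "[aout]" -c:v copy final.mp4
</code></pre></div></div>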

<h2 id="vertical">Creating Vertical Versions</h2>

<p>Here’s something I wish I’d planned from the start: <strong>vertical video matters more than horizontal.</strong></p>

<p>YouTube Shorts, Instagram Reels, TikTok—they’re all 9:16 vertical. That’s where the eyeballs are.</p>

<p>I had to convert my horizontal videos to vertical after the fact. For most shots, center crop worked fine. But for the notification screenshots, I needed a “smart crop” that kept the right side of the frame where the UI appears.</p>

<p><strong>Lesson for next time:</strong> Plan for vertical first. Frame your AI prompts so subjects are centered.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Convert 1920x1080 horizontal to vertical: crop a centered 9:16 slice, scale up</span>
ffmpeg <span class="nt">-i</span> horizontal.mp4 <span class="se">\</span>
  <span class="nt">-vf</span> <span class="s2">"crop=608:1080:656:0,scale=1080:1920"</span> <span class="se">\</span>
  vertical.mp4

<span class="c"># Smart crop (keep the right side of the frame where the UI appears)</span>
ffmpeg <span class="nt">-i</span> horizontal.mp4 <span class="se">\</span>
  <span class="nt">-vf</span> <span class="s2">"crop=608:1080:1312:0,scale=1080:1920"</span> <span class="se">\</span>
  vertical-right.mp4
</code></pre></div></div>

<h2 id="end-card">Adding End Card</h2>

<p>Every marketing video needs a call-to-action. I created a simple end card with the App Store badge and added it as the final 4 seconds.</p>

<p>Made sure to create both:</p>
<ul>
  <li>Horizontal version (1920x1080)</li>
  <li>Vertical version (1080x1920)</li>
</ul>
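<p>Appending the card is a two-step ffmpeg job: render the PNG as a 4-second clip with silent audio, then concatenate. A sketch with placeholder filenames — both inputs must match in resolution and frame rate:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># concat needs matching streams, so the end card gets a silent audio track too.
CONCAT="[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]"
echo "$CONCAT"

# 1) PNG -> 4-second clip with silent stereo audio
# ffmpeg -loop 1 -t 4 -i endcard.png \
#   -f lavfi -t 4 -i anullsrc=channel_layout=stereo:sample_rate=44100 \
#   -vf "scale=1920:1080,format=yuv420p" -r 30 endcard.mp4
# 2) Append it to the main video
# ffmpeg -i main.mp4 -i endcard.mp4 \
#   -filter_complex "$CONCAT" -map "[v]" -map "[a]" with-endcard.mp4
</code></pre></div></div>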

<h2 id="result">Final Result</h2>

<p>Four videos ready to post:</p>

<table>
  <thead>
    <tr>
      <th>Platform</th>
      <th>Aspect</th>
      <th>Duration</th>
      <th>Use Case</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>YouTube/Twitter</td>
      <td>16:9</td>
      <td>35 seconds</td>
      <td>Standard video posts</td>
    </tr>
    <tr>
      <td>YouTube (long)</td>
      <td>16:9</td>
      <td>62 seconds</td>
      <td>More detail</td>
    </tr>
    <tr>
      <td>YouTube Shorts</td>
      <td>9:16</td>
      <td>34 seconds</td>
      <td>Short-form vertical</td>
    </tr>
    <tr>
      <td>Instagram Reels</td>
      <td>9:16</td>
      <td>60 seconds</td>
      <td>Full vertical experience</td>
    </tr>
  </tbody>
</table>

<p><strong>Total time:</strong> About 3 hours<br />
<strong>Additional cost:</strong> $0 (used existing Claude Code Max subscription)</p>

<p><a href="https://youtube.com/shorts/WBacqQe6cnU">Watch the final result on YouTube Shorts</a></p>

<h2 id="learnings">Key Learnings</h2>

<div class="language-markdown highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">1.</span> Generate a reference person first
   → Keep the same "actor" across all shots
<span class="p">
2.</span> Be specific in AI prompts
   → Say exactly what should move and how
<span class="p">
3.</span> Use real screenshots for your UI
   → More authentic than AI mockups
<span class="p">
4.</span> Generate voiceover per line
   → Enables precise timing
<span class="p">
5.</span> Map voiceover to visual meaning
   → Not just shot boundaries
<span class="p">
6.</span> Plan for vertical first
   → Shorts/Reels are where the audience is
<span class="p">
7.</span> Crop out watermarks
   → Simple ffmpeg fix
<span class="p">
8.</span> First shot can be music-only
   → Lets the visual hook establish
</code></pre></div></div>

<h3 id="cost-breakdown">Cost Breakdown</h3>

<table>
  <thead>
    <tr>
      <th>Item</th>
      <th>Cost</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Google Veo</td>
      <td>$0 (free tier)</td>
    </tr>
    <tr>
      <td>ElevenLabs</td>
      <td>$0 (free tier)</td>
    </tr>
    <tr>
      <td>Claude Code Max</td>
      <td>$0 (already paying for dev)</td>
    </tr>
    <tr>
      <td>ffmpeg</td>
      <td>$0 (open source)</td>
    </tr>
    <tr>
      <td>Music</td>
      <td>$0 (YouTube Audio Library)</td>
    </tr>
    <tr>
      <td><strong>Total</strong></td>
      <td><strong>$0</strong></td>
    </tr>
  </tbody>
</table>

<p>Compare this to hiring a video agency ($2,000-5,000) or stock footage subscriptions ($30-50/month).</p>

<h2 id="conclusion">Would I Do It Again?</h2>

<p>Absolutely. The process wasn’t perfect, but I ended up with a real marketing video that I can use across all platforms.</p>

<p>The AI tools aren’t magic—you still need to direct them carefully and fix their mistakes. But for an indie developer with no video budget, this workflow is a game changer.</p>

<p><strong>When this approach makes sense:</strong></p>
<ul>
  <li>Solo developer with limited budget</li>
  <li>Need quick marketing assets</li>
  <li>Willing to iterate and fix AI mistakes</li>
  <li>Have basic command-line skills (ffmpeg)</li>
  <li>Already using AI tools for development</li>
</ul>

<p><strong>When to hire a professional:</strong></p>
<ul>
  <li>Brand video for funding pitch</li>
  <li>High-stakes product launch</li>
  <li>Need perfect execution first time</li>
  <li>Budget allows it (revenue &gt; $5k/month)</li>
</ul>

<h3 id="further-reading">Further Reading</h3>

<ul>
  <li><a href="/indie/career/2024/08/19/indie-developer-essential-tools.html">Essential Tools for Indie Developers</a></li>
  <li><a href="/indie/career/2024/05/08/indie-developer-costs.html">Real Cost of Being an Indie iOS Developer</a></li>
  <li><a href="/indie/career/2024/06/25/mvp-mindset-ship-fast.html">MVP Mindset: Ship Fast</a></li>
</ul>

<hr />

<p><em>EaseEyes is a free Mac app that reminds you to take eye breaks. <a href="https://apps.apple.com/us/app/eye-rest-timer-ease-eyes/id6475638039">Download on the Mac App Store</a></em></p>

<hr />

<p><em>The best marketing video is the one you actually ship.</em></p>]]></content><author><name>Ravi Shankar</name></author><category term="indie-development" /><category term="marketing" /><category term="ai-tools" /><category term="AI video generation" /><category term="Google Veo" /><category term="ElevenLabs" /><category term="ffmpeg" /><category term="marketing video" /><category term="Mac app" /><category term="indie developer" /><category term="EaseEyes" /><summary type="html"><![CDATA[Learn how I created a 30-second marketing video for my Mac app using Google Veo, ElevenLabs, and ffmpeg with zero additional budget—and the mistakes I made along the way.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/app-icons/eye-rest-icon.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/app-icons/eye-rest-icon.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How to Handle HealthKit Permission Denial Gracefully</title><link href="https://www.rshankar.com/graceful-degradation-healthkit/" rel="alternate" type="text/html" title="How to Handle HealthKit Permission Denial Gracefully" /><published>2026-01-01T00:00:00+00:00</published><updated>2026-01-01T00:00:00+00:00</updated><id>https://www.rshankar.com/graceful-degradation-healthkit</id><content type="html" xml:base="https://www.rshankar.com/graceful-degradation-healthkit/"><![CDATA[<p>About 20% of users deny HealthKit permission when asked. If your app shows a “permission required” wall, you lose them immediately.</p>

<p>There’s a better way. Build fallbacks so your app works with reduced functionality instead of failing completely. This is called graceful degradation.</p>

<p><img src="/assets/images/healthkit-permission-flowchart.png" alt="HealthKit Permission Flowchart" />
<em>A flowchart showing two paths from “HealthKit Permission?” - “Yes” leads to “Automatic Tracking”, “No” leads to “Manual Entry”. Both paths converge to “Same Core Value”.</em></p>

<h2 id="the-problem-all-or-nothing">The Problem: All or Nothing</h2>

<p>Many apps do this:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="n">hasHealthKitPermission</span> <span class="p">{</span>
    <span class="kt">MainAppView</span><span class="p">()</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
    <span class="kt">PermissionDeniedView</span><span class="p">()</span> <span class="c1">// Dead end</span>
<span class="p">}</span>
</code></pre></div></div>

<p>User sees a wall. User uninstalls.</p>

<h2 id="the-fix-provide-an-alternative">The Fix: Provide an Alternative</h2>

<p>Instead of blocking users, adapt based on what’s available:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">enum</span> <span class="kt">DataSource</span> <span class="p">{</span>
    <span class="k">case</span> <span class="n">healthKit</span>
    <span class="k">case</span> <span class="n">manual</span>
<span class="p">}</span>

<span class="kd">class</span> <span class="kt">DataManager</span><span class="p">:</span> <span class="kt">ObservableObject</span> <span class="p">{</span>
    <span class="kd">@Published</span> <span class="k">var</span> <span class="nv">dataSource</span><span class="p">:</span> <span class="kt">DataSource</span> <span class="o">=</span> <span class="o">.</span><span class="n">healthKit</span>

    <span class="kd">func</span> <span class="nf">checkPermission</span><span class="p">()</span> <span class="p">{</span>
        <span class="n">healthKitManager</span><span class="o">.</span><span class="n">requestPermission</span> <span class="p">{</span> <span class="n">granted</span> <span class="k">in</span>
            <span class="k">self</span><span class="o">.</span><span class="n">dataSource</span> <span class="o">=</span> <span class="n">granted</span> <span class="p">?</span> <span class="o">.</span><span class="nv">healthKit</span> <span class="p">:</span> <span class="o">.</span><span class="n">manual</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Now both paths lead somewhere useful.</p>

<h2 id="example-sleep-tracker">Example: Sleep Tracker</h2>

<p>A sleep tracking app could work like this:</p>

<p><strong>With HealthKit:</strong> Automatic sleep data from Apple Watch or iPhone.</p>

<p><strong>Without HealthKit:</strong> Manual entry form with bed time, wake time, and optional quality rating.</p>

<p>The core value (tracking sleep patterns over time) works either way. HealthKit just makes it automatic.</p>

<h2 id="tiered-features">Tiered Features</h2>

<p>Structure your app in tiers based on available permissions:</p>

<p><strong>Basic (no HealthKit):</strong></p>
<ul>
  <li>Manual data entry</li>
  <li>Goals and reminders</li>
  <li>Basic statistics</li>
</ul>

<p><strong>Standard (partial HealthKit):</strong></p>
<ul>
  <li>Automatic tracking</li>
  <li>Quality scores</li>
  <li>Trends over time</li>
</ul>

<p><strong>Premium (full HealthKit):</strong></p>
<ul>
  <li>Heart rate analysis</li>
  <li>Activity correlation</li>
  <li>Health insights</li>
</ul>

<p>Users without permissions get the basic tier. They can still use the app. When they’re ready, they can unlock more.</p>

<h2 id="gentle-upsells">Gentle Upsells</h2>

<p>Don’t nag about permissions. Show the value contextually:</p>

<p>“See how your daily steps affect your sleep. Enable activity tracking to unlock this feature.”</p>

<p>Users understand why the permission matters because they’re already interested in that feature.</p>

<h2 id="when-they-change-their-mind">When They Change Their Mind</h2>

<p>If a user grants permission later, offer to import their manual entries. Keep their history intact.</p>

<h2 id="the-principle">The Principle</h2>

<p>Before requiring any permission, ask yourself:</p>

<ul>
  <li>What’s the core value of this app?</li>
  <li>Can users get that value without this permission?</li>
  <li>What’s a reasonable fallback?</li>
</ul>

<p>For a sleep tracker, the core value is understanding sleep patterns. Manual entry works. HealthKit makes it easier, but it’s not required.</p>

<p>For a step counter, you probably do need motion permission. But you could still show goals, history from manual entry, or educational content.</p>

<h2 id="why-this-matters">Why This Matters</h2>

<p>Users who deny permissions aren’t lost causes. They might:</p>
<ul>
  <li>Be privacy-conscious but still interested</li>
  <li>Want to try the app before granting access</li>
  <li>Have had bad experiences with other apps</li>
</ul>

<p>Give them a path forward. Some will grant permissions later after seeing the app’s value. Others will stay on manual mode and still leave good reviews.</p>

<p>Build for the users who say no. They might become your best advocates.</p>]]></content><author><name>Ravi Shankar</name></author><category term="ios" /><category term="ux" /><category term="graceful degradation" /><category term="healthkit" /><category term="user experience" /><summary type="html"><![CDATA[When users deny HealthKit permission, your app doesn't have to break. Build fallbacks that keep users engaged.]]></summary></entry><entry><title type="html">I Built 3 AI Tools for Indie Devs in 3 Days (No Backend Required)</title><link href="https://www.rshankar.com/built-3-ai-tools-indie-devs-no-backend-google-ai-studio/" rel="alternate" type="text/html" title="I Built 3 AI Tools for Indie Devs in 3 Days (No Backend Required)" /><published>2025-11-08T00:00:00+00:00</published><updated>2025-11-08T00:00:00+00:00</updated><id>https://www.rshankar.com/built-3-ai-tools-indie-devs-no-backend-google-ai-studio</id><content type="html" xml:base="https://www.rshankar.com/built-3-ai-tools-indie-devs-no-backend-google-ai-studio/"><![CDATA[<p>Here’s the indie developer paradox I live with:</p>

<p>I can build complex iOS apps with Core Data, CloudKit, HealthKit integration—the whole nine yards. But when it comes to simple tasks like writing launch checklists or social media captions, I freeze.</p>

<p>It’s not lack of skill. It’s decision fatigue.</p>

<p>By the time I’ve coded all day, designed UI, debugged edge cases, the thought of <em>also</em> planning a launch strategy or crafting marketing copy feels insurmountable.</p>

<p>So I built tools to handle it. Three AI micro-services in three days. No backend setup. No server management. Just clear problems and fast solutions.</p>

<p>If you’re an indie maker who’s great at building but struggles with everything around it, this is what I learned.<!--more--></p>

<h2 id="the-challenge-building--shipping">The Challenge: Building ≠ Shipping</h2>

<p>I’ve shipped over a dozen iOS apps. Some did well. Some flopped. The pattern I’ve noticed?</p>

<p><strong>The apps that succeeded weren’t necessarily better code.</strong></p>

<p>They succeeded because I:</p>
<ul>
  <li>Launched at the right time with the right messaging</li>
  <li>Stayed consistent with updates and social posts</li>
  <li>Tracked health metrics and iterated based on data</li>
  <li>Maintained momentum post-launch</li>
</ul>

<p>All the “non-coding” parts.</p>

<p>But here’s the thing: I’m an iOS developer. I love Swift, SwiftUI, solving technical problems. Marketing? Launch planning? Social media? Those feel like context-switching tax.</p>

<p><strong>The realization:</strong></p>

<p>What if I could build tiny AI tools that handle these tasks for me?</p>

<p>Not generic ChatGPT prompts. Custom micro-services designed exactly for my workflow.</p>

<p>That’s what I set out to build this week.</p>

<h2 id="why-google-ai-studio--cloud-run">Why Google AI Studio + Cloud Run?</h2>

<p>Coming from iOS development, I had specific criteria:</p>

<ul>
  <li>✅ <strong>Fast iteration</strong>: I wanted to build and test ideas in hours, not days</li>
  <li>✅ <strong>No infrastructure headache</strong>: I don’t want to manage servers or databases</li>
  <li>✅ <strong>Instant deployment</strong>: TestFlight takes days; I wanted to go live in minutes</li>
  <li>✅ <strong>Cheap</strong>: These are experiments, not revenue-generating products yet</li>
  <li>✅ <strong>Shareable</strong>: I wanted to send links to beta testers immediately</li>
</ul>

<p>Google AI Studio + Cloud Run hit all five.</p>

<p><strong>The workflow:</strong></p>

<ol>
  <li>Build the service in AI Studio (browser-based)</li>
  <li>Define input/output JSON schema</li>
  <li>Deploy to Cloud Run (one click)</li>
  <li>Get a live URL instantly</li>
  <li>Iterate based on real usage</li>
</ol>

<p>No Docker. No backend frameworks. No database migrations.</p>

<p>Just pure focus on the problem I’m solving.</p>

<h2 id="tool-1-ai-fitness-coach-built-in-10-minutes">Tool #1: AI Fitness Coach (Built in 10 Minutes)</h2>

<h3 id="the-problem">The Problem</h3>

<p>I’ve been building a fitness app for Apple Watch. Users track daily steps, set goals, receive motivational messages.</p>

<p>The challenge: <strong>Writing personalized coaching messages for different scenarios.</strong></p>

<ul>
  <li>3,500 steps at 9 AM → “Great start! Keep it up!”</li>
  <li>3,500 steps at 9 PM → “Late push needed! You can still hit your goal!”</li>
  <li>9,000 steps at 2 PM → “You’re crushing it today!”</li>
</ul>

<p>Writing these variations manually is tedious. I needed dynamic, context-aware messaging.</p>

<h3 id="the-solution">The Solution</h3>

<p>I built a web service that takes:</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Yesterday I built an AI image generator using <a href="https://twitter.com/GoogleAIStudio">@GoogleAIStudio</a>.<br /><br />Today I tried something different - an AI Fitness Coach web service 🏃‍♂️<br /><br />It takes your step count, goal, and time of day… then replies like a real coach. <a href="https://t.co/image">pic.twitter.com/image</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985731266571092180">November 4, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p><strong>Input:</strong></p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"current_steps"</span><span class="p">:</span><span class="w"> </span><span class="mi">3500</span><span class="p">,</span><span class="w">
  </span><span class="nl">"goal_steps"</span><span class="p">:</span><span class="w"> </span><span class="mi">10000</span><span class="p">,</span><span class="w">
  </span><span class="nl">"time_of_day"</span><span class="p">:</span><span class="w"> </span><span class="s2">"evening"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"coaching_tone"</span><span class="p">:</span><span class="w"> </span><span class="s2">"coach"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p><strong>Output:</strong></p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"headline_text"</span><span class="p">:</span><span class="w"> </span><span class="s2">"3.5K"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"message_short"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Late push needed!"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"message_long"</span><span class="p">:</span><span class="w"> </span><span class="s2">"It's evening, and you've logged 3,500 steps. That's solid effort, but you've got work to do. Your goal is 10,000—time to lace up and close the gap. Let's finish strong!"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<h3 id="how-i-built-it">How I Built It</h3>

<p><strong>Step 1: Defined the schema in Google AI Studio</strong></p>

<p>Instead of building forms and UI first (my iOS instinct), I started with the data contract. What goes in? What comes out?</p>

<p>This shift in thinking—from UI-first to data-first—was surprisingly freeing.</p>

<p><strong>Step 2: Prompted the AI with context</strong></p>

<p>I gave Gemini 2.5 Pro this context:</p>

<blockquote>
  <p>“You’re a fitness coach. Based on someone’s current steps, goal, and time of day, generate motivational messages. Adjust tone: if it’s morning and they’re ahead, celebrate. If it’s evening and they’re behind, motivate urgency without guilt.”</p>
</blockquote>

<p><strong>Step 3: Tested with different scenarios</strong></p>

<ul>
  <li>Morning + low steps = “Get moving early!”</li>
  <li>Afternoon + high steps = “You’re crushing it!”</li>
  <li>Evening + behind = “Late push, you’ve got this!”</li>
</ul>

<p><strong>Step 4: Deployed to Cloud Run</strong></p>

<p>One click. Live in 30 seconds.</p>

<p><strong>Total time:</strong> ~10 minutes.</p>

<h3 id="the-apple-watch-connection">The Apple Watch Connection</h3>

<p>Here’s where it gets interesting.</p>

<p>I previewed how these messages would look on Apple Watch and iPhone. Same data, different screen layouts, consistent motivational tone.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The preview shows how the message would look on an Apple Watch and iPhone.<br /><br />Same data → different screen layouts → same motivational tone.</p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985731266571092180">November 4, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>This is the bridge I’ve been looking for: <strong>AI-generated content + native Apple experiences.</strong></p>

<p>My SwiftUI app can call this Cloud Run endpoint, get contextual messages, and display them beautifully on Apple Watch complications or iPhone widgets.</p>
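<p>Before wiring the endpoint into SwiftUI, I can exercise the contract from the command line. A sketch — the URL and <code class="language-plaintext highlighter-rouge">/generate</code> path are placeholders (Cloud Run assigns the real URL):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Placeholder endpoint — substitute the URL Cloud Run gives you.
REQUEST='{"current_steps": 3500, "goal_steps": 10000, "time_of_day": "evening", "coaching_tone": "coach"}'
echo "$REQUEST"

# curl -s -X POST "https://YOUR-SERVICE.run.app/generate" \
#   -H "Content-Type: application/json" \
#   -d "$REQUEST"
</code></pre></div></div>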

<p><strong>Rookie mistake #1:</strong></p>

<p>I initially tried to generate the UI in AI Studio too. Bad idea. The web preview looked fine, but translating HTML/CSS to SwiftUI was messy.</p>

<p><strong>The fix:</strong> Use AI Studio for <em>logic and content</em>. Use SwiftUI for <em>native UI</em>. They’re perfect partners.</p>

<h3 id="what-i-learned">What I Learned</h3>

<p><strong>1. Tiny services are powerful</strong></p>

<p>This isn’t a full-featured fitness app. It’s one micro-service: steps + context → motivational message.</p>

<p>But that’s all I needed. By keeping scope small, I shipped fast.</p>

<p><strong>2. Context matters more than cleverness</strong></p>

<p>The AI doesn’t need complex prompts. It needs <em>context</em>: time of day, progress percentage, user’s goal.</p>

<p>Give good context, get good output.</p>

<p><strong>3. JSON schemas force clarity</strong></p>

<p>Defining input/output upfront made me think through edge cases:</p>
<ul>
  <li>What if <code class="language-plaintext highlighter-rouge">current_steps</code> &gt; <code class="language-plaintext highlighter-rouge">goal_steps</code>?</li>
  <li>What if <code class="language-plaintext highlighter-rouge">time_of_day</code> is invalid?</li>
  <li>Should <code class="language-plaintext highlighter-rouge">coaching_tone</code> be “friendly” or “coach” or “blunt”?</li>
</ul>

<p>This upfront thinking saved debugging time later.</p>
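<p>Written down as a JSON Schema, the contract might look like this — the field names come from the example above, but the constraints are my own assumptions:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "type": "object",
  "required": ["current_steps", "goal_steps", "time_of_day"],
  "properties": {
    "current_steps": { "type": "integer", "minimum": 0 },
    "goal_steps": { "type": "integer", "minimum": 1 },
    "time_of_day": { "type": "string", "enum": ["morning", "afternoon", "evening"] },
    "coaching_tone": { "type": "string", "enum": ["friendly", "coach", "blunt"], "default": "coach" }
  },
  "additionalProperties": false
}
</code></pre></div></div>

<p>Rejecting bad input at the schema level (an invalid <code class="language-plaintext highlighter-rouge">time_of_day</code>, negative steps) is cheaper than asking the model to cope with it.</p>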

<h2 id="tool-2-launch-checklist-generator-built-in-10-minutes">Tool #2: Launch Checklist Generator (Built in 10 Minutes)</h2>

<h3 id="the-problem-1">The Problem</h3>

<p>Every indie dev I know (myself included) has experienced this:</p>

<p><strong>You finish building the app… and then freeze at the launch step.</strong></p>

<ul>
  <li>What do I post?</li>
  <li>In what order should things happen?</li>
  <li>Did I forget something important?</li>
  <li>Is 2 hours enough to launch, or do I need 2 weeks?</li>
</ul>

<p>I’ve launched apps both ways: rushed 2-hour sprints and meticulously planned 2-week campaigns. Both can work. The key is having a <em>plan</em>.</p>

<p>But making the plan? That’s where I procrastinate.</p>

<h3 id="the-solution-1">The Solution</h3>

<p>I built LaunchPad AI: a launch checklist generator for indie devs.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I built a Launch Checklist Generator for <a href="https://twitter.com/hashtag/indiedevs?src=hash">#indiedevs</a> in <a href="https://twitter.com/GoogleAIStudio">@GoogleAIStudio</a>.<br /><br />You type your app + how soon you want to launch, and it creates a clear step-by-step launch plan.<br /><br />Took me ~10 minutes.<br /><br />Here&#39;s how I did it 👇 <a href="https://t.co/image">pic.twitter.com/image</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1986102676531413034">November 5, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>You enter:</p>
<ul>
  <li>Your app name</li>
  <li>App category</li>
  <li>Launch timeline (2 hours / 1 week / 2 weeks)</li>
</ul>

<p>It returns a structured, time-based checklist:</p>

<p><strong>Example output (2-hour sprint):</strong></p>

<p><strong>Pre-launch Readiness (T-2h to T-0)</strong></p>
<ul>
  <li>✅ Finalize app name and tagline</li>
  <li>✅ Prepare App Store screenshots</li>
  <li>✅ Write compelling app description</li>
</ul>

<p><strong>Launch Hour (T-0 to T+1)</strong></p>
<ul>
  <li>✅ Submit to Product Hunt</li>
  <li>✅ Post on X/Twitter with demo video</li>
  <li>✅ Share in relevant Slack/Discord communities</li>
</ul>

<p><strong>Post-Launch (T+1 to T+3 days)</strong></p>
<ul>
  <li>✅ Respond to all comments and feedback</li>
  <li>✅ Monitor crash reports and fix critical bugs</li>
  <li>✅ Share user testimonials</li>
</ul>

<p><strong>For a 2-week timeline</strong>, it adds:</p>
<ul>
  <li>Pre-launch landing page</li>
  <li>Email list building</li>
  <li>Beta tester recruitment</li>
  <li>Teaser campaign</li>
</ul>

<h3 id="how-i-built-it-1">How I Built It</h3>

<p><strong>Step 1: Interviewed myself</strong></p>

<p>I listed every launch I’d done: what worked, what I forgot, what I’d do differently.</p>

<p>Then I turned that into prompt context.</p>

<p><strong>Step 2: Structured the output</strong></p>

<p>I didn’t just want a blob of text. I wanted:</p>
<ul>
  <li>Grouped tasks (Product / Social / App Store / Post-launch)</li>
  <li>Time-based phases</li>
  <li>Actionable items, not vague suggestions</li>
</ul>

<p><strong>Step 3: Added interactivity</strong></p>

<p>The tool lets you:</p>
<ul>
  <li>Add custom tasks</li>
  <li>Edit existing tasks</li>
  <li>Delete irrelevant tasks</li>
  <li>Export the final checklist to PDF</li>
</ul>

<p>This took the longest—maybe 40 of the 60 total minutes.</p>

<p><strong>Step 4: Deployed and tested</strong></p>

<p>I used it for a real launch (the fitness app). It worked.</p>

<p>Instead of thinking “What do I do next?”, I just followed the list.</p>

<p><strong>Rookie mistake #2:</strong></p>

<p>I initially generated checklists that were too generic.</p>

<p>“Promote on social media” → Okay, but <em>which</em> platforms? <em>What</em> content?</p>

<p><strong>The fix:</strong> I refined prompts to generate specific, actionable tasks:</p>

<ul>
  <li>❌ “Promote on social media”</li>
  <li>✅ “Post demo video on X/Twitter with 3 key features highlighted”</li>
</ul>

<p>Specificity beats generality every time.</p>

<h3 id="what-i-learned-1">What I Learned</h3>

<p><strong>1. Decision fatigue is real</strong></p>

<p>Even simple decisions (“Should I post on Reddit first or Twitter?”) compound when you’re doing 20 things at once.</p>

<p>Pre-made checklists eliminate those micro-decisions.</p>

<p><strong>2. Templates reduce stress</strong></p>

<p>Knowing I can generate a launch plan in 30 seconds removes the “I don’t know where to start” paralysis.</p>

<p>Now launching feels clear, doable, and repeatable.</p>

<p><strong>3. Export matters</strong></p>

<p>Being able to export to PDF means I can print it, check it off physically, or share it with collaborators.</p>

<p>This small feature made the tool 10x more useful.</p>

<h2 id="tool-3-caption-generator-built-in-15-minutes">Tool #3: Caption Generator (Built in 15 Minutes)</h2>

<h3 id="the-problem-2">The Problem</h3>

<p>I’m terrible at social media.</p>

<p>Not the technical side—I can record videos, edit screenshots, use scheduling tools. But the <em>writing</em>?</p>

<p>Every time I sit down to write a post about my app, I stare at a blank screen for 20 minutes.</p>

<p><strong>The inner monologue:</strong></p>

<ul>
  <li>Is this too salesy?</li>
  <li>Should I be more casual?</li>
  <li>Do I sound like every other indie dev?</li>
  <li>What hashtags actually work?</li>
</ul>

<p>By the time I’ve written something, I’ve lost momentum.</p>

<h3 id="the-solution-2">The Solution</h3>

<p>I built a caption generator designed specifically for indie app developers.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Built another small mini tool today with <a href="https://twitter.com/GoogleAIStudio">@GoogleAIStudio</a><br /><br />This one writes scroll-friendly social captions for your app.<br /><br />No more staring at a blank screen.<br /><br />Here&#39;s how it works 👇 <a href="https://t.co/image">pic.twitter.com/image</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1986460321318580400">November 7, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p><strong>Input:</strong></p>
<ul>
  <li>App name</li>
  <li>What it does</li>
  <li>Target audience</li>
  <li>Tone (friendly / calm / energetic / professional)</li>
  <li>Auto-generate hashtags: Yes/No</li>
</ul>

<p><strong>Output:</strong></p>
<ul>
  <li>5 different caption variations</li>
  <li>Hashtag set</li>
  <li>Call-to-action line</li>
</ul>

<p><strong>Example:</strong></p>

<p><em>App:</em> Expense Split<br />
<em>What it does:</em> Splits expenses among roommates, couples, friends<br />
<em>Audience:</em> Roommates, couples, friends<br />
<em>Tone:</em> Friendly</p>

<p><strong>Generated captions:</strong></p>

<ol>
  <li>
    <p>“Splitting expenses shouldn’t mean splitting friendships. Expense Split makes it easy to track who owes what—no awkward conversations needed. Perfect for roommates, couples, and friend groups.”</p>
  </li>
  <li>
    <p>“Ever had that moment where someone says ‘I’ll Venmo you later’ and… never does? Expense Split keeps everyone honest (and friendly). Track shared costs, settle up fast.”</p>
  </li>
  <li>
    <p>“Money + relationships = complicated. Expense Split keeps it simple. Add expenses, split them fairly, done. Built for roommates who want to stay friends.”</p>
  </li>
  <li>
    <p>“Stop doing mental math every time you grab dinner with friends. Expense Split handles the math, you handle the fun.”</p>
  </li>
  <li>
    <p>“Because ‘I’ll pay you back’ shouldn’t require a spreadsheet. Expense Split: fair splits, zero stress.”</p>
  </li>
</ol>

<p><strong>Hashtags:</strong> #ExpenseTracking #PersonalFinance #IndieApp #RoommateLife #SplitBills</p>

<p><strong>CTA:</strong> “Try it free → [link]”</p>

<h3 id="how-i-built-it-2">How I Built It</h3>

<p><strong>Step 1: Analyzed what works</strong></p>

<p>I looked at successful indie dev posts on X/Twitter and Product Hunt. Common patterns:</p>

<ul>
  <li>Start with a relatable pain point</li>
  <li>Show the solution concisely</li>
  <li>End with clear CTA</li>
  <li>Friendly, conversational tone</li>
  <li>1-3 sentences max</li>
</ul>

<p><strong>Step 2: Turned patterns into prompts</strong></p>

<p>I gave Gemini these guidelines:</p>

<blockquote>
  <p>“Write scroll-friendly social media captions for indie apps. Start with a pain point or relatable moment. Keep it under 3 sentences. Avoid buzzwords like ‘game-changer’ or ‘revolutionary.’ Be specific, not generic.”</p>
</blockquote>
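<p>Those guidelines become the static half of every prompt, with the form fields appended per request. A rough sketch of the assembly; the function and field names are mine, not AI Studio's:</p>

```javascript
// Hypothetical prompt assembly: static guidelines + per-request form fields.
const GUIDELINES =
  "Write scroll-friendly social media captions for indie apps. " +
  "Start with a pain point or relatable moment. Keep it under 3 sentences. " +
  "Avoid buzzwords like 'game-changer' or 'revolutionary.' Be specific, not generic.";

function buildPrompt({ appName, whatItDoes, audience, tone }) {
  return [
    GUIDELINES,
    `App: ${appName}`,
    `What it does: ${whatItDoes}`,
    `Audience: ${audience}`,
    `Tone: ${tone}`,
    "Return exactly 5 caption variations, a hashtag set, and a CTA line.",
  ].join("\n");
}
```

<p>The returned string is what gets sent as the user message to Gemini.</p>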

<p><strong>Step 3: Added variety</strong></p>

<p>One caption isn’t enough. Sometimes I need playful, sometimes urgent, sometimes calm.</p>

<p>The tool generates 5 variations so I can pick what fits my mood (or A/B test).</p>

<p><strong>Step 4: Made it fast</strong></p>

<p>Fill out the form, hit generate, get 5 captions in 3 seconds.</p>

<p>No more staring at blank screens.</p>

<p><strong>Rookie mistake #3:</strong></p>

<p>The first version generated captions that all sounded the same. Boring.</p>

<p><strong>The fix:</strong> I added explicit instructions for variety:</p>

<ul>
  <li>Caption 1: Pain-point focused</li>
  <li>Caption 2: Benefit-focused</li>
  <li>Caption 3: Scenario-based</li>
  <li>Caption 4: Question hook</li>
  <li>Caption 5: Bold statement</li>
</ul>

<p>Now the outputs feel distinct.</p>
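<p>The fix boils down to pinning one explicit angle to each slot and appending those lines to the prompt. A sketch, with names of my choosing:</p>

```javascript
// One explicit angle per caption slot, appended to the prompt so the
// model can't collapse all five variations into the same voice.
const CAPTION_STYLES = [
  "Pain-point focused",
  "Benefit-focused",
  "Scenario-based",
  "Question hook",
  "Bold statement",
];

function varietyInstructions() {
  return CAPTION_STYLES
    .map((style, i) => `Caption ${i + 1}: ${style.toLowerCase()}.`)
    .join("\n");
}
```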

<h3 id="what-i-learned-2">What I Learned</h3>

<p><strong>1. Good prompts = good outputs</strong></p>

<p>The difference between “write a caption” and “write a scroll-friendly caption starting with a relatable pain point” is huge.</p>

<p>Specificity in prompts = quality in results.</p>

<p><strong>2. Options reduce perfectionism</strong></p>

<p>When I write manually, I agonize over every word. Is this the <em>perfect</em> caption?</p>

<p>With 5 variations, I just pick one and move on. Good enough is good enough.</p>

<p><strong>3. This works for me <em>because</em> I built it</strong></p>

<p>Generic caption generators exist. But they don’t know my audience (indie devs, iOS users, people who appreciate authenticity).</p>

<p>Custom tools tuned to your niche &gt; one-size-fits-all.</p>

<h2 id="deploying-to-cloud-run-the-30-second-workflow">Deploying to Cloud Run: The 30-Second Workflow</h2>

<p>Here’s the deployment process for all three tools:</p>

<p><strong>In Google AI Studio:</strong></p>

<ol>
  <li>Click “Deploy”</li>
  <li>Select “Cloud Run”</li>
  <li>Choose region (I use <code class="language-plaintext highlighter-rouge">us-central1</code>)</li>
  <li>Click “Deploy”</li>
</ol>

<p><strong>That’s it.</strong></p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Built completely inside Gemini 2.5 Pro using Google AI Studio's web service mode.<br /><br />No backend setup, no extra code - just define input/output and deploy. <a href="https://t.co/videolink">pic.twitter.com/videolink</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985731266571092180">November 4, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>30 seconds later, I get a live URL:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>https://fitness-coach-abc123-uc.a.run.app
</code></pre></div></div>

<p><strong>No configuration. No servers. No DevOps.</strong></p>

<p>For someone who’s used to:</p>
<ul>
  <li>TestFlight provisioning</li>
  <li>App Store review wait times</li>
  <li>Backend server setup (EC2, Docker, etc.)</li>
</ul>

<p>This feels like magic.</p>

<p><strong>Cost:</strong></p>

<p>Cloud Run has a generous free tier. All three of these tools combined cost me <strong>$0.00</strong> so far because usage is low.</p>

<p>When they scale, pricing is pay-per-request. For micro-tools like these, that’s perfect.</p>

<h2 id="what-these-tools-taught-me">What These Tools Taught Me</h2>

<h3 id="1-micro-tools--mega-platforms">1. Micro-tools &gt; Mega-platforms</h3>

<p>I used to think tools had to be comprehensive to be useful.</p>

<p>“If I’m building a launch planner, it should also handle marketing analytics, email campaigns, social scheduling…”</p>

<p>No.</p>

<p><strong>A tool that does one thing really well beats a tool that does ten things poorly.</strong></p>

<p>These three tools are tiny. Single-purpose. And that’s why they work.</p>

<h3 id="2-speed-unlocks-experimentation">2. Speed Unlocks Experimentation</h3>

<p>In iOS development, the feedback loop is long:</p>

<ul>
  <li>Code → Compile → Test → Debug → Repeat</li>
</ul>

<p>With Google AI Studio + Cloud Run:</p>

<ul>
  <li>Prompt → Test → Deploy → Share</li>
</ul>

<p>The cycle is <em>minutes</em>.</p>

<p>This changes how I think about ideas. Instead of “Is this worth building?”, I just build it and find out.</p>

<h3 id="3-ai-for-content-native-for-experience">3. AI for Content, Native for Experience</h3>

<p>Here’s my new mental model:</p>

<p><strong>Use AI Studio for:</strong></p>
<ul>
  <li>Backend logic</li>
  <li>Content generation</li>
  <li>Data transformation</li>
  <li>Quick prototypes</li>
</ul>

<p><strong>Use Swift/SwiftUI for:</strong></p>
<ul>
  <li>Native UI</li>
  <li>Offline features</li>
  <li>Platform-specific integrations (HealthKit, Widgets, Complications)</li>
  <li>Performance-critical code</li>
</ul>

<p><strong>Use them together</strong> for:</p>
<ul>
  <li>SwiftUI app → calls Cloud Run endpoint → displays AI-generated content in beautiful native UI</li>
</ul>

<p>This hybrid approach gives me the best of both worlds.</p>
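<p>That hybrid call is an ordinary JSON POST. Here is a sketch in JavaScript for brevity (a SwiftUI client would do the same with URLSession and a Codable struct); the <code>/generate</code> path and body shape are assumptions, not an actual Cloud Run service contract:</p>

```javascript
// Hypothetical client for a Cloud Run tool endpoint. The fetch
// implementation is injected so it can be stubbed without a network.
async function callTool(fetchImpl, baseUrl, input) {
  const res = await fetchImpl(`${baseUrl}/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  if (!res.ok) throw new Error(`Tool request failed: ${res.status}`);
  return res.json();
}
```

<p>Injecting <code>fetchImpl</code> keeps the helper usable in the browser, in Node, and in tests alike.</p>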

<h3 id="4-context-switching-is-expensive">4. Context Switching Is Expensive</h3>

<p>Before these tools, my workflow looked like this:</p>

<ol>
  <li>Code for 2 hours</li>
  <li>Switch to marketing mode (write captions, plan launch)</li>
  <li>Lose momentum</li>
  <li>Struggle to get back into coding flow</li>
</ol>

<p>Now:</p>

<ol>
  <li>Code for 2 hours</li>
  <li>Open caption generator, get 5 options in 10 seconds</li>
  <li>Back to coding in 30 seconds</li>
</ol>

<p>Minimizing context switches keeps me in flow longer.</p>

<h3 id="5-tiny-tools-compound">5. Tiny Tools Compound</h3>

<p>One micro-tool is nice. Three micro-tools start to feel like a workflow.</p>

<p>I can imagine a future where I have:</p>
<ul>
  <li>10 tiny AI tools</li>
  <li>Each solving one specific problem</li>
  <li>All integrated into my daily routine</li>
</ul>

<p>That’s more powerful than one big monolithic platform.</p>

<h2 id="rookie-mistakes-summary">Rookie Mistakes Summary</h2>

<p>Here’s what I learned the hard way:</p>

<h3 id="mistake-1-trying-to-generate-ui-in-ai-studio">Mistake #1: Trying to generate UI in AI Studio</h3>

<p><strong>Problem:</strong> Web UI doesn’t translate well to native SwiftUI.</p>

<p><strong>Fix:</strong> Use AI Studio for logic/content. Use Swift for UI.</p>

<h3 id="mistake-2-generic-outputs">Mistake #2: Generic outputs</h3>

<p><strong>Problem:</strong> First versions generated bland, generic text.</p>

<p><strong>Fix:</strong> Add specificity to prompts. Show examples of what “good” looks like.</p>

<h3 id="mistake-3-overcomplicating-input-schemas">Mistake #3: Overcomplicating input schemas</h3>

<p><strong>Problem:</strong> I added too many optional fields, making the tool confusing.</p>

<p><strong>Fix:</strong> Start with the minimum viable schema. Add complexity only when needed.</p>

<h3 id="mistake-4-not-testing-edge-cases">Mistake #4: Not testing edge cases</h3>

<p><strong>Problem:</strong> Tools broke when users entered unexpected input (negative steps, invalid dates, etc.).</p>

<p><strong>Fix:</strong> Test thoroughly before deploying. Add validation.</p>
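<p>For the fitness coach inputs, the validation amounted to a few guard clauses. A minimal sketch; the field names mirror my tool, and the rules are illustrative:</p>

```javascript
// Reject obviously bad input before it ever reaches the model.
function validateFitnessInput({ steps, goal }) {
  const errors = [];
  if (!Number.isInteger(steps) || steps < 0) {
    errors.push("steps must be a non-negative integer");
  }
  if (!Number.isInteger(goal) || goal <= 0) {
    errors.push("goal must be a positive integer");
  }
  return errors; // empty array means the input is usable
}
```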

<h3 id="mistake-5-forgetting-to-add-exportshare-features">Mistake #5: Forgetting to add export/share features</h3>

<p><strong>Problem:</strong> Generated great content but no way to save or share it.</p>

<p><strong>Fix:</strong> Always include export (PDF, JSON, clipboard) from day one.</p>
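<p>The clipboard path can be as small as serializing the result and handing it off. A sketch; <code>navigator.clipboard</code> is the standard browser API, while the result shape here is my assumption:</p>

```javascript
// Serialize a generation result for export; pretty-printed so the
// JSON is pleasant to paste into notes or a tracking sheet.
function toExportJson(result) {
  return JSON.stringify(result, null, 2);
}

// In the browser: copy to clipboard. Outside a browser this is a no-op.
async function copyResult(result) {
  if (typeof navigator !== "undefined" && navigator.clipboard) {
    await navigator.clipboard.writeText(toExportJson(result));
  }
}
```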

<h2 id="whats-next">What’s Next</h2>

<p>I’m participating in the #CloudRunHackathon and building one useful mini-tool every day.</p>

<p><strong>Coming up:</strong></p>
<ul>
  <li>Screenshot to alt-text generator (for accessibility)</li>
  <li>Changelog writer (turns commit messages into user-friendly release notes)</li>
  <li>Pricing calculator (helps indie devs decide on pricing tiers)</li>
</ul>

<p>All built with the same stack: Google AI Studio + Cloud Run.</p>

<p><strong>Why?</strong></p>

<p>Because these are problems I actually have. And building solutions for myself means they’ll be genuinely useful for other indie devs too.</p>

<h2 id="try-it-yourself">Try It Yourself</h2>

<p>If you’re an indie maker and you’ve been curious about AI but intimidated by the complexity, here’s my advice:</p>

<p><strong>Start with one annoying task in your workflow.</strong></p>

<p>For me, it was:</p>
<ul>
  <li>Writing motivational messages for my fitness app</li>
  <li>Planning launches</li>
  <li>Writing social captions</li>
</ul>

<p>For you, it might be:</p>
<ul>
  <li>Generating app descriptions</li>
  <li>Writing support emails</li>
  <li>Summarizing user feedback</li>
</ul>

<p><strong>Then build the tiniest possible tool to solve it.</strong></p>

<p>Don’t aim for perfection. Aim for “works well enough that I’d use it tomorrow.”</p>

<p>Google AI Studio makes this trivial. You can have a working prototype in 10 minutes.</p>

<p><strong>Deploy it. Use it. Iterate.</strong></p>

<p>If it saves you 10 minutes a day, that’s 60 hours a year. Worth it.</p>

<h2 id="final-thoughts">Final Thoughts</h2>

<p>I’ve been an iOS developer for years. I love the craft of building native apps—the elegance of SwiftUI, the satisfaction of pixel-perfect animations, the joy of seeing someone use my app in the wild.</p>

<p>But I’m also a solo indie developer. I wear all the hats: coder, designer, marketer, support, finance.</p>

<p><strong>These AI micro-tools let me offload the parts I’m not great at</strong> (or don’t enjoy) <strong>without hiring a team.</strong></p>

<ul>
  <li>Need motivational messaging? Tool handles it.</li>
  <li>Need a launch plan? Tool generates it.</li>
  <li>Need social captions? Tool writes them.</li>
</ul>

<p>I’m still doing the hard work—building the app, solving real user problems, iterating based on feedback.</p>

<p>But the surrounding tasks? Automated.</p>

<p><strong>This is the future I’m excited about:</strong></p>

<p>Not AI replacing developers. But AI <em>augmenting</em> indie makers so we can focus on what we do best.</p>

<p>And tools like Google AI Studio + Cloud Run make it accessible to anyone willing to experiment.</p>

<p><strong>So go build something tiny. Ship it today. See what happens.</strong></p>

<p>You might surprise yourself.</p>

<hr />

<p><em>If you want to try the working versions of these tools (deployed on Cloud Run), let me know! I’m sharing access as I build in public.</em></p>

<p><em>Follow along for daily updates as I build more micro-tools for indie developers: <a href="https://x.com/rshankra">@rshankra</a></em></p>

<p><strong>Resources:</strong></p>

<ul>
  <li><a href="https://aistudio.google.com">Google AI Studio</a></li>
  <li><a href="https://cloud.google.com/run/docs">Cloud Run Documentation</a></li>
  <li><a href="https://cloud.google.com/run/hackathon">Cloud Run Hackathon</a></li>
  <li><a href="https://ai.google.dev/docs">Gemini API Reference</a></li>
</ul>

<p><strong>Related posts:</strong></p>

<ul>
  <li><a href="/xcode-no-code-building-ai-apps-google-ai-studio/">From Xcode to No Code: Building AI Apps with Google AI Studio</a> (platform overview and getting started)</li>
  <li><a href="/my-learnings-as-indie-app-developer/">My Learnings as Indie App Developer: Building Identity Habits</a> (on indie dev workflows and habits)</li>
  <li><a href="/building-twaist-ai-twitter-assistant-chrome-built-in-ai/">Building TwAIst: An AI Twitter Assistant</a> (more on AI integration strategies)</li>
</ul>]]></content><author><name>Ravi Shankar</name></author><category term="ai" /><category term="indie-development" /><category term="web-development" /><category term="entrepreneurship" /><category term="Google AI Studio" /><category term="Gemini" /><category term="Cloud Run" /><category term="Indie Apps" /><category term="Rapid Prototyping" /><category term="Build in Public" /><category term="AI Development" /><summary type="html"><![CDATA[Build AI micro-tools with Google AI Studio: fitness coach, launch planner, and caption writer. Real projects, mistakes, and rapid prototyping lessons.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/google-ai-studio/3-ai-tools-featured.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/google-ai-studio/3-ai-tools-featured.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">From Xcode to No Code: Building AI Apps with Google AI Studio</title><link href="https://www.rshankar.com/xcode-no-code-building-ai-apps-google-ai-studio/" rel="alternate" type="text/html" title="From Xcode to No Code: Building AI Apps with Google AI Studio" /><published>2025-11-06T00:00:00+00:00</published><updated>2025-11-06T00:00:00+00:00</updated><id>https://www.rshankar.com/xcode-no-code-building-ai-apps-google-ai-studio</id><content type="html" xml:base="https://www.rshankar.com/xcode-no-code-building-ai-apps-google-ai-studio/"><![CDATA[<p>I’ve spent years building iOS apps in Xcode. Compiling, debugging, wrestling with provisioning profiles, waiting for App Store reviews. It’s a process I know well—and honestly, it’s slow.</p>

<p>Last week, I built an AI image generator in 3 minutes. Not 3 hours. Not 3 days. Three minutes.</p>

<p>As someone deeply rooted in the Apple ecosystem, this felt surreal. No Xcode project. No backend server setup. No deployment headaches. Just prompts becoming code, instantly.</p>

<p>If you’re an iOS developer curious about AI, or an indie maker who wants to build fast, this is what I learned exploring Google AI Studio.<!--more--></p>

<h2 id="why-i-explored-google-ai-studio">Why I Explored Google AI Studio</h2>

<p>Here’s the honest truth: I was skeptical.</p>

<p>I’ve built dozens of iOS apps. I know SwiftUI, UIKit, Core Data, CloudKit. I understand the rhythm of native development—design, code, test, debug, repeat. It’s methodical. It’s structured. It works.</p>

<p>But it’s also <em>slow</em>.</p>

<p>When you have an idea at 10 PM and want to validate it before midnight, spinning up a new Xcode project feels like overkill. You need models, view controllers, networking layers, error handling. By the time you’re done scaffolding, the excitement has worn off.</p>

<p>That’s where Google AI Studio caught my attention.</p>

<p>The promise: <strong>Prompts are becoming code. If you can think clearly, you can build quickly.</strong></p>

<p>As someone who’s been thinking about adding AI features to my apps—and procrastinating because of the complexity—I decided to give it a shot.</p>

<h2 id="what-is-google-ai-studio">What Is Google AI Studio?</h2>

<p>Google AI Studio is a browser-based development environment for building AI-powered applications using Gemini models. Think of it as a playground that turns into production code.</p>

<p>You describe what you want your app to do, and AI Studio generates a working web service. No backend setup. No infrastructure management. Just input, output, and logic.</p>

<p><strong>Key features I discovered:</strong></p>

<ul>
  <li><strong>Annotation tool</strong>: Capture screenshots of what you want to build and add them directly to your conversation with the AI</li>
  <li><strong>Rollback checkpoints</strong>: Made a mistake? Easily restore to a previous version with the “Restore Check Point” feature</li>
  <li><strong>Built-in code editor</strong>: Fix code or add features directly in the browser</li>
  <li><strong>Deployment options</strong>: Download code, publish to GitHub, deploy to Cloud Run, or share your work instantly</li>
</ul>

<p>It’s designed for rapid iteration. Build, test, tweak, deploy—all in one place.</p>

<h2 id="my-first-project-image-generator-in-3-minutes">My First Project: Image Generator in 3 Minutes</h2>

<p>I started simple. An AI image generator seemed like a good test.</p>

<p>Here’s what I did:</p>

<p><strong>Step 1: Opened Google AI Studio</strong></p>

<p>No installation. No setup. Just opened the browser and landed in the studio.</p>

<p><strong>Step 2: Described what I wanted</strong></p>

<p>“Build an image generator that takes a text prompt and returns an AI-generated image.”</p>

<p><strong>Step 3: Configured input and output</strong></p>

<p>The studio prompted me to define:</p>
<ul>
  <li>Input: Text prompt (string)</li>
  <li>Output: Generated image (URL or base64)</li>
</ul>
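<p>Behind that input/output definition sits a single model call. A rough sketch of the request body: the <code>contents/parts</code> nesting follows the Gemini REST API, but the model name is an assumption, so check the current API reference before relying on it:</p>

```javascript
// Hypothetical request builder for a text-to-image call.
// Model name is an assumption; pick an image-capable Gemini model.
function buildImageRequest(prompt) {
  return {
    model: "gemini-2.0-flash",
    body: { contents: [{ parts: [{ text: prompt }] }] },
  };
}
```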

<p><strong>Step 4: Tested it</strong></p>

<p>Hit “run” and got back a working image from my prompt.</p>

<p><strong>Total time</strong>: 3 minutes.</p>

<p>For context, building this in Swift would have involved:</p>
<ol>
  <li>Setting up an Xcode project</li>
  <li>Creating UI in SwiftUI</li>
  <li>Adding networking with URLSession or Alamofire</li>
  <li>Handling image decoding and display</li>
  <li>Managing error states</li>
  <li>Testing on simulator/device</li>
</ol>

<p>Even if you’re fast, that’s 30-60 minutes minimum.</p>

<h2 id="the-annotation-tool-screenshot--code">The Annotation Tool: Screenshot → Code</h2>

<p>This feature blew my mind.</p>

<p>As an iOS developer, I’m used to sketching UI in Figma or Sketch, then manually translating it into SwiftUI code. There’s always that gap between design and implementation.</p>

<p>In Google AI Studio, you can:</p>

<ol>
  <li>Capture a screenshot of an interface you want to replicate</li>
  <li>Add it directly to your conversation</li>
  <li>The AI generates code matching that visual</li>
</ol>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Google AI Studio provides an annotation tool that can be used to capture the screen where you want to make changes and add it to the conversation.</p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985383149778411601">November 3, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>I tested this with a fitness app dashboard I’d been sketching. Dropped in the screenshot, described the functionality, and got back working HTML/CSS/JavaScript that looked remarkably similar.</p>

<p><strong>Rookie mistake #1:</strong> I initially tried to describe complex layouts in text. It was clunky. Once I switched to screenshots + brief descriptions, the results improved dramatically.</p>

<p><strong>The takeaway:</strong> Show, don’t just tell. If you have a visual in mind, screenshot it.</p>

<h2 id="rollback-checkpoints-time-travel-for-code">Rollback Checkpoints: Time Travel for Code</h2>

<p>Anyone who’s worked on a complex app knows the fear of breaking something.</p>

<p>You make one change. Then another. Suddenly nothing works, and you’re not sure which edit caused the problem.</p>

<p>Google AI Studio has a brilliant solution: <strong>restore checkpoints</strong>.</p>

<p>It automatically saves states as you work. If something breaks, you can jump back to a previous working version with one click.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">You can easily rollback the previous changes using the Restore Check Point&#39; feature. <a href="https://t.co/videolink">pic.twitter.com/videolink</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985383149778411601">November 3, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>This is like Git’s version control, but without the ceremony of commits and branches. It’s instant.</p>

<p><strong>When I used it:</strong></p>

<p>I was building the fitness coach tool (more on this in my next article) and accidentally overwrote the JSON schema. Instead of panicking or manually reconstructing it, I hit “Restore Check Point” and went back 3 steps.</p>

<p>Problem solved in 10 seconds.</p>

<p><strong>Comparison to iOS development:</strong></p>

<p>In Xcode, I rely heavily on Git for this. But Git requires discipline—commit often, write good messages, manage branches. Here, it’s automatic and frictionless.</p>

<h2 id="the-code-editor-when-you-need-more-control">The Code Editor: When You Need More Control</h2>

<p>Google AI Studio isn’t purely no-code. It’s <em>low-code</em> with an escape hatch.</p>

<p>When the AI generates something close but not quite right, you can drop into the built-in code editor and make changes yourself.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">If you are a developer and want to make code changes, you can use the code editor to fix the code or add features. <a href="https://t.co/videolink">pic.twitter.com/videolink</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985383149778411601">November 3, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>I used this when:</p>

<ul>
  <li><strong>Tweaking UI styling</strong>: The AI gave me a working layout, but I wanted custom colors matching my brand</li>
  <li><strong>Adding validation</strong>: I wanted stricter input validation than what was generated</li>
  <li><strong>Optimizing logic</strong>: The generated code worked but wasn’t as efficient as I’d like</li>
</ul>

<p>As someone who writes code daily, this felt like the best of both worlds:</p>
<ul>
  <li>Speed of AI generation</li>
  <li>Precision of manual editing</li>
</ul>

<p><strong>The learning curve:</strong></p>

<p>If you know HTML/CSS/JavaScript, you’ll feel at home. If you’re coming from pure Swift/iOS development like me, there’s a small adjustment period. But the code is clean and readable—no weird abstractions.</p>

<h2 id="deployment-from-prototype-to-production">Deployment: From Prototype to Production</h2>

<p>This is where Google AI Studio really shines for indie developers.</p>

<p>Once you’ve built something, you have four options:</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Like any other popular vibe coding tool, this allows users to:<br />- download the code<br />- publish to GitHub<br />- deploy it in the cloud<br />- share your work. <a href="https://t.co/videolink">pic.twitter.com/videolink</a></p>&mdash; Ravi Shankar (@rshankra) <a href="https://twitter.com/rshankra/status/1985383149778411601">November 3, 2025</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<h3 id="1-download-the-code">1. Download the Code</h3>

<p>Get a zip file with all your HTML, CSS, and JavaScript. Host it anywhere.</p>

<h3 id="2-publish-to-github">2. Publish to GitHub</h3>

<p>One-click push to a GitHub repository. Great for version control and collaboration.</p>

<h3 id="3-deploy-to-cloud-run">3. Deploy to Cloud Run</h3>

<p>This is the killer feature. Click “Deploy to Cloud Run,” and your app goes live on Google Cloud infrastructure.</p>

<p>No server configuration. No Docker knowledge required. Just a deployed, scalable web service.</p>

<p>For iOS developers used to TestFlight and App Store review, this instant deployment feels like cheating.</p>

<h3 id="4-share-your-work">4. Share Your Work</h3>

<p>Get a shareable link immediately. Perfect for user testing or showing stakeholders.</p>

<p><strong>My workflow:</strong></p>

<ol>
  <li>Build in AI Studio</li>
  <li>Test with the preview</li>
  <li>Deploy to Cloud Run</li>
  <li>Share the link for feedback</li>
  <li>Iterate based on responses</li>
</ol>

<p>This cycle takes <em>minutes</em>, not days.</p>

<h2 id="key-lessons-learned">Key Lessons Learned</h2>

<h3 id="1-start-stupidly-simple">1. Start Stupidly Simple</h3>

<p><strong>Problem:</strong> My first instinct was to build something complex to “test the limits.”</p>

<p><strong>Solution:</strong> I started with the image generator—one input, one output, minimal logic. Once that worked, I built up complexity.</p>

<p><strong>Lesson:</strong> Validate the workflow before attempting ambitious projects.</p>

<h3 id="2-screenshots--long-descriptions">2. Screenshots &gt; Long Descriptions</h3>

<p><strong>Problem:</strong> I spent 10 minutes typing out detailed UI specifications for a form layout.</p>

<p><strong>Solution:</strong> I sketched it on paper, took a photo, and uploaded it. Got better results in 30 seconds.</p>

<p><strong>Lesson:</strong> The annotation tool is powerful. Use it liberally.</p>

<h3 id="3-think-in-inputoutput-not-screens">3. Think in Input/Output, Not Screens</h3>

<p><strong>Problem:</strong> Coming from iOS development, I was thinking in terms of view controllers and navigation flows.</p>

<p><strong>Solution:</strong> Google AI Studio works best when you think in terms of <em>services</em>: “Given this input, return this output.”</p>

<p><strong>Lesson:</strong> Shift your mental model from “app with screens” to “service with endpoints.”</p>

<h3 id="4-not-a-replacement-but-a-complement">4. Not a Replacement, But a Complement</h3>

<p><strong>Problem:</strong> I initially wondered if this made native iOS development obsolete.</p>

<p><strong>Solution:</strong> No. Google AI Studio is brilliant for web services, APIs, and rapid prototyping. But for native features—Face ID, HealthKit, Apple Watch complications—you still need Swift and Xcode.</p>

<p><strong>Lesson:</strong> Use AI Studio for backend logic and web interfaces. Use Swift for native experiences. They complement each other perfectly.</p>

<h3 id="5-deployment-simplicity-lowers-the-bar-for-experimentation">5. Deployment Simplicity Lowers the Bar for Experimentation</h3>

<p><strong>Problem:</strong> In iOS development, deploying means TestFlight at minimum, App Store at maximum. Both take time.</p>

<p><strong>Solution:</strong> Cloud Run deployment is instant. This changes the psychology of experimentation.</p>

<p><strong>Lesson:</strong> When deployment is friction-free, you’re more willing to try weird ideas. Some of my best discoveries came from “what if I just…” moments that would’ve been too much hassle in Xcode.</p>

<h2 id="when-to-use-google-ai-studio-vs-native-ios-development">When to Use Google AI Studio vs. Native iOS Development</h2>

<p>After a week of building, here’s my framework:</p>

<h3 id="use-google-ai-studio-when">Use Google AI Studio when:</h3>

<ul>
  <li>✅ You need to validate an idea quickly</li>
  <li>✅ You’re building a web service or API</li>
  <li>✅ The UI is browser-based</li>
  <li>✅ You want instant deployment and sharing</li>
  <li>✅ Backend logic is more important than native features</li>
  <li>✅ You’re prototyping before committing to native development</li>
</ul>

<h3 id="use-xcodeswift-when">Use Xcode/Swift when:</h3>

<ul>
  <li>✅ You need native iOS/macOS features (HealthKit, Core Motion, etc.)</li>
  <li>✅ Offline functionality is critical</li>
  <li>✅ You’re building for App Store distribution</li>
  <li>✅ Performance is paramount (games, complex animations)</li>
  <li>✅ You want deep integration with Apple ecosystem</li>
  <li>✅ You need platform-specific UI patterns (SwiftUI, UIKit)</li>
</ul>

<h3 id="use-both-when">Use Both when:</h3>

<ul>
  <li>✅ Your app has a native frontend + cloud backend</li>
  <li>✅ You want to prototype backend logic in AI Studio, then integrate via API</li>
  <li>✅ You’re building multi-platform (iOS app + web dashboard)</li>
</ul>

<p><strong>My current approach:</strong></p>

<p>I use Google AI Studio to build and deploy backend services quickly. Then I connect my SwiftUI apps to those services via standard HTTP requests.</p>

<p>This gives me:</p>
<ul>
  <li>Speed of AI Studio for backend iteration</li>
  <li>Quality of native Swift for user experience</li>
</ul>

<p>Best of both worlds.</p>

<h2 id="what-im-building-next">What I’m Building Next</h2>

<p>Google AI Studio has changed how I think about side projects.</p>

<p>Ideas that seemed too time-consuming to validate are now afternoon experiments. I’m less precious about code and more focused on outcomes.</p>

<p><strong>Coming up:</strong></p>

<ul>
  <li>Fitness Coach: A web service that takes step count and goal, returns personalized coaching messages (perfect for Apple Watch complications)</li>
  <li>Launch Checklist Generator: Helps indie devs overcome launch paralysis</li>
  <li>Caption Generator: Writes scroll-friendly social media captions for apps</li>
</ul>

<p>I’ll share the full build process for all three in my next article, including rookie mistakes, architecture decisions, and Cloud Run deployment.</p>

<h2 id="getting-started">Getting Started</h2>

<p>If you’re an iOS developer curious about AI, here’s my recommendation:</p>

<ol>
  <li><strong>Pick one simple idea</strong>: Something with clear input/output</li>
  <li><strong>Open Google AI Studio</strong>: aistudio.google.com (no installation needed)</li>
  <li><strong>Build it in one session</strong>: Don’t overthink. Just start.</li>
  <li><strong>Deploy to Cloud Run</strong>: Experience the instant gratification</li>
  <li><strong>Reflect</strong>: What could you build with this speed?</li>
</ol>

<p>The learning curve is gentle. If you understand basic programming concepts, you’ll be productive in an hour.</p>

<h2 id="final-thoughts">Final Thoughts</h2>

<p>I’m not abandoning Xcode. SwiftUI is still my favorite way to build iOS apps.</p>

<p>But Google AI Studio has given me a new superpower: rapid validation.</p>

<p>When I have an idea now, I can:</p>
<ul>
  <li>Build a working prototype in minutes</li>
  <li>Deploy it live</li>
  <li>Get real user feedback</li>
  <li>Decide if it’s worth the investment of a full native app</li>
</ul>

<p>That cycle used to take weeks. Now it takes an evening.</p>

<p><strong>For indie developers, this is a game-changer.</strong></p>

<p>We’re already stretched thin—coding, designing, marketing, supporting users. Any tool that accelerates the build phase gives us more time for everything else.</p>

<p>And honestly? It’s just <em>fun</em>. The immediacy of seeing prompts become working code reignites the joy of building.</p>

<p>If you’ve been curious about AI but intimidated by the complexity, Google AI Studio is the most approachable entry point I’ve found.</p>

<p><strong>Try it. Build something weird. Ship it. See what happens.</strong></p>

<hr />

<p><em>Next up: I’ll walk through building 3 AI mini tools for indie developers in 3 days, including full code examples, architecture decisions, and lessons learned. Follow along for the full build-in-public journey.</em></p>

<p><strong>Resources:</strong></p>

<ul>
  <li><a href="https://aistudio.google.com">Google AI Studio</a></li>
  <li><a href="https://cloud.google.com/run/docs">Cloud Run Documentation</a></li>
  <li><a href="https://ai.google.dev/docs">Gemini API Reference</a></li>
</ul>

<p><strong>Related posts:</strong></p>

<ul>
  <li><a href="/my-learnings-as-indie-app-developer/">My Learnings as Indie App Developer: Building Identity Habits</a> (lessons on rapid iteration and validation)</li>
  <li><a href="/building-twaist-ai-twitter-assistant-chrome-built-in-ai/">Building TwAIst: An AI Twitter Assistant</a> (more on AI integration for apps)</li>
</ul>]]></content><author><name>Ravi Shankar</name></author><category term="ai" /><category term="indie-development" /><category term="web-development" /><category term="Google AI Studio" /><category term="Gemini" /><category term="Cloud Run" /><category term="No Code" /><category term="AI Development" /><category term="Rapid Prototyping" /><summary type="html"><![CDATA[An iOS developer's guide to Google AI Studio. Build AI web apps in minutes without backend setup using annotation tools and Cloud Run.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/google-ai-studio/xcode-no-code-featured.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/google-ai-studio/xcode-no-code-featured.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Building TwAIst: An AI Twitter Assistant with Chrome’s Built-in AI</title><link href="https://www.rshankar.com/building-twaist-ai-twitter-assistant-chrome-built-in-ai/" rel="alternate" type="text/html" title="Building TwAIst: An AI Twitter Assistant with Chrome’s Built-in AI" /><published>2025-10-31T00:00:00+00:00</published><updated>2025-10-31T00:00:00+00:00</updated><id>https://www.rshankar.com/building-twaist-ai-twitter-assistant-chrome-built-in-ai</id><content type="html" xml:base="https://www.rshankar.com/building-twaist-ai-twitter-assistant-chrome-built-in-ai/"><![CDATA[<p><em>How I built a privacy-first AI Twitter assistant using Chrome’s built-in AI - and what I learned about the Prompt API along the way</em></p>

<p>I built <a href="https://github.com/rshankras/TwAIst-final">TwAIst</a> for the <a href="https://googlechromeai2025.devpost.com/">Google Chrome Built-in AI Hackathon</a>, and honestly? It started from pure frustration. I was spending way too much time staring at empty tweet boxes, trying to craft the “perfect” reply, watching my productivity disappear into the Twitter void.<!--more--></p>

<p>But here’s what got me excited: Chrome’s new Prompt API with Gemini Nano means you can run AI completely on-device. No external API calls. No data leaving your machine. No rate limits. Just pure, privacy-first AI that works offline.</p>

<p>This post is for anyone wanting to get started with Chrome’s built-in AI or build Chrome extensions. I’ll walk through what I built, how the Prompt API actually works, and the lessons I learned the hard way.</p>

<p><strong>Watch the demo:</strong> <a href="https://www.youtube.com/watch?v=UcNIQ6FXJRI">TwAIst Demo Video</a></p>

<h2 id="what-twaist-does">What TwAIst Does</h2>

<p>TwAIst is a Chrome extension that helps you create better Twitter/X content using AI - completely privately on your device.</p>

<p><img src="/assets/images/twaist/welcome.png" alt="TwAIst Welcome Screen" /></p>

<p><em>TwAIst’s welcome screen showing the main features</em></p>

<h3 id="core-features">Core Features</h3>

<p><strong>1. Multi-Step Tweet Composer</strong></p>

<p>This is where things get interesting. Instead of just “generate a tweet,” TwAIst uses a workflow:</p>

<ul>
  <li>Generate ideas for any topic</li>
  <li>Create attention-grabbing hooks from your chosen idea</li>
  <li>Compose full tweets using your selected hook</li>
  <li>Choose from 5 different tones (casual, witty, storytelling, educational, motivational)</li>
</ul>

<p><img src="/assets/images/twaist/composer.png" alt="TwAIst Composer Interface" /></p>

<p><em>The multi-step composer showing the Ideas → Hooks → Tweet workflow</em></p>

<p><strong>2. Smart Reply Generator</strong></p>

<ul>
  <li>Paste any tweet, get contextual replies in 6 different tones</li>
  <li>Upload images and AI analyzes them for image-aware replies</li>
  <li>Refine replies iteratively until they’re perfect</li>
  <li>Tones: friendly, humorous, personal story, thought-provoking, add insight, quick help</li>
</ul>

<p><img src="/assets/images/twaist/reply.png" alt="TwAIst Reply Generator" /></p>

<p><em>Smart reply generator with tone selection</em></p>

<p><strong>3. Template Generator</strong></p>

<p>Six proven tweet formats that go viral:</p>
<ul>
  <li>Contrast (before/after)</li>
  <li>Transformation story</li>
  <li>Unpopular opinion</li>
  <li>Choose your hard</li>
  <li>Hook+list+question</li>
  <li>Struggle→solution</li>
</ul>

<p><em>Credit: Template inspiration from <a href="https://www.youtube.com/watch?v=ccGtI_DJQnQ">Stijn Noorman’s viral tweet formats</a> (<a href="https://twitter.com/stijnnoorman">@stijnnoorman</a>)</em></p>

<p><img src="/assets/images/twaist/templates.png" alt="TwAIst Templates" /></p>

<p><em>Template generator showing different viral tweet formats</em></p>

<p><strong>4. Advanced Features</strong></p>

<ul>
  <li><strong>Multimodal support</strong>: Upload images, AI understands them for contextual content</li>
  <li><strong>Conversation context</strong>: Multi-step workflow remembers previous choices</li>
  <li><strong>AI parameter tuning</strong>: Adjust temperature (0.0-1.0) and top-K (1-100) for creativity control</li>
  <li><strong>Device-optimal defaults</strong>: Loads hardware-specific AI parameters automatically</li>
  <li><strong>Work-in-progress saving</strong>: Auto-saves your drafts locally</li>
  <li><strong>Refine anything</strong>: Iteratively improve any generated content</li>
</ul>

<p><img src="/assets/images/twaist/settings.png" alt="TwAIst Settings" /></p>

<p><em>Settings panel with AI creativity controls</em></p>

<h2 id="getting-started-with-chromes-built-in-ai">Getting Started with Chrome’s Built-in AI</h2>

<p>Before you can use Chrome’s Prompt API, you need to enable it. Here’s how:</p>

<h3 id="1-enable-chrome-built-in-ai">1. Enable Chrome Built-in AI</h3>

<ol>
  <li>Open Chrome and go to <code class="language-plaintext highlighter-rouge">chrome://flags</code></li>
  <li>Search for “Prompt API for Gemini Nano”</li>
  <li>Enable the flag</li>
  <li>Restart Chrome</li>
  <li>Verify Gemini Nano is available by checking <code class="language-plaintext highlighter-rouge">chrome://components</code> and ensuring “Optimization Guide On Device Model” is present</li>
</ol>

<p><strong>Resources:</strong></p>
<ul>
  <li><a href="https://developer.chrome.com/docs/ai/built-in">Chrome AI Documentation</a></li>
  <li><a href="https://developer.chrome.com/docs/ai/built-in-apis">Prompt API Guide</a></li>
  <li><a href="https://googlechromeai2025.devpost.com/resources">Chrome AI Hackathon Resources</a></li>
</ul>

<h3 id="2-basic-prompt-api-usage">2. Basic Prompt API Usage</h3>

<p>The simplest way to use the Prompt API:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Check if the API is available</span>
<span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span> <span class="o">||</span> <span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">throw</span> <span class="k">new</span> <span class="nb">Error</span><span class="p">(</span><span class="dl">'</span><span class="s1">Prompt API not available</span><span class="dl">'</span><span class="p">);</span>
<span class="p">}</span>

<span class="c1">// Create a session</span>
<span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">();</span>

<span class="c1">// Generate text</span>
<span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">(</span><span class="dl">'</span><span class="s1">Write a tweet about Chrome AI</span><span class="dl">'</span><span class="p">);</span>

<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">result</span><span class="p">);</span>

<span class="c1">// Clean up</span>
<span class="nx">session</span><span class="p">.</span><span class="nx">destroy</span><span class="p">();</span>
</code></pre></div></div>

<p>That’s it! But for real applications, you need more control.</p>
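<p>One piece of that extra control is streaming: sessions also expose a <code class="language-plaintext highlighter-rouge">promptStreaming()</code> method that yields the response incrementally instead of making you wait for the full text. The sketch below is my own; note that chunk semantics have changed between Chrome versions (early builds yielded the cumulative text so far, later ones yield deltas), so verify against the current docs. Taking the session as a parameter keeps the consumer easy to test:</p>

```javascript
// Stream a response chunk-by-chunk and report progress via a callback.
// `session` is any object whose promptStreaming() returns an
// async-iterable of text chunks (assumed delta semantics here).
async function streamPrompt(session, promptText, onChunk) {
  let fullText = '';
  const stream = session.promptStreaming(promptText);
  for await (const chunk of stream) {
    fullText += chunk;
    onChunk(chunk); // e.g. append to the UI as text arrives
  }
  return fullText;
}
```

<p>For tweet-length output this barely matters, but for longer generations it turns a multi-second freeze into visible progress.</p>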

<h2 id="the-architecture-how-i-built-twaist">The Architecture: How I Built TwAIst</h2>

<h3 id="modular-design">Modular Design</h3>

<p>I structured TwAIst to be modular from day one. Each feature lives in its own module:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>TwAIst/
├── popup.js              # Main orchestrator
├── modules/
│   ├── composer.js       # Multi-step tweet composer
│   ├── reply-generator.js # Smart reply generator
│   ├── template-generator.js # Template system
│   └── image-handler.js  # Multimodal image processing
└── utils/
    └── ai-manager.js     # Central AI session manager
</code></pre></div></div>

<h3 id="the-ai-manager-pattern">The AI Manager Pattern</h3>

<p>Instead of creating AI sessions everywhere, I built a central <code class="language-plaintext highlighter-rouge">ai-manager.js</code> that handles all AI interactions. This became the single source of truth for AI operations.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// utils/ai-manager.js</span>
<span class="kd">class</span> <span class="nx">AIManager</span> <span class="p">{</span>
  <span class="kd">constructor</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
  <span class="p">}</span>

  <span class="k">async</span> <span class="nx">initPrompt</span><span class="p">(</span><span class="nx">systemPrompt</span><span class="p">,</span> <span class="nx">options</span> <span class="o">=</span> <span class="p">{})</span> <span class="p">{</span>
    <span class="k">try</span> <span class="p">{</span>
      <span class="c1">// Check API availability</span>
      <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span> <span class="o">||</span> <span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">throw</span> <span class="k">new</span> <span class="nb">Error</span><span class="p">(</span><span class="dl">'</span><span class="s1">Prompt API not available</span><span class="dl">'</span><span class="p">);</span>
      <span class="p">}</span>

      <span class="c1">// Clean up any existing session</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">cleanup</span><span class="p">();</span>

      <span class="c1">// Create abort controller for cancellation</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">AbortController</span><span class="p">();</span>

      <span class="c1">// Create session with parameters</span>
      <span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">({</span>
        <span class="nx">systemPrompt</span><span class="p">,</span>
        <span class="na">temperature</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">temperature</span> <span class="o">||</span> <span class="mf">0.7</span><span class="p">,</span>
        <span class="na">topK</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">topK</span> <span class="o">||</span> <span class="mi">40</span><span class="p">,</span>
        <span class="na">signal</span><span class="p">:</span> <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span><span class="p">.</span><span class="nx">signal</span>
      <span class="p">});</span>

      <span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span> <span class="o">=</span> <span class="nx">session</span><span class="p">;</span>

      <span class="k">return</span> <span class="p">{</span> <span class="nx">session</span><span class="p">,</span> <span class="na">abortController</span><span class="p">:</span> <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="p">};</span>
    <span class="p">}</span> <span class="k">catch</span> <span class="p">(</span><span class="nx">error</span><span class="p">)</span> <span class="p">{</span>
      <span class="nx">console</span><span class="p">.</span><span class="nx">error</span><span class="p">(</span><span class="dl">'</span><span class="s1">Failed to init AI session:</span><span class="dl">'</span><span class="p">,</span> <span class="nx">error</span><span class="p">);</span>
      <span class="k">throw</span> <span class="nx">error</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>

  <span class="k">async</span> <span class="nx">initPromptWithContext</span><span class="p">(</span><span class="nx">options</span> <span class="o">=</span> <span class="p">{})</span> <span class="p">{</span>
    <span class="k">try</span> <span class="p">{</span>
      <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span> <span class="o">||</span> <span class="o">!</span><span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">throw</span> <span class="k">new</span> <span class="nb">Error</span><span class="p">(</span><span class="dl">'</span><span class="s1">Prompt API not available</span><span class="dl">'</span><span class="p">);</span>
      <span class="p">}</span>

      <span class="k">this</span><span class="p">.</span><span class="nx">cleanup</span><span class="p">();</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">AbortController</span><span class="p">();</span>

      <span class="c1">// Create session with conversation history</span>
      <span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">({</span>
        <span class="na">systemPrompt</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">systemPrompt</span> <span class="o">||</span> <span class="dl">''</span><span class="p">,</span>
        <span class="na">initialPrompts</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">conversationHistory</span> <span class="o">||</span> <span class="p">[],</span>
        <span class="na">temperature</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">temperature</span> <span class="o">||</span> <span class="mf">0.7</span><span class="p">,</span>
        <span class="na">topK</span><span class="p">:</span> <span class="nx">options</span><span class="p">.</span><span class="nx">topK</span> <span class="o">||</span> <span class="mi">40</span><span class="p">,</span>
        <span class="na">signal</span><span class="p">:</span> <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span><span class="p">.</span><span class="nx">signal</span>
      <span class="p">});</span>

      <span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span> <span class="o">=</span> <span class="nx">session</span><span class="p">;</span>

      <span class="k">return</span> <span class="p">{</span> <span class="nx">session</span><span class="p">,</span> <span class="na">abortController</span><span class="p">:</span> <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="p">};</span>
    <span class="p">}</span> <span class="k">catch</span> <span class="p">(</span><span class="nx">error</span><span class="p">)</span> <span class="p">{</span>
      <span class="nx">console</span><span class="p">.</span><span class="nx">error</span><span class="p">(</span><span class="dl">'</span><span class="s1">Failed to init AI with context:</span><span class="dl">'</span><span class="p">,</span> <span class="nx">error</span><span class="p">);</span>
      <span class="k">throw</span> <span class="nx">error</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>

  <span class="nx">cancel</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">abortController</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span><span class="p">.</span><span class="nx">abort</span><span class="p">();</span>
    <span class="p">}</span>
  <span class="p">}</span>

  <span class="nx">cleanup</span><span class="p">()</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span><span class="p">.</span><span class="nx">destroy</span><span class="p">();</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">currentSession</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="p">}</span>
    <span class="k">if</span> <span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">abortController</span><span class="p">)</span> <span class="p">{</span>
      <span class="k">this</span><span class="p">.</span><span class="nx">abortController</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">export</span> <span class="kd">const</span> <span class="nx">aiManager</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">AIManager</span><span class="p">();</span>
</code></pre></div></div>

<p><strong>Why this matters:</strong></p>
<ul>
  <li>Single place to manage session lifecycle</li>
  <li>Easy cancellation with AbortSignals</li>
  <li>Proper cleanup to avoid memory leaks</li>
  <li>Centralized error handling</li>
</ul>
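<p>One refinement I’d add on top of this pattern (my own sketch, not code from the TwAIst repo) is a wrapper that guarantees <code class="language-plaintext highlighter-rouge">cleanup()</code> runs even when a prompt throws, so a failed generation can never leak a session:</p>

```javascript
// Run `work(session)` against a fresh session and always clean up,
// mirroring AIManager's initPrompt()/cleanup() contract above.
async function withSession(manager, systemPrompt, options, work) {
  const { session } = await manager.initPrompt(systemPrompt, options);
  try {
    return await work(session);
  } finally {
    manager.cleanup(); // destroys the session even if work() threw
  }
}
```

<p>Callers then write <code class="language-plaintext highlighter-rouge">withSession(aiManager, systemPrompt, {}, (s) =&gt; s.prompt(text))</code> and never touch the lifecycle directly.</p>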

<h2 id="the-killer-feature-multi-step-ai-workflow">The Killer Feature: Multi-Step AI Workflow</h2>

<p>The multi-step composer is what makes TwAIst feel like collaborating with AI rather than just one-shot generation. Here’s how it works:</p>

<h3 id="step-1-generate-ideas">Step 1: Generate Ideas</h3>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">generateIdeas</span><span class="p">(</span><span class="nx">topic</span><span class="p">)</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">systemPrompt</span> <span class="o">=</span> <span class="s2">`You are a creative brainstorming assistant.
Generate 5 unique tweet ideas about the given topic.
Each idea should be specific, interesting, and tweetable.`</span><span class="p">;</span>

  <span class="kd">const</span> <span class="p">{</span> <span class="nx">session</span> <span class="p">}</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">aiManager</span><span class="p">.</span><span class="nx">initPrompt</span><span class="p">(</span><span class="nx">systemPrompt</span><span class="p">,</span> <span class="p">{</span>
    <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.8</span><span class="p">,</span> <span class="c1">// Higher creativity for ideation</span>
    <span class="na">topK</span><span class="p">:</span> <span class="mi">50</span>
  <span class="p">});</span>

  <span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">(</span><span class="s2">`Topic: </span><span class="p">${</span><span class="nx">topic</span><span class="p">}</span><span class="s2">`</span><span class="p">);</span>

  <span class="k">return</span> <span class="nx">result</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
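<p>The model returns the five ideas as free-form text, usually a numbered list. A small parser (my own helper, not part of the extension’s published code) turns that into an array the UI can render as selectable options:</p>

```javascript
// Split the model's numbered/bulleted list output into clean idea strings.
// Tolerates "1. ", "1) ", "- " and "* " prefixes and skips blank lines.
function parseIdeaList(text) {
  return text
    .split('\n')
    .map((line) => line.replace(/^\s*(?:\d+[.)]|[-*])\s*/, '').trim())
    .filter((line) => line.length > 0);
}
```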

<h3 id="step-2-generate-hooks-with-context">Step 2: Generate Hooks WITH Context</h3>

<p>This is where conversation context becomes crucial. The AI needs to “remember” which idea the user selected:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">generateHooks</span><span class="p">(</span><span class="nx">selectedIdea</span><span class="p">)</span> <span class="p">{</span>
  <span class="c1">// Build conversation history</span>
  <span class="kd">const</span> <span class="nx">conversationHistory</span> <span class="o">=</span> <span class="p">[</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">system</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="dl">'</span><span class="s1">You are a hook-writing expert for social media.</span><span class="dl">'</span>
    <span class="p">},</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">user</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="s2">`I want to write about: </span><span class="p">${</span><span class="nx">selectedIdea</span><span class="p">}</span><span class="s2">`</span>
    <span class="p">}</span>
  <span class="p">];</span>

  <span class="kd">const</span> <span class="p">{</span> <span class="nx">session</span> <span class="p">}</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">aiManager</span><span class="p">.</span><span class="nx">initPromptWithContext</span><span class="p">({</span>
    <span class="nx">conversationHistory</span><span class="p">,</span>
    <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.7</span><span class="p">,</span>
    <span class="na">topK</span><span class="p">:</span> <span class="mi">40</span>
  <span class="p">});</span>

  <span class="kd">const</span> <span class="nx">hooks</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">(</span>
    <span class="dl">'</span><span class="s1">Generate 5 attention-grabbing hooks for this idea</span><span class="dl">'</span>
  <span class="p">);</span>

  <span class="k">return</span> <span class="nx">hooks</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="step-3-compose-tweet">Step 3: Compose Tweet</h3>

<p>Same pattern - carry the context forward:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">composeTweet</span><span class="p">(</span><span class="nx">selectedIdea</span><span class="p">,</span> <span class="nx">selectedHook</span><span class="p">,</span> <span class="nx">tone</span><span class="p">)</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">conversationHistory</span> <span class="o">=</span> <span class="p">[</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">system</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="s2">`You are a tweet composer. Write in </span><span class="p">${</span><span class="nx">tone</span><span class="p">}</span><span class="s2"> tone.`</span>
    <span class="p">},</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">user</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="s2">`Idea: </span><span class="p">${</span><span class="nx">selectedIdea</span><span class="p">}</span><span class="s2">`</span>
    <span class="p">},</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">assistant</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="s2">`Hook: </span><span class="p">${</span><span class="nx">selectedHook</span><span class="p">}</span><span class="s2">`</span>
    <span class="p">}</span>
  <span class="p">];</span>

  <span class="kd">const</span> <span class="p">{</span> <span class="nx">session</span> <span class="p">}</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">aiManager</span><span class="p">.</span><span class="nx">initPromptWithContext</span><span class="p">({</span>
    <span class="nx">conversationHistory</span><span class="p">,</span>
    <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.6</span><span class="p">,</span>
    <span class="na">topK</span><span class="p">:</span> <span class="mi">30</span>
  <span class="p">});</span>

  <span class="kd">const</span> <span class="nx">tweet</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">(</span>
    <span class="dl">'</span><span class="s1">Write a complete tweet using this hook and idea</span><span class="dl">'</span>
  <span class="p">);</span>

  <span class="k">return</span> <span class="nx">tweet</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<p><strong>The magic:</strong> Each step builds on the last. The AI “remembers” the context, creating a coherent workflow that feels natural.</p>
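<p>Concretely, the history passed to <code class="language-plaintext highlighter-rouge">initPromptWithContext</code> grows by one exchange per step. A helper like this (a sketch of the idea; the real module wires choices into its own state) keeps the shape consistent across steps:</p>

```javascript
// Append one completed step to the conversation history without
// mutating the previous array, so earlier steps stay replayable.
function appendStep(history, userContent, assistantContent) {
  return [
    ...history,
    { role: 'user', content: userContent },
    { role: 'assistant', content: assistantContent }
  ];
}
```

<p>Step 3’s history is then built from step 2’s by a single call instead of a hand-written array.</p>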

<h2 id="multimodal-ai-working-with-images">Multimodal AI: Working with Images</h2>

<p>Getting image analysis working was genuinely challenging, but the results are worth it. You can upload a screenshot and TwAIst generates contextual tweets about it.</p>

<h3 id="how-to-process-images">How to Process Images</h3>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">analyzeImage</span><span class="p">(</span><span class="nx">imageFile</span><span class="p">,</span> <span class="nx">prompt</span><span class="p">)</span> <span class="p">{</span>
  <span class="c1">// Read image as Blob</span>
  <span class="kd">const</span> <span class="nx">imageBlob</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">readFileAsBlob</span><span class="p">(</span><span class="nx">imageFile</span><span class="p">);</span>

  <span class="kd">const</span> <span class="nx">systemPrompt</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">You are an image analysis expert who creates engaging social media content.</span><span class="dl">'</span><span class="p">;</span>

  <span class="kd">const</span> <span class="p">{</span> <span class="nx">session</span> <span class="p">}</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">aiManager</span><span class="p">.</span><span class="nx">initPrompt</span><span class="p">(</span><span class="nx">systemPrompt</span><span class="p">,</span> <span class="p">{</span>
    <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.7</span>
  <span class="p">});</span>

  <span class="c1">// Send multimodal prompt</span>
  <span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">([</span>
    <span class="p">{</span>
      <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">user</span><span class="dl">'</span><span class="p">,</span>
      <span class="na">content</span><span class="p">:</span> <span class="p">[</span>
        <span class="p">{</span> <span class="na">type</span><span class="p">:</span> <span class="dl">'</span><span class="s1">text</span><span class="dl">'</span><span class="p">,</span> <span class="na">value</span><span class="p">:</span> <span class="nx">prompt</span> <span class="p">},</span>
        <span class="p">{</span> <span class="na">type</span><span class="p">:</span> <span class="dl">'</span><span class="s1">image</span><span class="dl">'</span><span class="p">,</span> <span class="na">value</span><span class="p">:</span> <span class="nx">imageBlob</span> <span class="p">}</span>
      <span class="p">]</span>
    <span class="p">}</span>
  <span class="p">]);</span>

  <span class="k">return</span> <span class="nx">result</span><span class="p">;</span>
<span class="p">}</span>

<span class="kd">function</span> <span class="nx">readFileAsBlob</span><span class="p">(</span><span class="nx">file</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">return</span> <span class="k">new</span> <span class="nb">Promise</span><span class="p">((</span><span class="nx">resolve</span><span class="p">,</span> <span class="nx">reject</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">reader</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">FileReader</span><span class="p">();</span>
    <span class="nx">reader</span><span class="p">.</span><span class="nx">onload</span> <span class="o">=</span> <span class="p">(</span><span class="nx">e</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">resolve</span><span class="p">(</span><span class="k">new</span> <span class="nx">Blob</span><span class="p">([</span><span class="nx">e</span><span class="p">.</span><span class="nx">target</span><span class="p">.</span><span class="nx">result</span><span class="p">]));</span>
    <span class="nx">reader</span><span class="p">.</span><span class="nx">onerror</span> <span class="o">=</span> <span class="nx">reject</span><span class="p">;</span>
    <span class="nx">reader</span><span class="p">.</span><span class="nx">readAsArrayBuffer</span><span class="p">(</span><span class="nx">file</span><span class="p">);</span>
  <span class="p">});</span>
<span class="p">}</span>
</code></pre></div></div>

<p><strong>Important notes:</strong></p>
<ul>
  <li>Image processing takes 5-10 seconds - always show loading indicators</li>
  <li>Set user expectations: “Analyzing image… (this may take 5-10 seconds)”</li>
  <li>The Prompt API accepts images as Blobs</li>
  <li>Works best with clear, high-contrast images</li>
</ul>
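<p>Since a bad upload wastes 5-10 seconds of model time, it's worth pre-validating the file before prompting. Here's a small sketch of that check — <code class="language-plaintext highlighter-rouge">validateImageFile</code> and the size/type limits are my own choices, not documented Prompt API constraints:</p>

```javascript
// Pre-flight check before handing an image to the Prompt API.
// The MIME types and 10 MB cap are conservative assumptions,
// not limits documented by the API.
const SUPPORTED_TYPES = ['image/png', 'image/jpeg', 'image/webp'];
const MAX_BYTES = 10 * 1024 * 1024;

function validateImageFile(file) {
  if (!SUPPORTED_TYPES.includes(file.type)) {
    return { ok: false, reason: `Unsupported type: ${file.type}` };
  }
  if (file.size > MAX_BYTES) {
    return { ok: false, reason: 'Image too large - try a smaller file' };
  }
  return { ok: true };
}
```

<p>Rejecting early means the user gets feedback immediately instead of after a long "Analyzing image…" spinner that ends in an error.</p>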

<h2 id="prompt-engineering-making-ai-sound-human">Prompt Engineering: Making AI Sound Human</h2>

<p>My first attempts at AI-generated tweets were… painful. They screamed “I WAS WRITTEN BY AI!” - lots of “Haha, okay so…” and “Bold move!” everywhere.</p>

<h3 id="what-didnt-work">What Didn’t Work</h3>

<ul>
  <li>❌ <strong>Longer prompts</strong>: Made it worse</li>
  <li>❌ <strong>Asking for “casual tone”</strong>: Still sounded formal</li>
  <li>❌ <strong>Positive instructions only</strong>: Too vague</li>
</ul>

<h3 id="what-actually-worked">What Actually Worked</h3>

<p>✅ <strong>Explicit AVOID lists</strong>: Tell AI what NOT to do</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">systemPrompt</span> <span class="o">=</span> <span class="s2">`You are a friendly reply generator.

WRITING STYLE:
- Reply naturally like texting a friend
- Jump straight to your reaction
- Lowercase is OK
- Sentence fragments are OK
- Be specific, not vague

AVOID THESE PHRASES:
- "totally"
- "this is so true"
- "I agree"
- "haha okay"
- "bold move"
- "interesting take"
- "fair point"

Keep it under 280 characters.`</span><span class="p">;</span>
</code></pre></div></div>

<p><strong>The insight:</strong> Telling AI what NOT to do is more effective than telling it what TO do.</p>
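<p>The AVOID list can also be enforced <em>after</em> generation. Even with a good system prompt the model occasionally slips, so a cheap post-filter lets you silently regenerate instead of shipping a cliché. This is a sketch with my own function name; the phrase list mirrors the system prompt above:</p>

```javascript
// Cheap post-generation style check. AVOID_PHRASES mirrors the
// system prompt's list; passesStyleCheck is my own helper name.
const AVOID_PHRASES = [
  'totally', 'this is so true', 'i agree',
  'haha okay', 'bold move', 'interesting take', 'fair point'
];

function passesStyleCheck(reply) {
  if (reply.length > 280) return false; // Twitter's hard limit
  const lower = reply.toLowerCase();
  return !AVOID_PHRASES.some((phrase) => lower.includes(phrase));
}
```

<p>If a reply fails the check, prompt again - a retry costs nothing when inference is on-device.</p>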

<h3 id="tone-specific-prompts">Tone-Specific Prompts</h3>

<p>Each tone needs its own carefully crafted system prompt:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">tonePrompts</span> <span class="o">=</span> <span class="p">{</span>
  <span class="na">friendly</span><span class="p">:</span> <span class="s2">`Reply warmly and supportively.
AVOID: "totally", "so true", "I agree"
Jump straight to reaction.`</span><span class="p">,</span>

  <span class="na">humorous</span><span class="p">:</span> <span class="s2">`Reply with wit and humor.
AVOID: "haha", "lol", forced puns
Be clever, not try-hard.`</span><span class="p">,</span>

  <span class="na">personal_story</span><span class="p">:</span> <span class="s2">`Share a brief relevant personal experience.
AVOID: "this reminds me", "funny story"
Start with the story.`</span><span class="p">,</span>

  <span class="na">thought_provoking</span><span class="p">:</span> <span class="s2">`Ask an insightful follow-up question.
AVOID: "interesting point", "makes you think"
Go straight to the question.`</span><span class="p">,</span>

  <span class="na">add_insight</span><span class="p">:</span> <span class="s2">`Add a valuable new perspective.
AVOID: "also", "additionally", "another thing"
State the insight directly.`</span><span class="p">,</span>

  <span class="na">quick_help</span><span class="p">:</span> <span class="s2">`Offer practical advice or resources.
AVOID: "you should", "try this"
Give direct help.`</span>
<span class="p">};</span>
</code></pre></div></div>
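<p>Wiring a tone choice into session creation is then a one-line lookup. A minimal sketch, assuming the <code class="language-plaintext highlighter-rouge">tonePrompts</code> object above; <code class="language-plaintext highlighter-rouge">buildSessionOptions</code> and the friendly fallback are my own naming, not part of the Prompt API:</p>

```javascript
// Map a user-selected tone to session options. Falls back to the
// friendly tone for unknown keys so a bad value never crashes the UI.
function buildSessionOptions(tonePrompts, tone) {
  const systemPrompt = tonePrompts[tone] || tonePrompts.friendly;
  return { systemPrompt, temperature: 0.7, topK: 40 };
}
```

<p>The returned object can be passed straight to <code class="language-plaintext highlighter-rouge">window.ai.languageModel.create(options)</code>.</p>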

<h2 id="chrome-extension-setup">Chrome Extension Setup</h2>

<p>TwAIst is built as a Manifest V3 Chrome extension. Here’s the basic structure:</p>

<h3 id="manifestjson">manifest.json</h3>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"manifest_version"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
  </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"TwAIst - AI Twitter Assistant"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"AI-powered Twitter assistant running 100% on-device"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"permissions"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="s2">"storage"</span><span class="p">,</span><span class="w">
    </span><span class="s2">"activeTab"</span><span class="w">
  </span><span class="p">],</span><span class="w">
  </span><span class="nl">"action"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"default_popup"</span><span class="p">:</span><span class="w"> </span><span class="s2">"popup.html"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"default_icon"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
      </span><span class="nl">"16"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon16.png"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"48"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon48.png"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"128"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon128.png"</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"icons"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"16"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon16.png"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"48"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon48.png"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"128"</span><span class="p">:</span><span class="w"> </span><span class="s2">"icons/icon128.png"</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<h3 id="project-structure">Project Structure</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>TwAIst/
├── manifest.json
├── popup.html          # Main UI
├── popup.js           # Main orchestrator
├── styles.css         # Styles
├── modules/
│   ├── composer.js
│   ├── reply-generator.js
│   ├── template-generator.js
│   └── image-handler.js
├── utils/
│   └── ai-manager.js
└── icons/
    ├── icon16.png
    ├── icon48.png
    └── icon128.png
</code></pre></div></div>
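<p>To give a feel for the structure, here's a rough sketch of what an abstraction like <code class="language-plaintext highlighter-rouge">utils/ai-manager.js</code> can look like - the actual file in the repo may differ. The session factory is injected so the class stays testable outside Chrome; in the extension you would pass <code class="language-plaintext highlighter-rouge">() =&gt; window.ai.languageModel.create(options)</code>:</p>

```javascript
// Sketch of an AI manager: lazy session creation plus one place to
// clean up. createFn is injected (in Chrome, a closure over
// window.ai.languageModel.create) so modules never touch the raw API.
class AIManager {
  constructor(createFn) {
    this.createFn = createFn;
    this.session = null;
  }

  async prompt(text) {
    if (!this.session) {
      // Lazy: the session is only created after a user action
      this.session = await this.createFn();
    }
    return this.session.prompt(text);
  }

  destroy() {
    if (this.session) {
      this.session.destroy();
      this.session = null;
    }
  }
}
```

<p>Each feature module (<code class="language-plaintext highlighter-rouge">composer.js</code>, <code class="language-plaintext highlighter-rouge">reply-generator.js</code>, …) can then depend on this one seam instead of sprinkling <code class="language-plaintext highlighter-rouge">window.ai</code> calls everywhere.</p>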

<h2 id="key-lessons-learned">Key Lessons Learned</h2>

<h3 id="1-user-activation-requirement">1. User Activation Requirement</h3>

<p><strong>Problem:</strong> Chrome requires user interaction before creating AI sessions.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// ❌ This will FAIL</span>
<span class="k">async</span> <span class="kd">function</span> <span class="nx">init</span><span class="p">()</span> <span class="p">{</span>
  <span class="c1">// Trying to create session on page load</span>
  <span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">();</span>
<span class="p">}</span>

<span class="nx">init</span><span class="p">();</span> <span class="c1">// Error: requires user activation</span>
</code></pre></div></div>

<p><strong>Solution:</strong> Always create sessions inside click handlers:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// ✅ This works</span>
<span class="nx">button</span><span class="p">.</span><span class="nx">addEventListener</span><span class="p">(</span><span class="dl">'</span><span class="s1">click</span><span class="dl">'</span><span class="p">,</span> <span class="k">async</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">();</span>
  <span class="c1">// Use session...</span>
<span class="p">});</span>
</code></pre></div></div>

<h3 id="2-set-user-expectations">2. Set User Expectations</h3>

<p>Image processing takes time. Don’t make users guess:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">async</span> <span class="kd">function</span> <span class="nx">analyzeImage</span><span class="p">(</span><span class="nx">imageBlob</span><span class="p">)</span> <span class="p">{</span>
  <span class="c1">// Show specific loading message</span>
  <span class="nx">showStatus</span><span class="p">(</span><span class="dl">'</span><span class="s1">Analyzing image... (this may take 5-10 seconds)</span><span class="dl">'</span><span class="p">);</span>

  <span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">([...]);</span>

  <span class="nx">hideStatus</span><span class="p">();</span>
  <span class="k">return</span> <span class="nx">result</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="3-temperature-and-top-k-matter">3. Temperature and Top-K Matter</h3>

<p>Different tasks need different parameters:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Ideation: high creativity</span>
<span class="p">{</span> <span class="nl">temperature</span><span class="p">:</span> <span class="mf">0.8</span><span class="p">,</span> <span class="nx">topK</span><span class="p">:</span> <span class="mi">50</span> <span class="p">}</span>

<span class="c1">// Hook writing: moderate creativity</span>
<span class="p">{</span> <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.7</span><span class="p">,</span> <span class="na">topK</span><span class="p">:</span> <span class="mi">40</span> <span class="p">}</span>

<span class="c1">// Final composition: more focused</span>
<span class="p">{</span> <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.6</span><span class="p">,</span> <span class="na">topK</span><span class="p">:</span> <span class="mi">30</span> <span class="p">}</span>

<span class="c1">// Analytical tasks: low creativity</span>
<span class="p">{</span> <span class="na">temperature</span><span class="p">:</span> <span class="mf">0.3</span><span class="p">,</span> <span class="na">topK</span><span class="p">:</span> <span class="mi">20</span> <span class="p">}</span>
</code></pre></div></div>
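<p>In practice I find it helps to centralize these numbers rather than scattering literals across modules. A small sketch - the task names and lookup function are my own, only the values come from the table above:</p>

```javascript
// One place for generation parameters, keyed by task type.
// Task names are illustrative; the temperature/topK pairs match
// the settings listed above.
const TASK_PARAMS = {
  ideation:    { temperature: 0.8, topK: 50 },
  hook:        { temperature: 0.7, topK: 40 },
  composition: { temperature: 0.6, topK: 30 },
  analysis:    { temperature: 0.3, topK: 20 }
};

function paramsForTask(task) {
  // Unknown tasks fall back to the focused composition settings
  return TASK_PARAMS[task] || TASK_PARAMS.composition;
}
```

<p>Tuning then becomes editing one table instead of hunting through call sites.</p>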

<h3 id="4-conversation-context-is-powerful">4. Conversation Context is Powerful</h3>

<p>The <code class="language-plaintext highlighter-rouge">initialPrompts</code> parameter makes multi-step workflows feel natural:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">const</span> <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">({</span>
  <span class="na">systemPrompt</span><span class="p">:</span> <span class="dl">'</span><span class="s1">You are a helpful assistant</span><span class="dl">'</span><span class="p">,</span>
  <span class="na">initialPrompts</span><span class="p">:</span> <span class="p">[</span>
    <span class="p">{</span> <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">user</span><span class="dl">'</span><span class="p">,</span> <span class="na">content</span><span class="p">:</span> <span class="dl">'</span><span class="s1">Previous step context here</span><span class="dl">'</span> <span class="p">},</span>
    <span class="p">{</span> <span class="na">role</span><span class="p">:</span> <span class="dl">'</span><span class="s1">assistant</span><span class="dl">'</span><span class="p">,</span> <span class="na">content</span><span class="p">:</span> <span class="dl">'</span><span class="s1">AI response from previous step</span><span class="dl">'</span> <span class="p">}</span>
  <span class="p">]</span>
<span class="p">});</span>
</code></pre></div></div>
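<p>Threading context through a multi-step workflow then reduces to appending each step's exchange to a history array. A sketch with my own helper name; the <code class="language-plaintext highlighter-rouge">{ role, content }</code> shape matches the Prompt API example above:</p>

```javascript
// Append one user/assistant exchange to the running history, which
// becomes the next session's initialPrompts. Returns a new array so
// earlier steps' history is never mutated.
function appendExchange(history, userText, assistantText) {
  return [
    ...history,
    { role: 'user', content: userText },
    { role: 'assistant', content: assistantText }
  ];
}
```

<p>After step one you'd call <code class="language-plaintext highlighter-rouge">appendExchange([], ideaPrompt, ideaResponse)</code> and feed the result to step two's <code class="language-plaintext highlighter-rouge">initialPrompts</code>.</p>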

<h3 id="5-always-clean-up-sessions">5. Always Clean Up Sessions</h3>

<p>Memory leaks are real:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">class</span> <span class="nx">ComponentWithAI</span> <span class="p">{</span>
  <span class="k">async</span> <span class="nx">generateContent</span><span class="p">()</span> <span class="p">{</span>
    <span class="kd">let</span> <span class="nx">session</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
    <span class="k">try</span> <span class="p">{</span>
      <span class="nx">session</span> <span class="o">=</span> <span class="k">await</span> <span class="nb">window</span><span class="p">.</span><span class="nx">ai</span><span class="p">.</span><span class="nx">languageModel</span><span class="p">.</span><span class="nx">create</span><span class="p">();</span>
      <span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">session</span><span class="p">.</span><span class="nx">prompt</span><span class="p">(</span><span class="dl">'</span><span class="s1">...</span><span class="dl">'</span><span class="p">);</span>
      <span class="k">return</span> <span class="nx">result</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">finally</span> <span class="p">{</span>
      <span class="c1">// ALWAYS clean up, even on errors</span>
      <span class="k">if</span> <span class="p">(</span><span class="nx">session</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">session</span><span class="p">.</span><span class="nx">destroy</span><span class="p">();</span>
      <span class="p">}</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
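<p>Once this try/finally pattern appears in more than one place, it's worth factoring into a wrapper so no call site can forget the <code class="language-plaintext highlighter-rouge">destroy()</code>. The name <code class="language-plaintext highlighter-rouge">withSession</code> is mine; in Chrome, <code class="language-plaintext highlighter-rouge">createFn</code> would be <code class="language-plaintext highlighter-rouge">() =&gt; window.ai.languageModel.create()</code>:</p>

```javascript
// Guarantee cleanup around any session-using work, on success
// and on errors alike. createFn builds the session; work receives
// it and returns the result.
async function withSession(createFn, work) {
  let session = null;
  try {
    session = await createFn();
    return await work(session);
  } finally {
    if (session) session.destroy(); // always runs, even after a throw
  }
}
```

<p>Usage collapses to a one-liner: <code class="language-plaintext highlighter-rouge">const tweet = await withSession(createFn, (s) =&gt; s.prompt(text));</code></p>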

<h2 id="the-privacy-first-advantage">The Privacy-First Advantage</h2>

<p>Running AI on-device isn’t just about privacy - it unlocks better UX:</p>

<p><strong>Benefits:</strong></p>
<ul>
  <li>✅ No network round-trips = lower latency</li>
  <li>✅ No rate limits = unlimited usage</li>
  <li>✅ Works offline</li>
  <li>✅ No external API costs</li>
  <li>✅ User data never leaves their machine</li>
</ul>

<p><strong>The insight:</strong> Privacy-first architecture actually creates better user experiences. In a world of constant API calls and loading spinners, on-device AI feels genuinely fast.</p>

<h2 id="whats-next-for-twaist">What’s Next for TwAIst</h2>

<p>I’m actively developing new features:</p>

<p><strong>Short-term:</strong></p>
<ul>
  <li>Thread unroller: paste thread URL, get intelligent summary</li>
  <li>Sentiment analysis: suggest optimal reply tone</li>
  <li>Custom tone training: analyze user’s tweets to create personalized tone</li>
  <li>A/B testing: generate variations, predict performance</li>
</ul>

<p><strong>When APIs Stabilize:</strong></p>
<ul>
  <li>Translation API: auto-translate while preserving tone</li>
  <li>Summarizer API: better thread summarization</li>
  <li>Rewriter API: nuanced style transformations</li>
</ul>

<p><strong>Long-term:</strong></p>
<ul>
  <li>Voice input: Speech Recognition API + Prompt API</li>
  <li>Analytics dashboard: track AI-generated tweet performance</li>
  <li>Engagement predictor: score tweets before posting</li>
</ul>

<h2 id="try-twaist">Try TwAIst</h2>

<p>Ready to transform your Twitter workflow with privacy-first AI?</p>

<p><strong>GitHub Repository:</strong> <a href="https://github.com/rshankras/TwAIst-final">TwAIst</a></p>

<p><strong>Installation:</strong></p>
<ol>
  <li>Clone the repository</li>
  <li>Enable Chrome Built-in AI flags (see instructions above)</li>
  <li>Load unpacked extension in Chrome</li>
  <li>Start creating better content!</li>
</ol>

<h2 id="for-developers-getting-started-with-chrome-built-in-ai">For Developers: Getting Started with Chrome Built-in AI</h2>

<p>If you’re new to Chrome’s built-in AI, here’s your roadmap:</p>

<ol>
  <li><strong>Read the docs:</strong> <a href="https://developer.chrome.com/docs/ai/built-in">Chrome AI Documentation</a></li>
  <li><strong>Enable the flags:</strong> <code class="language-plaintext highlighter-rouge">chrome://flags</code> → “Prompt API for Gemini Nano”</li>
  <li><strong>Start simple:</strong> Basic prompt/response before complex workflows</li>
  <li><strong>Experiment with parameters:</strong> Temperature and top-K dramatically change outputs</li>
  <li><strong>Handle errors gracefully:</strong> Not all devices support Gemini Nano yet</li>
  <li><strong>Join the community:</strong> <a href="https://googlechromeai2025.devpost.com/">Chrome AI Hackathon</a></li>
</ol>
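<p>For point 5, a feature-detection guard keeps the extension from crashing on unsupported devices. This is a hedged sketch - the availability surface has changed across Chrome versions, so verify the property paths against the current docs; the global object is injected here only to keep the check testable:</p>

```javascript
// Graceful degradation: detect the Prompt API before offering AI
// features. The window.ai.languageModel path is an assumption to
// check against current Chrome documentation.
function detectPromptAPI(globalObj) {
  const ai = globalObj.ai;
  if (!ai || !ai.languageModel) {
    return {
      available: false,
      reason: 'Prompt API not exposed - check Chrome version and flags'
    };
  }
  return { available: true };
}
```

<p>In the extension you'd call <code class="language-plaintext highlighter-rouge">detectPromptAPI(window)</code> at popup open and show the flag-setup instructions instead of the AI buttons when it fails.</p>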

<h2 id="resources">Resources</h2>

<ul>
  <li><a href="https://developer.chrome.com/docs/ai/built-in">Chrome Built-in AI Documentation</a></li>
  <li><a href="https://developer.chrome.com/docs/ai/built-in-apis">Prompt API Guide</a></li>
  <li><a href="https://googlechromeai2025.devpost.com/resources">Chrome AI Hackathon Resources</a></li>
  <li><a href="https://github.com/rshankras/TwAIst-final">TwAIst GitHub Repository</a></li>
  <li><a href="https://www.youtube.com/watch?v=UcNIQ6FXJRI">TwAIst Demo Video</a></li>
</ul>

<h2 id="built-with-ai-assistance">Built With AI Assistance</h2>

<p>Full transparency: I built TwAIst using <a href="https://claude.com/claude-code">Claude Code</a>, Anthropic’s AI-powered development tool.</p>

<p>This was actually a fascinating meta-experience - using AI to build an AI-powered application. Claude Code helped with:</p>
<ul>
  <li>Architecture decisions and modular design patterns</li>
  <li>Writing the AI Manager abstraction layer</li>
  <li>Implementing the multi-step workflow with conversation context</li>
  <li>Debugging tricky session lifecycle issues</li>
  <li>Crafting effective system prompts and prompt engineering</li>
</ul>

<p>The irony isn’t lost on me: I used an AI coding assistant to build a tool that helps people create better AI-generated content. AI building AI tools. We’re living in interesting times.</p>

<p>If you’re building Chrome extensions or working with new APIs, I highly recommend trying Claude Code. It significantly accelerated development and helped me navigate the Prompt API documentation more effectively.</p>

<h2 id="acknowledgments">Acknowledgments</h2>

<ul>
  <li><strong>Template inspiration:</strong> <a href="https://twitter.com/stijnnoorman">Stijn Noorman</a> for his excellent <a href="https://www.youtube.com/watch?v=ccGtI_DJQnQ">viral tweet formats video</a></li>
  <li><strong>Chrome AI Team:</strong> For building the Prompt API and making on-device AI accessible</li>
  <li><strong>Devpost:</strong> For hosting the Chrome Built-in AI Hackathon</li>
</ul>

<h2 id="conclusion">Conclusion</h2>

<p>Building TwAIst taught me that Chrome’s Prompt API is legitimately production-ready. It’s not just a toy - it enables real applications with great UX that respect user privacy.</p>

<p>The key lessons:</p>
<ul>
  <li>On-device AI is faster and better for users</li>
  <li>Prompt engineering is 80% of the work</li>
  <li>Conversation context makes multi-step workflows feel natural</li>
  <li>Explicit AVOID lists help AI sound human</li>
  <li>Temperature/top-K parameters matter more than you think</li>
</ul>

<p>Chrome’s built-in AI represents a fundamental shift: bringing AI inference to the edge, making it private, fast, and accessible. TwAIst is just the beginning.</p>

<p>What will you build with Chrome’s built-in AI?</p>

<hr />

<p><em>Built for the <a href="https://googlechromeai2025.devpost.com/">Google Chrome Built-in AI Hackathon</a>. Check out <a href="https://github.com/rshankras/TwAIst-final">TwAIst on GitHub</a> and let me know what you think!</em></p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Building TwAIst: An AI Twitter Assistant with Chrome's Built-in AI",
  "description": "Learn how I built TwAIst, an AI-powered Twitter assistant using Chrome's Prompt API and Gemini Nano. A complete guide for developers getting started with Chrome's built-in AI.",
  "author": {
    "@type": "Person",
    "name": "Ravi Shankar"
  },
  "datePublished": "2025-10-31",
  "dateModified": "2025-10-31",
  "publisher": {
    "@type": "Organization",
    "name": "Ravi Shankar"
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://www.rshankar.com/building-twaist-ai-twitter-assistant-chrome-built-in-ai/"
  },
  "image": {
    "@type": "ImageObject",
    "url": "https://www.rshankar.com/assets/images/twaist/welcome.png"
  },
  "keywords": "Chrome Built-in AI, Prompt API, Gemini Nano, Chrome extension development, AI Twitter assistant, on-device AI, privacy-first AI"
}
</script>]]></content><author><name>Ravi Shankar</name></author><category term="chrome-extensions" /><category term="ai" /><category term="web-development" /><category term="javascript" /><category term="Chrome Built-in AI" /><category term="Prompt API" /><category term="Gemini Nano" /><category term="Chrome Extensions" /><category term="Twitter" /><category term="AI Assistant" /><category term="Manifest V3" /><summary type="html"><![CDATA[Learn how I built TwAIst, an AI-powered Twitter assistant using Chrome's Prompt API and Gemini Nano. A complete guide for developers getting started with Chrome's built-in AI.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/twaist/welcome.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/twaist/welcome.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How ChantFlow Transforms Your Apple Watch into a Sacred Digital Mala</title><link href="https://www.rshankar.com/how-chantflow-transforms-your-apple-watch-into-a-sacred-digital-mala/" rel="alternate" type="text/html" title="How ChantFlow Transforms Your Apple Watch into a Sacred Digital Mala" /><published>2025-08-26T00:00:00+00:00</published><updated>2025-08-26T00:00:00+00:00</updated><id>https://www.rshankar.com/how-chantflow-transforms-your-apple-watch-into-a-sacred-digital-mala</id><content type="html" xml:base="https://www.rshankar.com/how-chantflow-transforms-your-apple-watch-into-a-sacred-digital-mala/"><![CDATA[<p><em>Discover how this spiritual wellness app uses Apple Watch complications to make 108 mantra practice a natural part of your daily routine</em></p>

<p>See the app here: <a href="https://www.rshankar.com/chantflow/">ChantFlow</a> and on the App Store: <a href="https://apps.apple.com/us/app/chantflow-daily-om-practice/id6633438828">ChantFlow — Daily Om Practice</a>.<!--more--></p>

<h2 id="the-sacred-108-a-digital-revolution-in-spiritual-practice">The Sacred 108: A Digital Revolution in Spiritual Practice</h2>

<p>In the ancient tradition of mala (prayer bead) practice, 108 holds profound spiritual significance. But in our modern, fast-paced world, maintaining this sacred practice can be challenging. Enter ChantFlow - an Apple Watch app that transforms your wrist into a digital mala, making the sacred 108 practice accessible, consistent, and deeply meaningful.</p>

<h2 id="what-makes-chantflow-special">What Makes ChantFlow Special?</h2>

<p>ChantFlow isn’t just another <a href="https://www.rshankar.com/chantflow/">meditation app</a>. It’s a spiritual technology that honors tradition while embracing innovation. The app transforms your Apple Watch into a digital mala, allowing you to count your 108 mantras with the same reverence and intention as traditional prayer beads, but with the convenience that modern life demands.</p>

<h2 id="apple-watch-complications-your-sacred-progress-at-a-glance">Apple Watch Complications: Your Sacred Progress at a Glance</h2>

<p>ChantFlow’s Apple Watch complications are where the magic happens. These tiny widgets on your watch face serve as gentle reminders of your spiritual journey, showing your progress toward the sacred 108 without requiring you to open the app or interrupt your day.</p>

<p><img src="/assets/images/chantflow/watch-complication.png" alt="ChantFlow Apple Watch Complication showing 10:13 time with bed icon and +138 extras" /></p>

<p><em>ChantFlow’s Apple Watch complication displays the current time (10:13), a bed icon indicating sleep/rest mode, and shows +138 extras completed beyond the daily 108 practice.</em></p>

<p><strong>What You See on Your Wrist:</strong></p>
<ul>
  <li><strong>Current Mantra Count</strong>: “67 of 108” - showing your progress toward the daily goal</li>
  <li><strong>Sacred Progress Circle</strong>: A visual representation of your journey through the 108 mantras</li>
  <li><strong>Completion Status</strong>: Whether you’ve reached your daily goal</li>
  <li><strong>Practice Streak</strong>: How many consecutive days you’ve maintained your practice</li>
  <li><strong>Extra Merit</strong>: Mantras completed beyond the daily 108 for additional spiritual benefit</li>
</ul>

<h2 id="real-world-impact-from-glance-to-practice">Real-World Impact: From Glance to Practice</h2>

<p>Imagine this scenario: You’re in a busy meeting, feeling stressed and disconnected. You glance at your Apple Watch and see “45 of 108” on your ChantFlow complication. In that moment, you’re reminded of your spiritual practice, your connection to something greater than the immediate stress. That simple glance becomes a moment of mindfulness, a gentle nudge back to your center.</p>

<h2 id="the-technical-magic-behind-sacred-complications">The Technical Magic Behind Sacred Complications</h2>

<p>ChantFlow’s complications are built using Apple’s WidgetKit framework, ensuring they update seamlessly in the background and provide real-time information about your spiritual practice.</p>

<p><strong>Key Technical Features:</strong></p>
<ul>
  <li><strong>Real-Time Updates</strong>: Complications refresh automatically as you progress through your mantras</li>
  <li><strong>Background Processing</strong>: Updates happen even when the app isn’t actively open</li>
  <li><strong>Data Persistence</strong>: Your progress is saved and shared between the main app and complications</li>
  <li><strong>Battery Optimization</strong>: Efficient updates that don’t drain your Apple Watch battery</li>
</ul>

<h2 id="the-user-experience-from-first-glance-to-deep-practice">The User Experience: From First Glance to Deep Practice</h2>

<h3 id="1-the-gentle-reminder">1. <strong>The Gentle Reminder</strong></h3>
<p>Your ChantFlow complication sits quietly on your watch face, showing your current progress. Unlike push notifications that demand attention, it’s there when you need it, offering gentle guidance without interruption.</p>

<h3 id="2-the-moment-of-awareness">2. <strong>The Moment of Awareness</strong></h3>
<p>When you glance at your wrist and see “23 of 108,” you’re reminded of your spiritual practice. This isn’t a demand to practice right now - it’s a gentle invitation to remember your connection to the sacred.</p>

<h3 id="3-the-practice-experience">3. <strong>The Practice Experience</strong></h3>
<p>ChantFlow’s main interface features a sacred progress circle that fills as you count your mantras. Each tap provides haptic feedback, simulating the tactile experience of traditional mala beads. The visual progress and tactile feedback create a deeply immersive spiritual experience.</p>

<h2 id="the-psychology-of-sacred-complications">The Psychology of Sacred Complications</h2>

<p>The psychology behind ChantFlow’s complications is simple yet profound: what you see regularly becomes part of your consciousness. When your spiritual progress is always visible on your wrist, it becomes a natural part of your daily awareness.</p>

<p><strong>The Habit Loop:</strong></p>
<ol>
  <li><strong>Cue</strong>: You glance at your watch and see your mantra progress</li>
  <li><strong>Craving</strong>: You feel the desire to continue your spiritual practice</li>
  <li><strong>Response</strong>: You open ChantFlow and engage in your practice</li>
  <li><strong>Reward</strong>: You experience the satisfaction of spiritual connection and progress</li>
</ol>

<h2 id="getting-started-with-chantflow-complications">Getting Started with ChantFlow Complications</h2>

<h3 id="setting-up-your-sacred-dashboard">Setting Up Your Sacred Dashboard</h3>

<ol>
  <li><strong>Download ChantFlow</strong>: <a href="https://apps.apple.com/us/app/chantflow-daily-om-practice/id6633438828">Available on the App Store for Apple Watch</a></li>
  <li><strong>Choose Your Watch Face</strong>: Select a face that supports complications</li>
  <li><strong>Add ChantFlow Complications</strong>: Long-press your watch face and add ChantFlow widgets</li>
  <li><strong>Customize Your Layout</strong>: Arrange complications so your spiritual progress is easily visible</li>
  <li><strong>Begin Your Practice</strong>: Start with your first 108 mantras</li>
</ol>

<h2 id="the-impact-real-stories-from-chantflow-users">The Impact: Real Stories from ChantFlow Users</h2>

<h3 id="sarahs-story-finding-peace-in-chaos">Sarah’s Story: Finding Peace in Chaos</h3>
<p>“I work in a high-stress environment, and I was struggling to maintain my spiritual practice. ChantFlow’s complications changed everything. I can glance at my watch and immediately remember my connection to something greater. It’s like having a gentle spiritual guide on my wrist.”</p>

<h3 id="michaels-journey-building-consistency">Michael’s Journey: Building Consistency</h3>
<p>“I’ve tried many <a href="https://www.rshankar.com/chantflow/">meditation apps</a>, but I always fell off track. With ChantFlow, seeing my progress on my watch face keeps me motivated. I’ve maintained my 108 practice for 47 consecutive days - something I never thought possible.”</p>

<h2 id="conclusion-where-technology-meets-tradition">Conclusion: Where Technology Meets Tradition</h2>

<p>ChantFlow’s Apple Watch complications represent more than just convenient widgets - they’re a bridge between ancient spiritual wisdom and modern technology. By making the sacred 108 practice glanceable, accessible, and consistent, the app is helping people maintain their spiritual connection in a world that often pulls us away from it.</p>

<p>The key insight is that complications don’t demand spiritual practice - they gently remind us of our spiritual nature. In a world of constant notifications and digital noise, this subtle approach to spiritual technology might be exactly what we need to maintain our connection to the sacred.</p>

<p>Whether you’re a long-time practitioner of mala meditation or someone just beginning their spiritual journey, ChantFlow’s complications offer a gentle, non-intrusive way to keep your spiritual practice alive throughout the day. Your sacred journey starts with a single glance at your wrist.</p>

<hr />

<p><em>Ready to transform your Apple Watch into a sacred digital mala? <a href="https://apps.apple.com/us/app/chantflow-daily-om-practice/id6633438828">Download ChantFlow</a> and begin your journey with the sacred 108. Your spiritual practice is just a glance away.</em></p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "How ChantFlow Transforms Your Apple Watch into a Sacred Digital Mala",
  "description": "Discover how ChantFlow uses Apple Watch complications to make 108 mantra practice a natural part of your daily routine, transforming your wrist into a sacred digital mala.",
  "author": {
    "@type": "Person",
    "name": "Ravi Shankar"
  },
  "datePublished": "2025-08-26",
  "dateModified": "2025-08-26",
  "publisher": {
    "@type": "Organization",
    "name": "Ravi Shankar"
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://www.rshankar.com/2025/08/26/how-chantflow-transforms-your-apple-watch-into-a-sacred-digital-mala/"
  },
  "image": {
    "@type": "ImageObject",
    "url": "https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png",
    "width": 512,
    "height": 512
  },
  "keywords": "Apple Watch complications, ChantFlow meditation app, 108 mantra practice, digital mala, spiritual technology, watchOS complications, meditation tracking, sacred practice"
}
</script>]]></content><author><name>Ravi Shankar</name></author><category term="watchos" /><category term="meditation" /><category term="spiritual-technology" /><category term="apple-watch" /><category term="ChantFlow" /><category term="Apple Watch" /><category term="complications" /><category term="meditation" /><category term="spiritual practice" /><category term="108 mantras" /><category term="mala" /><summary type="html"><![CDATA[Discover how ChantFlow uses Apple Watch complications to make 108 mantra practice a natural part of your daily routine, transforming your wrist into a sacred digital mala.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Stop Paying $50 Per Image: Build Your Own AI Content Creator</title><link href="https://www.rshankar.com/stop-paying-50-per-image-build-your-own-ai-content-creator/" rel="alternate" type="text/html" title="Stop Paying $50 Per Image: Build Your Own AI Content Creator" /><published>2025-08-20T00:00:00+00:00</published><updated>2025-08-20T00:00:00+00:00</updated><id>https://www.rshankar.com/stop-paying-50-per-image-build-your-own-ai-content-creator</id><content type="html" xml:base="https://www.rshankar.com/stop-paying-50-per-image-build-your-own-ai-content-creator/"><![CDATA[<p><em>Discover how to build your own AI content creator using Runware’s API to generate unlimited marketing visuals for pennies instead of paying $50 per stock photo</em></p>

<p>Ever spent hours searching for the perfect stock photo, only to settle for something “close enough” and pay $50 for the privilege? I did this for months while marketing my Apple Watch app <a href="https://www.rshankar.com/chantflow/">ChantFlow</a> until I discovered something that changed everything.</p>

<p>What if you could generate exactly the image you want, in seconds, for less than a dollar?</p>

<p>That’s exactly what I built using Runware’s AI API—a simple iOS app that creates unlimited marketing visuals on demand. Here’s how you can do the same.<!--more--></p>

<h2 id="the-problem-stock-photos-are-expensive-and-generic">The Problem: Stock Photos Are Expensive and Generic</h2>

<p>ChantFlow is an Apple Watch meditation app that helps users perform traditional chanting practices. Marketing this required very specific imagery—people using Apple Watch during meditation, serene spiritual scenes, the blend of ancient practices with modern tech.</p>

<p>Stock photo sites couldn’t deliver what I needed:</p>
<ul>
  <li>Generic meditation photos (another person sitting cross-legged in a white room)</li>
  <li>Expensive licenses ($20-50 per image)</li>
  <li>Limited selection for niche concepts</li>
  <li>No way to create variations or test different styles</li>
</ul>

<h2 id="the-solution-runware-ai-api">The Solution: Runware AI API</h2>

<p>I discovered Runware AI—an API that generates high-quality images in under a second. No machine learning expertise required, just simple API calls.</p>

<p>What makes Runware special:</p>
<ul>
  <li><strong>312,464+ AI models</strong> to choose from</li>
  <li><strong>Lightning-fast generation</strong> (0.6-4 seconds depending on model)</li>
  <li><strong>Affordable pricing</strong> (pennies per image vs. $50 stock photos)</li>
  <li><strong>Easy integration</strong> with any programming language</li>
</ul>

<p>I built “Mindful Creator,” a simple iOS app that uses Runware’s API to generate marketing content on demand. <a href="https://github.com/rshankras/PosterApp/tree/main">The complete source code is on GitHub</a>.</p>

<h2 id="why-runware-over-other-ai-services">Why Runware Over Other AI Services?</h2>

<h3 id="1-speed">1. <strong>Speed</strong></h3>
<p>While other services take 10-30 seconds, Runware generates images in 0.6-4 seconds depending on the model. Perfect for testing multiple ideas quickly.</p>

<h3 id="2-model-variety">2. <strong>Model Variety</strong></h3>
<p>Access to 312,464+ models from platforms like CivitAI. Just copy any model’s “AIR ID” and use it in your app.</p>

<h3 id="3-simple-integration">3. <strong>Simple Integration</strong></h3>
<p>No complex setup. Just make HTTP requests with your API key—works with any programming language.</p>

<h3 id="4-affordable">4. <strong>Affordable</strong></h3>
<p>Generate hundreds of images for what you’d pay for a single stock photo.</p>

<h2 id="start-here-the-runware-playground">Start Here: The Runware Playground</h2>

<p>Before writing any code, use the <a href="https://my.runware.ai/playground">Runware Playground</a>. It’s a web-based tool where you can:</p>

<ol>
  <li><strong>Test different AI models</strong> without coding</li>
  <li><strong>Adjust parameters</strong> and see results instantly</li>
  <li><strong>Copy model IDs</strong> to use in your app</li>
  <li><strong>Learn what prompts work best</strong></li>
</ol>

<p><img src="/assets/images/ai-content-creator/runware-playground.png" alt="Runware Playground Interface showing AI model selection and prompt input" /></p>

<p><em>The Runware Playground interface allows you to test different AI models, adjust parameters, and see results instantly. Notice the “Copy AIR ID” button that lets you easily copy model IDs for use in your app.</em></p>

<h3 id="key-tip-copy-ai-id-feature">Key Tip: Copy AIR ID Feature</h3>
<p>When you find a model you like, click “Copy AIR ID” and paste it into your app. For example, <code class="language-plaintext highlighter-rouge">civitai:4384@128713</code> gives dreamy, artistic results perfect for wellness apps.</p>

<h3 id="writing-good-prompts">Writing Good Prompts</h3>
<p>Be specific. Instead of “meditation,” try:</p>
<blockquote>
  <p>“person wearing Apple Watch, meditating peacefully, soft lighting, minimalist room, serene expression”</p>
</blockquote>

<p>Add negative prompts to avoid unwanted elements:</p>
<blockquote>
  <p>“cluttered, dark, aggressive, scary”</p>
</blockquote>

<h2 id="see-it-in-action">See It In Action</h2>

<p>Here’s a simple prompt I used for ChantFlow marketing:</p>

<p><strong>Prompt:</strong> “person wearing Apple Watch, peaceful meditation pose, soft morning light, minimalist room, serene expression”</p>

<p><img src="/assets/images/ai-content-creator/chantflow-meditation-generated.png" alt="AI-generated meditation image showing person with Apple Watch" /></p>

<p><em>AI-generated image of a person wearing an Apple Watch while meditating peacefully in a minimalist room with soft morning light - exactly what I needed for ChantFlow marketing.</em></p>

<p><strong>Cost:</strong> $0.0273 per image</p>

<p>Compare that to spending hours searching stock photo sites and paying $50 for something that’s only “close enough.”</p>

<h2 id="the-app-i-built">The App I Built</h2>

<p>The <a href="https://github.com/rshankras/PosterApp/tree/main">Mindful Creator app</a> is a simple iOS app with these features:</p>

<ul>
  <li><strong>Easy prompting</strong>: Text input with helpful suggestions</li>
  <li><strong>Multiple models</strong>: Switch between different AI styles</li>
  <li><strong>Batch generation</strong>: Create 1-4 images at once</li>
  <li><strong>Various sizes</strong>: Square, portrait, landscape, story formats</li>
  <li><strong>Save to Photos</strong>: Export directly to your photo library</li>
  <li><strong>Cost tracking</strong>: See how much you’re spending</li>
</ul>

<div class="row">
  <div class="column">
    <img src="/assets/images/ai-content-creator/mindful-creator-input.png" alt="Mindful Creator input interface" style="width:280px; height:500px; object-fit:cover;" />
    <p><em>Input your prompt and choose content series</em></p>
  </div>
  <div class="column">
    <img src="/assets/images/ai-content-creator/mindful-creator-style.png" alt="Mindful Creator style selection" style="width:280px; height:500px; object-fit:cover;" />
    <p><em>Select visual style and add negative prompts</em></p>
  </div>
  <div class="column">
    <img src="/assets/images/ai-content-creator/mindful-creator-result.png" alt="Mindful Creator generated result" style="width:280px; height:500px; object-fit:cover;" />
    <p><em>View and save your generated image</em></p>
  </div>
</div>

<p><em>The complete Mindful Creator workflow: from input to style selection to final generated image with metadata.</em></p>

<h3 id="the-basic-api-call">The Basic API Call</h3>
<p>Here’s the core of how it works:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">let</span> <span class="nv">request</span> <span class="o">=</span> <span class="kt">RunwareImageRequest</span><span class="p">(</span>
    <span class="nv">prompt</span><span class="p">:</span> <span class="s">"person using Apple Watch for meditation"</span><span class="p">,</span>
    <span class="nv">aspectRatio</span><span class="p">:</span> <span class="o">.</span><span class="n">square</span><span class="p">,</span>
    <span class="nv">aiModel</span><span class="p">:</span> <span class="o">.</span><span class="n">dreamShaper</span><span class="p">,</span>
    <span class="nv">numberResults</span><span class="p">:</span> <span class="mi">4</span>
<span class="p">)</span>
</code></pre></div></div>

<p>That’s it. No complex machine learning setup required.</p>
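<p><code class="language-plaintext highlighter-rouge">RunwareImageRequest</code> is just a small wrapper type from the sample app. Under the hood, Runware’s REST endpoint accepts a JSON <em>array</em> of task objects. Here’s a rough sketch of that payload in Swift — the field names follow Runware’s public docs at the time of writing, so treat this as an illustration and verify against the current API reference:</p>

<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import Foundation

// Sketch of an imageInference task. Field names are taken from
// Runware's public documentation; double-check them before shipping.
struct RunwareTask: Codable {
    var taskType = "imageInference"
    var taskUUID = UUID().uuidString
    var positivePrompt: String
    var model: String          // an AIR ID copied from the playground
    var width = 1024
    var height = 1024
    var numberResults = 1
}

// Runware expects a JSON array of tasks in the POST body.
func requestBody(for tasks: [RunwareTask]) throws -> Data {
    try JSONEncoder().encode(tasks)
}
</code></pre></div></div>

<p>POST that body to Runware’s endpoint with <code class="language-plaintext highlighter-rouge">Content-Type: application/json</code> and your API key in the <code class="language-plaintext highlighter-rouge">Authorization</code> header, and the response contains your generated images.</p>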

<h2 id="the-results-before-vs-after">The Results: Before vs After</h2>

<p><strong>Before Runware:</strong></p>
<ul>
  <li>Hours searching stock photos</li>
  <li>$20-50 per image</li>
  <li>Settling for “close enough”</li>
  <li>Limited to existing photos</li>
</ul>

<p><strong>After Runware:</strong></p>
<ul>
  <li>Generate exactly what I need in 30 seconds</li>
  <li>$0.50 for 4 variations</li>
  <li>Perfect match to my vision</li>
  <li>Unlimited iterations</li>
</ul>

<h2 id="quick-tips-for-better-results">Quick Tips for Better Results</h2>

<ol>
  <li><strong>Be specific</strong>: “person using Apple Watch during morning meditation” vs “meditation”</li>
  <li><strong>Use negative prompts</strong>: Exclude unwanted elements like “cluttered, dark, aggressive”</li>
  <li><strong>Try different models</strong>: Each has its own style—experiment in the playground first</li>
  <li><strong>Generate multiple variations</strong>: Create 3-4 options and pick the best</li>
</ol>

<h2 id="why-this-matters">Why This Matters</h2>

<p>As an indie developer, I can now compete visually with big companies. No more $1000+ design budgets or settling for generic stock photos. I generate exactly what ChantFlow needs to stand out.</p>

<h2 id="get-started-today">Get Started Today</h2>

<p>Ready to stop paying $50 per stock photo?</p>

<ol>
  <li><strong>Try the <a href="https://my.runware.ai/playground">Runware Playground</a></strong> - Test models without coding</li>
  <li><strong>Get your API key</strong> at <a href="https://runware.ai">runware.ai</a></li>
  <li><strong>Check out the <a href="https://runware.ai/docs/en/image-inference/introduction">documentation</a></strong> for technical details</li>
  <li><strong>Download the source code</strong> - <a href="https://github.com/rshankras/PosterApp/tree/main">Mindful Creator on GitHub</a> shows you exactly how to build your own</li>
</ol>

<h2 id="your-turn">Your Turn</h2>

<p>Whether you’re building an app, running a business, or just tired of expensive stock photos, Runware’s API can transform how you create visual content.</p>

<p>The playground is free to try. The API costs pennies per image. The creative freedom? Priceless.</p>

<hr />

<p><em>Check out <a href="https://www.rshankar.com/chantflow/">ChantFlow</a> to see how AI-generated visuals help market a unique Apple Watch meditation app, and grab the <a href="https://github.com/rshankras/PosterApp/tree/main">complete source code</a> to build your own content creator.</em></p>]]></content><author><name>Ravi Shankar</name></author><category term="ai" /><category term="marketing" /><category term="ios-development" /><category term="content-creation" /><category term="AI content creation" /><category term="Runware API" /><category term="marketing visuals" /><category term="stock photos" /><category term="iOS development" /><category term="ChantFlow" /><category term="Mindful Creator" /><summary type="html"><![CDATA[Learn how to build your own AI content creator using Runware's API to generate unlimited marketing visuals for pennies instead of paying $50 per stock photo.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">HealthKit Integration in ChantFlow: Heart Rate, HRV, and Mindful Minutes</title><link href="https://www.rshankar.com/integrating-healthkit-for-mindfulness-beyond-step-counting/" rel="alternate" type="text/html" title="HealthKit Integration in ChantFlow: Heart Rate, HRV, and Mindful Minutes" /><published>2025-08-15T00:00:00+00:00</published><updated>2025-08-15T00:00:00+00:00</updated><id>https://www.rshankar.com/integrating-healthkit-for-mindfulness-beyond-step-counting</id><content type="html" xml:base="https://www.rshankar.com/integrating-healthkit-for-mindfulness-beyond-step-counting/"><![CDATA[<p>Chanting helps you feel calm. Now ChantFlow helps you see it, too. 
We’ve added gentle HealthKit integration so your Apple Watch can reflect what your body experiences during practice — without turning your wrist into a spreadsheet. See the app here: <a href="https://www.rshankar.com/chantflow/">ChantFlow</a> and on the App Store: <a href="https://apps.apple.com/us/app/chantflow-daily-om-practice/id6633438828">ChantFlow — Daily Om Practice</a>.<!--more--></p>

<h2 id="whats-new">What’s new</h2>
<ul>
  <li><strong>Heart rate insights</strong>: See your BPM during and after sessions — today and across the week.</li>
  <li><strong>Mindful Minutes in Health</strong>: Your sessions are logged to Apple Health as Mindfulness.</li>
  <li><strong>HRV (experimental)</strong>: Heart Rate Variability appears after a session when conditions are right.</li>
  <li><strong>Simple Health Insights screen</strong>: Big, readable numbers. Built for quick glances.</li>
</ul>

<h2 id="why-this-matters">Why this matters</h2>
<ul>
  <li><strong>Feel it, then see it</strong>: A calm heart rate trend is a gentle nudge to keep going.</li>
  <li><strong>HRV as a signal, not a score</strong>: It can reflect resilience. We show it when it’s reliable.</li>
</ul>

<h2 id="how-it-works">How it works</h2>
<ul>
  <li>When you chant, the app quietly starts a health session.</li>
  <li>We read heart rate during the session, summarize after.</li>
  <li>HRV usually becomes available after ending the session and needs 3–5+ minutes of stillness and good sensor contact.</li>
  <li>Your Mindful Minutes are saved to Apple Health (with your permission).</li>
</ul>

<h2 id="tiny-code-you-can-use">Tiny code you can use</h2>

<h3 id="request-health-permissions-read-hrhrv-write-mindfulness--workout">Request Health permissions (read HR/HRV, write Mindfulness + Workout)</h3>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">import</span> <span class="kt">HealthKit</span>

<span class="k">let</span> <span class="nv">healthStore</span> <span class="o">=</span> <span class="kt">HKHealthStore</span><span class="p">()</span>

<span class="kd">func</span> <span class="nf">requestHealthPermissions</span><span class="p">(</span><span class="nv">completion</span><span class="p">:</span> <span class="kd">@escaping</span> <span class="p">(</span><span class="kt">Bool</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="kt">Void</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">let</span> <span class="nv">toShare</span><span class="p">:</span> <span class="kt">Set</span> <span class="o">=</span> <span class="p">[</span>
        <span class="kt">HKObjectType</span><span class="o">.</span><span class="nf">categoryType</span><span class="p">(</span><span class="nv">forIdentifier</span><span class="p">:</span> <span class="o">.</span><span class="n">mindfulSession</span><span class="p">)</span><span class="o">!</span><span class="p">,</span>
        <span class="kt">HKObjectType</span><span class="o">.</span><span class="nf">workoutType</span><span class="p">()</span>
    <span class="p">]</span>
    <span class="k">let</span> <span class="nv">toRead</span><span class="p">:</span> <span class="kt">Set</span> <span class="o">=</span> <span class="p">[</span>
        <span class="kt">HKObjectType</span><span class="o">.</span><span class="nf">quantityType</span><span class="p">(</span><span class="nv">forIdentifier</span><span class="p">:</span> <span class="o">.</span><span class="n">heartRate</span><span class="p">)</span><span class="o">!</span><span class="p">,</span>
        <span class="kt">HKObjectType</span><span class="o">.</span><span class="nf">quantityType</span><span class="p">(</span><span class="nv">forIdentifier</span><span class="p">:</span> <span class="o">.</span><span class="n">heartRateVariabilitySDNN</span><span class="p">)</span><span class="o">!</span>
    <span class="p">]</span>
    <span class="n">healthStore</span><span class="o">.</span><span class="nf">requestAuthorization</span><span class="p">(</span><span class="nv">toShare</span><span class="p">:</span> <span class="n">toShare</span><span class="p">,</span> <span class="nv">read</span><span class="p">:</span> <span class="n">toRead</span><span class="p">)</span> <span class="p">{</span> <span class="n">ok</span><span class="p">,</span> <span class="n">_</span> <span class="k">in</span>
        <span class="nf">completion</span><span class="p">(</span><span class="n">ok</span><span class="p">)</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="log-a-mindfulness-session-to-apple-health">Log a Mindfulness session to Apple Health</h3>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">import</span> <span class="kt">HealthKit</span>

<span class="kd">func</span> <span class="nf">saveMindfulMinutes</span><span class="p">(</span><span class="nv">start</span><span class="p">:</span> <span class="kt">Date</span><span class="p">,</span> <span class="nv">end</span><span class="p">:</span> <span class="kt">Date</span><span class="p">,</span> <span class="nv">completion</span><span class="p">:</span> <span class="kd">@escaping</span> <span class="p">(</span><span class="kt">Bool</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="kt">Void</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">let</span> <span class="nv">mindful</span> <span class="o">=</span> <span class="kt">HKObjectType</span><span class="o">.</span><span class="nf">categoryType</span><span class="p">(</span><span class="nv">forIdentifier</span><span class="p">:</span> <span class="o">.</span><span class="n">mindfulSession</span><span class="p">)</span><span class="o">!</span>
    <span class="k">let</span> <span class="nv">sample</span> <span class="o">=</span> <span class="kt">HKCategorySample</span><span class="p">(</span><span class="nv">type</span><span class="p">:</span> <span class="n">mindful</span><span class="p">,</span> <span class="nv">value</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="nv">start</span><span class="p">:</span> <span class="n">start</span><span class="p">,</span> <span class="nv">end</span><span class="p">:</span> <span class="n">end</span><span class="p">)</span>
    <span class="kt">HKHealthStore</span><span class="p">()</span><span class="o">.</span><span class="nf">save</span><span class="p">(</span><span class="n">sample</span><span class="p">)</span> <span class="p">{</span> <span class="n">ok</span><span class="p">,</span> <span class="n">_</span> <span class="k">in</span>
        <span class="nf">completion</span><span class="p">(</span><span class="n">ok</span><span class="p">)</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="optional-start-a-lightweight-workout-for-live-heart-rate-on-watchos">(Optional) Start a lightweight workout for live heart rate on watchOS</h3>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">import</span> <span class="kt">HealthKit</span>

<span class="kd">final</span> <span class="kd">class</span> <span class="kt">LiveHRSession</span><span class="p">:</span> <span class="kt">NSObject</span><span class="p">,</span> <span class="kt">HKWorkoutSessionDelegate</span> <span class="p">{</span>
    <span class="kd">private</span> <span class="k">let</span> <span class="nv">store</span> <span class="o">=</span> <span class="kt">HKHealthStore</span><span class="p">()</span>
    <span class="kd">private</span> <span class="k">var</span> <span class="nv">session</span><span class="p">:</span> <span class="kt">HKWorkoutSession</span><span class="p">?</span>
    <span class="kd">private</span> <span class="k">var</span> <span class="nv">builder</span><span class="p">:</span> <span class="kt">HKLiveWorkoutBuilder</span><span class="p">?</span>

    <span class="kd">func</span> <span class="nf">start</span><span class="p">()</span> <span class="k">throws</span> <span class="p">{</span>
        <span class="k">let</span> <span class="nv">cfg</span> <span class="o">=</span> <span class="kt">HKWorkoutConfiguration</span><span class="p">()</span>
        <span class="n">cfg</span><span class="o">.</span><span class="n">activityType</span> <span class="o">=</span> <span class="o">.</span><span class="n">mindAndBody</span>
        <span class="n">cfg</span><span class="o">.</span><span class="n">locationType</span> <span class="o">=</span> <span class="o">.</span><span class="n">unknown</span>
        <span class="k">let</span> <span class="nv">session</span> <span class="o">=</span> <span class="k">try</span> <span class="kt">HKWorkoutSession</span><span class="p">(</span><span class="nv">healthStore</span><span class="p">:</span> <span class="n">store</span><span class="p">,</span> <span class="nv">configuration</span><span class="p">:</span> <span class="n">cfg</span><span class="p">)</span>
        <span class="k">let</span> <span class="nv">builder</span> <span class="o">=</span> <span class="n">session</span><span class="o">.</span><span class="nf">associatedWorkoutBuilder</span><span class="p">()</span>
        <span class="n">builder</span><span class="o">.</span><span class="n">dataSource</span> <span class="o">=</span> <span class="kt">HKLiveWorkoutDataSource</span><span class="p">(</span><span class="nv">healthStore</span><span class="p">:</span> <span class="n">store</span><span class="p">,</span> <span class="nv">workoutConfiguration</span><span class="p">:</span> <span class="n">cfg</span><span class="p">)</span>
        <span class="k">self</span><span class="o">.</span><span class="n">session</span> <span class="o">=</span> <span class="n">session</span>
        <span class="k">self</span><span class="o">.</span><span class="n">builder</span> <span class="o">=</span> <span class="n">builder</span>
        <span class="n">session</span><span class="o">.</span><span class="n">delegate</span> <span class="o">=</span> <span class="k">self</span>
        <span class="n">session</span><span class="o">.</span><span class="nf">startActivity</span><span class="p">(</span><span class="nv">with</span><span class="p">:</span> <span class="kt">Date</span><span class="p">())</span>
        <span class="n">builder</span><span class="o">.</span><span class="nf">beginCollection</span><span class="p">(</span><span class="nv">withStart</span><span class="p">:</span> <span class="kt">Date</span><span class="p">())</span> <span class="p">{</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="k">in</span> <span class="p">}</span>
    <span class="p">}</span>

    <span class="kd">func</span> <span class="nf">stop</span><span class="p">(</span><span class="nv">completion</span><span class="p">:</span> <span class="kd">@escaping</span> <span class="p">()</span> <span class="o">-&gt;</span> <span class="kt">Void</span><span class="p">)</span> <span class="p">{</span>
        <span class="n">builder</span><span class="p">?</span><span class="o">.</span><span class="nf">endCollection</span><span class="p">(</span><span class="nv">withEnd</span><span class="p">:</span> <span class="kt">Date</span><span class="p">())</span> <span class="p">{</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="k">in</span>
            <span class="k">self</span><span class="o">.</span><span class="n">session</span><span class="p">?</span><span class="o">.</span><span class="nf">end</span><span class="p">()</span>
            <span class="k">self</span><span class="o">.</span><span class="n">builder</span><span class="p">?</span><span class="o">.</span><span class="n">finishWorkout</span> <span class="p">{</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="k">in</span> <span class="nf">completion</span><span class="p">()</span> <span class="p">}</span>
        <span class="p">}</span>
    <span class="p">}</span>

    <span class="kd">func</span> <span class="nf">workoutSession</span><span class="p">(</span><span class="n">_</span> <span class="nv">workoutSession</span><span class="p">:</span> <span class="kt">HKWorkoutSession</span><span class="p">,</span> <span class="n">didChangeTo</span> <span class="nv">toState</span><span class="p">:</span> <span class="kt">HKWorkoutSessionState</span><span class="p">,</span> <span class="n">from</span> <span class="nv">fromState</span><span class="p">:</span> <span class="kt">HKWorkoutSessionState</span><span class="p">,</span> <span class="nv">date</span><span class="p">:</span> <span class="kt">Date</span><span class="p">)</span> <span class="p">{}</span>
    <span class="kd">func</span> <span class="nf">workoutSession</span><span class="p">(</span><span class="n">_</span> <span class="nv">workoutSession</span><span class="p">:</span> <span class="kt">HKWorkoutSession</span><span class="p">,</span> <span class="n">didFailWithError</span> <span class="nv">error</span><span class="p">:</span> <span class="kt">Error</span><span class="p">)</span> <span class="p">{}</span>
<span class="p">}</span>
</code></pre></div></div>
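
<h3 id="summarize-heart-rate-after-a-session">Summarize heart rate after a session</h3>
<p>To produce the “summarize after” numbers, one query is enough. This is a minimal sketch rather than ChantFlow’s exact code — it averages heart rate over the session window, assuming heart-rate read permission was granted as above:</p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import HealthKit

// Minimal sketch: average heart rate (BPM) between two dates.
// Assumes read permission for .heartRate was already granted.
func averageHeartRate(from start: Date, to end: Date,
                      completion: @escaping (Double?) -> Void) {
    let hrType = HKQuantityType.quantityType(forIdentifier: .heartRate)!
    let inSession = HKQuery.predicateForSamples(withStart: start, end: end, options: .strictStartDate)
    let query = HKStatisticsQuery(quantityType: hrType,
                                  quantitySamplePredicate: inSession,
                                  options: .discreteAverage) { _, stats, _ in
        // HealthKit reports heart rate in count/min.
        let bpm = stats?.averageQuantity()?.doubleValue(for: HKUnit.count().unitDivided(by: .minute()))
        completion(bpm)
    }
    HKHealthStore().execute(query)
}
</code></pre></div></div>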

<h2 id="privacy-always">Privacy, always</h2>
<ul>
  <li><strong>You’re in control</strong>: Permissions are requested clearly during onboarding (new users) or at launch (existing users).</li>
  <li><strong>Minimal data</strong>: We read only what’s needed. We don’t write HRV to Health. We don’t sell or share your data.</li>
  <li><strong>On‑device first</strong>: Wherever possible, your data is processed right on the watch, which keeps things fast and private.</li>
</ul>

<h2 id="accessibility-and-design">Accessibility and design</h2>
<ul>
  <li>Big text, clean layout, VoiceOver‑friendly.</li>
  <li>Built for short sessions and quick checks, not deep dives.</li>
</ul>

<h2 id="hrv-set-the-right-expectation">HRV: set the right expectation</h2>
<ul>
  <li>HRV is <strong>experimental</strong>. It often appears after you end a session, and only with steady, still wear for several minutes. Heart rate will show up almost every time — HRV will show up some of the time. That’s normal.</li>
</ul>
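
<p>If you want to check for a reading yourself, a one-sample query is all it takes. A minimal sketch, assuming HRV read permission was granted — expect <code class="language-plaintext highlighter-rouge">nil</code> some of the time, which is normal:</p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import HealthKit

// Sketch: fetch the most recent HRV (SDNN) sample, in milliseconds.
// A nil result just means the watch hasn't produced a reading yet.
func latestHRV(completion: @escaping (Double?) -> Void) {
    let hrvType = HKQuantityType.quantityType(forIdentifier: .heartRateVariabilitySDNN)!
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierEndDate, ascending: false)
    let query = HKSampleQuery(sampleType: hrvType, predicate: nil,
                              limit: 1, sortDescriptors: [newestFirst]) { _, samples, _ in
        let ms = (samples?.first as? HKQuantitySample)?.quantity.doubleValue(for: .secondUnit(with: .milli))
        completion(ms)
    }
    HKHealthStore().execute(query)
}
</code></pre></div></div>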

<h2 id="tips-for-better-readings">Tips for better readings</h2>
<ul>
  <li>Wear the watch snugly, above the wrist bone.</li>
  <li>Keep your wrist still during the session.</li>
  <li>Aim for at least 5 minutes if you’re hoping to see HRV.</li>
</ul>

<h2 id="whats-next">What’s next</h2>
<ul>
  <li>Gentle trends over weeks and months.</li>
  <li>Clear, friendly insights like “Your average heart rate during chanting dropped this week.”</li>
  <li>Continued focus on privacy and accessibility.</li>
</ul>

<p>Start a session, breathe, and take a peek at your Health Insights afterward. Calm feels good — and now you can see it, too.</p>]]></content><author><name>Ravi Shankar</name></author><category term="watchos" /><category term="healthkit" /><category term="accessibility" /><category term="design" /><category term="HealthKit" /><category term="Apple Watch" /><category term="mindfulness" /><category term="HRV" /><category term="Swift" /><summary type="html"><![CDATA[How ChantFlow uses HealthKit to reflect calm through Mindful Minutes, heart rate, and HRV (experimental) — with simple Swift snippets you can reuse.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" /><media:content medium="image" url="https://www.rshankar.com/assets/images/app-icons/chantflow-icon.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>