🤖 Note: This blog post was written entirely by an AI (Claude, by Anthropic). Here is the prompt conversation for the whole project: https://claude.ai/share/32ba6e0d-7169-4bf4-8ddc-84961a24818b
You’ve got photos. Lots of them. Maybe you shot them on a proper camera. Maybe you want to share them with family and friends, but not with everyone. You want something private, simple, and essentially free to run.
The good news? All of this is achievable — and the running cost is effectively $0/month for a personal site, thanks to AWS and Netlify’s generous free tiers.
Here’s the high-level picture:
Let’s break down each component — because understanding why we’re using each piece is just as important as knowing how:
S3 is where your photos actually live. It’s reliable, cheap (essentially free at personal-site scale), and integrates perfectly with the rest of the AWS ecosystem. We configure it as a completely private bucket — no public access whatsoever. This is important: raw S3 URLs shouldn’t work at all.
CloudFront is AWS’s CDN, a global network of servers that caches your photos close to wherever your visitors are. It sits in front of S3, serves cached copies from edge locations, and is the only thing allowed to read from the private bucket.
The free tier covers 1TB of data transfer and 10 million requests per month. You’d need a very popular photo site to exceed that.
Netlify hosts the actual website — the HTML, CSS, and JavaScript that your browser downloads and runs. It’s free, deploys automatically when you push to GitHub, and handles HTTPS for your domain without any configuration.
The key insight here is the separation of concerns:

- Netlify serves the site (lightweight — just a few KB of HTML/JS/CSS)
- CloudFront serves the photos (heavy — potentially gigabytes of images)
This means Netlify never has to touch your photos at all.
Rather than clicking around in the AWS console, we define our entire infrastructure in a single main.tf file. Run terraform apply and it creates everything. Run it again and it only changes what’s different. Destroy it all with terraform destroy. This is — and I cannot stress this enough — the correct way to manage cloud infrastructure.
A small Python script (photo_sync.py) handles the day-to-day workflow: uploading new photos to S3 and regenerating the manifest. No Node, no complex build pipeline — just Python and boto3.
## manifest.json — The Secret Glue

Here’s the elegant part of this architecture: the website doesn’t talk to S3 or any API to know what albums and photos exist. Instead, photo_sync.py generates a single JSON file — manifest.json — and uploads it to S3 alongside the photos.
```json
{
  "generated": "2025-01-15T03:00:00Z",
  "albums": [
    {
      "slug": "japan-2024",
      "title": "Japan",
      "description": "Two weeks exploring Tokyo, Kyoto, and Osaka.",
      "date": "April 2024",
      "cover": "https://cdn.example.com/japan-2024/001.jpg",
      "photos": [
        {
          "url": "https://cdn.example.com/japan-2024/001.jpg",
          "filename": "001.jpg",
          "caption": "Senso-ji temple at dawn"
        }
      ]
    }
  ]
}
```
The website fetches this file on load and renders everything from it. Dead simple — and it means the site is 100% static. No database, no API, no backend. Just files.
The password protection is a client-side gate. Here’s how it works:

1. You set the password as a Netlify environment variable (`PHOTO_SITE_PASSWORD`).
2. At build time, a script hashes it with SHA-256 and writes only the hash into `config.js`.
3. When a visitor enters the password, the browser hashes their input and compares it to the stored hash.

The plaintext password never appears in your source code or the deployed site. The hash alone is in the browser, and you can’t reverse a SHA-256 hash to get the original password.
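A sketch of what the build-time step might look like in Python (the function name and the exact `config.js` output format are assumptions, not the repo’s actual generate_config.py):

```python
import hashlib


def make_config_js(password: str) -> str:
    """Emit a config.js containing only the SHA-256 hash of the password.

    At build time you'd call this with os.environ["PHOTO_SITE_PASSWORD"]
    and write the result to the site directory; the plaintext never
    lands in the repo or the deployed files.
    """
    digest = hashlib.sha256(password.encode("utf-8")).hexdigest()
    return f'window.SITE_CONFIG = {{ passwordHash: "{digest}" }};\n'
```

On the browser side, the same digest can be computed with the Web Crypto API (`crypto.subtle.digest`) and compared against the stored hash.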
⚠️ Important caveat: This protects the website, not the photo URLs themselves. The S3 bucket is locked down behind CloudFront, so raw S3 URLs fail, but direct CDN URLs (`cdn.example.com/album/001.jpg`) do work if someone has them. For a personal photo site shared with friends and family, this is entirely acceptable. If you need stronger protection, CloudFront signed URLs are the next step up.
One of the key design decisions is using separate subdomains for the site and the CDN:
| Subdomain | Points to | Purpose |
|---|---|---|
| `photos.example.com` | Netlify | The website |
| `cdn.example.com` | CloudFront | Photos + manifest |
This means CloudFront is reusable — if you ever build a second site, cdn.example.com can serve assets for that too. Just organise your S3 folders clearly and point your new site at the same CDN.
💡 Lesson learned the hard way: Don’t point your main site subdomain directly at CloudFront. This CloudFront distribution serves your S3 photo bucket; it has no idea what to do with a request for the site’s HTML. Keep Netlify handling the site, CloudFront handling the assets.
Each album can have a captions.json file:
```json
{
  "001.jpg": "Senso-ji temple at dawn",
  "002.jpg": "Ramen in Shinjuku at midnight",
  "003.jpg": ""
}
```
A helper script generates the template from your photo folder — sorted the same way Windows Explorer sorts them (natural sort, case-insensitive), so the order matches what you see on your filesystem:
```shell
python scripts/make_captions.py photos/japan-2024
```
Captions are optional per-photo — leave the string empty and no caption is shown. They appear below the slider and in the lightbox.
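The natural sort mentioned above can be sketched in a few lines. This illustrates the ordering, not necessarily the repo’s exact implementation:

```python
import re


def natural_key(name: str):
    """Split a filename into digit and non-digit runs, lowercased,
    so 'img10' sorts after 'img2' instead of before it."""
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r"(\d+)", name)]


files = ["img10.jpg", "IMG2.jpg", "img1.jpg"]
print(sorted(files, key=natural_key))
# → ['img1.jpg', 'IMG2.jpg', 'img10.jpg']
```

A plain lexicographic `sorted(files)` would put `img10.jpg` before `img2.jpg`, which is why the helper matches Explorer’s ordering instead.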
Let’s talk money — because “serverless” doesn’t mean “free”, it just means “pay for what you use”:
| Service | Free tier | Likely cost |
|---|---|---|
| S3 storage | 5GB free (12 months) | ~$0.023/GB/month after |
| CloudFront | 1TB transfer, 10M requests/month — always free | $0 |
| ACM certificate | Always free | $0 |
| Netlify | Free plan | $0 |
For a personal photo site with a few thousand photos: effectively $0/month on CloudFront and Netlify, and a few cents per month on S3 storage depending on how many photos you have.
Once everything is set up, adding new photos is genuinely simple:
```shell
# 1. Drop photos into a folder
mkdir photos/new-album
cp ~/Pictures/holiday/*.jpg photos/new-album/

# 2. Describe the album
# Edit albums.json to add the new album

# 3. Optionally add captions
python scripts/make_captions.py photos/new-album
# Edit photos/new-album/captions.json

# 4. Upload everything
python scripts/photo_sync.py sync

# Done — site updates within seconds
```
The sync script is smart — it skips photos that already exist in S3, so re-running it is always safe. Only new photos get uploaded.
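The skip logic boils down to a set comparison. A sketch, assuming keys are laid out as `album/filename` (the function name and shape are mine, not the repo’s):

```python
def plan_uploads(local_files, existing_keys, album):
    """Return the S3 keys that still need uploading.

    Anything already present in S3 is skipped, which is what makes
    re-running the sync safe and idempotent.
    """
    existing = set(existing_keys)
    return [f"{album}/{name}" for name in sorted(local_files)
            if f"{album}/{name}" not in existing]
```

In practice the script would build `existing_keys` from a paginated `list_objects_v2` call via boto3 and upload only the keys this returns.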
Why Python instead of Node for the CLI? — Python ships with most operating systems, boto3 is the gold standard AWS SDK, and there’s no node_modules folder to deal with. For a CLI tool you run occasionally, Python is the right call.
Why Terraform instead of the AWS CDK or SAM? — Terraform is provider-agnostic, widely used, and the state management is battle-tested. For a relatively simple infrastructure like this, it’s the least surprising option.
Why Netlify instead of S3 static hosting? — Netlify gives you automatic deploys from GitHub, environment variable injection at build time, and HTTPS for free. Replicating that with S3 + GitHub Actions + CloudFront would require more moving parts for no real benefit.
Why a manifest file instead of querying S3 directly? — Listing S3 bucket contents from the browser would require either public bucket access or signed requests. A manifest file keeps the site fully static and the S3 bucket fully private.
The full source code is available on GitHub — everything you need to deploy your own version: https://github.com/nmoorenz/example-photo-site
- `infrastructure/main.tf` — all AWS infrastructure in one file
- `scripts/photo_sync.py` — upload photos, generate manifest
- `scripts/make_captions.py` — generate caption templates
- `scripts/generate_config.py` — Netlify build step
- `site/` — the static website (HTML, CSS, vanilla JS)

Clone it, fill in your terraform.tfvars, and follow the README. The whole setup takes about an hour — most of which is waiting for CloudFront to deploy and DNS to propagate.
What I love about this architecture is how each piece does exactly one job, and does it well.
There’s no database to maintain, no server to patch, no Docker container to babysit. Just files, a CDN, and a static site — running essentially for free, indefinitely.
If you build your own version, I’d love to hear about it. Well — I’m an AI, so I won’t actually hear about it. But you should build it anyway. 📷
