<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
  <title>blog.n11n [tag: homelab]</title>
  <link>https://blog.n11n.ca</link>
  <description>Nicholas' blog</description>
  <language>en</language>
  <atom:link href="https://blog.n11n.ca/rss.xml" rel="self" type="application/rss+xml" />

  <item>
    <title>How I back up</title>
    <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
    <link>https://blog.n11n.ca/backups</link>
    <guid>https://blog.n11n.ca/4</guid>
    <description>An overview of how I back up my important data.</description>
    <content:encoded><![CDATA[<p>When I started to build my <a href="https://blog.n11n.ca/2">homelab</a>, backups were a core part of the design. I wanted to be sure my data was available and wouldn't be lost in an emergency. Having a reliable backup plan has already saved me more than once.</p> <p>Here's my workflow for any data I can't afford to lose.</p> <h2>3-2-1-1 strategy</h2> <ul> <li> <strong>Three</strong> (<strong>3</strong>) copies of the data (well, four counting production) </li><li> <strong>Two</strong> (<strong>2</strong>) different formats </li><li> <strong>One</strong> (<strong>1</strong>) copy off-site (encrypted in the <em>cloud</em>) </li><li> <strong>One</strong> (<strong>1</strong>) copy offline (encrypted on a secure device) </li> </ul> <h2>Nightly backups</h2> <p>Before migrating to <a href="https://www.proxmox.com/en/">Proxmox</a>, incremental backups were generated using <a href="https://linux.die.net/man/1/rsync">rsync</a> in a <a href="https://linux.die.net/man/8/cron">cron</a> job that looked like:</p> <pre><code>30 4 * * * rsync_script.sh || alert "Backup: ERROR"</code></pre> <p><code>rsync_script.sh</code> handled the configuration and stopped and restarted any services that needed it. In the event of a failure, <code>alert</code> would send a message to my notification service using <a href="https://linux.die.net/man/1/curl">curl</a>.</p> <p>Now, this is managed with <a href="https://www.proxmox.com/en/products/proxmox-backup-server/overview">Proxmox Backup Server</a> running in a separate VM. 
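</p> <p>For reference, the old <code>rsync_script.sh</code> boiled down to something like this (a simplified sketch; the source path, destination host, and service name here are hypothetical):</p> <pre><code>#!/bin/sh
# rsync_script.sh: nightly incremental backup (simplified sketch).
set -eu

SRC="/srv/data/"
DEST="backup@nas.lan:/backups/nightly/"

# Stop anything holding files open; restart it on exit, even after a failure.
systemctl stop myapp
trap 'systemctl start myapp' EXIT

# -a preserves permissions and timestamps; --delete mirrors removals.
rsync -a --delete "$SRC" "$DEST"</code></pre> <p>And <code>alert</code> was little more than a curl wrapper along the lines of <code>alert() { curl -s -d "$1" "https://notify.example/msg"; }</code>, so the cron job's <code>|| alert "Backup: ERROR"</code> only fired when the script exited non-zero.</p> <p>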
Daily snapshots are retained for the last 30 days, while monthly snapshots are kept for the last six months.</p> <h2>Off-site / Offline export</h2> <p>Every two weeks I run a script that:</p> <ol> <li> Creates a compressed archive (tar.gz) of the latest snapshot </li><li> Encrypts it using AES-256 </li><li> Uploads the encrypted archive to the cloud </li><li> Notifies me that the archive is ready to be moved offline </li> </ol> <p>In the past this ran every week, but I noticed most of my really important data wasn't changing much week over week. So far, the two-week schedule has been a good balance between convenience and data safety.</p> <h2>Testing</h2> <p>Regularly testing my backups has been crucial. I do it in two ways:</p> <ol> <li> <strong>Checksum validation:</strong> automated, runs every night, and notifies me of any problems. </li><li> <strong>Restore drill:</strong> once a month, I spin up a new VM using my latest off-site archive and confirm everything starts correctly. </li> </ol> <p>I keep a simple "restore checklist" in a repository (along with a printed copy) so anyone can follow it step-by-step.</p> <p>Being able to really trust my backups has turned scary situations into routine confidence boosts.</p>         <div class="tags"><a href="https://blog.n11n.ca/tag/backups">backups</a> <a href="https://blog.n11n.ca/tag/cron">cron</a> <a href="https://blog.n11n.ca/tag/data">data</a> <a href="https://blog.n11n.ca/tag/homelab">homelab</a> <a href="https://blog.n11n.ca/tag/rsync">rsync</a></div> <br>]]></content:encoded>
  </item>

  <item>
    <title>Homelab</title>
    <pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate>
    <link>https://blog.n11n.ca/homelab</link>
    <guid>https://blog.n11n.ca/2</guid>
    <description>A general overview of my homelab.</description>
    <content:encoded><![CDATA[<p>It all started with a grocery list.</p> <p>Our paper list was always getting left behind, lost, or thrown away. So I set out to digitize it. But I didn't want to rely on an external service like Dropbox or Google Docs to do it. Instead, the first spare computer I had sitting around got a fresh copy of <a href="https://www.debian.org/">Debian</a> and away I went.</p> <p>What began as a simple idea eventually grew into multiple servers, dozens of services, and endless projects.</p> <h2>Design</h2> <p>When I started to build my lab, the plan was to create something dependable. I didn't (and still don't) want to spend all my free time making sure everything simply works.</p> <p>The backup plan was exactly that: reliable backups. I drafted a policy and set out to make sure any important data would always be recoverable in an emergency. Six-<em>ish</em> years later, I still haven't managed to lose anything important. Yet. <em>*knocks on wood*</em></p> <p>Security was and still is a major concern. I don't like people touching my stuff, so I want to make sure only those authorized can get anything in or out. I'm constantly learning new concepts and finding ways to integrate them and improve the overall design.</p> <h2>Infrastructure</h2> <p>The backbone of my lab is the <a href="https://en.wikipedia.org/wiki/Forge_(software)">forge</a>, which is currently powered by <a href="https://forgejo.org/">Forgejo</a>. After testing and using a few other services, I migrated to Forgejo a few years ago because I really like the community and governance structure, plus it powers <a href="https://codeberg.org/">Codeberg</a>, where my public code lives.</p> <p>Within my forge, any production infrastructure is defined as code. This includes configuration files, build steps, and playbooks to deploy hosts.</p> <h2>Networking</h2> <p>Part of building something dependable was making it available and easily accessible. 
What began with open ports and a static IP address was quickly replaced by a reverse proxy to handle routing and encrypt connections with TLS.</p> <p>Once everything was accessible internally, the next challenge became remote access away from home. Again, open router ports were quickly replaced by VPN access for anyone deemed worthy.</p> <p>The lab has always been an environment for me to learn and explore new networking concepts. Over time this has included IPv6, mTLS, VLANs, and more.</p> <h2>Development</h2> <p>After a reliable system was in place, I could begin developing the tools and services to run on it. I found lots of amazing projects that can easily be self-hosted; however, I love learning and writing code, so I often end up building my own stuff.</p> <p>Here are just some of the things I have built or am working on:</p> <ul> <li> asset catalogues: books, hardware, digital assets, etc. </li><li> audio transcription tool </li><li> digital toolbox </li><li> pastebin </li><li> personal finance manager </li><li> text/audio/video chat using WebRTC </li><li> video/image sharing platforms </li> </ul> <p>I made most of these just for me (plus my friends and family), but some might end up being released into the wild one day.</p> <h2>Documentation</h2> <p>Besides backups, good documentation has been the most important part of my lab. I started with handwritten notes and slowly grew them into various tutorials, how-to guides, explanations, and reference materials. More than once I've forgotten how (or why) I did something and only had notes from past me to rely on.</p> <p>Although my documentation is far from perfect, it's constantly evolving to stay useful and relevant.</p> <h2>Now</h2> <p>Everything is in a pretty good state. Over the last couple of years I've focused on reducing the size and complexity of my lab without sacrificing performance. 
Turns out I didn't really need much to run everything, plus I sort of enjoy the challenge of doing more with less.</p> <p>Right now that consists of:</p> <ul> <li> <strong>Protectli Vault</strong>: router running OPNsense </li><li> <strong>Intel NUC</strong>: running virtualized dev/prod Debian environments using Proxmox </li><li> <strong>Raspberry Pi</strong>: mainly to control my sit-stand desk and a few other things </li> </ul> <p>I also have a <a href="https://en.wikipedia.org/wiki/Virtual_private_server">VPS</a> running several services outside of my lab.</p> <h2>Next</h2> <p>New hardware is almost always on the radar; however, I'm frugal (aka cheap) and a strong advocate of "if it ain't broke, don't fix it". I'm looking for a decent <a href="https://en.wikipedia.org/wiki/Uninterruptible_power_supply">UPS</a> though, and some extra storage could always be put to good use.</p> <p>With a stable system and most of my development needs covered for now, I'm excitedly looking for the next problem to solve.</p> <p>Whatever that might be.</p>         <div class="tags"><a href="https://blog.n11n.ca/tag/forgejo">forgejo</a> <a href="https://blog.n11n.ca/tag/homelab">homelab</a> <a href="https://blog.n11n.ca/tag/self-hosted">self-hosted</a></div> <br>]]></content:encoded>
  </item>
</channel>
</rss>
