tag:blogger.com,1999:blog-22968030281882671162024-03-13T14:16:49.988-04:00Industrial CuriositySoftware, Comics, PoetryAdam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.comBlogger52125tag:blogger.com,1999:blog-2296803028188267116.post-19244364512877277702022-01-08T17:23:00.003-05:002022-01-08T17:23:42.039-05:00Managing site certificates with NGINX and Certbot<h3 style="text-align: left;">And removing a single domain certificate without breaking everything else</h3><p>Do you operate multiple domains from the same webserver? Do you have a webserver operated by NGINX? Do you have Certbot managing your certificates? This is a set of instructions for creating your certificates correctly and removing a single domain from your configuration, after I found some confusing ones that resulted in me knocking out my server for a little while…</p><h2 style="text-align: left;">A note before we begin</h2><p>If you’re rather in a hurry to remove a domain from a messy configuration, STOP. Re-organizing your sites and regenerating your certificates is not only pretty quick and mostly painless — and required, if you want to remove a single domain without making NGINX break down and throw a wobbly — it’s very much the same process.</p><h2 style="text-align: left;">Organizing your existing NGINX sites</h2><p>Ensure that you know which domains are configured in which site files, <i>in particular make sure that you do not include servers for multiple domains in the same file</i>.</p><p>To do this, look through your enabled site files under <b><span style="font-family: courier;">/etc/nginx/sites-enabled</span></b> to find relevant server entries. 
While you’re there, you might want to note any certificates which are already used by those server entries; those will be the lines starting with <b><span style="font-family: courier;">ssl_certificate</span></b>.</p><p>If you need to reorganize your site files, remember that their actual location must be in the <b><span style="font-family: courier;">/etc/nginx/sites-available</span></b> path. To enable a site <b><span style="font-family: courier;">/etc/nginx/sites-available/example.com</span></b>, create a symlink in the <b><span style="font-family: courier;">/etc/nginx/sites-enabled</span></b> path with</p><p><b><span style="font-family: courier;">> ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/</span></b></p><p>and to disable a site, remove it from the <b><span style="font-family: courier;">/etc/nginx/sites-enabled</span></b> path with</p><p><b><span style="font-family: courier;">> rm /etc/nginx/sites-enabled/example.com</span></b></p><h2 style="text-align: left;">Generating certificates with Certbot</h2><p>Once your sites are organized so that each domain has its own file, generate certificates for each domain and its subdomains with</p><p><b><span style="font-family: courier;">> sudo certbot --nginx -d example.com -d www.example.com</span></b></p><p>This will generate a new certificate if needed and update the site file accordingly.</p><p>To ensure that everything is as it should be, review the updated site files and then validate them with</p><p><b><span style="font-family: courier;">> sudo nginx -t</span></b></p><p>To restart NGINX once you’re ready, run</p><p><b><span style="font-family: courier;">> sudo service nginx restart</span></b></p><h2 style="text-align: left;">Removing obsolete domains and certificates</h2><p>Now that your site files and certificates are configured correctly, it’s time to remove any obsolete certificates that are no longer referenced.</p><p>Run <b><span style="font-family: courier;">sudo certbot 
certificates</span></b> to list the existing certificates, paying attention to their names as well as their certificate and key paths. These paths are referenced in your NGINX site files, so you can review what’s active and required, and be certain that the certificate(s) you’re removing are unused.</p><p>When you’re confident that a certificate <b>example.com</b> is no longer in use, simply remove it by running</p><p><b><span style="font-family: courier;">> sudo certbot delete --cert-name example.com</span></b></p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/managing-site-certificates-with-nginx-and-certbot-4cc8f5dd4a53" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-32411917831657723282022-01-08T17:15:00.002-05:002022-01-08T17:15:58.573-05:00A VSCode extension to make your code more secure<p> I recently installed <a href="https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml" target="_blank">Red Hat’s YAML VS Code extension</a> to assist me with Bamboo Specs, <a href="https://www.youtube.com/watch?v=OjkbonKOzec" target="_blank">convinced by the Bald Bearded Builder that this was the linter for me</a> (check out its schema support!). 
I don’t usually appreciate extensions recommending things to me (and, to be fair, I don’t know that that’s precisely what happened), but this morning a toaster popped up suggesting that I install their <a href="https://marketplace.visualstudio.com/items?itemName=redhat.fabric8-analytics" target="_blank">Dependency Analytics</a> extension and I am SO glad that I clicked on it!</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjbhkQTNz9ISkKpMURXmQnIYZEsI8l_SIxR3OxI_Jjj3LV9EI1ZKn7elSJFD-2V-YMALyDx_HEoZX2dZyfR6_l2O9rCFhCyLTSbSQ5w7wgwcIC-i5FHlJgoq4n0ufeyRGjVRseULHK2_Z2uuhUFWpaJujMjl6IpggeL1NuY7UuwCfQxIk2N_oL2ohZplQ=s1400" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="581" data-original-width="1400" height="166" src="https://blogger.googleusercontent.com/img/a/AVvXsEjbhkQTNz9ISkKpMURXmQnIYZEsI8l_SIxR3OxI_Jjj3LV9EI1ZKn7elSJFD-2V-YMALyDx_HEoZX2dZyfR6_l2O9rCFhCyLTSbSQ5w7wgwcIC-i5FHlJgoq4n0ufeyRGjVRseULHK2_Z2uuhUFWpaJujMjl6IpggeL1NuY7UuwCfQxIk2N_oL2ohZplQ=w400-h166" width="400" /></a></div><br /><p>Red Hat’s “Dependency Analytics” extension is fantastic, it’s powered by <a href="https://snyk.io/" target="_blank">Snyk</a>’s vulnerability database and when opening one of my projects’ dependency files* I immediately saw red and was able to click my way clear in a matter of minutes**.</p><p>* My current team has projects written in all four of the supported languages, the only thing I’m personally missing is an extension for Visual Studio “proper” for C#…</p><p>** Well, okay, one of the dependency suggestions included a breaking change, but the rest of them were trivial upgrades.</p><p>Well done, Red Hat, for making safety and security just a little bit easier!</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/a-vscode-extension-to-make-your-code-more-secure-de4093c167" 
target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-41196384709479179152022-01-08T17:07:00.005-05:002022-01-08T17:07:57.656-05:00How to open Debian archives with 7-Zip<p> I cannot believe I’m writing this, but here we are: 7-Zip is perfectly capable of opening Debian package files (which are archived using ar), but for some inexplicable reason <a href="https://sourceforge.net/p/sevenzip/bugs/1130/" target="_blank">they’ve decided to hide the control components by default</a>.</p><p>Fortunately, opening the files properly isn’t too complicated, even if it’s not as convenient as simply opening the file: right-click on the Debian file to access the 7-Zip context menu, then hover over the “Open archive” entry marked with the submenu arrow and select “*”.</p><p>Simple enough, I guess... if you know what you’re looking for.</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/how-to-open-debian-archives-with-7-zip-34a7edcd8ea0" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-3386937581159181412022-01-08T16:56:00.000-05:002022-01-08T16:56:02.576-05:00A quick-start guide to setting up a Debian guest on VMWare WorkStation 15/16 Player<p>I don’t know why everything needs to be subtly non-standard, but over the course of the last twenty or so virtual machine reconstructions I’ve come up with a simple checklist for setting up a Debian guest on VMWare’s WorkStation (Windows) and I thought I’d share it here.</p><p></p><ol style="text-align: left;"><li>Download your Debian .iso <a href="https://www.debian.org/releases/buster/debian-installer/" target="_blank">here</a> (I recommend the netinst CD image)</li><li>Create a new virtual machine 
using the downloaded .iso and check that the default configuration is satisfactory (I tend to need a little more power, usually 2 CPUs does it for me). Note that increasing the size of a hard disk is far more complicated and risky than a regular user would expect, so give yourself a healthy buffer. In my experience, it’s less painful to rebuild a bigger machine than it is to extend the disk size.</li><li>Install the OS — I find the graphical installer to be just fine for my purposes. The most impactful configuration option is your choice of desktop environment (I usually choose Xfce, but I’m starting to like GNOME again). It’s probably a good idea to install the SSH server as well.</li><li>Once installation is complete, click the VMWare button at the bottom of the screen to signal that it’s done, and restart the machine.</li><li>To grant yourself the ability to use sudo, open the terminal and run either<br /><b><span style="font-family: courier;">> su -</span></b><br />or<br /><b><span style="font-family: courier;">> su -c 'su -'</span></b><br />if the first isn’t allowed.<br />Then run<br /><b><span style="font-family: courier;">> usermod -aG sudo &lt;username&gt;</span></b><br />to add yourself to the sudoers group. 
You will need to log out and back in for this to take effect.</li><li>Install the following to be able to install <b>VMWare Tools</b>, which enables things like copying and pasting between host and guest machines:<br /><b><span style="font-family: courier;">> sudo apt install -y open-vm-tools open-vm-tools-desktop linux-source</span></b></li></ol><p></p><h2 style="text-align: left;">Installing VMWare Tools in WorkStation Player 16</h2><p>Open the <b>Virtual Machine Settings</b>, select the <b>Options</b> tab, then select <b>VMWare Tools</b>: select “Synchronize guest time with host” and “Update automatically”, then restart the virtual machine.</p><h2 style="text-align: left;">Installing VMWare Tools in WorkStation Player 15</h2><p></p><ol style="text-align: left;"><li>Open the <b>VM</b> menu and select <b>Install VMWare Tools</b>.</li><li>Mount the <b>VMWare Tools</b> CD:<br /><b><span style="font-family: courier;">> sudo mount /dev/cdrom</span></b></li><li>Extract the installer to your current directory (or maybe create a subdirectory for it) using tab auto-complete in place of the ellipsis:<br /><b><span style="font-family: courier;">> tar -xf /media/cdrom/VMWareTools…</span></b></li><li>Install the required build tools:<br /><b><span style="font-family: courier;">> sudo apt-get install -y autoconf automake binutils cpp gcc linux-headers-$(uname -r) make psmisc</span></b></li><li>Try to run the <b>.pl</b> script in the extracted folder, expect it to fail, restart the machine anyway.</li></ol><p></p><p>At this point you should have your VM up and running and be able to copy / paste / drag files between your machines. Now go grab yourself another cup of coffee, you deserve it! 
</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/a-quick-start-guide-to-setting-up-a-debian-guest-on-vmware-workstation-15-16-player-18b518353fe6" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-73216503791457126082021-10-08T09:08:00.005-04:002021-10-08T09:10:51.753-04:00An Impatient Developer’s Guide to Debian Maintenance (Installation) Scripts and Package Diverts<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIY-_S-gS7X6zwzUeZzVxgxeIak_grdbDMa4wuAzDLmfdE2zvmbPkDyBYPuPYhwIwR_B9ZRJ-_qlIsjNxtoK5UqZ2ALA50dSSQToJA7Z5iqbxBWUQso0No5QZNNJW3eof3IoyNyn26udGM/s2048/pexels-andrea-piacquadio-3768126.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1372" data-original-width="2048" height="214" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIY-_S-gS7X6zwzUeZzVxgxeIak_grdbDMa4wuAzDLmfdE2zvmbPkDyBYPuPYhwIwR_B9ZRJ-_qlIsjNxtoK5UqZ2ALA50dSSQToJA7Z5iqbxBWUQso0No5QZNNJW3eof3IoyNyn26udGM/s320/pexels-andrea-piacquadio-3768126.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;">Photo by <a href="https://www.pexels.com/@olly?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" target="_blank">Andrea Piacquadio</a> from <a href="https://www.pexels.com/photo/woman-holding-books-3768126/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" target="_blank">Pexels</a></div><br /> The people involved in coming up with the <b>dpkg </b>scheme for installing / upgrading / downgrading / removing packages are very clever. 
While unintuitive to the uninitiated, the scheme is mostly logical and reasonable, though there are some points where I feel a little more effort and consideration could have made a world of difference.<p></p><p></p><blockquote><i>“The evil that men do lives on and on” — Iron Maiden</i></blockquote><p></p><p>In addition to regular installation behaviour, I needed to wrap my head around “package diverts”, which is a very clever system for enabling packages to handle file conflicts. Except that it doesn’t handle what I would consider to be a very basic use case:</p><p></p><ol style="text-align: left;"><li>Install an initial version of our package.<br /><br /></li><li>Discover that our package needs to overwrite a file that’s installed by an upstream dependency.<br /><br /></li><li>Create a new version of our package that includes the file and configures a “package divert” to safely stow the dependency’s version.<br /><br /></li><li>Remove the “package divert” on the file if the newer version of the package is uninstalled <i>or downgraded to the previous version that doesn’t include it</i>.</li></ol><p></p><p>That last part, in italics? That’s the kicker right there. 
Read on to understand why.</p><h2 style="text-align: left;">Debian Installation Script Logic In Plain English</h2><p>After poring over <a href="https://www.debian.org/doc/debian-policy/ap-flowcharts.html" target="_blank">the Debian maintainer scripts flowcharts</a>, I felt I had a pretty good handle on things, but there are a couple of little “gotcha”s, so it’s worth providing a brief summary of the general flow in plain English.</p><p>Debian maintenance scripts are run by <b>apt </b>and <b>dpkg </b>at specific points in the installation process:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.debian.org/doc/debian-policy/ap-flowcharts.html" style="margin-left: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="565" data-original-width="700" height="323" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjH6blOjc3FmegMUjtVTZIgY8VdnEcy9Oe3tDMXvZwU_p7C6RcZrA-VgZ7mlwKQGiAMcnLgMiYdWnmYmbx-FzC79OLfVkOI7PDx8QdPwFjlzonnm5YKfIPBWG3RSAI0K-gOgjpRzf3h3H5a/w400-h323/1_XviiNbHJ60zZKmZnP_kgOg.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://www.debian.org/doc/debian-policy/ap-flowcharts.html" style="text-align: left;" target="_blank">https://www.debian.org/doc/debian-policy/ap-flowcharts.html</a></div><p></p><ol style="text-align: left;"><li>If the package is being removed or upgraded (meaning that a different version is being installed; to Debian maintainers, “upgraded” also means “downgraded”), the previously installed version’s <b>prerm </b>script is called.<br /><br /></li><li>If the package is being installed or upgraded, the new version’s <b>preinst </b>script is run.<br /><br /></li><li>If the package is not being removed, the new version’s package contents are unpacked. 
This will overwrite existing files if they were installed by the same package, or fail the installation if it attempts to overwrite another package’s files without a package divert configured.<br /><br /></li><li>If the package is being removed, its files are deleted.<br /><br /></li><li>The previously installed version’s <b>postrm </b>script is called if the package is being removed or upgraded.<br /><br /></li><li>If the package is not being removed, any package contents belonging to the previous version that do not also exist in the new one are removed.<br /><br /></li><li>If the package is not being removed, the new version’s <b>postinst </b>script is called.</li></ol><p></p><p>It is important to note that if a maintenance script fails with a non-zero exit code, the package will be in a broken state that can be very difficult (sometimes impossible) to recover. From our experience, it’s best to catch all exceptions, log them, “exit gracefully” with an exit code of 0, and hope for the best.</p><p>Also from our experience, it’s a good idea for maintenance scripts to log <i>everything </i>to the <b>stderr </b>stream in order to preserve chronological order.</p><h2 style="text-align: left;">Debian Package Diverts for the Uninitiated</h2><p>The principle of <a href="https://www.debian.org/doc/debian-policy/ap-pkg-diversions.html" target="_blank">Debian package diverts</a> is straightforward enough: when you want to include a file in your package contents that conflicts with another package’s file (i.e. 
the absolute paths are identical), you create a “divert” on that file so that any other package’s version of that file is “diverted” to a different file name.</p><h3 style="text-align: left;">Creating a Package Divert</h3><p>To create a package divert, your package’s preinst script should run the following command:</p><p><span style="font-family: courier;"><b>dpkg-divert --package my-package-name --add --rename \</b></span></p><p><span style="font-family: courier;"><b> --divert /path/to/original.divert-ext /path/to/original</b></span></p><p></p><blockquote><i>The <b>preinst </b>script is the place to do this because the divert must be in place before the package contents are unpacked. The <b>dpkg-divert</b> command is idempotent, so having it called in the <b>preinst </b>of every install is fine.</i></blockquote><p></p><h3 style="text-align: left;">Removing a Package Divert</h3><p>When your package is uninstalled, it’s good practice to remove the package divert and rename the diverted files back to their original file names. It’s recommended to remove the package divert in the <b>postrm </b>script, which makes perfect sense when uninstalling a package because the files are deleted <i>before </i><b>postrm </b>is called.</p><p><b><span style="font-family: courier;">dpkg-divert --package my-package-name --remove --rename \</span></b></p><p><b><span style="font-family: courier;"> --divert /path/to/original.divert-ext /path/to/original</span></b></p><p>When removing a package, the package’s files have been deleted already and removing a divert simply renames the diverted file back to its original file name.</p><p>When upgrading a package, however, the files are only deleted <i>after </i><b>postrm </b>has been called. 
This means that a call to <b><span style="font-family: courier;">dpkg-divert --remove</span></b> will fail because it would have to overwrite the upgraded package’s copy of the file that hasn’t yet been removed.</p><p>It also means that if you delete your package’s file in the <b>postrm </b>in order to remove the divert, <i>the original package’s file will be deleted after your <b>postrm </b>because it will have been identified as belonging to the upgraded package</i>.</p><p></p><blockquote><i>“Insanity is contagious.” ― Joseph Heller, Catch-22</i></blockquote><p></p><p>It is for this reason that if we remove the divert in the <b>postrm </b>during a downgrade to a version that does not include the file in its package contents, we will lose the original file. If we do not remove the package divert, we will retain the diverted original file, but it will be renamed and therefore not serve its purpose. In our downgrading scenario, the <b>postinst </b>script that’s run after the file removal phase of an installation belongs to the older version of the package that didn’t know about the file, or package diverts, so that won’t be of any use to us. In short, the only way to downgrade our package is to completely remove it and install the older version, which for us is simply not an option.</p><h2 style="text-align: left;">Epilogue</h2><p>Fortunately, my team and I are in the position that the original file also belongs to a package that we maintain, and we are able to overwrite the original file in the <b>postinst </b>script* with confidence and impunity. That means no rolling back without removing and then reinstalling the original package, which in our case happens to be impossible.</p><p>* Of course, the file can no longer be included in the package contents with its original path or the installation will fail.</p>
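<p>As a rough sketch, the removal logic described above might look like the following in a <b>postrm</b> script. The package name and paths are the same placeholder values used earlier, and exactly which arguments you treat as a “real removal” is a judgment call:</p>

```shell
#!/bin/sh
# Hypothetical postrm sketch for "my-package-name" (placeholder values).
# Only undo the divert on a real removal: on "upgrade" (which, to Debian
# maintainers, also covers downgrades) the new version's copy of the file
# is still on disk, so --remove --rename would fail trying to overwrite it.
undo_divert_for() {
    case "$1" in
        remove|purge|abort-install) return 0 ;;
        *) return 1 ;;  # upgrade, failed-upgrade, disappear, ...
    esac
}

if undo_divert_for "$1"; then
    dpkg-divert --package my-package-name --remove --rename \
        --divert /path/to/original.divert-ext /path/to/original
fi
```

Per the warning above about non-zero exit codes, a production script would probably also log any <b>dpkg-divert</b> failure and exit 0 rather than leave the package in a broken state.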
<div style="height: 0px; padding-bottom: 56%; position: relative; width: 100%;"><iframe allowfullscreen="" class="giphy-embed" frameborder="0" height="100%" src="https://giphy.com/embed/Jg07mNy8eKBMh8JqJk" style="position: absolute;" width="100%"></iframe></div>
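<p>For completeness, here’s a minimal sketch of the epilogue’s workaround: ship the replacement file under a non-conflicting path and copy it over the original in <b>postinst</b>. The paths and package layout are hypothetical, purely for illustration:</p>

```shell
#!/bin/sh
# Hypothetical postinst sketch of the epilogue's workaround: the
# replacement file is shipped under a non-conflicting path (NOT at the
# original path, or the unpack step would fail) and copied over the
# original whenever the package is configured.
install_replacement() {
    # $1: file shipped by this package, $2: the conflicting original path
    cp "$1" "$2"
}

case "$1" in
    configure)
        # Placeholder paths for illustration only
        install_replacement /usr/share/my-package-name/original.replacement \
                            /path/to/original
        ;;
esac
```

The trade-off is the one noted in the epilogue: dpkg no longer tracks the file at its original path, so removing the package won’t restore it.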
<p>Hope you’ve found this helpful! Please share in the comments if you’ve had similar experiences, or if you know of any other workarounds!</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/an-impatient-developers-guide-to-debian-maintenance-installation-scripts-and-package-diverts-ef8ac6272982" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-3860845834866213302021-10-07T12:01:00.000-04:002021-10-07T12:01:04.894-04:00The Day Our Python gRPC Connections Died<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNn4L2oknoCDA3g-JTuu1AU6mKx7XX1hy4Hs2SgVIPtTmKjigHz1NeHX2KjVHzbt7pNX3iWoI5wmma4OVWTEoKQSVwYEGHpg6TsipWFGJyykW3xMjAuA0SYBeXigJ7kFNaBRaNrZZ8YYWL/s640/connection-broken-98523_640.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="640" data-original-width="640" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNn4L2oknoCDA3g-JTuu1AU6mKx7XX1hy4Hs2SgVIPtTmKjigHz1NeHX2KjVHzbt7pNX3iWoI5wmma4OVWTEoKQSVwYEGHpg6TsipWFGJyykW3xMjAuA0SYBeXigJ7kFNaBRaNrZZ8YYWL/w200-h200/connection-broken-98523_640.png" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;">Image by <a href="https://pixabay.com/users/openicons-28911/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=98523" target="_blank">OpenIcons</a> from <a href="https://pixabay.com/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=98523" target="_blank">Pixabay</a></div><p>On the 30th of September 2021, a heavily-used root certificate — <i>DST Root CA X3</i> — expired. 
You can read all about it <a href="https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/" target="_blank">here</a>.</p><p>According to a handful of forum posts and GitHub issues I’ve come across, the change has caused a fair amount of pain to those unfortunates who failed to heed the warnings, but for most of us this really wasn’t a surprise. For our team, the expiration date came and went and we didn’t even notice! Until our primary in-house testing tool began failing its connection tests with the following:</p><p><b>Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED</b></p><p>Our gRPC connection tests are written in Python (using the <b>grpcio</b> and <b>grpcio-tools</b> packages), and run on a variety of Linux machines and Docker images. Hunting through the forums, it looked like upgrading to the latest versions of the <b>grpcio</b> dependencies should do the trick, but it didn’t.</p><p>At least not by itself.</p><p>We eventually determined that the problem was that <b>DST Root CA X3</b> was still registered as a certificate authority, and it took so long to figure out how to remove it on Debian that I realized that I had to post about it:</p><p>1. To see if the <b>DST Root CA X3</b> certificate is configured as a root authority, list the contents of your <b>/etc/ssl/certs</b> folder:</p><p><span style="font-family: courier;"><b>> ls -l /etc/ssl/certs | grep dst</b></span></p><p><span style="font-family: courier;">lrwxrwxrwx 1 root root 53 Sep 11 2020 DST_Root_CA_X3.pem -> /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt</span></p><p>2. Edit <b>/etc/ca-certificates.conf</b> and insert a <b>!</b> at the beginning of the name of the <b>DST Root CA X3</b> certificate to flag it as removed:</p><p><span style="font-family: courier;"><b>> sudo sed -i 's@^mozilla/DST_Root_CA_X3.crt@!mozilla/DST_Root_CA_X3.crt@' /etc/ca-certificates.conf</b></span></p><p>3. To update the certificates, run the following:</p><p><b><span style="font-family: courier;">> sudo /usr/sbin/update-ca-certificates -f</span></b></p><p>Note that it must be fully qualified as the <b>/usr/sbin</b> directory is not in the <b>PATH</b> by default, and it might be necessary to install the <b>ca-certificates</b> package using apt. The “f” of the <b>-f</b> flag apparently stands for “fresh”.</p><p>4. Set the <b>GRPC_DEFAULT_SSL_ROOTS_FILE_PATH</b> environment variable, which is required for the above changes to be respected:</p><p><b><span style="font-family: courier;">> export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/certs/ca-certificates.crt</span></b></p><p>Once all that’s done, you should be able to connect successfully!</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/the-day-our-python-grpc-connections-died-8dc5bff30fad" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-77238550619920187292021-08-08T17:28:00.004-04:002021-08-08T17:30:59.656-04:00Weaning Off The Google<div style="text-align: left;"><i> (Or, How I’m Continuing to use Google’s Products Without A Sense of Existential Dread)</i></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTtTpW429EKoAKjkWx63tJWSOBVcKASf5cDvpaJm7n60Ji7q-TqzkqORNvoafa31ngVx4LJZ3E2suSvaU1GKME-WV4G-tXPbQomJwtTrDZsah3txsgoIMgaXdm_ePAlu5AvAxGptW8gZFm/s640/pexels-pixabay-39584.jpeg" style="margin-left: 1em; 
margin-right: 1em;"><img border="0" data-original-height="426" data-original-width="640" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTtTpW429EKoAKjkWx63tJWSOBVcKASf5cDvpaJm7n60Ji7q-TqzkqORNvoafa31ngVx4LJZ3E2suSvaU1GKME-WV4G-tXPbQomJwtTrDZsah3txsgoIMgaXdm_ePAlu5AvAxGptW8gZFm/s320/pexels-pixabay-39584.jpeg" width="320" /></a></div><div style="text-align: center;"><a href="https://www.pexels.com/photo/black-android-smartphone-on-top-of-white-book-39584/" target="_blank">Photo by Pixabay on Pexels</a></div><p style="text-align: center;"><br /></p><h3 style="text-align: left;">My First Brush With Account Recovery Tyranny</h3><p>Just over five years ago, my wife and I decided to leave Canada for South Africa, and with all the madness of the migration (after six months of being new parents with no support, we left in a big rush to be close to our family) it never occurred to me to ensure that our email accounts were all configured correctly for where we were going. When we finally arrived after two weeks of frenzied selling and packing (and deciding which half of our lives to leave behind) and two days of hard travelling, I thought I’d log in to check my emails.</p><p>Google detected that we were logging in from a different country, and wanted us to authenticate using our mobile numbers. Which had been disconnected the day before we left. As some of you may have had the misfortune of discovering, there are no human beings to speak to when attempting to recover access to a Google account; the entire process is automated, and even knowing everything about your account (including your personal information, who you last communicated with, previous passwords, etc.) 
is no guarantee that you’re going to get it back.</p><p>We were very, VERY lucky that a really kind support agent of our Canadian mobile carrier was willing (and able) to reinstate my released number temporarily and read me back the verification codes, and even though we were forced to pay through the nose for that little service it was well worth not losing… pretty much everything that matters to a 21st century digital person.</p><h3 style="text-align: left;">A Reminder That It’s A False Sense Of Security</h3><p>After successfully recovering our accounts, we put the experience behind us and made a mental note to sort out account verification <i>before</i> the next time we move. We have continued to use our GMail accounts as primary email addresses for a thousand other services, further entrenching our already-heavy reliance on a service that’s “easy come, easy go”.</p><p></p><blockquote><a href="https://medium.com/bloomberg/gmail-hooked-us-on-free-storage-now-google-is-making-us-pay-affea6dd5d64" target="_blank"><i>What The Google Giveth, The Google Taketh Away</i></a></blockquote><p></p><p>A few months ago I read an article on Medium, <b><a href="https://medium.com/business-insider/what-its-like-to-get-locked-out-of-google-indefinitely-36c054aa5db0" target="_blank">What it’s like to get locked out of Google indefinitely</a></b>, and that sense of dread came rushing back as I realized that we still have absolutely no way out if we ever get stuck like that again. I’ve been a bit preoccupied with other things, though, so I haven’t really done much about it. 
Every once in a while I’d look at my task list, be reminded that we’re still at the mercy of a heartless, mindless system, and continue my day carrying just a teensy bit more anxiety than usual.</p><p>(If that article’s not convincing enough, please also take a look at <b><a href="https://medium.com/bling-financial/google-has-threatened-to-delete-all-our-google-accounts-over-nothing-13a05a31a55a" target="_blank">Google has Threatened to Delete all our Google Accounts Over Nothing</a></b>)</p><h3 style="text-align: left;">But I Like GMail!</h3><p>Okay, here’s the deal. I <i>like</i> Google’s products. I like them better than other products. They’re <i>good </i>products. Good enough that I’ll ignore the data mining, the ads, even the fact that it’s Google providing them.</p><p>Honestly, sometimes I think the only two reasons I prefer GMail to any other provider are the fact that organization is by labels instead of folders, and their custom filtering is excellent. I guess that’s all any other provider would have to offer for me to be ready to jump ship.</p><h3 style="text-align: left;">Separating Email From Account Management</h3><p>The first step to safeguarding all my other accounts was to establish one that nobody could take away from me. Fortunately, I already have a domain under my control, but then… a conundrum. What email address do I use to secure the account that manages the server that manages my email address?</p><p>Fortunately, that’s easy — multiple accounts can be used to secure that one. 
I signed up for a reliable, secure email address from a different provider (ProtonMail), so I at least have backup access in case either of them fails me.</p><p>Having taken care of that, I set up my own email address (which will be described in a separate post specific to configuring a Postfix server — my post about <b><a href="https://www.industrialcuriosity.com/2019/08/mail-forwarding-and-piping-emails-with.html" target="_blank">Mail forwarding and piping emails with Postfix for multiple domains</a></b> needs a bit of an update since I learned <a href="https://www.digitalocean.com/community/questions/how-do-i-encrypt-emails-sent-from-my-server?answer=38701" target="_blank">how to set the outgoing encryption</a>, and even <i>that</i> isn’t sufficient for getting past spam filters), which now forwards to both of the other email accounts (my GMail and my new account-management email).</p><p>At this point, I was finally ready to begin the laborious process of switching the primary email of all my other accounts. It’s been an educational experience, with some services easier to update than others, but after investing a good few hours I believe I’m finally through the worst of it and have the essential services covered.</p><h3 style="text-align: left;">Backing Up Account Content</h3><p>I’ve been considering the fact that while getting locked out of my accounts is one of my greatest fears, losing access to gigabytes of email history, documents, and videos wouldn’t be too much fun either.</p><p>During the course of the last couple of weeks, I was struggling to find an old video that I was <i>certain</i> I’d uploaded to YouTube, and eventually found it on an unclaimed channel. Google has a channel claim process, though… it’s just fully automated.
After trying and failing to claim it with my active accounts, I realized that it must have been attached to an old account that I’d deleted many years ago.</p><p>Did you know that a deleted Google account is <i>completely</i> unrecoverable? It is literally impossible to reinstate it, and the username will be locked forever so there’s not even a possibility of recreating it.</p><p>Over the course of this weekend I came across another Medium article, <b><a href="https://medium.com/pcmag-access/how-to-quit-gmail-and-reclaim-your-privacy-cb995a99589" target="_blank">How to Quit Gmail and Reclaim Your Privacy</a></b>. There’s a lot of good advice in there, but No. 7, “Don’t Delete Your Old Address”? Consider that a golden rule.</p><p>Personally, I have a terabyte drive (or two) that I use for backups, but I’ve come to the conclusion that I’m not nearly as capable of protecting my physical disks as the professionals. I’m a big fan of Dropbox, which has an excellent interface and syncing tools, but I’m not a fan of their pricing models. I’ve now resorted to uploading my backups to an AWS S3 bucket, treating it as cold storage only to be used in case of emergency.</p><p>For the low prices (for my purposes, anything from the standard storage plans to Glacier will do) and the safety guarantee, I’m sold.</p><h3 style="text-align: left;">Next Steps</h3><p>I’ve now set myself a regular reminder to download my Google data and upload it to my backup bucket.
At this point, I’m considering this little adventure complete and I’m ready to relax and enjoy the remainder of our long weekend in celebration of South Africa’s National Women’s Day.</p><p>I hope this article has been helpful, if you have anything you’d like to add (or disagree with) please let me know in the comments!</p><p style="text-align: center;">...</p><p>Originally published at <a href="https://therightstuff.medium.com/weaning-off-the-google-4a3fc9a36858" target="_blank">https://therightstuff.medium.com</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-86926044135689525582021-08-06T02:51:00.011-04:002021-10-01T13:41:11.322-04:00Bamboo YAML Specs Tips and Tricks (For Fun and Profit)<div class="separator"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaO1KCEAano3PVZY9g0Y6E5ZLBcVyX3xcHSz2hCg0Yo1DFMLrHJAdtmmSj5amrtp2QDf5VxOp6b9yYHIN1qcmKtiVkgytkN5SQDPIxtnPd4zh5-7T-msGeLWZAqzuzTaXj6MAqYeKkCgdt/s960/jay-wennington-s-fD5Tpew2k-unsplash.jpeg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="A cute panda eating bamboo" border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaO1KCEAano3PVZY9g0Y6E5ZLBcVyX3xcHSz2hCg0Yo1DFMLrHJAdtmmSj5amrtp2QDf5VxOp6b9yYHIN1qcmKtiVkgytkN5SQDPIxtnPd4zh5-7T-msGeLWZAqzuzTaXj6MAqYeKkCgdt/w213-h320/jay-wennington-s-fD5Tpew2k-unsplash.jpeg" title="Photo by Jay Wennington on Unsplash" width="213" /></a></div><div style="text-align: left;">[UPDATED 2021.10.01 WITH ADDITIONAL DEPLOYMENT PLAN LEARNINGS]</div><h1 style="text-align: left;">Introduction</h1>“In for a penny, in for a pound” — that’s our team’s current approach regarding Atlassian’s products. 
As part of our efforts to code as much of our infrastructure as possible, we’ve recently begun migrating old build plans to Bamboo Specs and, as non-Java devs who don’t want more headaches than are absolutely necessary, we’ve chosen to work with the Bamboo Specs YAML offering.<br /><br />YAML specs are pretty great. Migrations are assisted by the fact that manually-created plans will show you their YAML translations, so in most cases a simple copy/paste is all you need. I mean, aside from migrating to linked repositories and ensuring they’re configured correctly…<br /><br />Having said that, Bamboo’s YAML specs are an incomplete product with undocumented critical features, and fail to provide out-of-the-box support for a surprising number of standard use cases that one would expect software engineers building software for other software engineers to appreciate the value of. <div>They’re still pretty great in spite of that, but overcoming the limitations of incomplete specs definitions and deployment plans is not exactly intuitive. This article attempts to cover some of the missing pieces and suggest some workarounds that we’ve found useful.<div><h3 style="text-align: left;"><br /></h3><h1 style="text-align: left;">Configuring a linked repository for Bamboo Specs</h1><div>Bamboo Specs require the configuration of a linked repository. Head to <b>settings</b> -> <b>linked repositories</b> (this will require admin permissions) to create or update a linked repository configuration. The two most important steps to be taken here are as follows:<br /><ol style="text-align: left;"><li>Determine which branch of your repository will be used by Bamboo to read the specs file. Only one branch can be considered the source of truth; my personal recommendation is to make it the <b>development</b> branch.<br /><br /></li><li>Two sets of permissions need to be configured correctly in order for Bamboo Specs to be able to do their job:<br /><br />a.
First, the Bamboo project must give permissions to the linked repository to create and modify the build and deployment plans. Head to the project you want your linked repository to operate in, go to <b>Project Settings</b>, then <b>Bamboo Specs repositories</b>, and add the linked repository.<br /><br />b. Under the <b>Bamboo Specs</b> tab of the linked repository, enable both <b>Access all projects</b> and <b>Access all repositories</b>. I’m pretty confident that this requirement is not a security best-practice, but in my experience Bamboo Specs won’t work without them.</li></ol><h1 style="text-align: left;">Testing Bamboo Specs</h1>When experimenting with changes to the specs, I’ve found it useful to do the following:<br /><ol style="text-align: left;"><li>Create a testing branch (a feature / bugfix branch).<br /><br /></li><li>Change the build and deployment plans’ names and keys and push them to origin so that your changes won’t overwrite the existing plans.<br /><br /></li><li>Set the testing branch to be the linked repository’s branch in the <b>General</b> tab and proceed with your changes.<br /><br /></li><li>Once you’re happy with the changes and they’ve been reviewed: set the linked repository’s branch back to the <b>development</b> branch, then revert the plan names and keys in the testing branch to the existing plans’.<br /><br /></li><li>Delete the test plans manually.</li></ol><h1 style="text-align: left;">Unexpected Limitations of Bamboo Specs</h1><h3 style="text-align: left;">Plan dependencies</h3>The most glaring omission in the YAML specs is the inability to set up plan dependencies. While we were initially upset by this, we quickly realized that at the end of the day plan dependencies are a nice hack that we shouldn’t really have been relying on in the first place. Bamboo appears to encourage the download of artifacts from matching branches on other build plans, but this can quickly break down into chaos with dependency versioning and management. 
I warmly recommend using Bamboo artifacts for build debugging and deployment plans exclusively, and proper package repositories for storing and retrieving versioned build artifacts.<br /><br />When exporting Bamboo Specs from existing plans, build plans and deployment plans are not considered strongly related so you will need to gather and combine the specs from both into a single file. To do this, copy in the deployment plan specs beneath the build plan specs, retaining the --- as separators.</div><div><blockquote><i><b>NOTE</b>: the ordering of the sections is important! The build plan definition must be followed by the build plan permissions, then the deployment plan definition(s), with each deployment plan definition followed by a section for its permissions. See the outline at the end of the article for clarity.</i></blockquote>An interesting omission is the inability to include unset plan variables. This actually makes sense, as manually maintained plans need some way to ensure that they’re all using the same variable names, but with Bamboo Specs it’s really on you to be consistent and it’s obviously much* easier to search through a single file than it is to hunt for variables across different plan branches via the Bamboo interface.</div><div><br /></div><div>* infinitely easier</div><div><h3 style="text-align: left;">Deployment Plans — <strong class="markup--strong markup--h4-strong">sharing build variables and tooling</strong></h3>The principal idea behind a deployment plan is to separate deployment from the build process. Bamboo implements deployment plans as distinct entities with entirely different environments, with the intention that your only interaction with the related build plan is to download your artifacts from it.<br /><br />For us, this proved problematic as we require shared environment variables and tooling to deploy our builds. 
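To give a flavour of what that hand-off looks like on the build side, here is a minimal sketch in YAML specs; the task layout is standard Bamboo Specs syntax, but the file name and values are illustrative rather than taken from our actual plans:

```yaml
# Illustrative build-plan task list: write a variables file during the
# build, then inject it so the attached deployment plan can read the values.
tasks:
  - script:
      interpreter: SHELL
      scripts:
        - echo "version=1.0.2" > version.vars
        - echo "git_branch=${bamboo.planRepository.branchName}" >> version.vars
  - inject-variables:
      file: version.vars
      scope: RESULT      # RESULT exposes the values beyond the current job
      namespace: inject  # read back later as ${bamboo.inject.version}
```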
To work around this, we required the following mechanisms:<br /><ol style="text-align: left;"><li>Environment variable injection. Early in our build plan tasks, we prepare an environment variable file in the following format and include values like build versions and git branch names.<br /><span style="font-family: courier;"><b><br />version=1.0.2<br />git_branch=feature/example</b></span><br /><b><i><br />WARNING: variable values <u>MUST NOT</u> be surrounded by quotations, as this leads to unpredictable behaviour.</i></b><br /><br />It is recommended to use the <span style="font-family: courier;"><b>inject</b></span> namespace for injections. When the scope of the injection is <span style="font-family: courier;"><b>RESULT</b></span>, the variables will be available to all subsequent build tasks as well as the attached deployment plan. In Bamboo Specs they’ll be available in the form <span style="font-family: courier;"><b>${bamboo.inject.git_branch}</b></span> and in inline scripts as <span style="font-family: courier;"><b>$bamboo_inject_git_branch</b></span> (on Unix agents) or <span><span style="font-family: courier;"><b>%bamboo_inject_git_branch%</b></span></span> (on Windows agents).<br /><br />One of my favourite uses of this technique is the ability to name releases automatically based on the build version (see the following section for the example).<br /><br /></li><li>Deployment plans are not really designed to use git directly, but we have found that we sometimes require non-build folders to be available for deployment, such as documentation. In these cases, we simply zip the desired folders and make them available as artifacts as well.<br /><br /></li><li>Running the deployment in a docker container. I find it disconcerting that such an extremely useful feature is undocumented! 
Deployment plan environments can be configured to run in a docker container just like a build plan, which provides us with all requisite tooling and context.<br /><span style="font-family: courier;"><b><br />DevEnvironment:<br /> docker:<br /> image: golang:1.16.6-buster<br /> docker-run-arguments:<br /> - --net=host<br /> tasks:</b></span></li></ol><h3 style="text-align: left;"><strong class="markup--strong markup--h4-strong">Linking multiple branches to a single deployment plan</strong></h3><p class="graf graf--p graf--empty" name="2aa1"><strike>Ironically, while deployment plans are supposed to operate independently from build plans, they only really function well when linked to specific build branches. If your intention is to build once, then deploy the build artifacts to multiple stages, you're out of luck!</strike></p><p class="graf graf--p graf--empty" name="2aa1"></p><blockquote><i><b>UPDATE</b>: it turns out we missed an important option! If <b>release-naming</b> is set to an environment variable, it only works for the specified branch (even if that specified branch is the default branch). 
If you want <b>release-naming</b> to be set to an environment variable for any branch, then it needs to be configured as follows:</i></blockquote><p></p><p class="graf graf--p graf--empty" name="2aa1"><span style="font-family: courier;"><b>release-naming</b></span><b style="font-family: courier;">:</b><br /><b style="font-family: courier;"> </b><span><span style="font-family: courier;"><b>next-version-name</b></span><b style="font-family: courier;">: </b></span><span style="font-family: courier;"><b>${bamboo.inject.version}</b></span><br /><b style="font-family: courier;"> applies-to-branches: true</b></p><blockquote><p class="graf graf--p graf--empty" name="2aa1"><i>The disadvantage of using a single deployment plan is that the link to the deployment plan will only be available from the default build plan branch, but in my experience this is a very minor price to pay for the simplicity. The alternative - a single deployment plan for each branch of interest - is not only messier, but is also annoying to configure as you have to know the branch keys in advance so the branches cannot be automatically managed (plan branch keys are autoincremented and uneditable).</i></p></blockquote><p class="graf graf--p graf--empty" name="2aa1">Regardless of your choice, it's probably a good idea to handle your branch management manually:</p><p class="graf graf--p graf--empty" name="2aa1"><span style="font-family: courier;"><b>branches</b></span><b style="font-family: courier;">:</b><br /><b style="font-family: courier;"> </b><span style="font-family: courier;"><b>create</b></span><b style="font-family: courier;">: manually</b><br /><b style="font-family: courier;"> delete: never</b></p><h1 style="text-align: left;">Putting it all together</h1><p class="graf graf--p graf--empty" name="2aa1">My recommendation for the general outline of a YAML specs file is as follows:</p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">---</span></b></p><p 
class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">version: 2</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"># build plan definition</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">plan:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> project-key: PROJECTKEY</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> key: PLANKEY</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> name: Product Name</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> ...</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">branches:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> create: manually</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> delete: never</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">---</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">version: 2</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"># build plan permissions</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">plan:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> key: PROJECTKEY-PLANKEY</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">plan-permissions:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> ...</span></b></p><p class="graf graf--p graf--empty" 
name="2aa1"><b><span style="font-family: courier;">---</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">version: 2</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"># deployment plan definition</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">deployment:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> # NOTE: deployment plan names must be unique</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> name: Product Name</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> source-plan: PROJECTKEY-PLANKEY</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><span style="font-family: courier;"><b>release-naming</b></span><b style="font-family: courier;">:</b><br /><b style="font-family: courier;"> </b><span><span style="font-family: courier;"><b>next-version-name</b></span><b style="font-family: courier;">: </b></span><span style="font-family: courier;"><b>${bamboo.inject.version}</b></span><br /><b style="font-family: courier;"> applies-to-branches: true</b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">...</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">---</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">version: 2</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"># deployment plan permissions</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;">deployment:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> name: Product Name</span></b></p><p class="graf graf--p 
graf--empty" name="2aa1"><b><span style="font-family: courier;">deployment-permissions:</span></b></p><p class="graf graf--p graf--empty" name="2aa1"><b><span style="font-family: courier;"> ...</span></b></p></div><div>These are the tips and tricks that have helped us overcome our biggest migration challenges so far, I hope they can help others as well. If you have any others that come to mind, or improvements over the above, please let me know in the comments!</div><div><div style="text-align: center;">...</div><br />Originally published at <a href="https://medium.com/geekculture/bamboo-yaml-specs-tips-and-tricks-1fea57a83728" target="_blank">https://therightstuff.medium.com</a>.</div></div></div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-26097991297081888332021-06-20T17:54:00.004-04:002021-06-20T17:55:18.530-04:00How Shakespeare’s Four-Hundred Year-Old Sonnets Drove Me To Madness<h3 style="text-align: left;">And How They Tell Me They’re Performing As Intended</h3><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7gJkN4dBvtJFR5Q4hzjGmX-07i4i5-d1I2zYE9uX9YqL8ki5MqdQKg5xLoQMUX1Ma5eFgBr2Rhmj-15HIj5fbJIJOZ-tXQh27CkI0Jh3SIgT0UZeLKleyqJMd6vAuUgSSFmgjeWgp9L00/s695/1_XjjrpGhW9ec7nKsAdODBDA.png" style="margin-left: auto; margin-right: auto;"><img alt="A drawn image of a rose being thrown into Shakespeare's grave" border="0" data-original-height="460" data-original-width="695" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7gJkN4dBvtJFR5Q4hzjGmX-07i4i5-d1I2zYE9uX9YqL8ki5MqdQKg5xLoQMUX1Ma5eFgBr2Rhmj-15HIj5fbJIJOZ-tXQh27CkI0Jh3SIgT0UZeLKleyqJMd6vAuUgSSFmgjeWgp9L00/w400-h265/1_XjjrpGhW9ec7nKsAdODBDA.png" width="400" /></a><br />Image taken from a sample 
page of <a href="https://www.patreon.com/posts/samples-22129054" target="_blank">Shakespeare’s Sonnets: A Graphic Novel Adaptation</a></td></tr><tr><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption"><br /></td></tr></tbody></table><p>Something magical happened on the evening of the 28th of January, 2012, just shy of four centuries after the Bard’s body had been buried. I brushed dust off some old words with my fingertips, I breathed out their spells as I read, and a real-life Djinn popped out.</p><h2 style="text-align: left;">Standing Over A Grave</h2><p>I was halfway through the first semester of the second year of my Master’s in English Literature at Tel Aviv University (which I left incomplete with only a handful of credits to go, but that has nothing to do with this story) and my lecturer, Dr Noam Reisner, had warned us that his seminar entitled “Sonnets and Sonneteers” would either see us quitting or losing our minds. While I cannot speak for the rest of the class, let me assure you that in my case his assertion was entirely on the nose.</p><p>After weeks covering the history of sonnets, their techniques and their makers, we studied a number of Shakespeare’s sonnets together in the classroom and I’d found myself fixated on an aspect of the first sonnet that I simply couldn’t shake: while reading and re-reading it, it continued to produce a nagging sensation that I was standing over a grave. I could not for the life of me tell you what specifically had that effect on me, but somehow I was certain that it was significant.
That feeling had inspired me to make the Bard’s sonnet sequence the focus of my mid-term paper, a decision that would forever alter the course of my life.</p><p>Just like most people, I’d encountered quite a few of Shakespeare’s sonnets before, back in high school where our English teachers forced them down our throats until our gag reflexes kicked in, during a number of university courses, at awkwardly inappropriate times in a few movies, but we’d always read selections of them, never all together and never in sequence. I strongly believe that a lack of context and continuity is a factor in why the sonnets are largely under-appreciated today; that, and the fact that the conventional reading that’s taught to us is so narrowly focused on sex and sexual relationships that it thoroughly distracts from everything that makes the text truly magical and terribly awe-inspiring.</p><p>We’re talking here about what is arguably the most beautiful, most insightful, most heartbreakingly tragic writing in the entire history of the English language.</p><h2 style="text-align: left;">Revelation</h2><p>As a part-time student / part-time freelancer, my days generally extended until about 3 or 4am and I was used to being fairly sleep-deprived, which may or may not have contributed to my mental “looseness” at the time. It was evening when I sat down to prepare for my paper — I know this only because I keep a journal, and my first entry after my discovery was written around 10pm.</p><p>I recall that I was just shy of a quarter of the way through reading the sonnets sequentially for the very first time.
In a single, sudden, reality-warping moment I uncovered the first amazing secret, something so simple but so powerful that it made the world fade away into a fuzzy background and my heart beat in my ears as the words leapt out of the page to grab me by the eyeballs: whenever Shakespeare wrote “thee” or “thou” in the sonnets, <i>he was referring to the reader</i>.</p><p>He was referring to <i>the reader</i>.</p><p>He was referring to <i>me</i>.</p><p>For the longest time, Shakespeare’s sonnets have been revered or reviled, mystifying and frustrating fans and critics and students alike who have formed some pretty crazy theories about who the Bard’s young male lover was and who his mistress could have been. This interpretation of the sonnets is the basis for all the theories concerning Shakespeare’s sexuality and the common assumption that he was unfaithful to his wife. Over the course of the semester Dr. Reisner had drilled into us that when analysing a sonnet, it’s crucial to identify the speaker and the addressee: those two pieces of information can change <i>everything</i> about a sonnet’s message, and in this case they changed everything about my relationship to Shakespeare’s works.</p><p>For what was quite possibly the very first time in the four-hundred-and-three years since their publication, the sonnets had found their mark. Shakespeare himself was speaking directly to me from beyond the grave, his ghost conversing with me and through me and I — in my role as reader and imaginer — was giving him what he desired most: a willing host; a willing heart and mind and pair of eyes to possess.
This was really happening, a bridge had formed between the then and the now, a wormhole through an eternity between the living and the dead, and my world was turned inside out: Shakespeare, in his poetry, was addressing <i>me</i>, instructing <i>me</i>, even foretelling my experience in ways that couldn’t <i>possibly</i> be real, but absolutely and without any doubt <i>were</i>.</p><p>I read on, my mind racing faster and faster, my sense of reality spinning in an ever-expanding inward spiral. According to my journal (my next entry was five hours later), I eventually put aside the reading to get some paid work done, already confident that I knew who Mr. W. H. from the sonnets’ dedication was, then finished up and tried to go to bed. An hour after that, I wrote yet another entry because there was no way that I could sleep. I vividly recall climbing into bed with my phone (I was using a Shakespeare app at the time) and holding it over the edge of the bed so as not to wake my girlfriend while I continued to read; I was utterly mesmerized. The more I read, the more convinced I became that I had just managed to unravel one of the greatest and most celebrated mysteries in English literature.</p><p>Little did I know, I had only peeled back the first layer of the onion.</p><h2 style="text-align: left;">Beyond The Looking Glass</h2><p>I barely slept that morning, having read through the first third of the sequence over and over again until 5am just to be sure that I wasn’t making a huge mistake, each pass reinforcing my confidence in my reading. A couple of hours later I dragged myself out of bed and through an action-packed day. The following evening I dove back in and by 3am had successfully peeled back the second layer: Shakespeare wasn’t just talking to the reader, he was talking to the sonnets themselves. <i>And the sonnets were talking back</i>.</p><p>I was so excited to discuss my theory with Dr Reisner, but every single thing about my reveal went wrong. 
Firstly, I was unprepared and my theory was completely lacking in academic rigour. I’d figured out an enormous amount in a span of two days, but there were plenty of details missing — most notably, I hadn’t identified the Dark Lady of the second half of the sequence yet — and at the time (in my state of sleep deprivation and overwhelming, palpitation-inducing inspiration and obsession) I couldn’t even begin to consider what would constitute reasonable evidence, let alone form coherent sentences out of the tornado of ideas spinning around my skull.</p><p>I’ll never forget one of the reasons I simply couldn’t rein in my enthusiasm or keep my mouth shut: we happened to be discussing sonnet 128, and I simply could not sit still while my classmates made wild (albeit very interesting) guesses <a href="https://youtu.be/uSGGBlcCsEM" target="_blank">while the premise of the sonnet was so clear to me</a>! To be clear, that was my first reading of sonnet 128, ever, and I’m sure there’s a lot more in there if we look deeper. My analysis stunned the class — and not in a good way — and I had no answers to the barrage of questions fired back at me. It was made perfectly clear to me by all present that I was probably completely off my rocker. Dr Reisner’s use of my assertions to demonstrate how <i>not</i> to argue was memorably instructive.</p><p>Weeks passed by, and as I dived into books by established critics (particularly memorable reads were The Arden Sonnets by Katherine Duncan-Jones, Shakespeare’s Sonnets by Stephen Booth and The Art of Shakespeare’s Sonnets by Helen Vendler) and worked on my first paper, I’d come up with some very stable theories and some very peculiar ones (one exciting, but ultimately fruitless rabbit hole involved Shakespeare’s alleged use of narcotics).
Ultimately, that paper was able to establish an initial framework for understanding the sonnets, even though it generated a lot more questions than it answered: the sonnets are a dialog, they are a three-way conversation between themselves, their author and their reader, and many of their verses communicate to all three at the very same time.</p><h2 style="text-align: left;">The Girl Next Door</h2><p>That first paper was an exciting start, but not sufficiently convincing. During the second semester of 2012 I attended a course called “Shakespeare’s Narrative Poetry” by Professor Shirley Sharon-Zisser which gave me a wonderful opportunity to connect the dots with two of Shakespeare’s poems: “<a href="http://shakespeares-sonnets.com/complaint" target="_blank">A Lover’s Complaint</a>”, the poem that was originally published along with Shakespeare’s Sonnets, and “<a href="https://www.poetryfoundation.org/poems/45085/the-phoenix-and-the-turtle-56d2246f86c06" target="_blank">The Phoenix and the Turtle</a>”.</p><p>There’s something immensely profound in witnessing a prophecy come true in A Lover’s Complaint, as I realized that the fickle maid of the tale… was me, along with every other reader of the sonnets who has been drawn to their power and found themselves vexed by it.</p><p>Similarly, there’s something quite surreal about reading a work that is, according to Wikipedia, “widely considered to be one of his most obscure works”, and wondering how it could possibly be so misunderstood when it is thematically identical to the sonnets!</p><p>The second paper aimed to tie in the themes of the sonnets and A Lover’s Complaint, and it succeeded. 
The next step would be to evolve these two papers into my master’s thesis: fortunately, while Dr Reisner may not have believed me at first, he didn’t <i>not</i> believe me, and when I requested his support for my thesis topic he was kind enough to jump on board and agree to advise.</p><p>Our meeting was a memorable one, my favourite part being the moment in which I learned that other people’s reactions to the recent birth of his son had convinced him that what I was seeing in the sonnets — a father’s love for his son, and a son’s elevated status as legacy-bearer — was just as relevant today as it was back then.</p><h2 style="text-align: left;">Epilogue</h2><p>For a number of reasons, professional and personal, I gave up my studies when I decided to make for the snowier pastures of Montréal. While this slammed the brakes on my thesis, I’d been putting a lot of thought into the fact that the sonnets’ medium — words — was not as ideal a method of communicating as their author had hoped.</p><p>The sonnet sequence has a number of themes and threads running through it, most of the sonnets dealing with a few different themes at a time, and each sonnet has up to three audiences simultaneously. This is too much for any reader to keep in mind all at once, and it’s too much to cram into moving pictures in a coherent manner.</p><p>There is one medium that’s rising in popularity for “real” literature*, though, and that’s the graphic novel. It’s a medium that enables writers and artists to combine and recombine text and imagery in infinite ways, and gives the reader the ability to consume the content at their own pace, and read backwards as well as forwards. 
This, then, promised to be a worthwhile avenue to venture down in my pursuit of justice for the Bard.</p><p>* <i>Arguing the definition of “good” or “real” literature is out of scope for this article; suffice it to say that I’m highly judgemental of anyone who claims that an entire medium could be somehow less valuable than another.</i></p><p>While planning a script for a graphic novel adaptation of the sonnets in Canada, I came across something that made me revisit Arthur Golding’s translation of Ovid’s Metamorphoses, which Shakespeare was fond of using as a reference, and it was immediately apparent that he had not only used the story of Narcissus and Echo as a framing device for the entire sequence, but had frequently quoted it directly! More pieces of the puzzle fell into place, and I raced to find an artist to collaborate with.</p><p style="text-align: center;">...</p><p>It has been nine and a half years since that fateful night, and five years since I met an artist both capable of and interested in helping me bring these crazy comics into existence. As I write this, after a couple of years struggling to get started, we have recently published <a href="https://www.sonnetcomix.com/" target="_blank">the twelfth page of the graphic novel adaptation</a> and we’re finally making slow but steady progress in spite of the pandemic and its fallout. 
Last year I published <a href="https://www.goodreads.com/work/editions/75624061-shakespeare-s-sonnets-exposed-volume-1" target="_blank">a book covering the first 25 sonnets</a> based on <a href="https://industrialcuriosity.com/shakespeares_sonnets_exposed" target="_blank">my podcast of the same name</a>, and over the past couple of years I’ve embarked on the admittedly weirder project of <a href="https://www.instagram.com/explore/tags/154tattoos/" target="_blank">tattooing images representing all 154 of Shakespeare’s Sonnets onto my body</a> (inspired by my beautiful, supportive, and very tattooed wife).</p><p>Why, though?</p><p>Why would someone go to such lengths for an arguably failed four-hundred-year-old poem?</p><p>My answer is simple: Because Shakespeare asked me to. Because I now have a son of my own, and I cannot stand idly by in the face of the grave injustice that has been done to his and his sons’ memory. Because the Bard has earned the right to a magnificent legacy with a masterpiece so far ahead of its time that it is amazing even by today’s standards.</p><p style="text-align: center;">...</p><p><i>Originally published at <a href="https://therightstuff.medium.com/how-shakespeares-four-hundred-year-old-sonnets-drove-me-to-madness-394ad7ea366d" target="_blank">https://therightstuff.medium.com/</a>.</i></p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-639956011900742622021-05-05T18:10:00.008-04:002021-05-06T01:50:18.732-04:00Crypto Matters - Just Not For The Reasons You Might Think<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcM2KrBHoWI7RWTD3JQG86uA9Zxv8debnVyVuiIcFQ5pf13H6GuGkFgP73kkgoZTGWNTNoXdgK0weVQCclZ9yvPutRH8hIMl2fD1_PLX4TggTZ3y_7c9ZezSfXYeTpcRSEdLPGLvOZ_tpC/s1280/pexels-suzy-hazelwood-1329644.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
data-original-height="720" data-original-width="1280" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcM2KrBHoWI7RWTD3JQG86uA9Zxv8debnVyVuiIcFQ5pf13H6GuGkFgP73kkgoZTGWNTNoXdgK0weVQCclZ9yvPutRH8hIMl2fD1_PLX4TggTZ3y_7c9ZezSfXYeTpcRSEdLPGLvOZ_tpC/w400-h225/pexels-suzy-hazelwood-1329644.jpeg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="background-color: black;"><span face="-apple-system, system-ui, "segoe ui", roboto, oxygen, cantarell, "helvetica neue", ubuntu, sans-serif" style="color: white; font-size: 16px; text-align: start;">Photo by </span><span face="-apple-system, system-ui, "segoe ui", roboto, oxygen, cantarell, "helvetica neue", ubuntu, sans-serif" style="box-sizing: border-box; color: #1a1a1a; font-size: 16px; font-weight: 600; margin-bottom: 0px; margin-top: 0px; text-align: start;"><a href="https://www.pexels.com/@suzyhazelwood?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" style="box-sizing: border-box; margin-bottom: 0px; margin-top: 0px; text-decoration-line: none;">Suzy Hazelwood</a></span><span face="-apple-system, system-ui, "segoe ui", roboto, oxygen, cantarell, "helvetica neue", ubuntu, sans-serif" style="color: white; font-size: 16px; text-align: start;"> from </span><span face="-apple-system, system-ui, "segoe ui", roboto, oxygen, cantarell, "helvetica neue", ubuntu, sans-serif" style="box-sizing: border-box; color: #1a1a1a; font-size: 16px; font-weight: 600; margin-bottom: 0px; margin-top: 0px; text-align: start;"><a href="https://www.pexels.com/photo/white-and-purple-monopoly-trading-card-on-gray-surface-1329644/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" style="box-sizing: border-box; margin-bottom: 0px; margin-top: 0px; text-decoration-line: none;">Pexels</a></span></span></div><p>I watched <a class="markup--anchor markup--p-anchor" data-href="https://www.youtube.com/watch?v=HaJpYjO136o" 
href="https://www.youtube.com/watch?v=HaJpYjO136o" rel="noopener" target="_blank">Bill Maher’s recent diatribe against crypto</a> a day or two ago, and suddenly my feeds seem to be filled with people decrying blockchain phenomena like NFTs as pyramid schemes and nonsense.</p><p class="graf graf--p" name="de5f">They’re not <em class="markup--em markup--p-em">entirely</em> wrong.</p><h4 class="graf graf--h4" name="a8ad">Fiat vs Crypto</h4><p class="graf graf--p" name="d0b4">It’s important, however, to take a good, long look at our existing “fiat” currencies before taking potshots at a technology that is fundamentally the same, but easily better in a myriad of ways.</p><p class="graf graf--p" name="d6c2">Let’s begin by defining money, a fiction that enables us to transact across domains. It’s a fiction that’s sufficiently decoupled from reality that we can make fair transactions where that wouldn’t otherwise be possible: it’s not easy to determine the value of an item of clothing in coconuts, or units of electricity. Once upon a time the value of money was tied to scarce natural resources, but for decades it’s been completely artificial, controlled and manipulated by organizations and forces that generally do not have “the greater good” at heart.</p><p class="graf graf--p" name="66b8">Cryptocurrency, on the other hand, by design has no master. Anyone can mine, anyone can play. If I can earn it, I can spend it, and the requirements for setting up a wallet and transacting are so minimal that the most basic of smartphones can handle it with ease. 
It’s also much harder to steal than cash, and nobody needs a bank to let them participate in the economy, or to rob them of huge portions of their paycheques when sending funds to their families back home.</p><h4 class="graf graf--h4" name="11f5">Fantasy vs Reality</h4><p class="graf graf--p" name="3908">Over the past ten years the idea of cryptocurrency has been creeping into the collective consciousness, and the enthusiasts who “get it” have been working tirelessly to usher in an envisioned utopia in which we all transact in a wide variety of crypto tokens, where nobody is “unbanked”, a world in which our governments and credit card companies no longer enjoy the leverage they currently have and we can live our lives in a virtual-cash-based economy where privacy reigns and nobody can freeze our bank accounts or make up silly fees and charges for using them.</p><p class="graf graf--p" name="2032">A world where nobody can “cook the books” because everything is written into an open ledger. A world where reliable, secure, anonymous voting mechanisms are built into the very fabric of the networks we use.</p><p class="graf graf--p" name="b8cd">These dreams are all very well, but they clearly have not materialized… yet. For more than a decade Bitcoin has been considered the literal and figurative “gold standard” of crypto, and where its popularity meets with <em class="markup--em markup--p-em">somehow unanticipated</em> greed we see the energy invested in mining Bitcoin exceeding that consumed by small countries. Ethereum arrived later on the scene with its promise of smart-contracts, an incredible innovation that opens up fin-tech and safe remittance, micropayments and the ad-free production and consumption of content… but transaction volumes are severely limited, a constraint reflected in its ridiculously high “gas fees” that make it impractical to make transfers of anything less than small fortunes.</p><p class="graf graf--p" name="9031">This is not a time to use crypto. 
This has been a great time to speculate about crypto, as evidenced by the crazy bubbles of the past couple of years, but this is not a time to use crypto.</p><h4 class="graf graf--h4" name="6c6c">The Irony</h4><p class="graf graf--p" name="d032">At present, there simply isn’t inherent value in crypto. Money isn’t worth anything if you can’t <em class="markup--em markup--p-em">buy</em> things with it. Most of the engineers who work with crypto are biding their time building wallets and exchanges because that’s what the market will pay them for, but that’s not what makes them <em class="markup--em markup--p-em">excited</em> about crypto. In fact, hoarding and HODLing are holding crypto back from its true purpose — seamless, traceless, borderless digital payments for everyone — which means that the behaviour of investors is actually preventing crypto from developing the inherent value that speculators have been banking on!</p><h4 class="graf graf--h4" name="e151">Working vs Staking</h4><p class="graf graf--p" name="f189">For those of you who aren’t familiar: the underlying reason why blockchain mining is so power-hungry, why transaction volumes are so limited and fees so high, is that the mechanism that protects the blockchain is what’s known as “Proof of Work”. To make it nigh-impossible to cheat the system and manipulate the blockchain, miners are required to perform computationally expensive calculations that are simple to validate, and whoever succeeds first achieves the right to write [sorry] the transaction block.</p><p class="graf graf--p" name="9ad7">Proof of Work is an extremely clever concept that made perfect sense ten years ago but, sadly, its creator(s) never foresaw just how poorly it would scale.</p><p class="graf graf--p" name="fc08">The New Thing in blockchain tech is Proof of Stake, and by “new” I mean almost as old as blockchain technology itself but not implemented where it matters most. 
Unlike Proof of Work, Proof of Stake requires “staking” your crypto to buy the right to validate the transactions — in Ethereum’s case, stake 32 ETH and you get to play miner, only you get paid for doing your part without having to set the Earth on fire. <a class="markup--anchor markup--p-anchor" data-href="https://youtu.be/fr8bp8a2QS4" href="https://youtu.be/fr8bp8a2QS4" rel="noopener" target="_blank">Or your brain</a>.</p><p class="graf graf--p" name="9bdf">For a long time (in technological terms) Ethereum has been promising to evolve into Ethereum 2.0, but the first real measures were only put in place towards the end of 2020 and according to <a class="markup--anchor markup--p-anchor" data-href="https://www.coindesk.com/eth-2-0-validators-earn-record-3m-eth-soars-past-3k" href="https://www.coindesk.com/eth-2-0-validators-earn-record-3m-eth-soars-past-3k" rel="noopener" target="_blank">today’s news</a> things are finally speeding ahead towards this Brand New Day.</p><h4 class="graf graf--h4" name="ad53">Where to with crypto?</h4><p class="graf graf--p" name="ee40">After all this preamble, what’s the real takeaway?</p><p class="graf graf--p" name="480f">It doesn’t matter whether Bitcoin’s value hits $100,000, $1,000,000, or crashes and burns and hits $1, nor does it matter what a single Ether is valued at. It doesn’t matter if you bought in early and made your fortune, or if you missed the boat completely and even now believe it’s too late for your first foray into crypto (it’s not).</p><p class="graf graf--p" name="3037">What does matter is that crypto has a function, and that function is desperately needed these days, especially for the billions of people who aren’t being served by the existing financial institutions. Personally, I cannot wait for a time when I can be paid and pay safely and instantly, whether for groceries, rent or coffee, and the idea of being able to transact outside of my government’s reach is hugely empowering. 
I’m excited that we’re so close to money markets that are fair and inherently non-discriminatory. I’m excited to start diving into new tech that solves the currently-inconceivable problems of living in societies that don’t run on borders and taxes.</p><p class="graf graf--p" name="afc0">Things may get weird (like the current NFT craze) while we learn how to use crypto, but with a brief look back over our shoulders it becomes apparent that no technology ever got introduced without us experiencing some kind of adjustment phase.</p><p class="graf graf--p" name="deee">At least, I <i>hope</i> people’s obsession with selfies is just a phase.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-54213410731576589172021-04-05T14:33:00.005-04:002021-04-08T04:25:50.342-04:00Choosing the right password manager to keep your secrets safe<figure class="graf graf--figure" name="dba1"><div style="text-align: center;"><img class="graf-image" data-height="2250" data-image-id="1*Z6-oZ-Cqpd6FsdYpJ1Evpw.jpeg" data-width="3375" height="289" src="https://cdn-images-1.medium.com/max/1600/1*Z6-oZ-Cqpd6FsdYpJ1Evpw.jpeg" width="434" /></div><figcaption class="imageCaption" style="text-align: center;">Photo by <a class="markup--anchor markup--figure-anchor" data-href="https://www.pexels.com/@eye4dtail?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" href="https://www.pexels.com/@eye4dtail?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" rel="noopener" target="_blank">George Becker</a> from <a class="markup--anchor markup--figure-anchor" data-href="https://www.pexels.com/photo/close-up-of-keys-333837/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" href="https://www.pexels.com/photo/close-up-of-keys-333837/?utm_content=attributionCopyText&utm_medium=referral&utm_source=pexels" rel="noopener" 
target="_blank">Pexels</a></figcaption></figure><p class="graf graf--p" name="ae8d">If you’re not using a password manager by now, you should be. Ever since reading the <a class="markup--anchor markup--p-anchor" data-href="https://xkcd.com/936" href="https://xkcd.com/936" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong">xkcd: Password Strength</strong></a> comic many years ago, I’ve become increasingly frustrated by how the software industry has continued to enforce bad password practices, and by how few services and applications apply best practices in securing our credentials. </p><blockquote class="graf graf--blockquote" name="8133"><strong class="markup--strong markup--blockquote-strong">The main reason for password reuse or using poor passwords in the first place is that it’s way too hard to remember lots of good ones.</strong></blockquote><p class="graf graf--p" name="b6ac">Forced to remember more and more passwords under outdated rules demanding symbols, numbers and a mix of uppercase and lowercase characters, most people have turned to using weak passwords, or to reusing the same passwords or patterned recombinations of those passwords, leaving us vulnerable to simple exploits.</p><p class="graf graf--p" name="70d9">I recently learned about <a class="markup--anchor markup--p-anchor" data-href="https://haveibeenpwned.com/" href="https://haveibeenpwned.com/" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong">‘; — have i been pwned?</strong></a>, and I was shocked to discover that some of the breaches involving my personal data included passwords that I had no idea were compromised… for <em class="markup--em markup--p-em">years</em>. 
Then I looked up my wife’s email address, and together we were horrified.</p><p class="graf graf--p" name="3aa1">Lots of those compromised credentials were on platforms we didn’t even remember we had accounts on, so asking us what those passwords were and whether we’ve reused them elsewhere is futile.</p><h2 style="text-align: left;">A developer’s perspective</h2><p class="graf graf--p" name="0e42">As an experienced software engineer, I understand <em class="markup--em markup--p-em">just</em> enough about security to be keenly aware of how little most of us know and how important it is to be familiar with security best practices and the latest security news in order to protect my clients.</p><blockquote class="graf graf--blockquote" name="7fe4">I will never forget that moment a few years back when, while working for a well-established company with many thousands of users and highly sensitive data, I came across their password hashing solution for the first time: my predecessor had “rolled his own” security by MD5 hashing the password before storing it… a thousand times in a loop. 
As ignorant as I was myself regarding hashing, a quick search made it clear that this was making the system <em class="markup--em markup--blockquote-em">less</em> secure, not more.</blockquote><blockquote class="graf graf--blockquote" name="c6ef">This was a professional who thought he was caring for his customers.</blockquote><p class="graf graf--p" name="e6da">In 2019 I put together an open-sourced javascript package, the <a class="markup--anchor markup--p-anchor" data-href="https://www.npmjs.com/package/simple-free-encryption-tool" href="https://www.npmjs.com/package/simple-free-encryption-tool" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong">simple-free-encryption-tool</strong></a>, for simple but standard javascript encryption that’s compatible with C#, after finding the learning curve for system security to be surprisingly steep for something so critical to the safe operations of the interwebs.</p><p class="graf graf--p" name="867c">The biggest takeaways from my little ventures into information security are as follows:</p><ol class="postList"><li class="graf graf--li" name="3cb2">Most websites, platforms and services that we trust with our passwords cannot be relied upon to protect our most sensitive information.</li><li class="graf graf--li" name="2440">Companies should not be relying exclusively on their software developers to protect customer credentials and personal data.</li><li class="graf graf--li" name="0773">As a consumer, or customer, or client, we need to take responsibility for our passwords and secrets into our own hands.</li><li class="graf graf--li" name="652b">Trust (almost) no-one.</li></ol><h2 style="text-align: left;">What’s wrong with writing down my passwords on paper?</h2><p class="graf graf--p" name="a01e">It’s so hard to remember and share passwords that lots of people have taken to recording them on sticky notes, or in a notebook, and I cannot stress enough just how dangerous a practice this is.</p><p 
class="graf graf--p" name="9c32">First, any bad actor who has physical access to your desk or belongings and (in their mind) an excuse to snoop on you or hurt you will generally be privy to more of your personal data than some online hacker who picks up a couple of your details off an underground website. This means that it will be far easier for them to get into your secrets and do you harm.</p><p class="graf graf--p" name="cbe2">Second, and far more likely, if those papers are lost or damaged you’re probably going to find yourself in hot water. For example, I’ve run into trouble with my Google credentials before and locked myself out of my account, and even after providing all the correct answers it was still impossible for me to get back in. There are many faceless services like this, so even after a simple accident (or just a misplacement) you could find yourself in a very uncomfortable position.</p><h2 style="text-align: left;">What is a password manager?</h2><p class="graf graf--p" name="010c">A password manager is an encrypted database that securely stores all of your secrets (credentials or others) and enables you to retrieve them with a single set of credentials and authentication factors. Modern password managers tend to provide the ability to synchronize these databases on multiple devices and even inject your credentials directly where you need them.</p><h2 style="text-align: left;">Things to consider when picking a password manager</h2><h3 style="text-align: left;">Standalone, cloud-based, or self-hosted</h3><p class="graf graf--p" name="072d">For individuals who aren’t prepared to trust the internet (or even their local networks) with their secrets, there are password managers that are designed to be stored and accessed locally. These are essentially interfaces to encrypted database files that reside on your local hard disk, and you are responsible for backing them up and copying them between devices. 
A word of caution: if you’re synchronizing these databases by uploading them to a file sharing service like Dropbox, you’re operating in a way that’s likely less secure than using a cloud-based service.</p><p class="graf graf--p" name="97d7">Cloud-based solutions are services provided by an organization that allows you to store your secrets on their platforms and trust in their experts to secure them. While user costs may vary, they don’t require any effort when it comes to maintenance, syncing between devices and backing up, and they usually provide great interfaces with integrations for desktops, browsers and mobile phones.</p><p class="graf graf--p" name="9e15">An important aspect to take into consideration when it comes to cloud-based solutions is the provider’s reputation and history of breaches. Nobody’s perfect in the world of security — security is a perpetual arms race between the white hats and the black hats — but what speaks volumes is how an organization comports itself when things go wrong. Do they consistently apply best practices and upgrades? Do they react to breaches quickly, transparently, and in their clients’ best interests?</p><p class="graf graf--p" name="78bf">Self-hosted solutions are where you or your organization are required to install and maintain the service on a web server, preferably on a secure internal network, so that your users (your family or coworkers) can operate as if it’s a cloud-based solution. These are generally cheaper for businesses, but somewhat more difficult to maintain and often less secure than cloud-based solutions (depending on the competence of whoever’s responsible for your network); from a user’s point of view, though, it amounts to the same thing.</p><h3 style="text-align: left;">Password sharing for family and teams</h3><p class="graf graf--p" name="49a9">Some people need to share credentials more than others. 
In my family, my wife and I are consistently sharing accounts, so it doesn’t make sense for us to have individual duplicate copies of our shared accounts in each of our password manager accounts, and the same goes for me and my coworkers when it comes to our developer and administrator passwords for some of our products and service accounts. For these uses, it’s a good idea to use a solution that facilitates password sharing, and some of the services make it easy to set up groups and group ownership of credentials.</p><h3 style="text-align: left;">Mobile, OS and desktop browser support</h3><p class="graf graf--p" name="ff9b">Many password managers provide varying levels of integration for the wide variety of devices and browsers available — some solutions simply won’t give you any more than the barest essentials. Some people prefer to be able to unlock their passwords using biometrics, some prefer not to use their mobile devices at all, so before looking at the feature comparisons it’s worth giving a minute or two of thought to how you intend to use it.</p><p class="graf graf--p" name="130b">The good news is that most of the major solutions allow exporting and importing of your secrets, so if you have any doubts about your decisions you probably won’t have to worry too much about being locked in.</p><h3 style="text-align: left;">Free vs Paid</h3><p class="graf graf--p" name="c8bc">While pricing is obviously an important factor, I feel like one should first have an idea of what features one needs before comparing prices. 
Most of the solutions offer similar prices per user, with some exceptions.</p><p class="graf graf--p" name="d21b">This is one of those rare situations where, depending on your requirements, you might actually be better off with a free product!</p><h2 style="text-align: left;">The Feature Comparison</h2><h3 style="text-align: left;">Standalone, cloud-based, or self-hosted</h3><figure class="graf graf--figure" name="9e59" style="text-align: center;"><img class="graf-image" data-height="424" data-image-id="1*3eWBtUBEGWvFsFFDqAAb_Q.png" data-width="812" height="279" src="https://cdn-images-1.medium.com/max/1600/1*3eWBtUBEGWvFsFFDqAAb_Q.png" width="534" /></figure><div class="graf graf--mixtapeEmbed" name="4ef2"><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0"><strong class="markup--strong markup--mixtapeEmbed-strong">Password Managers Feature Comparison</strong></a></div><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0"></a><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0" 
title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=0"><em class="markup--em markup--mixtapeEmbed-em">Standalone, cloud-based, or self-hosted</em></a></div></div><h3 style="text-align: left;">Password sharing for family and teams</h3><figure class="graf graf--figure" name="f3db" style="text-align: center;"><img class="graf-image" data-height="422" data-image-id="1*HeL4GcMhzlTytNGcJaBeSA.png" data-width="918" height="243" src="https://cdn-images-1.medium.com/max/1600/1*HeL4GcMhzlTytNGcJaBeSA.png" width="532" /></figure><div class="graf graf--mixtapeEmbed" name="7a12"><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940"><strong class="markup--strong markup--mixtapeEmbed-strong">Password Managers Feature Comparison</strong></a></div><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940"></a><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940"><em 
class="markup--em markup--mixtapeEmbed-em">Password sharing for family and teams</em></a></div><a class="js-mixtapeImage mixtapeImage u-ignoreBlock" data-media-id="22d84e8306f4d1769a8806684b30ae2c" data-thumbnail-img-id="0*zUmjtKHCffZj-G9m" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1336926940" style="background-image: url(https://cdn-images-1.medium.com/fit/c/320/320/0*zUmjtKHCffZj-G9m);"></a></div><h3 style="text-align: left;">Mobile, OS and desktop browser support</h3><figure class="graf graf--figure" name="698f" style="text-align: center;"><img class="graf-image" data-height="552" data-image-id="1*eCoZtDCoYKA9LKzXk3dOew.png" data-width="1590" height="184" src="https://cdn-images-1.medium.com/max/1600/1*eCoZtDCoYKA9LKzXk3dOew.png" width="531" /></figure><div class="graf graf--mixtapeEmbed" name="d628"><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631"><strong class="markup--strong markup--mixtapeEmbed-strong">Password Managers Feature Comparison</strong></a></div><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631"></a><a class="markup--anchor markup--mixtapeEmbed-anchor" 
data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631"><em class="markup--em markup--mixtapeEmbed-em">Mobile, OS and desktop browser support</em></a></div><a class="js-mixtapeImage mixtapeImage u-ignoreBlock" data-media-id="8caa4828067b4693adec292005901480" data-thumbnail-img-id="0*eGE39qO7jZfnzUHc" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=1695005631" style="background-image: url(https://cdn-images-1.medium.com/fit/c/320/320/0*eGE39qO7jZfnzUHc);"></a></div><h3 class="graf graf--h3" name="5a59">Free vs Paid</h3><figure class="graf graf--figure" name="66bc" style="text-align: center;"><img class="graf-image" data-height="420" data-image-id="1*0otUeJRN6ILjEwH02Yk56A.png" data-width="842" height="265" src="https://cdn-images-1.medium.com/max/1600/1*0otUeJRN6ILjEwH02Yk56A.png" width="530" /></figure><div class="graf graf--mixtapeEmbed" name="8eb5"><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853"><strong class="markup--strong markup--mixtapeEmbed-strong">Password Managers Feature Comparison</strong></a></div><div style="text-align: center;"><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" 
href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853"></a><a class="markup--anchor markup--mixtapeEmbed-anchor" data-href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" href="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853" title="https://docs.google.com/spreadsheets/d/18uOw4R1YXl5wEC5Ng9TkQQplLlgzZlbru1GTrECqaZQ/view#gid=583864853"><em class="markup--em markup--mixtapeEmbed-em">Free vs Paid</em></a></div></div><h2 style="text-align: left;">Summary</h2><p class="graf graf--p" name="41b6">With the wide variety of needs and options available, each solution listed above has its benefits and its tradeoffs. I hope you’ve found this helpful. If you have any questions, corrections, comments or suggestions, I look forward to reading them in the comments below!</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-15585114451593430022021-04-02T03:31:00.002-04:002021-04-08T07:48:30.677-04:00Simple safe (atomic) writes in Python3<p> In sensitive circumstances, trusting a traditional file write can be a costly mistake - a simple power cut before the write is completed and synced may at best leave you with some corrupt data, but depending on what that file is used for, you could be in for some serious trouble.</p><p>While there are plenty of interesting, weird, or over-engineered solutions available to ensure safe writing, I struggled to find a solution online that was simple, correct, easy to read, and usable without installing additional modules, so my teammates and I came up with the following solution:</p>
<p><script src="https://gist.github.com/therightstuff/cbdcbef4010c20acc70d2175a91a321f.js"></script></p>
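In case the embedded gist doesn't render for you, here is a condensed, self-contained sketch of the same approach; it follows the snippets explained below, but details such as argument names and error handling may differ from the gist, and the early `temp_file.close()` is an addition for portability to platforms that don't allow reopening an already-open file.

```python
import os
import shutil
import tempfile

def copyWithMetaData(source, target):
    # shutil.copy2 copies the file contents along with permission bits
    # and timestamps
    shutil.copy2(source, target)

def safe_write(target_file_path, file_contents, mode="w"):
    # create the temp file alongside the target so the final os.replace()
    # never crosses a file system boundary (which would break atomicity)
    temp_file = tempfile.NamedTemporaryFile(
        delete=False, dir=os.path.dirname(target_file_path) or ".")
    temp_file.close()  # we only need the name; we reopen it below in the right mode
    try:
        # preserve existing contents and metadata for update/append modes
        if os.path.exists(target_file_path):
            copyWithMetaData(target_file_path, temp_file.name)
        with open(temp_file.name, mode) as f:
            f.write(file_contents)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to disk before renaming
        # atomic when source and target are on the same file system
        os.replace(temp_file.name, target_file_path)
    finally:
        # if anything failed along the way, the worst case is a leftover
        # temporary file; clean it up
        if os.path.exists(temp_file.name):
            os.remove(temp_file.name)
```
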
<h3 style="text-align: left;">Explanation:</h3><p><code>
temp_file = tempfile.NamedTemporaryFile(delete=False,<br /><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span>dir=os.path.dirname(target_file_path))
</code></p><p>The first thing to do is create a temporary file in the same directory as the file we're trying to create or update. We do this because <code>move</code> operations (which we'll need later) aren't guaranteed to be atomic when they're between different file systems. Additionally, it's important to set <code>delete=False</code>, as the default behaviour of <code>NamedTemporaryFile</code> is to delete the file as soon as it's closed.</p>
<p><code>
# preserve file contents and metadata if it already exists<br />if os.path.exists(target_file_path):<br /><span> </span>copyWithMetaData(target_file_path, temp_file.name)
</code></p>
<p>We needed to support both file creation and updates, so when we’re overwriting or appending to an existing file, we initialize the temporary file with the target file’s contents and metadata.</p>
<p><code>
with open(temp_file.name, mode) as f:<br /><span> </span>f.write(file_contents)<br /><span> </span>f.flush()<br /><span> </span>os.fsync(f.fileno())
</code></p>
<p>Here we write or append the given file contents to the temporary file, and we flush and sync to disk manually to prepare for the most critical step:</p>
<p><code>
os.replace(temp_file.name, target_file_path)
</code></p><p>This is where the magic happens: <code>os.replace</code> is an atomic operation (when the source and target are on the same file system), so we're now guaranteed that if this fails to complete, no harm will be done.</p><p>We use the <code>finally</code> clause to remove the temporary file in case something did go wrong along the way, but now the very worst thing that can happen is that we end up with a temporary file <span face=""helvetica neue", Helvetica, Arial, sans-serif" style="font-size: 16px;">¯\_(ツ)_/¯</span></p><div><span face=""helvetica neue", Helvetica, Arial, sans-serif" style="font-size: 16px;"><br /></span></div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-85639778346141867532021-02-13T09:49:00.022-05:002021-03-03T02:08:02.808-05:00Approaching Software Engineering as an evolutionary process<p>A year or two ago I had an opportunity to sit down with AWS's Marcin Kowalski in a cafeteria and discuss the problems of software development at almost-unimaginable scale. I walked away with a new (for me) conception of software engineering that is part engineering, part organic biology, and I've found this perspective has shifted my approach to software development in a powerful and immensely helpful way.</p><p>As Computer Scientists and Software Engineers, we've been trained to employ precision in algorithm design, architecture and implementation: Everything must be Perfect. 
Everything must be Done Right.</p><p>For smaller, isolated projects, this engineering approach is critical, sound and practical, but once we begin to creep into the world of integrated solutions and micro-services it rapidly begins to break down.<span></span></p><a name='more'></a><p></p><p style="text-align: center;"><i>We simply don't have time to rewrite everything.</i></p><p style="text-align: center;"><i>We don't have time to build "perfect" solutions.</i></p><p style="text-align: center;"><i>We can't predict the future.</i></p><p style="text-align: left;">Context matters. Circumstances matter. The properties and requirements of an emergent system take precedence over those of its constituent parts.</p><h3 style="text-align: left;">"You cannot upgrade an airplane wing in-flight"</h3><p><i>(<a href="https://www.popularmechanics.com/flight/a15987/these-plane-wings-can-repair-themselves/" target="_blank">Or maybe you can?</a>)</i></p><p>In the world of hardware, it is unusual to be able to repair or upgrade something broken, damaged or inferior without disconnecting it or shutting it down. In contrast, software can generally be upgraded "live" using a kind of sleight-of-hand to switch between different iterations of a project, and these days the cost of cloning an entire solution is often negligible.</p><p>For many solutions, though, this doesn't apply and it doesn't really scale. At present, for example, I build software that operates on cargo ships, and the cost of a repair or major overhaul while at sea (or even the cost of getting service technicians on board while in dock during a pandemic) is exceedingly high. 
For a large cloud provider, even the tiniest risk of outage is entirely unacceptable.</p><p>While it's safe to say that there are more software developers around than ever before, good ones don't come cheap; between failing post-pandemic economies and an interminable lack of resources, companies can rarely afford to throw development hours into anything that isn't critical to their business - no matter how crucial R&D efforts can be to any organization's long-term success, no matter how much better a product or component might be if it was redesigned or rewritten.</p><p>It is due to these circumstances that all companies, large and small, must operate iteratively and rely on incremental improvements.</p><h3 style="text-align: left;">Defining "Technical Debt"</h3><p>"Technical Debt" is a well-known term, but I've come to believe that it's not, in fact, a real thing. As a term we use it to describe anything that's built... sub-optimally. A "temporary" hack, or a workaround, that we pretend we'll get around to sorting out at some unspecified point in the future (I addressed this in <a href="https://www.industrialcuriosity.com/2020/10/keeping-track-of-your-technical-debt.html" target="_blank">a previous article</a>).</p><p>While it's true that "technical debt" describes imperfections that we should try to avoid, it has negative connotations that are not necessarily deserved. It's too easy to view our predecessors and our former selves unfavourably, but I find it more constructive to frame these rushed hacks, workarounds and "suboptimal" decisions as unavoidable constructs that were good ideas at the time. 
I say "were" good ideas rather than "seemed like" because just like biological evolution, we generally select for short term advantages and adapt to our current environments rather than preparing for an uncertain future.</p><p>I don't have any alternative terms to offer, but I would like to talk about "legitimate" sources of technical debt as opposed to "illegitimate" sources:</p><blockquote><p>A <i>legitimate</i> source of technical debt is a hack or work-around demanded by an unavoidable external source, such as an upstream dependency, or urgent and unanticipated customer pain.</p></blockquote><blockquote><p>An <i>illegitimate</i> source of technical debt is a hack or work-around required to implement unplanned or unsupported features or use-cases generated with artificial urgency by an internal source, such as a product manager, a marketing team, or unhealthy company politics.</p></blockquote><p>Regardless of whether the technical debt was generated legitimately or not, what we really have at any given moment is the current state of the software, and whether it is operational or not. It might be worth investing time in figuring out where in an organization technical debt is coming from and putting processes in place to reduce it, but <i>there will always be technical debt</i> and it isn't very constructive to cry over spilled milk.</p><h3 style="text-align: left;">Caring for your Iron Giant</h3><p>For many years I've complained about the broken state of our "global software ecosystem", drawing on my experiences with proprietary monoliths and micro-service architectures, as well as the enormous emergent system of interdependent open-source software projects that we all know and love (or love to hate). 
I hadn't yet<span style="font-size: x-small;"><sup>1</sup></span> seen the 1999 film "The Iron Giant", but during that fateful conversation I was struck with an image of a giant, humanoid robot made up of millions of tiny parts and slowly, inexorably walking forward towards some unseen goal.</p><p>While the Iron Giant behaves as single organism, whenever it is smashed into its individual components or is damaged it becomes clear that it is actually a self-organizing and self-repairing community much like those of plants or animals. Come to think of it, individual human beings are, in fact, self-organizing and self-repairing communities of cells, organs and bacteria as well!</p><p>In organic (biological) evolution, organisms are primarily occupied with survival. In order to survive, an organism must not only consume what it needs to operate and propagate, but it must do so in a way that is compatible with the ecosystem in which it resides. When it comes to the most fundamental parts of an organism - its cells - there are neither opportunities nor capabilities for a fresh start, but in the long run those cells will fail if they cannot collectively adapt to their environment.</p><p>That's exactly where slow and iterative evolution comes in. It's the software equivalent of evolving DNA, only it's more like an epigenetic response to environmental pressure and somewhat less random. 
Small changes that make our software stronger and more resilient, small changes that can be made not only without disabling our machines, or bringing them to an outright halt, but that can be made without even breaking their stride.</p><h3 style="text-align: left;">Theory, Practice and Ideals</h3><p>"Prototypes" are generally developed rapidly in order to gather information about what's possible and viable.</p><p>"Minimum Viable Products" are usually intended as marginally longer-term learning tools, where investment in future planning and architectural extensibility might be welcomed but is generally not budgeted for.</p><p>Over the course of the past few decades the software industry has moved from detailed and lengthy waterfall design processes towards fail-fast iterative approaches, but even if the models have shifted, for individual developers the ideal of building perfect software that will last for all eternity prevails. In a typical modern-day scenario, software solutions tend to begin their journeys into production as prototypes and MVPs, then bear that legacy to the ends of their lifetimes.</p><p>Idealistic developers and managers tend to find "legacy code" a source of frustration, when in fact it is an inevitable outcome both of the evolution of the software and of the developers themselves. Legacy code is akin to a machine of a different age still operating long after we would have expected it to fail, having taken on more responsibilities than it was originally intended for. Perhaps, in our fast-paced and ever-shifting software landscape, it would benefit us to consider this an impressive feat of engineering!</p><p>Please don't get me wrong - I am <i>not</i> advocating for poorly designed architecture or poorly written code. What I am advocating for is dropping emotion-bound perfectionism and taking a more pragmatic approach to design and development that takes circumstances and context into account. 
I will be the first to admit to compulsive "boy-scouting" - to a fault, I always try to improve any code I work on or around - but for the sake of our own sanity and the success of our enterprises we need to realize that our time and our resources are precious and limited, that we cannot and should not fix or replace everything all in one go, and that it's absolutely acceptable to fix things iteratively rather than tear them down and start all over again.</p><p>In most cases, the truth is that we don't need <i>new</i> software, just <i>better</i> software. Why reinvent the wheel when we can take someone else's slightly squarish wheel and make it round?</p><p>For every element of technical debt we encounter, it would be helpful to ask the following questions:</p><blockquote><p>"If the solution has been written in an inappropriate<span style="font-size: x-small;"><sup>2</sup></span> language, will we be better off maintaining it as is, or migrating it to a new language completely? What are the short and long-term trade-offs?"</p></blockquote><blockquote><p>"If a solution is not sufficiently extensible for our needs, should we invest in rebuilding it, retrofitting it, hacking in what we can? Or should we attempt to shift users to an entirely new solution?"</p></blockquote><p>These are nuanced decisions to make, and it's easy for us to let our biases and prejudices make those decisions for us and get in the way of doing what's right at the time. To illustrate this with an example from my own personal experience:</p><p><i></i></p><blockquote><i>I once worked with a team on a PHP product that was fraught with evil Anti-patterns, Bugs, catastrophically poor Code and Design, and a team of PHP developers that was thoroughly uninterested in developing in any other language. Was PHP itself to blame? 
Partially<span style="font-size: x-small;"><sup>3</sup></span>, and as someone who really doesn't approve of PHP<span style="font-size: x-small;"><sup>4</sup></span>, the easiest solution was to migrate to something like Node.js. In reality, though, what I was looking at was a monolithic beast built with limited resources that was somehow or other managing to pull its weight. It made no sense to throw away the code (or the developers), so we had to take a different approach: finding ways to iteratively improve the organism (both the code and developers) without letting any part fail. This turned out to be a complex problem, but entirely solvable, whereas any solution that didn't include that legacy would have engendered wholesale chaos and was unlikely to result in success.</i></blockquote><p></p><h3 style="text-align: left;">A Taoist's Summary</h3><p>If there's any take-away to this article, let it be this: What we have are obstacles and challenges. What we need are solutions. It doesn't benefit us to come at those obstacles and challenges as if they're somehow in the wrong. If we accept that everything that has brought us to the current state of the solution had a context and purpose, and we accept that we are currently in yet another situation that has context and purpose, then we can ask the questions and provide the solutions that let us influence our software's evolution in a healthy direction, even if it's only just a nudge.<br /></p><p><b>Above all, let us remember that what we are building is just a tiny moving part amongst a myriad of moving parts, and that our role as engineers is not only to keep our Iron Giant operational, but to help it to be a hero and not a villain.</b></p><p><b><br /></b></p><p><b><br /></b></p>
<hr />
<p><span style="font-size: x-small;"><sup>1</sup></span> I picked it up recently for my five year-old, it's now one of my favourite movies.</p><p><span style="font-size: x-small;"><sup>2</sup></span> "Inappropriate" here could mean poorly fitting the solution, or no longer supported, or hard to find developers for, or just unnecessarily difficult to work with</p><p><span style="font-size: x-small;"><sup>3</sup></span> I've had a number of PHP gigs, even written a couple of prototypes for myself using it, and it's always immediately apparent that almost anything else would be an improvement.</p><p><span style="font-size: x-small;"><sup>4</sup></span> To be fair, <a href="https://axonflux.com/5-quotes-by-the-creator-of-php-rasmus-lerdorf" target="_blank">neither does its creator</a>.</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-14134726580061746732021-01-23T17:38:00.005-05:002021-01-24T02:58:48.142-05:00Towards a Python version of my Javascript AWS CDK guide<p>After successfully using <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide" target="_blank">a Typescript CDK project</a> to deploy a python lambda on Thursday, I decided to spend some time this evening creating <a href="https://github.com/therightstuff/aws-cdk-python-dev-guide" target="_blank">a Python CDK guide</a>. 
It's very limited at the moment (just a simple function and a basic lambda layer), but it's a start!</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-47039289214992151792020-10-25T04:39:00.005-04:002021-02-13T09:58:54.045-05:00Keeping Track of your Technical Debt<h2 style="text-align: left;">The impact of technical debt</h2><p>Over the years the concept of "technical debt" has become a phrase that can generate anxiety and a lack of trust, as well as setting up developers and their managers for failure. The metaphor might not be perfect (<a href="https://medium.com/clean-code-development/there-is-no-such-thing-as-technical-debt-44cf4ab3ce91" target="_blank">Ralf Westphal makes a strong case for treating it like an addiction</a>), but I feel it's pretty apt if you think of the story of <a href="https://en.wikipedia.org/wiki/Pied_Piper_of_Hamelin" target="_blank">The Pied Piper of Hamelin</a> - if you don't pay the piper promptly for keeping you alive, he'll come back to steal your future away.</p><p>Maybe I'm being a bit dramatic, but I value my time on this planet and over the course of two decades as a software developer in various sectors of the industry I have (along with countless others) paid with so much of my own precious lifetime (sleep hours, in particular) for others' (and occasionally my own) quick fixes, rushed decisions and hacky workarounds that it pains me to even think about it.</p><p>It's almost always worthwhile investing in doing things right the first time, but we rarely have the resources to invest and we are often unable to accurately predict what's right in the first place.</p><p>So let's at least find a way to mitigate the harm.</p><h2 style="text-align: left;">Why TODOs don't help</h2><p>The traditional method of initiating technical debt is the <span style="font-family: courier;">TODO</span>. 
You write a well-meaning (and hopefully descriptive) comment starting with <span style="font-family: courier;">TODO</span>, and "Hey, presto!" you have a nice, easy way to find all those little things you meant to fix. Right? Except that's usually not the case. We are rarely able to make time to go looking for more things to do, and even when we can, with modern software practices it's unlikely that searching across all of our different repositories will be effective.</p><p>What generally ends up happening, then, is that we only come across <span style="font-family: courier;">TODO</span>s by coincidence, when we happen to be working with the code around it, and the chances are that it'll be written by a different developer from a different time period and be explained in... <i>suboptimal</i> language.</p><p>"Why hasn't this been done?", you may well ask.</p><p>"Leave that alone! We don't remember why it works," you may well be told.</p><h2 style="text-align: left;">Track your technical debt with this one simple trick!</h2><div>No matter the size of your team (even a team of one), you should always be working with some kind of issue tracking software. There's decent, free software out there, <a href="https://lmgtfy.app/?q=best+free+issue+tracking+software" target="_blank">so there's really no excuse</a>.</div><div><br /></div><div>The fix? <b>All you need to do is log a ticket for each </b><span style="font-family: courier;"><b>TODO</b></span>. Every time you find yourself writing a <span style="font-family: courier;">TODO</span>, and I mean <u>every single time</u>, create a ticket. 
In both the ticket and the <span style="font-family: courier;">TODO</span> comment, explain what you're doing, what needs to be done, why it needs to be deferred, and how urgent completing the <span style="font-family: courier;">TODO</span> task is.</div><div><br /></div><div><i>Then reference the ticket in the </i><span style="font-family: courier;"><i>TODO</i></span><i> comment.</i></div><div><br /></div><div>This way you'll have a ticket - which I like to think of as an IOU - which can be added to the backlog and remembered when grooming and planning. This also provides the developer who encounters the <span style="font-family: courier;">TODO</span> in the code a way to review any details and subsequent conversations in the ticket.</div><div><br /></div><div>One interesting side effect of this approach? I often find that it's more effort to create the <span style="font-family: courier;">TODO</span> ticket than to do the right thing in the first place, which can be a great incentive for the juniors to avoid the practice.</div><div><br /></div><div>Another?</div><div><br /></div>
<div class="tenor-gif-embed" data-aspect-ratio="2.3255813953488373" data-postid="5671167" data-share-method="host" data-width="100%"><a href="https://tenor.com/view/indiana-jones-no-ticket-gif-5671167">Indiana Jones No Ticket GIF</a> from <a href="https://tenor.com/search/indianajones-gifs">Indianajones GIFs</a></div><script async="" src="https://tenor.com/embed.js" type="text/javascript"></script>
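<p>To make the convention concrete, here is what a ticket-referencing TODO might look like in code; the ticket ID, tracker prefix and function are all hypothetical.</p>

```python
def parse_port(value):
    # TODO(PROJ-1234): accepts any integer for now; validate the 1-65535
    # range once we've settled on how config errors should be surfaced to
    # users. Deferred pending that design decision; urgency: low.
    # Details and discussion: PROJ-1234 in the issue tracker.
    return int(value)
```

<p>A future maintainer now knows what's missing, why it was deferred, how urgent it is, and exactly where to find the conversation.</p>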
<div><br /></div><div>After establishing with my team that I will not approve a <span style="font-family: courier;">TODO</span> unless there's a ticket attached, I must admit... I do sleep just a little bit better at night.</div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-80456806716221489952020-10-24T13:10:00.000-04:002020-10-24T13:10:23.988-04:00Reading and writing regular expressions for sane peopleYour regular expressions need love. Reviewers and future maintainers of your regular expressions need even more.<div><br /></div><div>No matter how well you've mastered regex, regex is regex and is not designed with human-readability in mind. No matter how clear and obvious you think your regex is, in most cases it will be maintained by a developer who a) is not you and b) lacks context. Many years ago I developed a simple method for sanity checking regex with comments, and I'm constantly finding myself demonstrating its utility to new people.</div><div><br /></div><div>There are some great guides out there, like <a href="https://alexwlchan.net/2016/04/regexes-are-code/" target="_blank">this one</a>, but what I'm proposing takes things a step or two further. It may take a minute or two of your time, but it almost invariably saves a lot more than it costs. I'm not even discussing <a href="https://blog.codinghorror.com/regex-use-vs-regex-abuse/" target="_blank">flagrant abuse</a> or <a href="https://www.loggly.com/blog/regexes-the-bad-better-best/" target="_blank">performance considerations</a>.</div><h3 style="text-align: left;"><br /></h3><h3 style="text-align: left;">Traditional regex: the do-it-yourself pattern</h3><div><br /></div><div>The condescending regex. Here you're left to your own devices. 
Thoughts and prayers.</div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;"><br /></span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">var parse_url = /^(?:([A-Za-z]+):)?(\/{0,3})([0-9.\-A-Za-z]+)(?::(\d+))?(?:\/([^?#]*))?(?:\?([^#]*))?(?:#(.*))?$/;</span></div><div><br /></div><div>(example taken from <a href="https://softwareengineering.stackexchange.com/q/298564/331147" target="_blank">this question</a>)</div><h3 style="text-align: left;"><br /></h3><h3 style="text-align: left;">Kind regex: intention explained</h3><div><br /></div><div>It's the least you can do!
A short line explaining what you're matching with an example or two (or three).</div><div><br /></div><div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// parse a url, </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">only capture the host part</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// eg. </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">protocol://host</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">protocol://host:port/path?querystring#anchor</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// host</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; 
font-size: 13px;">// host/path</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// host:port/path</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">var parse_url = /^(?:([A-Za-z]+):)?(\/{0,3})([0-9.\-A-Za-z]+)(?::(\d+))?(?:\/([^?#]*))?(?:\?([^#]*))?(?:#(.*))?$/;</span></div></div><h3 style="text-align: left;"><br /></h3><h3 style="text-align: left;">Careful regex: a human-readable breakdown</h3><div><br /></div><div>Here we ensure that each element of the regex pattern, no matter how simple, is explained in a way that makes it easy to verify that it's doing what we think it's doing and to modify it safely.
If you're not an expert with regex, I recommend using one of the many available tools such as <a href="http://regexr.com" target="_blank">regexr.com</a>.</div><div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;"><br /></span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// parse a url, only capture the host part</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(?:([A-Za-z]+):)?</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// protocol - an optional alphabetic protocol </span><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida 
Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">followed by a colon</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(\/{0,3})(0-9.\-A-Za-z]+)</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// host - 0-3 forward slashes </span><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">followed by </span><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">alphanumeric 
characters</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(?::(\d+))?</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// port - an optional colon and a sequence of digits</span></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(?:\/([^?#]*))?</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// path - an optional forward slash followed by any number of</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", 
"Courier New", monospace, sans-serif; font-size: 13px;">// characters </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">not including ? or #</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(?:\?([^#]*))?</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// query string - an optional ? followed by any number of</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// characters not including #</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// </span><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(?:#(.*))?</span></div><div><span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">// anchor - an optional # followed by any number of characters</span></div><div><span style="color: var(--black-800); font-family: Consolas, 
Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;">var parse_url = /^(?:([A-Za-z]+):)?(\/{0,3})(0-9.\-A-Za-z]+)(?::(\d+))?(?:\/([^?#]*))?(?:\?([^#]*))?(?:#(.*))?$/;</span></div></div><div><span style="color: var(--black-800); font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px; font-style: inherit; font-variant-caps: inherit; font-variant-ligatures: inherit; font-weight: inherit; white-space: inherit;"><br /></span></div><div>Now that we've taken the time to break this down, we can identify the intention behind the patterns and ask better questions: why is the host the only matched group? Was this tested? (Because <span style="font-family: Consolas, Menlo, Monaco, "Lucida Console", "Liberation Mono", "DejaVu Sans Mono", "Bitstream Vera Sans Mono", "Courier New", monospace, sans-serif; font-size: 13px;">(0-9.\-A-Za-z]</span> is clearly an error, and there are almost no restrictions on invalid characters)</div><div><br /></div><div>Unless you're a sadist (or a masochist), this is definitely a better way to operate: be careful, and if you can't be careful then at least be kind.</div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-67734195021477048322020-08-19T11:17:00.003-04:002020-10-29T07:28:13.855-04:00Priority pinning in apt-preferences with different versions and architectures<p>I'm posting this because I've lost too many hours figuring it out myself, the documentation is missing several important notes and I haven't found any forum posts that really relate to this:</p><p><i>Question</i>: How do I prioritize specific 
package versions for multiple architectures? In particular, I have a number of different packages which I would like to download for multiple architectures, and I would like to prioritize the versions so that if they're not explicitly provided, apt will try to get a version matching my arbitrary requirements (in my case, the current git branch name) and fall back to our <b>develop</b> branch versions.</p><p>For example, I would like to download package <b>my-package</b> for both <b>i386</b> and <b>amd64</b> architectures, and I would like to pull the latest version that includes <b>my-git-branch-name</b> before falling back to the latest that includes <b>develop</b>.</p><p><i>Answer</i>:</p><p>The official documentation is <a href="https://manpages.debian.org/buster/apt/apt_preferences.5.en.html" target="_blank">here</a>.</p><p>1. In order to support multiple architectures, all packages being pinned <u>must</u> have their architecture specified, and there must be an entry for each architecture. A pinning for the package name without the architecture specified will only influence the default (platform) architecture:</p><p><b>Package: my-package</b></p><p><b>Pin: version /your regex here/</b></p><p><b>Pin-Priority: 1001</b></p><p>2. The entries are sensitive to leading whitespace, although no errors will be reported if it's present. The following pinning (note the leading spaces) will be silently disregarded:</p><p><b>Package: my-package:amd64</b></p><p><b> Pin: version /your regex here/</b></p><p><b> Pin-Priority: 1001</b></p><p>3.
<b>apt update</b> must be called <i>after</i> updating the preferences file in order for them to be respected and <i>after</i> adding additional architectures using (for example) <b>dpkg --add-architecture i386</b></p><p>The following excerpt from <b>/etc/apt/preferences</b> solves the stated problem:</p><p><b><br /></b></p><p><b>Package: my-package:amd64</b></p><p><b>Pin: version /-my-git-branch-name-/</b></p><p><b>Pin-Priority: 1001</b></p><p><b><br /></b></p><p><b>Package: my-package:i386</b></p><p><b>Pin: version /-my-git-branch-name-/</b></p><p><b>Pin-Priority: 1001</b></p><p><b><br /></b></p><p><b>Package: my-package:amd64</b></p><p><b>Pin: version /-develop-/</b></p><p><b>Pin-Priority: 900</b></p><p><b><br /></b></p><p><b>Package: my-package:i386</b></p><p><b>Pin: version /-develop-/</b></p><p><b>Pin-Priority: 900 </b></p><p>It may be worth noting that to download or install a package with a specified architecture and version, the command is <span style="font-family: courier;"><b>apt download package-name:arch=version</b></span> (substitute <b>install</b> for <b>download</b> to install it).</p>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-7555122813941747722020-08-16T05:24:00.015-04:002020-08-16T05:34:59.840-04:00Enabling Programmable Bank Cards To Do All The ThingsA little while back, Offerzen and Investec opened up their <a href="https://www.offerzen.com/community/investec/" target="_blank">Programmable Bank Card Beta</a> for South Africans, <a href="https://root.co.za/blog/wrapping-up-the-programmable-bank" target="_blank">taking over from Root</a>, and I was excited to be able to sign up!<div><br /></div><div>My long-term goal with programmable banking is to integrate crypto transactions directly with my regular banking - which might be possible quite soon as Investec is starting to roll out its open API offering in addition to the card programme - and as far as I can tell that could be a real game-changer in
terms of making crypto trading practical for non-investment purposes.</div><div>When I joined, though, what I found was a disparate group of really interesting projects and only a two-second time window to influence whether your card transaction will be approved or not.</div><div><br /></div><div>I decided to bide my time by building a bridge for everyone else's solutions, one which would provide a central location to determine transaction approval and enable card holders (including myself, of course) to forward their transactions securely to as many services as they like without costing them running time on their card logic.</div><div><br /></div><div>It was obvious to me that this needed to be a cloud solution, and as someone with zero serverless experience<sup>1</sup> I spent some time evaluating AWS, Google and Microsoft's offerings. My priorities were simple:</div>
<ul>
<li>right tool for the job</li>
<li>low cost</li>
<li>high performance</li>
<li>good user experience</li>
</ul>
<div>In the end AWS was the clear winner with the first two priorities, so much so that I didn't even bother worrying about any potential performance tradeoffs (there probably aren't any) and I was prepared to put up with their generally mixed bag of user experience. I also had the added incentive that my current employer uses AWS in their stack so I would be benefiting my employer at the same time.</div><div><br /></div><div>Overall, I'm very glad that I chose AWS in spite of the trials and tribulations, which you can follow in <a href="https://www.industrialcuriosity.com/2020/06/a-templated-guide-to-aws-serverless.html" target="_blank">my earlier post</a> about building <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide" target="_blank">a template for CDK for beginners</a> (like myself). In addition to learning how to use CDK, this project has given me solid experience in adapting to cloud architecture and patterns that are substantially different from anything I've worked with before, and I've already been able to effectively apply my learnings to my contract work.</div><div><br /></div><div>One of the obligations of the Programmable Bank Card beta programme is sharing your work and presenting it; I was happy to do so, but for months I've been locked down and working in the same space as my wife and four year-old (we don't have room for an office or much privacy in our apartment); with all the distractions of home schooling and having to work while my wife and kid play my favourite video games I've barely been able to keep up my hours with my paid gig, so making time for extra stuff? That hasn't been so easy.</div><div><br /></div><div>A couple of weeks ago my presentation was due, and I spent a lot of the preceding two weekends in a mad scramble to get my project as production-ready as I could - secure, reliable, <i>usable</i>. Half an hour before "go time" I finally deployed my offering, and I was a <i>little</i> bit nervous doing a live demo... 
I mean, sure, I'd tested all the end points after each deployment - but as we all know: Things Happen. Usually During Live Demos.</div><div><br /></div><div>Fortunately, the presentation went very well! I spent the following weekend adding any missing "critical" functionality (such as scheduling lambda warmups and implementing external card usage updates), and I'm hoping that some of my fellow community members get some good use out of it (whether on their own stacks or mine).</div><div><br /></div><div>The code can be found <a href="https://gitlab.com/fisher.adam.online/command-center-bridge" target="_blank">here on GitLab</a>, and a few days ago OfferZen published the recording of my presentation <a href="https://www.offerzen.com/blog/programmable-banking-community-adams-programmable-card-command-bridge" target="_blank">on their blog</a> along with its transcript<sup>2</sup> and my slide deck.</div><div><br /></div><div>Thank you for joining me on my journey!</div><div><br /></div>
<hr />
<div><br /></div><div><sup>1</sup> Although I was employed by one of the major cloud providers for a while, I've only worked "on" their cloud rather than "in", and while I do have extensive experience with Azure none of it included serverless functions.<br /><br /></div><div><sup>2</sup> The transcript has a few minor transcription errors, but they're more amusing than confusing.</div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-38236826543369358482020-07-28T13:33:00.000-04:002020-08-02T15:31:37.392-04:00Resolving private PyPI configuration issues<h2>The Story</h2>I've spent the last few days wrestling with the serverless framework, growing fonder and fonder of AWS' CDK at every turn. I appreciate that <i>serverless</i> has been, until recently, very necessary and very useful, but I feel confident in suggesting that when deployment mechanisms become more complex than the code they're deploying things aren't moving in the right direction.<br />
<br />
That said, this post is about overcoming what for us was the final hurdle: getting Docker able to connect to a private PyPI repository for our Python lambdas and lambda layers.<br />
<br />
Python is great, kind of, but it's marred by two rather severe complications: the split between python and python3, and pip… so it's a great language with an awful tooling experience. Anyway, our Docker container couldn't communicate with our PyPI repo, and it took far too long to figure out why. Here's what we learned:<br />
<h2>The Solution</h2>If you want to use a private PyPI repository without typing in your credentials at every turn, there are two options:<br />
<br />
<h3>1.</h3><br />
<span style="font-family: "courier new" , "courier" , monospace;">~/.pip/pip.conf</span>:<br />
<br />
<script src="https://gist.github.com/900a0ff9a35c78a88df4196f460bef54.js?file=pip.conf.1"></script><br />
<br />
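If the embedded gist doesn't load for you, option 1 boils down to embedding the credentials in the index URL itself. A rough sketch, with placeholder host and credentials (and note that the password must be URL encoded, as discussed further down):

```
[global]
extra-index-url = https://username:p%40ssw0rd@pypi.example.com/simple/
```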
<h3>2.</h3><br />
<span style="font-family: "courier new" , "courier" , monospace;">~/.pip/pip.conf</span>:<br />
<br />
<script src="https://gist.github.com/900a0ff9a35c78a88df4196f460bef54.js?file=pip.conf.2"></script><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">~/.netrc</span>:<br />
<br />
<script src="https://gist.github.com/900a0ff9a35c78a88df4196f460bef54.js?file=.netrc"></script><br />
<br />
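Again, in case the gist doesn't load: the idea with option 2 is that <span style="font-family: "courier new" , "courier" , monospace;">pip.conf</span> references the repository without credentials, and <span style="font-family: "courier new" , "courier" , monospace;">~/.netrc</span> supplies them. A sketch with placeholder values (here the password is NOT URL encoded):

```
machine pypi.example.com
login username
password p@ssw0rd
```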
It is important that <span style="font-family: "courier new" , "courier" , monospace;">~/.netrc</span> has the same owner:group as <span style="font-family: "courier new" , "courier" , monospace;">pip.conf</span>, and that its permissions are 0600 (<span style="font-family: "courier new" , "courier" , monospace;">chmod 0600 ~/.netrc</span>).<br />
<br />
What's not obvious - or even discussed anywhere - is that special characters are problematic and are handled differently by the two mechanisms.<br />
<br />
In <span style="font-family: "courier new" , "courier" , monospace;">pip.conf</span>, the password <u>MUST</u> be URL encoded.<br />
In <span style="font-family: "courier new" , "courier" , monospace;">.netrc</span>, the password <u>MUST NOT</u> be URL encoded.<br />
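To see the difference concretely, here's a quick way to produce the URL-encoded form using Python's standard library (the sample password is made up):

```shell
# URL-encode a sample password for use in pip.conf;
# the very same password would go into .netrc verbatim
python3 -c 'from urllib.parse import quote; print(quote("p@ss:word", safe=""))'
# prints p%40ss%3Aword
```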
<h2>The Docker Exception</h2>For whatever reason, solution 2 (the combination of <span style="font-family: "courier new" , "courier" , monospace;">pip.conf</span> and <span style="font-family: "courier new" , "courier" , monospace;">.netrc</span>) does NOT work with Docker.<br />
<h2>Conclusion</h2>Amazon's CDK is excellent, and unless you have a very specific use-case that it doesn't support, it really is worth trying out!<br />
<br />
Oh! And that Python is Very Nice, but simply isn't nice enough to justify the cost of its tooling.Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-76074851065992363652020-07-03T03:30:00.002-04:002020-07-26T15:18:46.453-04:00the rights and wrongs of git push with tagsMy employer uses tags for versioning builds.<br />
<br />
<h3>
<span style="font-family: "courier new" , "courier" , monospace;">git push && git push --tags</span></h3>
I'm guessing this is standard practice, but we've been running into an issue with our build server - Atlassian's Bamboo - picking up commits without their tags. The reason was obvious: we've been using <b><span style="font-family: "courier new" , "courier" , monospace;">git push && git push --tags</span></b>, pushing tags <i>after</i> the commits, and Bamboo doesn't trigger builds on tags, only on commits. This means that every build would be versioned with the previous version's tag!<br />
<br />
The solution should have been obvious, too: push tags with the commits. Or before the commits? This week, we played around with an experimental repo and learned the following:<br />
<br />
<h3>
<span style="font-family: "courier new" , "courier" , monospace;"><b>git push --tags && git push</b></span></h3>
To the uninitiated (like us), the result of running <span style="font-family: "courier new" , "courier" , monospace;"><b>git push --tags</b></span> can be quite surprising! We created a commit locally, tagged it*, and ran <b style="font-family: "Courier New", Courier, monospace;">git push --tags</b>. This resulted in the commit being pushed to the server (Bitbucket, in our case) along with its tag, but the commit was rendered invisible. Not even <b><span style="font-family: "courier new" , "courier" , monospace;">git ls-remote --tags origin</span></b> would return it, and it was not listed under the commits on its branch, although it showed up quite clearly, along with its commit, in Bitbucket's search-commits-by-tag feature:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHghEPLVqCb1RpBumlCHQ0c4Fd-g1Exbz8VrtW6-rrYR3NhlbPHAu9d1vDbwHS_wYcS0DscernhzpgHZ2vYIIrl9_RJHl2y0PNtcQOE0-W1xUZy_yHYXudjfKa5eazT_WVvLVOB8_4qhlA/s1600/Screenshot+2020-07-03+at+08.24.00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="150" data-original-width="1304" height="45" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHghEPLVqCb1RpBumlCHQ0c4Fd-g1Exbz8VrtW6-rrYR3NhlbPHAu9d1vDbwHS_wYcS0DscernhzpgHZ2vYIIrl9_RJHl2y0PNtcQOE0-W1xUZy_yHYXudjfKa5eazT_WVvLVOB8_4qhlA/s400/Screenshot+2020-07-03+at+08.24.00.png" width="400" /></a></div>
<br />
<br />
* All tags are annotated, automatically, courtesy of <a href="https://www.sourcetreeapp.com/" target="_blank">SourceTree</a>. If you're not using SourceTree then you should be, and I promise you I'm not being paid to say that. If you must insist on the barbaric practice of using the command line, just add <span style="font-family: "courier new" , "courier" , monospace;"><b>-a</b></span> (and <b><span style="font-family: "courier new" , "courier" , monospace;">-m</span></b> to specify the message inline if you don't want the editor to pop up; check out <a href="https://git-scm.com/book/en/v2/Git-Basics-Tagging" target="_blank">the documentation</a> for more details).<br />
<br />
Pushing a tag first isn't the end of the world - simply pushing the commit with <b><span style="font-family: "courier new" , "courier" , monospace;">git push</span></b> afterwards puts everything in order - unless someone else pushes a different commit first. At that point the original commit will need to be merged, with merge conflicts to be expected. Alternatively - and relevant for us - any scripts that perform commits automatically might fail between pushing the tags and their commits, leading to "lost" commits.<br />
<br />
<h3>
<b><span style="font-family: "courier new" , "courier" , monospace;">git push --tags origin refs/heads/develop:refs/heads/develop</span></b></h3>
This ugly-to-type command does what we want it to do: it pushes the commit and its tags together. <a href="https://git-scm.com/docs/git-push" target="_blank">From the documentation</a>:<br />
<br />
<i>When the command line does not specify what to push with <span style="color: red; font-family: "courier new" , "courier" , monospace;"><refspec>...</span> arguments or <span style="color: red; font-family: "courier new" , "courier" , monospace;">--all</span>, <span style="color: red; font-family: "courier new" , "courier" , monospace;">--mirror</span>, <span style="color: red; font-family: "courier new" , "courier" , monospace;">--tags</span> options, the command finds the default <span style="color: red; font-family: "courier new" , "courier" , monospace;"><refspec></span> by consulting <span style="color: red; font-family: "courier new" , "courier" , monospace;">remote.*.push</span> configuration, and if it is not found, honors <span style="color: red; font-family: "courier new" , "courier" , monospace;">push.default</span> configuration to decide what to push (See <a href="https://git-scm.com/docs/git-config" target="_blank"><span style="color: #76a5af;">git-config[1]</span></a> for the meaning of <span style="color: red; font-family: "courier new" , "courier" , monospace;">push.default</span>).</i><br />
<br />
Okay, so that works. But there's no way I'm typing that each time, or asking any of my coworkers to. I want <i>English</i>. I'm funny that way.<br />
<br />
<h3>
<span style="font-family: "courier new" , "courier" , monospace;"><b>git push --follow-tags</b></span></h3>
Here we go: the real solution to our problems. As long as the tags are annotated - which they should be anyway, see the earlier footnote on tagging - running <b><span style="font-family: "courier new" , "courier" , monospace;">git push --follow-tags</span></b> will push any outstanding commits from the current branch <i>along with their annotated tags</i>. Any tags not annotated, or not attached to a commit being pushed, will be left behind.<br />
<br />
Resolved!Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-54944378099969320412020-06-22T19:17:00.002-04:002020-08-16T06:45:41.111-04:00A templated guide to AWS serverless development with CDK<i>If all you're looking for is a no-frills, quick, step-by-step guide, just scroll on down to <b><u>The Guide</u></b>!</i><br />
<h2 style="text-align: center;">
<u><br />
</u></h2>
<h2 style="text-align: center;">
<u>The Story</u></h2>
<div style="text-align: center;">
<i>Or, how I learned to let go of my laptop and embrace The Cloud...</i></div>
<h3>
<u><br />
</u></h3>
<h3>
<u>Motivation</u></h3>
<br />
A while ago I outlined a project that I intend to build, one that I'm rather enthusiastic about, and after spending a few hours evaluating my options I decided that an AWS solution was the right way to go: their options are configurable and solid, their pricing is very reasonable and, as an added bonus, I get to familiarize myself with tech I'll be using for my current contract!<br />
<br />
All good.<br />
<br />
Once I'd made the decision to use AWS, becoming generally familiar with the products on offer was easy enough and it took me very little time to flesh out a design. I was ready to get coding!<br />
<br />
Or so I thought...<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 1</u></h3>
Serverless. And possibly other frameworks; I'm pretty sure I looked at a few, but it seems like forever ago. Serverless shows lots of promise, and I know people who swear by it, but it's only free up to a point, and I immediately ran into strange issues with my first deployment. I found the interface surprisingly unhelpful, and it looks like once you're using it you're somewhat locked in.<br />
<br />
Pass.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 2</u></h3>
Cognito. Security first. Cognito sounds like a great solution, but once I'd gotten a handle on how it works I was severely disappointed by how limiting it is, and getting everything set up takes real developer and user effort (and I suffer enough from poor interface fatigue). After playing around with user pools and looking at various code samples, I realized that I'd rather allow users to register using their email addresses / phone numbers as 2nd-factor authentication (<a href="https://www.mailgun.com/" target="_blank">mailgun</a> and <a href="https://www.twilio.com/" target="_blank">twilio</a> are both straightforward options), or use oauth providers like Facebook, Google and GitHub, and I certainly want to encourage my users to use strong, memorable passwords (easily enforced with <a href="https://github.com/dropbox/zxcvbn" target="_blank">zxcvbn</a>, and <i><a href="https://xkcd.com/936/" target="_blank">why is this still a thing?!</a></i>) which Cognito doesn't allow for.<br />
<br />
You'll need to configure <a href="https://www.alexdebrie.com/posts/lambda-custom-authorizers/" target="_blank">Lambda authorizers</a> either way, so I really don't think Cognito adds much value.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 3</u></h3>
Lambda / DynamoDB. Okay, so writing a lambda function is <i>really</i> easy, and guides and examples for code that reads from and writes to DynamoDB abound. Great! Except for the part where you need to test your functions before deploying them.<br />
<br />
My first big mistake - and so far it's proved to be the most expensive one - was not understanding at this point that "testing locally" is simply not a feasible strategy for a cloud solution.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 4</u></h3>
The first code I wrote for my project was effectively a lambda environment emulator to test my lambda functions. It was far from perfect, and it did take me a couple of hours to cobble together, but it did the job and I used it to successfully test lambda functions against DynamoDB running in Docker.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 5</u></h3>
Lambda Layers. Why do most guides not touch on these? Why are there so few layers guides written for Javascript? It took me a little while to get a handle on layers and build a simple script to create them from <span face="" style="font-family: "courier new", courier, monospace;"><b>package.json</b></span> files, but as far as hurdles go this was a relatively short one.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 6</u></h3>
Deployment. It's nice to have code running locally, but uploading it to the cloud? Configuring API Gateway was a mixed bag of Good Interface / Bad Interface, same with the Lambda functions, and what eventually stumped me was setting up IAM to make everything play nicely together. What's the opposite of intuitive? Not counter-intuitive, in this case, as I don't feel that that word evokes nearly enough frustration.<br />
<br />
Anyway, it became abundantly clear at that point that manual deployment of AWS configurations and components is not a viable strategy.<br />
<h3>
<u><br />
</u></h3>
<h3>
<u>Hurdle 7</u></h3>
A coworker introduced me to two tools that could supposedly Solve All My Problems: CDK and SAM. This seemed like a worthy rabbit-hole to crawl into, but I couldn't find any examples of the two working together!<br />
<br />
I began to build my own little framework, one that would allow me to configure my stack using CDK, synthesize the CloudFormation templates, and test locally using SAM. Piece by piece I put together this wonderful little tool, hour by hour, first handling DynamoDB, then Lambda functions, then Lambda Layers...<br />
<br />
It was at that point that realization dawned: not only are SAM and CDK not interoperable <i>by design</i>, but SAM does not, in fact, provide meaningful "local" testing. Sure, you can invoke your lambda functions on your local machine, but the intention is <i>to invoke them against your deployed cloud infrastructure</i>. Once I got that through my head, it was revelation time: <i>testing in the cloud is cheaper and better than testing locally</i>.<br />
<h2 style="text-align: center;">
<u><br />
</u></h2>
<h2 style="text-align: center;">
<u>The Guide</u></h2>
If you're like me, and you intend your first foray into cloud computing to be simple, yet reasonably production-ready, CDK is the easiest way forward and it's completely free (assuming you don't count the time you'll spend figuring it out, but that's what I'm here for!).<br />
<br />
Over the course of the past couple of weeks, I've put together the <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide" target="_blank">aws-cdk-js-dev-guide</a>. It's a work in progress (next stop, lambda authorizers!), but at the time of writing this guide it's functional enough to put together a simple stack that includes Lambda Layers, DynamoDB, functions using both of those, API Gateway routes to those functions and the IAM permissions that bind them.<br />
<br />
And that's just the tip of the CDK iceberg.<br />
<h3>
<u>Step 1 - Tooling</u></h3>
It is both valuable and necessary to go through the following steps prior to creating your first CDK project:<br />
<br />
<ul>
<li>Create a programmatic user in IAM with admin permissions</li>
<li>If you're using Visual Studio Code (recommended), configure the <a href="https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/setup-toolkit.html" target="_blank">AWS Toolkit</a></li>
<li>Set up credentials with the profile ID "default"</li>
<li>Get your 12 digit account ID from My Account in the AWS console</li>
<li>Follow <a href="https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#hello_world_tutorial" target="_blank">the CDK hello world tutorial</a></li>
</ul>
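For reference, the account ID and region from those last steps typically surface in your CDK code through environment variables that the CDK CLI populates from your "default" profile. A minimal sketch (the fallback values here are placeholders, not real identifiers):

```javascript
// Sketch only: the CDK CLI sets CDK_DEFAULT_ACCOUNT / CDK_DEFAULT_REGION
// from your "default" profile at synth time; the fallbacks below are
// placeholders for illustration.
const env = {
    account: process.env.CDK_DEFAULT_ACCOUNT || '123456789012', // 12-digit account ID
    region: process.env.CDK_DEFAULT_REGION || 'us-east-1'
};

console.log(`stack environment: account ${env.account}, region ${env.region}`);
```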
<h3>
<u>Step 2 - Creating a new CDK project</u></h3>
The first step to creating a CDK project is initializing it with <span style="font-family: 'courier new', courier, monospace;"><b>cdk init</b></span>, and a CDK project <i>cannot</i> be initialized if the project directory isn't empty. If you would like to use an existing project (like <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide" target="_blank">aws-cdk-js-dev-guide</a>) as a template, bear in mind that you will have to rename the stack in multiple locations and it would probably be safer and easier to create a new project from scratch and copy and paste in whatever bits you need.<br />
<h3>
<u>Step 3 - Stack Definition</u></h3>
There are many viable approaches to setting up stages; mine is to replicate my entire stack for development, testing and production. If my stack wasn't entirely serverless - if it included EC2 or Fargate instances, for example - then this approach might not be feasible from a cost point of view.<br />
<br />
Stack definitions are located in the <b><span style="font-family: 'courier new', courier, monospace;">/lib</span></b> folder; this is where you define your stacks programmatically, creating components and wiring up the relationships between them.<br />
<br />
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=lib-aws-cdk-js-dev-guide-stack.ts"></script><br />
<br />
I find that the <b><span style="font-family: 'courier new', courier, monospace;">/lib</span></b> folder is a good place to put your region configurations.<br />
<br />
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=lib-regions.json"></script><br />
<br />
Once you have completed this, the code to synthesize the stack(s) is located in the <b><span style="font-family: 'courier new', courier, monospace;">/bin</span></b> folder. If you intend, like I do, to deploy multiple replications of your stack, this is the place to configure that.<br />
<br />
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=bin-aws-cdk-js-dev-guide.ts"></script><br />
<br />
See the <a href="https://docs.aws.amazon.com/cdk/api/latest/docs/aws-construct-library.html" target="_blank">AWS CDK API documentation</a> for reference.<br />
<h3>
<u>Step 4 - Lambda Layers</u></h3>
I've got much more to learn about layers at the time of writing, but my present needs are simple: the ability to make npm packages available to my lambda functions. CDK's <b><span style="font-family: 'courier new', courier, monospace;">Code.fromAsset</span></b> method requires the original folder (as opposed to an archive file, which is apparently an option), and that folder needs to have the following internal structure:<br />
<br />
<b><span face="" style="font-family: "courier new", courier, monospace;">.</span></b><br />
<b><span face="" style="font-family: "courier new", courier, monospace;">.nodejs</span></b><br />
<b><span face="" style="font-family: "courier new", courier, monospace;">.nodejs/package.json</span></b><br />
<b><span face="" style="font-family: "courier new", courier, monospace;">.nodejs/node_modules/*</span></b><br />
<br />
You can manually create this folder anywhere in your project, maintain the <b><span style="font-family: 'courier new', courier, monospace;">package.json</span></b> file and remember to run <b><span style="font-family: 'courier new', courier, monospace;">npm install</span></b> and <b><span style="font-family: 'courier new', courier, monospace;">npm prune</span></b> every time you update it... or you can just copy in the <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide/blob/master/build-layers.js" target="_blank">build-layers script</a>, maintain a <b><span style="font-family: 'courier new', courier, monospace;">package.json</span></b> file in the <b><span style="font-family: 'courier new', courier, monospace;">layers/src/my-layer-name</span></b> directory and run the script as part of your build process.
<h3>
<u>Step 5 - Lambda Functions</u></h3>
<div>
There are a number of ways to construct lambda functions; I prefer to write mine as promises. There are two important things to note when putting together your lambda functions:<br />
<br />
1. Error handling: if you don't handle your errors, lambda will handle them for you... and not gracefully. If you do want to implement your lambda functions as promises, as I have, try to use "resolve" to return your errors.</div>
<div>
<br /></div>
<div>
2. Response format: your response object MUST have the following structure:</div>
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=lambda-response-format.js"></script><br />
<br />
This example lambda demonstrates using a promise to return both success and failure responses:<br />
<br />
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=lambda-promise-resolution.js"></script><br />
<br />
<h3>
<u>Step 6 - Deployment</u></h3>
There are three steps to deploying and redeploying your stacks:<br />
<ol>
<li>Build your project<br />
If you're using TypeScript, run<br />
<b><span style="font-family: 'courier new', courier, monospace;">> tsc</span></b><br />
If you're using layers, run the build-layers script (or perform the manual steps)<br /></li>
<li>Synthesize your project<br />
<b><span style="font-family: 'courier new', courier, monospace;">> cdk synth</span></b><br /></li>
<li>Deploy your stacks<br />
<b><span style="font-family: 'courier new', courier, monospace;">> cdk deploy stack-name</span></b><br />
Note: you can deploy multiple stacks simultaneously using wildcards</li>
</ol>
<div>
At the end of deployment, you will be presented with your endpoints. Unless your lambda has its own routing configured - see the sample DynamoDB API examples that follow - simply make your HTTP requests to those URLs as is.</div>
<br />
<script src="https://gist.github.com/therightstuff/fa38aa86f83ae5e8da73a22e9b01a931.js?file=cdk-deploy.out"></script><br />
Sample DynamoDB API call examples:
<ul>
<li>List objects<br />
<span style="font-family: 'courier new', courier, monospace;">GET https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/objects</span></li>
<li>Get specific object<br />
<span style="font-family: 'courier new', courier, monospace;">GET https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/objects/b4e01973-053c-459d-9bc1-48eeaa37486e</span></li>
<li>Create a new object<br />
<span style="font-family: 'courier new', courier, monospace;">POST https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/objects</span></li>
<li>Update an existing object<br />
<span style="font-family: 'courier new', courier, monospace;">POST https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/objects/b4e01973-053c-459d-9bc1-48eeaa37486e</span></li>
</ul>
<br />
<h3>
<u>Step 7 - Debugging</u></h3>
<div>
Error reports from the API Gateway tend to hide useful details. If a function is not behaving correctly or is failing, go to your CloudWatch dashboard and find the log group for the function.<br />
<br />
<h3>
<u>Step 8 - Profit!!!</u></h3>
I hope you've gotten good use out of this guide! If you have any comments, questions or criticism, please leave them in the comments section or create issues at <a href="https://github.com/therightstuff/aws-cdk-js-dev-guide" target="_blank">https://github.com/therightstuff/aws-cdk-js-dev-guide</a>.</div>
Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-37803246704590596382020-03-28T08:04:00.000-04:002020-03-28T08:54:22.916-04:00An improved (fairer) playlist shuffling algorithmLots of people find playlist shuffling insufficiently random for a variety of reasons, <a href="https://labs.spotify.com/2014/02/28/how-to-shuffle-songs/">some of which have been addressed by the industry</a>.<br />
<br />
There's one aspect that my wife and I haven't seen, though, and that's making sure that no songs get "left behind" whenever a playlist is reshuffled, whether intentionally or by switching back-and-forth between playlists.<br />
<br />
In an attempt to sow seeds, I've just put together an example of an improvement that can easily be applied to any of the existing shuffle algorithms.<br />
<br />
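One way such an improvement can work (a sketch of the idea, not necessarily the exact algorithm in the embedded example): on reshuffle, partition the playlist into not-yet-played and already-played tracks, shuffle each group independently, and queue the unplayed group first so nothing gets left behind.

```javascript
// Sketch of a "no song left behind" reshuffle (illustrative): unplayed
// tracks are always queued ahead of already-played ones, and each group
// is shuffled independently with a standard Fisher-Yates shuffle.
function shuffleInPlace(list) {
    for (let i = list.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [list[i], list[j]] = [list[j], list[i]];
    }
    return list;
}

function fairReshuffle(tracks, playedIds) {
    const unplayed = tracks.filter((t) => !playedIds.has(t.id));
    const played = tracks.filter((t) => playedIds.has(t.id));
    // unplayed tracks come first, so reshuffling never "loses" them
    return [...shuffleInPlace(unplayed), ...shuffleInPlace(played)];
}

const playlist = [1, 2, 3, 4, 5, 6].map((id) => ({ id, title: `track ${id}` }));
const played = new Set([1, 3]);
const reshuffled = fairReshuffle(playlist, played);
console.log(reshuffled.map((t) => t.id));
```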
<script src="https://gist.github.com/therightstuff/a1399a192aae85397452755a73898a4e.js"></script>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-75396087173933796362020-03-24T14:48:00.001-04:002020-10-24T16:38:57.247-04:00The value of git (re)parenting<h3 style="text-align: left;">EDIT: I have subsequently learned about <span style="font-family: courier;">git merge --no-ff</span> and no longer re-parent. But I'll leave this up in case someone finds it useful anyway.</h3><div><br /></div>I'm not a command-line person, I may have grown up with it but... let's just say I've developed an allergy. I want GUIs, I want simplicity and I want visual corroboration that I'm doing what I think I'm doing. I really don't see why I need to become an expert in every tool I use before it can be functional.<br />
<br />
This is why I adore SourceTree, and (last time I used it) GitEye. Easier, more intuitive interface, and visualizations to help you see whether what you're doing makes sense. You're less likely to make mistakes!<br />
<br />
I'm also (and for similar reasons) a huge fan of squashed merges in Git; one commit per feature / bugfix / hotfix, and (theoretically) the ability to go through the individual commits on my own time if I ever need to. I use interactive rebasing* a lot to achieve a similar result, but getting other devs to use it responsibly is a lot more trouble than teaching them to add <tt>--squash</tt> to the merge command. The downside is that because I've become used to interactive rebasing to squash my commits before merging, I've also become used to using regular merges and seeing a neat line on the branch graph showing me where my merges originated. Squashed commits don't give you that. And that's sad. It's also problematic - this morning I freaked out because I thought the code in a branch I'd squash-merged into wasn't where it needed to be, all because I was relying on those parenting lines and didn't immediately think to do a diff**.<br />
<br />
* Ironically, from the command-line. It's the one thing I find less intuitive and more risky using SourceTree.<br />
<br />
** While we're talking about branch diffs, if you're using Bitbucket don't trust the branch diff - the UI uses a three-dot diff and what you actually want is a two-dot, i.e. <tt>git diff branch1..branch2</tt><br />
<br />
So, after much surfing around the internets and learning lots more about git's inner workings than I care to, I came across an elegant little solution and have wrapped it with a bash script in the hopes that it'll be found useful. All you have to do is run this script as follows:<br />
<br />
<tt>./add_parent.sh TARGET_COMMIT_ID NEW_PARENT_COMMIT_ID</tt><br />
<br />
<script src="https://gist.github.com/therightstuff/1983aaf8bf0c7feec8542df15cac5548.js"></script>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-15422960466387943932020-01-26T02:35:00.001-05:002020-01-26T02:36:12.850-05:00Lessons from my first Direct-to-Print experienceThe direct-to-print paperback edition of <a href="https://www.goodreads.com/book/show/50623800-shakespeare-s-sonnets-exposed" target="_blank"><b>Shakespeare's Sonnets Exposed: Volume 1</b></a> is now up for review, after I finally ironed out the formatting kinks last night and finished fiddling with the cover around 2am. So now that I've had a chance to sleep on it, and re-re-re-review everything before submitting it, I have a few notes for anyone who wants to self-publish a book on Kindle.<br />
<br />
1. Apple's Pages is a fantastic tool for a number of reasons, but what it produces by default really isn't great for reflowable epubs or paperback formats, which are both important if you want your work to be accessible, readable and attractive to your readers. It's a good place to start, though, just as Microsoft Word is, because it exports to Word and once you have a Word doc you can then import your work into Amazon's Kindle Create.<br />
<br />
2. I wish I'd begun with Kindle Create, even though my intention is to publish on other platforms as well. Kindle is not only the easiest platform to get started on - and is probably the most accessible for your audience - but from a formatting perspective is effectively the lowest common denominator: it's tough to use custom fonts for Kindle publications, and I suspect they've made it so intentionally in order to standardize the reading experience.<br />
<br />
My advice is to sort out the formatting for the ebook, publish it (see step 5), convert it to EPUB for other platforms, then tweak the formatting (and possibly content) for print publishing.<br />
<br />
3. Don't bother adding <a href="https://www.industrialcuriosity.com/2020/01/isbn-codes-for-dummies.html" target="_blank">your ISBN barcodes</a> to the books yourself. Even if you have one issued for your paperback, it's best to use the KDP-generated one for the Kindle direct-to-paperback offering and reserve any others for paperbacks where you have to add the code to the cover manually.<br />
<br />
4. You can download cover templates <a href="https://kdp.amazon.com/en_US/cover-templates" target="_blank">here</a>. I didn't realize that and I made my own, which in retrospect was silly.<br />
<br />
5. After you've "published" your book, you'll have a KPF file that you can upload to KDP. From <a href="https://www.kdpcommunity.com/s/question/0D5f400000oOWpc/converting-kpf-to-epub-format" target="_blank">Converting KPF to EPUB format</a>:<br />
<blockquote><i><br />
I recently managed to successfully convert kpf to epub format using jhowell's KFX conversion plugin for Calibre. Just install the plugin and use drag-and-drop to load your kpf file into Calibre. Then convert the kpf file to epub in the normal way using Calibre. Save your new epub to your desktop and then run Epubcheck on it to ensure that it is a valid epub (it always passes).</i></blockquote><br />
If you run into any issues with these suggestions, please let me know in the comments!Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0tag:blogger.com,1999:blog-2296803028188267116.post-30355688738706734612020-01-20T18:17:00.002-05:002021-07-07T16:30:49.168-04:00ISBN codes for Dummies<h3 style="text-align: left;">Step 1</h3><div>Acquire ISBN codes. For South African residents this is a free service (thank you, NLSA!), and all you have to do is request them and assign them (there's really no need to pay anyone any money, simply look up contact details on <a href="https://www.nlsa.ac.za/" target="_blank">their website</a> and call or email until you reach someone).<br />
<br />
It's important to note that not only does each format (paperback, hardcover, etc.) need its own ISBN, but each e-book format (e.g. epub, mobi, PDF) does as well!<div><br /></div><div><b><u>IMPORTANT UPDATE</u></b>: I've recently been informed that <a href="http://eBookFairs.com">eBookFairs.com</a> has, in addition to a really cool concept, <a href="https://ebookfairs.com/Home/Barcode" target="_blank">an excellent free online barcode generator</a> that I've now tried out. It's far less effort than what I originally published, and takes care of both steps 2 and 3 below.<div><i>(NOTE: use the full ISBN including the check digit, see below for a detailed explanation).</i></div><div><i><br /></i></div><div><h3 style="text-align: left;">Step 2</h3></div><div>Generate the actual barcode using the ISBN 13 section of <a href="https://barcode.tec-it.com/en/ISBN13?data=978199093158" target="_blank">this free online generator</a>. As explained <a href="https://www.barcodefaq.com/1d/isbn/" target="_blank">here</a>:<br />
<blockquote><i>Before making an ISBN barcode, the user must first apply for an ISBN number. This number should be 10 or 13 digits, for example 0-9767736-6-X or 978-0-9767736-6-5. Once the ISBN number is obtained, it should be displayed above the barcode on the book. All books published after January 1, 2007 must display the number in the new 13-digit format, which is referred to as ISBN-13. Older 10 digit numbers may be converted to 13 digits with the free ISBN conversion tool.<br />
<br />
The last digit of the ISBN number is always a MOD 11 checksum character, represented as numbers 0 through 10. When the check character is equal to 10, the Roman numeral X is used to keep the same amount of digits in the number. Therefore, the ISBN of 0-9767736-6-X is actually 0-9767736-6 with a check digit of 10. The ISBN check digit is never encoded in the barcode.</i></blockquote>Simply remove the hyphens (dashes) and the check digit from your ISBN, paste it into the text box and hit "refresh". I recommend changing the image settings to PNG format with 300 DPI. You can also change the colors if you wish.</div><div><br /><h3 style="text-align: left;">
Step 3</h3></div><div>You now have ISBNs and their barcodes, but for a professional look you'll want to set the barcode title in the right font. That's as simple as adding <tt style="font-family: 'Source Sans Pro', Arial, sans-serif;">ISBN 0-9767736-6-X</tt> above the barcode image. The free generator uses the Arial font, but the more traditional font is monospace.</div></div></div>Adam Fisher / fisher kinghttp://www.blogger.com/profile/15148904466849828379noreply@blogger.com0