2020-12-24

Xamarin Pipeline Demo

Introduction

I'm making this demo repo and writeup because it was surprisingly and frustratingly difficult to get Xamarin.UITest tests for Android to run on a Microsoft-hosted agent in an Azure DevOps pipeline. NO App Center. NO self-hosted agents. I just wanted to do everything in Azure DevOps.

So, this demo shows how to accomplish that, along with some other common goals for an Azure DevOps continuous integration pipeline for the Android portion of a Xamarin app:

  • Give each build its own versionCode and versionName.
  • Build the APK.
  • Sign the APK.
  • Publish the APK as a pipeline artifact.
  • Run unit tests (NUnit).
  • Run UI tests (Xamarin.UITest), which involves several Android emulator steps.
  • Publish test results.

This demo is not about getting started on unit testing or UI testing; the demo is about getting these things to work in an Azure DevOps pipeline.

You can see a successful run, a successful job overview, published artifacts, and unit+UI test results (also an alternate view for the unit test run and the UI test run).

This repo is available as a visualstudio.com repo and a GitHub repo. As of 2020-Dec-24, Azure DevOps offers a free tier with 30 build hours per month and 2 GiB of artifact storage. The free tier was more than enough for all the pipeline needs of this demo.

This writeup is available as a GitHub readme, a visualstudio.com readme, and a blog post. The repo readmes will be kept up to date, but the blog post may not receive many updates after 2020-12-24. Readme section links are oriented for GitHub.

2020-11-20

Suggestions For Creating Passwords

Scope And Purpose Of This Post


Even after someone makes the very wise decision to start using a password manager so they can have strong, unique passwords, they still have to decide what password generator settings to use.  They have to decide stuff like whether to use digits and punctuation in their passwords, how long their passwords should be, and whether they should use passphrases.

The password generator settings you use should depend on how you're going to use the password.  Passphrases are great for passwords you need to remember, but maybe not for your work password that you manually type >20 times a day.  My recommendations depend on the "password use case":

  • Remembered and typed <8 times a day.
    • This would be your master password for your password manager.
    • Use a passphrase.  Six words for your master password.  Five words are okay for passwords that are far less important than your master password.
    • If your password manager doesn't generate passphrases for you, make it generate a bunch of digits and use the diceware word list or an EFF word list to turn them into words (or script it yourself; see the sketch after the note below).
  • Not remembered and rarely/never typed.
    • Your most common password use case, for stuff like Facebook.
    • I recommend a "1D+1U+15L" password: 1 digit, 1 upper case letter, 15 lower case letters, for a total length of 17.
    • If your password generator doesn't support that, go for 14 alphanumeric characters (lower case letters, upper case letters, and digits).
  • Remembered and typed many times a day.
    • This is possibly the use case for your work password, which you have to type to unlock your computer and log in to many services.
    • Because you are typing this so frequently, you might not need the memorability of a passphrase, and you probably don't want the typing hassle.
    • I recommend something like a "1D+1U+12L" password, even if you have to manually modify a password generated by your password manager.

Note: for this blog post, we'll be assuming we never use an ambiguous letter or digit ("IOlo10"), except in passphrases.  It is tempting to think that if you know a non-passphrase password only contains lower case letters, then "l" and "o" are unambiguous, but when you're looking at the computer-generated password two years after you generated it, you won't be confident.  For passphrases, you can disambiguate based on the words ("shallow" is a word, and "shaII0w" isn't), so "l" and "o" are okay.
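
If you want to script the generation yourself, here is a minimal C# sketch (assuming .NET Core 3.0 or later for RandomNumberGenerator.GetInt32; the word list file name is just a placeholder for a local copy of the EFF large word list) that generates a six-word passphrase and a 1D+1U+15L password with a cryptographically secure RNG, using the unambiguous character sets from the note above:

    using System;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;

    class PasswordSketch
    {
        // Unambiguous character sets per the note above: no I, O, l, o, 1, 0.
        const string Lower  = "abcdefghijkmnpqrstuvwxyz";  // 24 letters, no 'l' or 'o'
        const string Upper  = "ABCDEFGHJKLMNPQRSTUVWXYZ";  // 24 letters, no 'I' or 'O'
        const string Digits = "23456789";                  // 8 digits, no '0' or '1'

        static void Main()
        {
            // Placeholder path; any diceware/EFF word list with one word per line
            // (optionally prefixed by dice rolls and a tab) should work.
            string[] words = File.ReadAllLines("eff_large_wordlist.txt")
                                 .Select(line => line.Split('\t').Last().Trim())
                                 .Where(w => w.Length > 0)
                                 .ToArray();

            // Six-word passphrase, e.g. for a master password.
            string passphrase = string.Join(" ",
                Enumerable.Range(0, 6)
                          .Select(_ => words[RandomNumberGenerator.GetInt32(words.Length)]));

            // "1D+1U+15L": 1 digit + 1 upper case letter + 15 lower case letters (length 17).
            // The post doesn't specify an order for the character classes; it's fixed here for simplicity.
            string password = Pick(Digits, 1) + Pick(Upper, 1) + Pick(Lower, 15);

            Console.WriteLine(passphrase);
            Console.WriteLine(password);
        }

        static string Pick(string alphabet, int count) =>
            new string(Enumerable.Range(0, count)
                                 .Select(_ => alphabet[RandomNumberGenerator.GetInt32(alphabet.Length)])
                                 .ToArray());
    }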

2020-11-13

Password Strength In Dollars

Purpose And Scope Of This Post


When discussing password strength, money-to-crack calculations are far better than time-to-crack calculations. I propose a money-to-crack model that applies to dedicated password cracking rigs as well as cloud computing, and I make some calculations using recent (2020-Nov) data.

This blog post is focused on:

  • Offline attacks (explained in Gentle Introduction)
  • Computer-generated passwords, not human-generated passwords (more info in Background: Guesses-To-Crack)

The cracking-cost tables are towards the bottom; feel free to check those out first if you want to see results rather than methodology and commentary.

Gentle Introduction

When someone creates a password, sometimes they are concerned about how resistant the password is to being guessed by an attacker.  For instance, a one-letter password for your bank account is probably unacceptably weak and a computer-generated 40-letter password is probably more than acceptably strong.  These two example passwords are so extreme in weakness/strength that we can make these judgments intuitively, but we will have to be more careful and systematic if we want to determine what sorts of passwords are acceptably strong and not excessively painful to type.

This blog post will focus on password strength in the context of offline attacks, which is where an attacker has hacked some service, obtained hashes of passwords, and can very quickly check password guesses on their own machines.  Offline attacks are a worst-case scenario for password strength, and it makes sense to let the worst case decide what we consider acceptable password strength.

There are a few ways to think about how strong a password is: guesses-to-crack (GTC, basically entropy), time-to-crack (TTC), and money-to-crack (MTC).  "Crack" simply means "correctly guess".  Each way builds on the previous way.  Discussions of password strength usually focus on GTC and TTC, and only rarely go into MTC.  I think that MTC is far superior to TTC for most discussions about passwords, and I propose my own way of calculating MTC and plug in numbers for hardware available in 2020-Nov.
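
As a rough illustration of the kind of arithmetic MTC involves (expected guesses multiplied by a dollar cost per guess), here is a minimal C# sketch.  Every number in it is a made-up placeholder, not one of the 2020-Nov figures used in the tables further down:

    using System;

    class MoneyToCrackSketch
    {
        static void Main()
        {
            // Placeholder figures for illustration only; the real tables use
            // measured 2020-Nov hash rates and prices.
            double guessesPerSecondPerGpu = 1e10; // assumed rate for one GPU on one hash algorithm
            double dollarsPerGpuHour = 1.0;       // assumed rental or amortized cost of one GPU-hour

            // Cost of a single guess.  In this simple arithmetic, the number of GPUs
            // only changes how long the attack takes, not what it costs.
            double dollarsPerGuess = dollarsPerGpuHour / (guessesPerSecondPerGpu * 3600);

            // Example: a computer-generated 14-character password drawn from
            // 62 characters (lower case + upper case + digits).
            double keyspace = Math.Pow(62, 14);
            double expectedGuesses = keyspace / 2; // on average, half the keyspace is searched

            double moneyToCrack = expectedGuesses * dollarsPerGuess;
            Console.WriteLine($"MTC with these placeholder numbers: ${moneyToCrack:N0}");
        }
    }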

Interestingly, there's only a 15x-38x difference in costs between realistic upper bounds (AWS cloud computing) and unrealistic lower bounds (arbitrary number of GPUs with no overhead).

2020-11-06

Pareto Principle Gives Extreme Results

Intro And Thesis

The Pareto Principle ("80/20 rule") is the observation that "for many outcomes roughly 80% of consequences come from 20% of the causes" (for example, 20% of Italian landowners owning 80% of Italian land), and it has been extended to claims like "20% of development time/effort can be used to deliver 80% of desired software functionality".  This can be written as a {0.2, 0.8} scenario.

Sometimes the principle can be applied recursively at multiple scales.  Perhaps 20% of 20% of landowners (4%) own 80% of 80% of land (64%).  In other words: {0.2, 0.8} at all scales implies {0.04, 0.64}.  Perhaps for some matters (land ownership, wealth), things can be that skewed at many scales.  For stuff like time to develop software functionality, I suspect the Pareto Principle can only be applied at a few carefully chosen scales.

Let's look at the following table to appreciate how recursive application yields some extreme scenarios.

  input proportion    output proportion
  3.8e-15             0.01
  4.1e-10             0.05
  6.1e-08             0.1
  9.1e-06             0.2
  1.7e-04             0.3
  0.001               0.4
  0.007               0.5
  0.025               0.6
  0.076               0.7
  0.200               0.8
  0.468               0.9
  0.691               0.95
  0.930               0.99
  0.964               0.995
  1.000               1
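
The table comes from a simple formula: if {0.2, 0.8} holds at every scale, then applying it k times gives an input proportion of 0.2^k and an output proportion of 0.8^k, which means input = output^(ln 0.2 / ln 0.8) ≈ output^7.2.  A short C# sketch reproduces the table (up to rounding):

    using System;

    class ParetoTable
    {
        static void Main()
        {
            // {0.2, 0.8} at every scale: input = 0.2^k, output = 0.8^k,
            // so input = output^(ln 0.2 / ln 0.8).
            double exponent = Math.Log(0.2) / Math.Log(0.8);   // about 7.21

            double[] outputs = { 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5,
                                 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.995, 1.0 };

            foreach (double output in outputs)
            {
                double input = Math.Pow(output, exponent);
                Console.WriteLine($"{input,-10:G2}{output}");
            }
        }
    }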

We see the famous {0.2, 0.8}, but we also see {0.007, 0.5}, which would imply that you can get 50% of desired software functionality with less than 1% of the effort required to get 100% functionality.  There are even more extreme results, like {0.001, 0.4}: one-thousandth the effort to get 40% of the benefit.  Putting these two scenarios together: you work some amount (0.001) to get to 40% functionality, you have to work an additional six times that amount to get to 50% functionality...and then you have to work an additional ~990 times that amount to get to 100% functionality.

My thesis is that the Pareto Principle leads to proportions that are surprisingly extreme, and thus we should be very hesitant to apply the principle beyond a single well-chosen scale.  There might be lots of naturally-sized tasks where ~20% of the effort gets you ~80% of the benefit, like how carefully you hang a curtain and how good it looks.  But I doubt that {0.2, 0.8} and {0.007, 0.5} both apply to hanging a curtain.

Another question arises: if you can get 80% of desired software functionality with 20% of the effort required to get 100% functionality, do we really believe that multiplying the amount of desired software functionality by 1.25 multiplies the required effort by 5 (coming from 1/0.8 and 1/0.2)?  Or imagine someone bloating the desired software functionality to 1.25x so that the actually desired 1x functionality will supposedly require only 20% of the time of the bloated 1.25x-functionality schedule.  I think these scenarios illustrate that the Pareto Principle is very unlikely to be applicable when you are considering different amounts of output to ask for.  Usually when you ask for more output, those outputs will contain easy parts and hard parts.  The only way for the Pareto Principle to hold as you increase desired functionality is if you are always adding the easiest parts of the easiest features; but humans never do that; humans ask to add complete-enough-to-be-useful features, not the easiest 20% of a feature.

2020-02-15

Royal Road To Async/Await


Scope and Purpose

This post will focus on C#'s async/await/Task stuff, as opposed to async/await for F#/JavaScript/Rust.

First, I will try to explain what the await operator does so that readers learn what is actually going on when you await something, and hopefully a bunch of async/await/Task stuff will start to make sense.  A lot of async/await resources don't tell you what is actually going on, so async/await still seems mysterious and full of obscure pitfalls/guidelines.  I want to help my readers take the "royal road" to async/await, getting that major epiphany as soon as possible.
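
As a tiny taste of the kind of code this post is about, here is a minimal sketch (not taken from the post's reading list; example.com stands in for any real URL) of an async method awaiting a Task:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AwaitSketch
    {
        static async Task Main()
        {
            Console.WriteLine("Before the await.");

            // If the task returned by DownloadAsync is already complete, execution
            // simply continues.  Otherwise, the rest of this method is registered
            // as a continuation, control returns to Main's caller, and the
            // continuation runs once the task completes.
            string content = await DownloadAsync("https://example.com/");

            Console.WriteLine($"Downloaded {content.Length} characters.");
        }

        static async Task<string> DownloadAsync(string url)
        {
            using var client = new HttpClient();
            return await client.GetStringAsync(url);
        }
    }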

Second, I will present an async/await/Task reading list that is selected and ordered for the benefit of a beginner, with some notes of my own.  The reading list doubles as my own reference of resources that were helpful for me, and as a place to review best practices and pitfalls.  This reading list is another "royal road" to fleshing out readers' understanding.

Note: I'm having trouble getting this blog platform to display less-than and greater-than symbols correctly, so please tell me if you suspect a formatting error.