# Sampling Functions and Generative Models

The term ‘probability distribution’ is actually pretty vague. A probability distribution is an abstract object that can be distinguished uniquely (i.e. from other probability distributions) by way of any of a bunch of concrete representations. Probability density or mass functions, moment generating functions, characteristic functions, cumulative distribution functions, random variables, and measures all reify a probability distribution as something tangible that can be worked with. Image measures in particular are sometimes called ‘distributions’, though they still just form a single possible reification of the underlying concept. In formal probability theory the term ‘law’ is often used to refer to the abstract object being characterized.

Different characterizations are useful when doing different kinds of work. Measures are useful when doing proofs. Probability densities and mass functions are useful for a whole whack of applications. Moment generating functions are useful for exercises in introductory mathematical statistics courses.

Sampling functions - procedures that produce samples from some target distribution - also characterize probability distributions uniquely. They are an excellent basis for introducing and transforming uncertainty in probabilistic programming languages. A sampling function takes as its sole input a stream of randomness, which is consumed and transformed to produce a sample from the distribution that it characterizes. The stream can be a lazy list or, similarly, a pseudo-random number generator.

Sampling functions are precursors to generative models: models that take both randomness and causal inputs as arguments and produce a possible effect. Generative models are the main currency of probabilistic programming; they specify a mapping between hypothesized causes and a probability distribution over effects. Sampling functions are the basis for handling all uncertainty in several PP implementations.

It’s useful to look at sampling functions and generative models to emphasize the distinction between the two. In Haskelly pseudocode, they have the following types:
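
(A sketch; the type names below are illustrative, not from any particular library.)

```haskell
-- a stream of randomness: say, a lazy list of uniform variates
type Randomness = [Double]

-- a sampling function consumes randomness and produces a sample
type SamplingFunction a = Randomness -> a

-- a generative model additionally takes causes as input
type GenerativeModel c a = c -> Randomness -> a
```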

It’s easy to see that a generative model, when provided with some causes, is itself a sampling function.

We can use a monad to abstract out the provisioning of randomness and make everything a little cleaner. Imagine ‘Observable’ to be a monad that handles the propagation of randomness in our functions; any type tagged with ‘Observable’ is a probability distribution over some other type (and maybe we would run it with a function called observe or sample). Using that, we can write the above as follows:
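
(Again pseudocode, with the hypothetical Observable monad doing the plumbing.)

```haskell
samplingFunction :: Observable a

generativeModel :: c -> Observable a
```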

Very clean. Here it’s immediately clear that the only difference between a sampling function and a model is the introduction of causes to the latter. A generative model contains some internal logic that manipulates external causes in some way; a sampling function does not.

You can see some in-progress development of this idea here and here.

# Property Testing in Ruby

Testing properties of Haskell functions with QuickCheck is easy and pretty enjoyable. It turns out Ruby has a QuickCheck-like property testing library called rantly.

Let’s test the ‘greed game’ Ruby koan. In the greed game, one rolls up to five dice and calculates a score according to the following rules:

• three ones is 1000 points
• three of the same non-one number is worth 100 times that number
• a one that is not a part of a set of three is worth 100 points
• a five that is not a part of a set of three is worth 50 points
• everything else is worth nothing

So, for example, a roll of 1, 1, 1, 5, 1 would yield 1150 points. A roll of 3, 4, 5, 3, 3 would yield 350 points.

The basic scoring mechanism can be implemented like so:
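
A sketch of one way to do it (score is the koan’s name for the function; the implementation details here are my own):

```ruby
# Score a roll of up to five dice according to the greed rules.
def score(dice)
  total = 0
  dice.group_by { |d| d }.each do |value, group|
    count = group.size
    # a set of three is worth 1000 for ones, 100 * value otherwise
    if count >= 3
      total += (value == 1 ? 1000 : value * 100)
      count -= 3
    end
    # leftover ones and fives score individually
    total += count * 100 if value == 1
    total += count * 50  if value == 5
  end
  total
end
```

For example, score([1, 1, 1, 5, 1]) evaluates to 1150 and score([3, 4, 5, 3, 3]) to 350, matching the rolls above.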

Property tests require generators to cook up requisite input data. Some generators we might be interested in, to start, are those to create valid dice rolls:

valid_roll describes the result of an individual die roll, while valid_number describes the number of dice that can be rolled in an input to score. range generates a value between its provided arguments, inclusive. Rantly comes equipped with a bunch of primitive generators and combinators: choose, array, sized, etc.

We can use those primitive generators to create other generators. In particular, a valid input to the score function should be a 0-to-5 length array in which each element is between 1 and 6 inclusive; that is, an array with length generated from valid_number and elements generated from valid_roll.

Below, I’ll create that composite generator in an RSpec describe block and then test a couple of properties of the score function:

Running this code will test each of those properties on 750 random inputs generated by the rolls generator.

One might also want to test how functions behave on invalid input. Let’s augment the original scoring function with some exception handling:
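
A sketch of the augmented function (the error messages are mine):

```ruby
# Score a roll, raising ArgumentError on invalid input.
def score(dice)
  raise ArgumentError, 'too many dice' if dice.count > 5
  dice.each do |d|
    raise ArgumentError, 'invalid roll value' unless (1..6).include?(d)
  end

  total = 0
  dice.group_by { |d| d }.each do |value, group|
    count = group.size
    if count >= 3
      total += (value == 1 ? 1000 : value * 100)
      count -= 3
    end
    total += count * 100 if value == 1
    total += count * 50  if value == 5
  end
  total
end
```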

and then add generators for invalid rolls:

then, with the addition of two new composite generators, we can test that the exception handling behaves as we expect:

As often happens, property testing tends to suss out weird corner cases in your code and help you understand it better. Just while doing this example I realized that ArgumentError wouldn’t necessarily be thrown for an invalid roll value if the number of input rolls was actually empty. Hence, the addition of if r.count > 0 to the last test.

Property testing also subsumes unit tests. If you use a static or relatively static generator, you’re effectively doing unit testing. You can see this in the cases of the invalid_roll and invalid_number generators, in which each is generating inputs from only a very small domain.

IMO familiarity with a QuickCheck-like property testing library is good to have. Rantly is not quite QuickCheck, but it’s still a joy to use.

# Notes on Another Guy’s Notes

I recently bought Ilya Grigorik’s High Performance Browser Networking, which is an excellent book. Also excellent is that Ilya wrote a great retrospective on his book-writing process.

• A ‘shitty first draft’ is the initial goal of most any writing. Just get to the keyboard and start mashing the keys.

• Consistency is key. Show up and get to work.

• Early feedback is invaluable.

• Writing is an excellent way to expose the initial sloppiness of one’s thinking.

I think these are all excellent insights, but the second and third ones really stand out to me.

Early and constant feedback is just really important. This is something that I’ve had to constantly remind myself of when working on largely-solo projects. Having others examine your work immediately gives you an idea of its promise. Are the other parties excited by it? Indifferent? Confused? Can they point out an area that you haven’t really understood all that well, or something important that you missed?

And above all else, consistency is sacrosanct. This idea is only getting reinforced with time.

# The Unreasonable Effectiveness of Habit

Exactly 41 days ago I started a project called 750 Words, which is simply a habit of writing a meagre 750 words every day. They can be written on anything; just pick a topic (or several topics) in your head and get to writing about it.

When I originally started, I thought that this would be a great way to work on blog posts, research papers, my dissertation, and so on. Not so much the case, I’ve found. After much internal debate as to its merits (the subject of at least one entry), the best use that I have found for 750 Words thus far has been a complete mind dump every morning over breakfast.

Initially I tried rigorously picking a topic and writing essays and technical entries or what have you, but this seemed to actually go against the spirit of the exercise. Nowadays I just open the browser and crank out whatever’s on my mind. It generally takes me about 15 minutes.

Why do I deem this to be a good use of my time? More than anything, I think, it has been by observing the results over time: I’ve sustained a streak now for 37 days straight, and quite enjoy waking up every day and putting another X on the calendar by virtue of writing another entry. I don’t want to stop, and indeed, don’t intend to. The main reward to me has been seeing a basic goal manifest as a string of X’s on a calendar; the little 750-word-minimum mind dumps (which constitute over 32,000 words now) are a bonus.

# Measures and Continuations

I’ve always been interested in measure theory. I’m not even sure why, exactly. Probably because it initially seemed so mysterious to me. As an undergrad, measure theory was this unknown, exotic key to truly understanding probability.

Well, sort of. It’s certainly the key to understanding formal probability, but it no longer seems all that exotic, nor really necessary for grokking what I’d call the true essence of probability. It’s pretty much real analysis with specialized attention paid to notions of factoring (independence) and ratios (conditioning). Nowadays I relate it more to accounting; not the most inspiring of subjects, but necessary if you want to make sure everything adds up the way you think it should.

# Basic EC2 Management With Ansible

EC2 is cool. The ability to dynamically spin up a whack of free-to-cheap server instances anywhere in the world at any time is.. well, pretty mean. Need to run a long computation job? Scale up a distributed system? Reduce latency to clients in a particular geographical region? YEAH WE CAN DO THAT.

The EC2 Management Console is a pretty great tool in and of itself. Well laid-out and very responsive. But for a hacker’s hacker, a great tool to manage EC2 instances (amongst other things) is Ansible, which provides a way to automate tasks over an arbitrary number of servers, concurrently.

With EC2 and Ansible you can rapidly find yourself controlling an amorphous, globally-distributed network of servers that are eager to do your bidding.

Again.. pretty mean.

# Managing Learning as a Side Project

I regularly find myself wanting to learn too many things at once. This might be justifiable in some sense; there’s an awful lot out there to learn. Often, skimming over some topic or other feels like enough to develop a sufficiently high-level model of what it is or how it works. Armed with that (dangerously small amount of) knowledge, however, the urge to pick up some other topic tends to arise.. and so the process repeats.

This is all well and good in order to survey what’s out there, but left unchecked, a survey of topics is all one might get. Dilettantism can be understandable, but it’s never desirable.

Some time ago, I decided to try restricting myself to learning only one particular topic for two weeks at a time, as a bit of a side project. Think of it as a ‘learning sprint’, if you will. The idea is that the time between iterations is sufficiently short to ensure that one can’t hunger too badly to switch to some other topic mid-sprint. At the same time, each iteration is lengthy enough to ensure a reasonable amount of immersion and depth.

I managed a single iteration, but due to travel and a lack of conviction about the whole thing, never started another. I think I’m going to start again, but with a little more intent this time.

Two weeks can be a long-ass time for some topics, though, so I believe I’ll work in one-to-two week commitments, depending on the subject.

To start, I’m going to choose 0MQ, a framework that I sort-of know and definitely love. We’ve used it in production on a previous app I worked on, and I’ve even contributed to the official Ruby bindings. But, I still have a lot to learn about it.

So let’s see how it goes.

# A New Blog

I’ve been feeling an itch to write more often, and this seems as good a place as any to do it. If anything here winds up being useful to anyone else, all the better.