
## Wednesday 29 October 2014

### Fractional data types

There's likely a good solution to this already, but it eludes me: why are there no fraction data types at the same basic level as int, char, and float?

A lot of statistical operations, like matrix inversions, are computationally very expensive. Part of the reason they take so long for large matrices is all the division involved. Compared to addition and multiplication, division of floating point numbers is hard for computers: it has to be done as a form of binary long division, which involves a subtraction and a multiplication for each bit of the result.

However, division of fractions is simply cross-multiplication, which is cheap.
Consider a double-length float on a 32-bit system. It has 64 bits: 52 store the actual number (the mantissa), 11 store the exponent (how large the number is), and 1 stores the plus-or-minus sign.

Modelled on this, a 64-bit fraction data type could have two 26-bit mantissae, 11 bits for the magnitude, and 1 for the sign.

Note that fractions are not unique representations of numbers. 18/12 is 6/4 is 3/2. We can exploit that to use only a single sign and magnitude for the fraction as a whole.

Conversion from fraction to double is a simple division, so we can't avoid division entirely, but we could do intermediate steps as fractions.
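As a sketch of the cross-multiplication point above, here is a toy Python fraction type. The `Frac` class and its method names are invented for illustration; fixed-width storage and overflow handling are ignored.

```python
# Toy sketch of fraction arithmetic without division: dividing fractions
# is a cross-multiplication. Reduction and overflow are ignored here.
class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def mul(self, other):
        return Frac(self.num * other.num, self.den * other.den)

    def div(self, other):
        # (a/b) / (c/d) = (a*d) / (b*c): two multiplications, no division
        return Frac(self.num * other.den, self.den * other.num)

x = Frac(3, 2)
y = Frac(5, 4)
q = x.div(y)
print(q.num, "/", q.den)  # 12 / 10, i.e. 6/5
```

Division costs two integer multiplications instead of a long-division loop; a real implementation would also need to renormalize now and then to keep the fixed-width fields from overflowing.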

Conversion from double to fraction is harder. It can be done in an arbitrary manner by setting the denominator to 1, but in the above system about half the mantissa's bits of precision are lost, and the whole denominator is wasted storing '1'.

Is there a way to convert a double to a fraction that minimizes precision loss? Can it be done cheaply enough to make conversion worthwhile? My uninformed hunch is that converting x to x^1.5 / x^0.5 would work, but square roots are still somewhat costly, and I have no idea if that approach is particularly good at preserving precision.
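One standard answer to the minimal-loss question - offered here as an editorial aside, not something from the original post - is continued-fraction approximation, which Python exposes as `Fraction.limit_denominator`. The 26-bit cap below is an assumed field width, chosen only for illustration.

```python
# Continued-fraction approximation: the closest fraction to a double,
# subject to a cap on the denominator. 2**26 - 1 stands in for an
# assumed 26-bit denominator field.
import math
from fractions import Fraction

x = math.pi
approx = Fraction(x).limit_denominator(2**26 - 1)
print(approx, "error:", abs(float(approx) - x))
```

Each step of the continued-fraction algorithm requires a division, so conversion isn't free, but the result is provably the closest fraction with a denominator under the cap.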

Any ideas?

## Sunday 26 October 2014

### SQL - An Introduction in 25 Minutes

Twice a year, the stats grad students from UBC and from SFU meet at the Harbour Centre downtown and have a one day seminar of student talks. Rather than present research, I gave a 25 minute introduction to SQL.

Specifically, it's a few slides on what SQL is, plus some example queries using data from Major League Baseball's PITCHf/x system via the pitchRx package in R, and a package called sqldf. sqldf interprets SQL queries on R data frames stored in local memory or in .csv files. This package is very handy for writing practice queries when you don't have access to an external database, and faster than setting one up on localhost through a WAMP server.

The example queries in the talk were a showcase of common clauses in a select query, and a simple join. Below are the PDF slides and the TeX code for the presentation if you wish to use it as a template. You won't be able to compile it without the images, so be sure to comment them out first.

SQL - An Introduction in 25 Minutes, PDF Slides, TeX Code
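For readers without R at hand, the same style of practice query can be run against an in-memory SQLite database from Python's standard library. The tables and columns below are invented stand-ins, not the real PITCHf/x schema.

```python
# A practice SELECT with a JOIN against an in-memory SQLite database --
# the Python analogue of writing sqldf practice queries in R.
# The pitchers/pitches tables and their columns are made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pitchers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE pitches (pitcher_id INTEGER, speed REAL);
    INSERT INTO pitchers VALUES (1, 'A'), (2, 'B');
    INSERT INTO pitches VALUES (1, 92.1), (1, 95.0), (2, 88.4);
""")
rows = con.execute("""
    SELECT p.name, COUNT(*) AS n, AVG(t.speed) AS avg_speed
    FROM pitches AS t
    JOIN pitchers AS p ON p.id = t.pitcher_id
    GROUP BY p.name
    ORDER BY avg_speed DESC
""").fetchall()
print(rows)
```

The query shows the same common clauses the talk covered: a join, a grouped aggregate, and an ORDER BY, with no external database required.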

## Saturday 25 October 2014

Playwright (3+)
Action
You may overpay for this card. Gain one coin token for each 1 you overpay.
+1 Card, +1 Coin Token

This is a foil to the Guilds card Masterpiece, which is a poor card on its own but allows you to overpay to put a Silver into your deck for each coin overpaid.

It's called Playwright because coin token cards traditionally are people, and it seems plausible for a playwright to produce one amazing work and then be only marginally valuable after. What I like about Masterpiece is that it's a viable buy for 7 coins, the 'deadzone', without being board-defining.

Playwright fixes the deadzone problem doubly: it gives you something worth buying for 7, and its coin tokens prevent coin problems in future turns or set up a megaturn.

Dig Site
Victory/Treasure (0*)
This card cannot be bought normally.
You may gain this card when you trash a victory card.
1 VP, +1 Money.

Museum
Victory/Action (0*)
You may gain this card when you trash a card worth 6 or more.
3 VP, +1 Action, +1 VP chip.

Dig Site is intended to give some purchasing power when you trash an Estate, or Overgrown Estate, or beef up Silk Road without much drawback. Museum is a late game vehicle to convert Golds into Victory Points, and it's not a dead card if it ends up in your hand.

Farmland combos brilliantly with either of these cards.

These cards push the bounds of the original game a bit, because there are usually at least ten non-basic cards that can be bought in a game. When there's a card with a special gain condition, like Prize or Madman, it's placed in the game as an attachment to an existing card (Tournament and Hermit, respectively). These could be attached to an... Archeologist card?

The creator of Dominion has said he's not interested in making more cards because the game is complex enough with 220+ cards across all the sets. However, I feel like there's still room for expansion in a single-player mode that's playable online, much like the special cards and conditions that exist only in Hearthstone's Naxxramas fights.

### The Value of Choice in Game Design

Choice has value, but that value is not always possible to quantify. This allows game mechanics to be recycled. Consider the following three cards in Magic: The Gathering.

1. Lightning Bolt - A spell to deal 3 damage to a creature or player once, and at any time.
2. Kird Ape - A creature with 2 attack and 3 defence, as long as a certain easily attainable condition is met, and worse otherwise.
3. Vexing Devil - A creature with 4 attack and 3 defence, but the opposing player may choose to take 4 damage immediately instead of having this creature enter play.

All three cards have the same requirements to be played (one red mana). They became tournament-legal in different years, but each is considered better than the typical card. In short, all three of these cards are highly interchangeable in a deck. So which one is the best?

Case I:

If the opposing player allows Vexing Devil to come into play, it has more attack than a Kird Ape would, so it's better than using a Kird Ape in that situation.

Case II:

If the opposing player does NOT allow Vexing Devil to enter play, then the opposing player takes 4 damage. This is more than the 3 damage that a Lightning Bolt would deal.

In either case, it seems like Vexing Devil performs better than one of the other two cards. So what makes these three cards fairly comparable? The opponent's choice does.

From the perspective of the player of Vexing Devil, they will always end up with the less favourable of the two cases. If either case were no better than the corresponding alternative card, that alternative should be used instead. So, to make Vexing Devil viable, both cases must be better than the alternatives.

From the opponent's perspective, they can choose to let the creature come into play only when they can deal with it in a way that is less costly than taking 4 damage.

In short, the value of choice must be priced into the relative strength of game elements. This is what makes the Queen in chess strictly better than either the Rook or the Bishop. The Queen can only make moves that either the Rook or the Bishop can do, but having the choice to do either is what gives her her power.

Consider one more card:

Forked Bolt – Deals 2 damage to target creature or player, OR 1 damage to each of 2 such targets.

This card is also considered above-par, and has the same mana requirements as the other three. It's arguably weaker than Lightning Bolt, but not by much. Why? The player of this card has the additional choice of splitting the damage.

### American Psychometric

Eight discrete bars tastefully separated from each other, presenting itself as a bar graph of distinct categories instead of a histogram of continuous values. Labelled axes showing popularity values for each car colour on a brushed steel background. My god, it even has the “other” category!

### The Research Potential of Twitch TV

What is Twitch?

Twitch is a live video streaming service marketed to video game and e-sports enthusiasts. It arose from a community of dedicated competitive head-to-head gamers and "speedrunners", who complete single-player games as fast as they can. Twitch, formerly Justin.tv, allows players to upload a live video feed of their screen, perhaps with some overlays like a timer or a camera of the player's face or hands. Twitch earns its revenue much the way traditional television does, with commercials that intersperse the play. For most players, who have 0-10 viewers at any given time, these show up for 15-30 seconds upon beginning to watch a channel. Proven high-tier players (proven in terms of viewership, not in-game achievement) can enter into profit-sharing agreements with Twitch, and may play additional commercials to generate revenue for themselves and for Twitch. Many games have predictable moments of downtime, such as the restart of a speedrun attempt or a long loading screen, when an advertisement doesn't detract from the experience. These high-tier players can also offer monthly subscriptions that give subscribed viewers small cosmetic benefits and the removal of ads.

Why Twitch is a big deal, and why it's different.

Recently Twitch was purchased by Amazon for nearly \$1 billion USD (or \$1000 million USD if you're using European notation). Aside from being the web's everything store, Amazon also sells a portfolio of large-scale computing services, so Twitch wasn't an out-of-character purchase. They have the hardware means to support the growing operation of receiving, compressing (often into multiple formats for viewers at different bandwidth levels), and sending video streams from thousands of players to, at the moment, roughly half a million viewers.

That is a LOT of bandwidth, and the senders and receivers could be anywhere and could change at any time. Logistically, that makes what Twitch does a lot more impressive than the feat of providing traditional television to 500,000 simultaneous viewers. With traditional television, a few hundred channel streams can be sent to a local station, which copies those streams as needed for the short distance from the local station to the viewer; bandwidth on the main lines does not increase as the number of viewers increases. Youtube does what it can to get similar cost advantages. Youtube is a one-to-many system built around a central repository of pre-uploaded videos. It has the advantage of knowing which videos are popular or becoming popular, and can use that to reduce its bandwidth costs by storing copies of popular video files with local internet service providers. I assume Netflix has a similar arrangement. Even with live streaming of large events, mainline bandwidth can be saved by branching if demand can be predicted, such as with playoff hockey games and Olympic events - in that order.

Twitch, however, with its many autonomous content providers and dispersed viewers, cannot predict where enough viewers of a given channel will be to take advantage of a local provider the way traditional television can. It also can't store its main product, which is live, in a central repository. In short, that massive amount of bandwidth has to go through the internet in an ad-hoc fashion that must make the per-viewer costs much higher than competing entertainment. Now that Amazon has put colossal gobs of money behind Twitch, it surely has some ideas to reduce these costs. Predictability may have made Youtube more efficient, but what could be predictable about video game streamers?

"I'm tired of dating responsible, burly, musical chef-lumberjacks. What would really seduce me is a lecture on video compression and the power law." - Nobody. Ever. :(

The popularity of games follows the power law. Lots of things follow the power law. You may have heard of it as the 80-20 rule, as in "80% of alcohol is consumed by 20% of the drinkers". For television it would be "80% of the revenue comes from 20% of the shows". Sometimes it's called the 90-10 rule for similar reasons (as I've heard it for the relative frequency of words in a language), but the basic principle is the same: there are a few things (channels, games, drunks) that far surpass all the others.

For Twitch, this holds for games. It's almost 2 am on a weeknight in Vancouver, Canada as I write this, which is a lull for Twitch. Three games - DOTA 2, League of Legends, and Hearthstone: Heroes of Warcraft - have 63,000, 40,000, and 33,000 viewers respectively. There are three games with 5,000-10,000 viewers at present, twelve games running between 1,000 and 5,000 viewers, fifty games between 100 and 1,000 viewers, and hundreds of games with 1-99 people watching. The viewership of channels within a game also follows the power law: 70% of Hearthstone's 33,000 viewers are watching a single channel, 15% are watching the next two most popular channels, and so on.
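As a sketch of what numbers like these look like under a power law - the exponent below is an assumed value for illustration, not anything fitted to Twitch data:

```python
# The 80-20 pattern under an assumed Zipf (power-law) distribution:
# viewership of the k-th most popular game is proportional to 1/k**s.
# The exponent s = 1.2 is an illustrative assumption, not a fitted value.
def zipf_shares(n_games=1000, s=1.2):
    weights = [1 / k**s for k in range(1, n_games + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares()
top_20pct = sum(shares[: len(shares) // 5])
print(f"Top 20% of games capture {top_20pct:.0%} of viewers")
```

Under these assumptions the top fifth of games captures the overwhelming majority of viewers, which is the shape the Hearthstone channel numbers above suggest.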

Why would anyone care that viewers (and therefore, revenues and costs) follow the power law? Because it means that improving the efficiency of the streaming of only a handful of games can go a long way towards improving the efficiency of all the streaming that happens.

By improving efficiency, I'm referring specifically to reducing the number of bits of information that have to be routed across the internet to deliver a stream to a single viewer at a fixed quality. The most common way of doing this is with video compression codecs*. To upload and download something that's essentially thirty pictures per second, digital videos typically use codecs, which are like books of shorthand shared between the sender and the receiver. In text, you can think of the acronyms**, jargon, and in-jokes that you know (BC, GLM, Old Gregg) as your personal codec, because you can use them to communicate much more quickly with other people who know the same acronyms, et cetera.*** Computers do similar things with streaming videos: they exploit common knowledge to produce a smooth video without. having. to. send. the. entire. picture. for. every. frame. Among other things, codecs exploit the fact that most frames are similar to the ones just before them; most frame-to-frame changes are movement or animation. Even cuts within a scene share a colour palette, and there are usually transitions between scenes. This is why, when things get really fast-paced in a video or you're skipping around it wildly, the picture goes a little crazy and you'll see ghosts of previous things or strange colours and blocks.
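The "don't resend what hasn't changed" idea can be sketched in a few lines of Python. Real codecs are vastly more sophisticated, so treat this as a toy model only:

```python
# Toy inter-frame compression: instead of resending every pixel, send only
# the tiles that changed since the previous frame.
def diff_tiles(prev, curr, tile=2):
    """Return {(row, col): tile_pixels} for tiles that changed between frames.

    Frames are lists of lists of pixel values; tile is the tile edge length.
    """
    changed = {}
    for r in range(0, len(curr), tile):
        for c in range(0, len(curr[0]), tile):
            block = [row[c:c + tile] for row in curr[r:r + tile]]
            prev_block = [row[c:c + tile] for row in prev[r:r + tile]]
            if block != prev_block:
                changed[(r, c)] = block
    return changed

# Two 4x4 "frames" that differ only in one corner:
frame1 = [[0] * 4 for _ in range(4)]
frame2 = [row[:] for row in frame1]
frame2[0][0] = 9
updates = diff_tiles(frame1, frame2)
print(len(updates), "of 4 tiles need to be sent")  # 1 of 4
```

When most of the screen is static, as in the card game screenshots below, almost every tile falls into the "no change" bucket and costs nearly nothing to transmit.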

Twitch already uses codecs, but I imagine the ones it currently uses are designed for general video, or at best for video games in general. However, we already established that Twitch is primarily used to stream a handful of games. Each of these games has its own patterns that could be used to make codecs that work better for that specific game.

Here are two typical screen captures for a Hearthstone stream, taken half an hour apart. (From Hafu's channel: http://www.twitch.tv/itshafu Screencaps used without permission.)

This is a card game, so there are already a lot of static elements that a codec can use. Most of the outer rim is the same, save for a shading change, so a codec need only send "no change in this area" each frame. Likewise for most of the centre board space. Several interface elements have changed little between the two pictures and have minimal changes from frame to frame. Aside from the shading change and the movement in the live camera, the biggest changes in the play arena are the decorations in the corners. You can't tell from the static images, but the ruby in the moai statue's eye in the left screen glints regularly, and the water in the waterfall burbles down in an animated loop. Likewise, the spiky zeppelin in the right image floats around in a fixed pattern, and the smoke from the hut slowly billows.

Tor Norretranders would call this "exformation".

If the codec were designed specifically around Hearthstone, it could recognize decoration elements like the glinting ruby and the billowing smoke. Then, instead of Twitch's servers having to send the slight changes to the ruby and the smoke over time, they could send a much simpler "carry on" message, as if that part of the image were static. The receiving end, knowing it had a ruby or some smoke, could fill in the animation in the video without having it described explicitly by the server. Since a large portion of the viewers of this stream have a copy of Hearthstone and play it themselves, the codec could even draw upon the art assets of the game to create the animations semi-autonomously.

Other object recognition could be done to leverage the sort of repetition that isn't found in general video but is found in video games. The dark gems in the lower right corner of each image are repeated. With access to the art assets, Twitch could send "draw four [dark gems] here" instead of the much longer "draw the following thousands of blue and black pixels". Without the assets, a more general signal could be sent to draw the pixels for one gem, followed by a simple repeat command for the next three.
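That "draw one gem, then repeat" signal is essentially run-length encoding over tiles. A toy sketch, with invented tile labels standing in for recognized art assets:

```python
# Run-length encoding over a row of screen tiles: identical adjacent tiles
# collapse into (tile, count) pairs, so four repeated gems cost about as
# much to send as one.
def run_length(tiles):
    runs = []
    for t in tiles:
        if runs and runs[-1][0] == t:
            runs[-1][1] += 1
        else:
            runs.append([t, 1])
    return runs

row = ["gem", "gem", "gem", "gem", "empty", "gem"]
print(run_length(row))  # [['gem', 4], ['empty', 1], ['gem', 1]]
```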

Finally, object recognition could be used as a graphical form of automobile amputation autocomplete. See that "Soul of the Forest" card? If you knew every card in the game, you could cover up the bottom 85% of that card and still recognize it. A server at Twitch, streaming this to the 2,400 viewers this channel had, could save a lot of effort by recognizing that card and telling the viewers with art assets to draw it in that position, rather than describing the card pixel by pixel to every viewer. It helps greatly that cards have many graphical characteristics in common, like the gem with the number in the upper left, which a server could use to recognize that the object in that portion of the screen is a card, where it should look for the rest of the card, and what else to look for.

Streaming is bandwidth intensive, and bandwidth has an electricity cost and a carbon footprint. Amazon could save a lot of money, provide a more reliable product, and offset carbon at no extra cost if it found some ways like these to take advantage of the predictable content that people are streaming on Twitch.

My conjecture is that Amazon is already working on such technology, which I'm calling Application Specific Video Codecs for now. But just in case they're not, Fabian, you know how to find me. ;)

To clarify, this is about more than just a codec for gaming, that's just the inspiration. Imagine a means of video streaming that's halfway between pre-rendering and streaming. It could also be used for sports programs with consistent overlay formats, and shows with fancy transitions that get used a lot (and typically are hard to compress because they involve large movements).

It also seems like something a third party would develop and then sell the tailoring service to various streaming systems.

* I apologize for my inexact description of video compression and codecs. This is a blog post, not a textbook, and I am not yet an expert in this particular field.
** Acronyms and initialisms. Insufferable pedant.
*** et cetera. Something you should never end a sentence with. Also see "preposition".

## Wednesday 1 October 2014

### The Order of Data Contains Information, But How Much?

For some sampling designs, especially adaptive ones like those in Thompson and Seber (1996), the order of the values in the sample matters. Statistics derived from the tuple of observed values (3,10,4,9) may differ from statistics derived from the tuple (9,3,10,4). The round parentheses imply the order in which these values were observed.

In other cases, like simple random sampling, only the observed values matter, not their order. In simple random sampling, the samples {3,10,4,9} and {9,3,10,4} are the same. The curly braces imply set notation, where the order doesn't matter.

So there is information embedded in the order of the data, but how much?
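One back-of-envelope bound, offered as an editorial aside rather than an answer from the post: a sample of n distinct values can appear in n! possible orders, so the order can carry at most log2(n!) bits.

```python
# Upper bound on the information in the order of a sample: n distinct
# observations can appear in n! orders, so the order carries at most
# log2(n!) bits (less if the design makes some orders more likely).
import math

def max_order_bits(n):
    return math.log2(math.factorial(n))

print(f"{max_order_bits(4):.2f} bits")  # the 4-value tuples above: log2(24)
```

The actual information content is lower whenever some orderings are more probable than others, which is exactly what an adaptive design induces.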