
Wednesday 31 December 2014

GDA-PK, a cleaner powerplay skill measure for NHL hockey

In the Vancouver Canucks' latest game, a 3-1 win against the Anaheim Ducks on December 29, a paradox happened in the first period:

At the 13:31 mark, Vancouver got a 2-minute minor penalty
At the 14:26 mark, Anaheim got a 2-minute minor penalty
At the 15:18 mark, Vancouver got another 2-minute minor penalty.
All penalties ran their full course without being converted into goals.

So, for
13:31 - 14:25, Anaheim had a 5-4 powerplay (55 seconds)
15:18 - 15:30, Anaheim had a 4-3 powerplay (13 seconds)
16:26 - 17:17, Anaheim had a 5-4 powerplay (52 seconds).

Anaheim enjoyed a total of two minutes of powerplay advantage, which makes sense because Vancouver received one more penalty. However, for the sake of measuring Vancouver's ability to kill penalties or Anaheim's ability to use them, this counts as three distinct powerplays.

To me, this seems counter-intuitive for multiple reasons.
- Anaheim somehow managed to have more powerplays than penalties given in their favour.
- Most powerplays are two minutes long, but one of these was only 13 seconds.
- Had it just been Vancouver's two penalties, this period would be recorded as one long powerplay from 13:31 to 17:17. So by taking a penalty of its own, Anaheim appears to have gained two extra powerplays.
- Had it just been Vancouver's two penalties and the first had ended in a goal, it would be recorded as two powerplays, which makes it look like Anaheim earned an extra powerplay by scoring a goal.



My general objections to the counting of powerplays, and to the powerplay-based metrics PK% and PP%*, include:

- Two-player advantages, although rare, are treated the same as one-player advantages.
- Powerplays from 5-minute major penalties are treated the same as 2-minute minors, even though minor (and double-minor) penalties end when the team with the player advantage scores.
- Shorthanded goals are ignored.
- The outputs of the statistics, like 82% PK and 24% PP, aren't intuitive to the viewer. Comparing teams' PK% and PP% can tell us that Anaheim is very good on the powerplay and that Vancouver is very good shorthanded relative to other teams, but it doesn't give the viewer a good idea of the chances of a powerplay goal.


A better solution already exists: GAA*, as in Goals Against Average, which is normalized by time. I propose adapting GAA into two alternate measures.

GDA-PK: Goal difference average - penalty killing.
GDA-PP: Goal difference average - power play.

where

(GDA-PK) = (Shorthanded goals scored - Shorthanded goals against) / (minutes shorthanded) 

(GDA-PP) = (Powerplay goals scored - Powerplay goals against) / (minutes on the powerplay)
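For concreteness, here is a minimal sketch of the calculation in R. The function is just the rate defined above; the season totals are invented for illustration, exactly like the figures further below.

# A minimal sketch of the GDA calculation; all numbers are made up for illustration.
# goals_for / goals_against are goals scored / allowed in the relevant situation,
# and minutes is the total time spent in that situation.
gda <- function(goals_for, goals_against, minutes) {
  (goals_for - goals_against) / minutes
}

# The Vancouver-Anaheim example: no goals either way over 2 shorthanded minutes.
gda(0, 0, 2)                                             #  0.000 goals/min

# Invented season totals, for illustration only:
gda(goals_for = 4,  goals_against = 40, minutes = 350)   # GDA-PK of about -0.103 goals/min
gda(goals_for = 45, goals_against = 5,  minutes = 290)   # GDA-PP of about +0.138 goals/min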

- Going through the Vancouver-Anaheim anomaly again, we would record 0 goals against Vancouver in 2 total minutes. The fact that those 120 seconds are interrupted twice doesn't factor into the computation. 

- If the powerplay had been cut short by a goal, less powerplay time and 1 goal would be recorded.

- If there had only been the two Vancouver penalties, more powerplay time would have been recorded and the two-player advantage time could be used for something else, such as GDA-2PP.

- The longer durations of five-minute major and four-minute double-minor penalties are reflected fairly in GDA-PK and GDA-PP.

- Multiple goals during a major penalty do not result in an absurd measure such as a PP% of more than 100%.

- Shorthanded goals are included and simply treated as goals in the opposite direction.

The results that come out of these metrics may look like this (THESE HAVE NOT YET BEEN CALCULATED; PLEASE DO NOT QUOTE THEM AS ACTUAL FIGURES):

----------------------
Vancouver Lumbermen**
- 0.103 Goals/min PK ( 2nd in NHL) 

Anaheim Mighty Mallards**
+ 0.138 Goals/min PP ( 4th in NHL)
----------------------

Numbers-keen viewers can get a sense of the average goals any given penalty will cost, and these measures provide the same sort of comparative ranking as PP% and PK%, but with less noise from special cases. As an added bonus, a fair comparison to even-strength play can be made with a similar calculation.

Finally, the fact that some powerplay intervals are cut short by a goal being scored is a non-issue; it has already been addressed (Bartholomew 1957 for the math, Mullet 1977 for the application)***.

Your thoughts? Disagreements are welcome in the comments.

========================

* Definitions:
PK% is short for Penalty-Kill Percent. It is a measure of defensive ability in shorthanded situations, calculated as ((Number of shorthanded situations - Goals against when shorthanded) / Number of shorthanded situations) X 100%.

PP% is short for Power Play Percent, a measure of offensive ability in player-advantaged situations. It is calculated as (Goals scored when player-advantaged / Number of player-advantaged situations) X 100%.

GAA, short for Goals Against Average, is a metric used to measure goalies' and teams' defensive abilities. It is calculated as (Goals against / Hours of play).


** The names of the teams have been changed to emphasize that these numbers are for demonstration only. A lesson to novice statisticians: numbers you throw out, however casually or verbally, have a habit of being quoted as serious analyses. I'm trying my best to avoid that while still giving an example.


*** References: 
Bartholomew, D. J. (1957), "A Problem in Life Testing," Journal of the American Statistical Association, 52, 350-355.

Mullet, G. M. (1977), "Simeon Poisson and the National Hockey League," The American Statistician, 31, 8-12.

Thursday 18 December 2014

New Kudzu Material - The Halls of Lorsem

Here is an update to Scrabble Dungeon, which I'm calling Kudzu for the time being to give it a better search niche, and to avoid future lawsuits.

Google Drive link to 2014-12-17 Version

There are only typo fixes to the material that was in the 2014-12-05 version, but a lot of new material has been added.

Specifically, a new six-room dungeon called "The Halls of Lorsem" and eight new Relics to draw from a random deck.

To avoid this becoming a single unwieldy document, new material will likely be given as separate modules while rule changes will follow future versions of this document.

Sunday 14 December 2014

Examples of the Play to Donate model

It's December, so in the spirit of giving, I've been looking for ways to make better use of my phone. Specifically, uses of the Play to Donate business model, because it would be motivation to continue developing the Scrabble-style dungeon crawler (see previous post).

There have been a few of these in the last 10 years, but they never seem to take off. In the Play Store for Android, I found three apps:

Give a Heart, which is the app port of the website of the same name. They have a simple catch the falling objects game, which occasionally rewards players with donation hearts. Donation hearts can be given to your choice of many charities, and they translate to 10 cents per heart.

Give a Heart has almost no activity, which is probably good, because anyone with skill could produce more donations than the ad revenue being generated. The mobile port of the game is buggy, and playing it is truly an act of charity. I do like that skill increases the donation potential, although this is a tricky thing to pull off from a design standpoint. A pity, really; with a better game and a more active or aggressive revenue stream, this could go somewhere.

ALS Ice Bucket Challenge, by Fortee Too Games, is a good foil. This is another catch-the-falling-objects game, but it's native to mobile and runs much better. In this game, skill is rewarded by unlocking videos of the ice bucket challenge. There are full-screen ads every three games, which is about one every 90 seconds.

42% of the ad revenue goes to ALS research, but there's no way to see your personal contributions. Also, being good at the game means seeing fewer ads, so if you're playing out of charity, it is tempting to throw games intentionally. Finally, the game is only amusing for ten minutes, and even less once you realize the video rewards are YouTube links.

Swagbucks is a paid-to-surf system with an option to donate earnings. It's mostly on the traditional web, but it has a mobile search widget that functions similarly.

Swagbucks has games, licensed or commissioned from third parties, but the large majority of player earnings come from other activities like watching videos and searching the web.

In short, for existing systems, either the games are ineffective or the donation potential is. Disappointing really.

Friday 5 December 2014

Scrabble Dungeon - The Cavern

Scrabble Dungeon is a crossword-style game for one player. Turn for turn, it plays similarly to Scrabble, and it has RPG elements like resource management and a modular presentation like Dungeons and Dragons.

This is a rough draft of the first module, in tabletop format.


Scrabble Dungeon - The Cavern

I have ambitions to make several of these modules, because I think there are a lot of fun mechanics and maps that can be slotted into the game without much difficulty. I'd also like to explore the possibility of making it an Android app in the future, potentially with ads and under a play-to-donate model.

If any of you could have a look at this, especially the rulebook (2 main pages + 1 appendix page), I'd appreciate it greatly. Is the rulebook simple enough? Does it leave unanswered questions? Does it need a turn-by-turn example? Making it easy to digest is my current concern.

Please get back to me in the comments or at jackd@sfu.ca

Monday 17 November 2014

First look at the statistical thesaurus


Part of my work at the Institute for the Study of Teaching and Learning in the Disciplines, or ISTLD for short, is to develop a handbook for statistical design and analysis.

The clients of the ISTLD are Simon Fraser University faculty across all disciplines who are looking to incorporate new teaching ideas and methods into their courses. This handbook is intended for faculty and grad student research assistants with little statistical background. As such, the emphasis is on simplicity rather than accuracy.

One wall I've run into in making this document as accessible as possible is terminology. Different fields use different terms for the same statistical ideas and methods. There's also a lot of shorthand that's used, like "correlation" for "Pearson correlation coefficient".

Why is spatial prediction referred to as 'kriging'? Why is spatial covariance described in terms of the 'sill' and the 'nugget'? Because those are the terms that the miners and geologists came up with when they developed the method to predict mineral abundance in areas.

Why are explanatory variables still called 'independent variables' in the social sciences even though it causes other mathematical ambiguities? Because they're trying not to imply a causal relationship by using terms like 'explain' and 'response'.

For the sake of general-audience readability, field-specific language will be kept to a minimum, and shortenings will be used whenever a default option is established, as it is with correlation. However, the alternate terms and shortenings will be collected and explained in a statistical thesaurus to be included with the handbook.

Here are three pages from the rough draft of that thesaurus. Since such a thesaurus, to my knowledge, has not been published before, I would very much appreciate your input on its readability, or what terms should be included.


https://docs.google.com/document/d/15IWtH9a_bpfhu2cvvtBOCL6FCCaH7zyPwDDsLEXz7d4/edit?usp=sharing

Thanks for reading!
- Jack

Tuesday 4 November 2014

Sabremetrics, A.K.A. applied metagaming.

In baseball, sabremetrics started a push for batters to hold off for more and better pitches. By 2011 or 2012, strikeout rates were higher than they had been in a century, probably in part because of the reduced hitting, but also because pitchers were throwing more pitches, knowing they would be swung at less often. How long until batters adapt and capitalize on the higher-quality pitches they are receiving to increase hit rates and home runs instead of waiting for many pitches?

This is an example of perfect imbalance, as explained in this video by Extra Credits. Players of strategic games that involve a lot of pre-game decisions often refer to these decisions, and the information leading to them, as "the metagame". In MOBA games like League of Legends, or fighting games like Smash Bros. or Soulcalibur, this amounts to selecting one's avatar character and the bonuses they will bring into the start of a match. In collectible and living card games like Magic: The Gathering and Android: Netrunner, the metagame is one's deck-building process and the popular types of decks among one's opponents.

In sports, the pre-game decisions, the metagame, include whom to hire and how to train. In sports where game-to-game fatigue is a factor, such as hockey with goalies or baseball with pitchers, the meta also involves choosing who will start the game, and how well that choice matches up against the opposing goalie or pitcher.

The general idea of metagaming is to make choices that counter the likely choices of opponents. Against a learning opponent, this requires constant adaptation.

Consider fashion, where to win is to receive attention: admiration as a consumer and sales as a designer. Fashion is played by wearing or creating an outfit that stands out from that of the existing crowd.

Consider the Red Queen hypothesis, a biological principle whereby a species succeeds by adapting to its prey, predators, and competitors. Specifically, the Red Queen hypothesis is that since all species are doing this, the best a species can hope for is to keep up in evolution, never to get ahead.

Sabremetrics got so big as a statistical toolset not simply because it was interesting or novel, but because it was actionable. It provided information that could be converted into decisions, instead of just being elegant or producing a pretty graph.

If hitting in baseball is due for a comeback, might it make sense to load up your farm team with sluggers and curveball pitchers now? Do you reduce your emphasis on stolen bases in anticipation of fewer pitches per at bat?

Would a change from walks and strikeouts to hitting favour franchises that traditionally rely on high scores like Texas, or ones that rely on many small hits like Kansas City?

As always, comments welcome, including those stating I'm wrong about everything.

Monday 3 November 2014

Teach the Controversy

What if we started including short (500 words, 2 pages double-spaced) essay assignments in the stats curriculum? Students could choose from various controversial topics in stats. They could be referred to a small amount of literature, such as a paper or a couple of book chapters on their chosen issue.
 
Essay questions could include:
- Take a side, or compare Bayesian vs Frequentist methods.
- Take a side, or compare parametric vs non parametric methods.
- ... or simulations vs real data.
- How important is parsimony vs accuracy?
- How valuable is null hypothesis testing, and what does it mask?
- Should p-values be the gold standard?
- How feasible are causality studies?
- Is multiple testing a valid remedy?
- Is imputation a valid remedy?
- Is the flexibility of ultra-complex methods like neural networks worth the fact that they can't be explained or described reasonably?
- Why do we use the mean as the basis for everything instead of the median?
 
In service courses, the essays could be more about social issues that statistics illuminate rather than the methodology itself.
 
- Discuss what multiple regression reveals about the wage gap between sexes. 
- Discuss publication bias and how asymmetry checks like funnel plots can expose it.
- Discuss the pros and cons of bar graphs and pie graphs.
- Consider [attached infographic]. Describe what the author is trying to convey. Describe, as best you can from this graphic, what is really going on. How could the graphic convey this more clearly?
 
There are a lot of articles in Chance magazine that a social studies undergrad could add their own perspective to while learning to incorporate statistical arguments into their own essays. Math-background students would benefit from the writing practice and wider perspective, and writing students would have an opportunity to use their strengths in an otherwise intimidating class.
 
The essay prompts above can be answered at multiple levels of depth, allowing them to be slotted into different courses. Finally, this gives the instructor license to remove other written questions from the remaining assignments, which can offset the change in marking load. The necessary material for writing would come at the cost of either some methods or some mathematical depth, but given the challenges in modern statistics, being able to consider questions like those above is worth that cost.

Wednesday 29 October 2014

Fractional data types

There's likely a good solution to this already, but it eludes me: Why are there no fraction data types at the same basic level as int, char, and float?

A lot of statistical operations, like matrix inversions, are computationally very expensive. Part of the reason they take so long to perform for large matrices is all the division of numbers that's going on. Compared to addition and multiplication, division of floating-point numbers is very hard for computers. It has to be done in a form of binary long division, which involves a subtraction and multiplication for each bit.

However, division of fractions is simply cross-multiplication, which is cheap.
Consider a double-length float, so named because it is twice the size of a 32-bit single. It has 64 bits: 52 that store the actual number (the mantissa), 11 that store how large the number is (the exponent), and 1 for the plus-or-minus sign.

Modelled on this, a 64-bit fraction data type could have two 26-bit mantissae, 11 bits for the magnitude, and 1 for the sign.

Note that fractions are not unique representations of numbers. 18/12 is 6/4 is 3/2. We can exploit that to use only a single sign and magnitude for the fraction as a whole.
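As a rough sketch in R (the pair representation and function names are mine, purely for illustration; a real version would live at the hardware or compiler level), fraction arithmetic where division never divides looks like this:

# A fraction is stored as c(numerator, denominator); everything here is illustrative.
frac_mul <- function(a, b) c(a[1] * b[1], a[2] * b[2])
frac_div <- function(a, b) c(a[1] * b[2], a[2] * b[1])   # cross-multiplication: no division
frac_add <- function(a, b) c(a[1] * b[2] + b[1] * a[2], a[2] * b[2])
frac_to_double <- function(a) a[1] / a[2]                # the one unavoidable division

x <- c(3, 2)     # 3/2
y <- c(18, 12)   # also 3/2, a different representation of the same number
frac_to_double(frac_div(x, y))   # 1; the division step used only two multiplications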

Conversion from fraction to double is a simple division, so we can't avoid division entirely, but we could do intermediate steps as fractions.

Conversion from double to fraction is harder. It can be done in an arbitrary manner by setting the denominator to 1, but in the above system, 26 bits of precision are lost, and the whole denominator is wasted storing '1'.

Is there a way to convert a double to a fraction that minimizes loss? Can it be determined in a way that's cheap enough to make conversion worthwhile? My uninformed hunch is that converting x to x^1.5 / x^0.5 would work, but square roots are still a bit costly, and I have no idea if it's particularly good at preserving precision.
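One candidate approach, offered as a sketch and an assumption rather than a known answer, is to walk down the continued-fraction expansion of the double and stop at the last convergent whose denominator still fits in the available bits:

# Truncated continued-fraction conversion (a sketch, not a claimed standard method).
# Returns c(numerator, denominator) with the denominator capped at max_den.
double_to_frac <- function(x, max_den = 2^26 - 1) {
  p0 <- 0; q0 <- 1   # convergent from two steps back
  p1 <- 1; q1 <- 0   # convergent from one step back
  r <- x
  repeat {
    a  <- floor(r)
    p2 <- a * p1 + p0
    q2 <- a * q1 + q0
    if (q2 > max_den) break     # the next convergent no longer fits
    p0 <- p1; q0 <- q1; p1 <- p2; q1 <- q2
    if (r == a) break           # x is represented exactly
    r <- 1 / (r - a)
  }
  c(p1, q1)
}

double_to_frac(pi)   # a numerator/denominator pair extremely close to pi

Whether this is cheap enough is exactly the open question: each step costs a floating-point division, so it only pays off if conversions are rare relative to the fraction arithmetic done between them.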

Any ideas?

Sunday 26 October 2014

SQL - An Introduction in 25 Minutes

Twice a year, the stats grad students from UBC and from SFU meet at the Harbour Centre downtown and have a one-day seminar of student talks. Rather than present research, I gave a 25-minute introduction to SQL.

Specifically, it's a few slides on what SQL is, and some example queries using Major League Baseball's PITCHf/x data via the pitchRx package in R, along with a package called sqldf. sqldf interprets queries on R data frames that are stored in local memory or in .csv files. This package is very handy for writing practice queries when you don't have access to an external database, and it's faster than setting up one on localhost through a WAMP server.
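For readers who haven't seen sqldf in action, here is a minimal sketch of the kind of query it handles; the toy data frames and column names below are invented stand-ins, not the actual queries from the slides.

# install.packages("sqldf")   # if needed
library(sqldf)

# Toy data frames standing in for PITCHf/x tables; names invented for illustration.
pitches <- data.frame(pitcher = c("A", "A", "B", "B", "B"),
                      speed   = c(92, 95, 88, 90, 91))
rosters <- data.frame(pitcher   = c("A", "B"),
                      full_name = c("Pitcher Anderson", "Pitcher Brown"))

# A select with common clauses, plus a simple join, run directly on the data frames:
sqldf("SELECT r.full_name, COUNT(*) AS n_pitches, AVG(p.speed) AS avg_speed
       FROM pitches p
       JOIN rosters r ON p.pitcher = r.pitcher
       GROUP BY r.full_name
       ORDER BY avg_speed DESC")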

The example queries in the talk were a showcase of common clauses in a select query, and a simple join. Below are the PDF slides and the TeX code for the presentation if you wish to use it as a template. You won't be able to compile it without the images, so be sure to comment them out first.

SQL - An Introduction in 25 Minutes, PDF Slides, TeX Code

Saturday 25 October 2014

Some fan-made Dominion cards.

Playwright (3+)
Action
You may overpay for this card. Gain one coin token for each 1 you overpay.
+1 Card, +1 Coin Token

This is a foil to the Guilds card Masterpiece, which is a poor card on its own, but allows you to overpay to put a Silver into your deck for each amount overpaid. 

It's called Playwright because coin token cards traditionally are people, and it seems plausible for a playwright to produce one amazing work and then be only marginally valuable after. What I like about Masterpiece is that it's a viable buy for 7 coins, the 'deadzone', without being board-defining.

Playwright fixes the deadzone problem doubly: it gives you something worth buying for 7, and it gives you enough coin tokens to prevent issues in the future or to set up for a megaturn.



Dig Site
Victory/Treasure (0*)
This card cannot be bought normally.
You may gain this card when you trash a victory card.
1 VP, +1 Money.


Museum
Victory/Action (0*)
You may gain this card when you trash a card worth 6 or more.
3 VP, +1 Action, +1 VP chip.


Dig Site is intended to give some purchasing power when you trash an Estate or Overgrown Estate, or to beef up Silk Road without much drawback. Museum is a late-game vehicle to convert Golds into Victory Points, and it's not a dead card if it ends up in your hand.

Farmland combos brilliantly with either of these cards.

These cards push the bounds of the original game a bit because there are usually at least ten non-basic cards that can be bought in a game. If there's a card with a special gain condition, like Prize or Madman, it's placed in the game as an attachment to an existing card (Tournament and Hermit, respectively). These could be attached to an... Archeologist card?

The creator of Dominion has said he's not interested in making more cards because the game is complex enough with 220+ cards across all the sets. However, I feel like there's still room for expansion in the single-player mode that's playable online, much like how there are special cards and conditions that appear only in Hearthstone's Naxxramas fights.

The Value of Choice in Game Design

Choice has value, but it's not always possible to quantify. This allows game mechanics to be recycled. Consider the following three cards in Magic: The Gathering.

1. Lightning Bolt - A spell to deal 3 damage to a creature or player once, and at any time.
2. Kird Ape - A creature with 2 attack and 3 defence, as long as a certain easily attainable condition is met, and worse otherwise.

3. Vexing Devil - A creature with 4 attack and 3 defence, but the opposing player may choose to take 4 damage immediately instead of having this creature enter play.


 

All of the cards have the same requirements to be played (one red mana). There are differences in the years during which each card was tournament-legal, but all three cards are considered better than the typical card. In short, all three of these cards are highly interchangeable in a deck. So which one is the best?

Case I:

If the opposing player allows Vexing Devil to come into play, it has more attack than a Kird Ape would, so it's better than using a Kird Ape in that situation.

Case II:

If the opposing player does NOT allow Vexing Devil to enter play, then the opposing player takes 4 damage. This is more than the 3 damage that a Lightning Bolt would deal.

In either case, it seems like Vexing Devil performs better than either card. What makes these three cards fairly comparable? The opponent's choice does.

From the player of Vexing Devil's perspective, they will always end up with the less favourable of the two cases. If either one of those cases were no better than the corresponding alternative card, the alternative should be used instead. So, to make Vexing Devil viable, both cases must be better than the alternative cards.

From the opponent's perspective, they can choose to let the creature come into play only when they can deal with it in a way that is less costly than taking 4 damage.

In short, the value of choice must be priced into the relative strength of game elements. This is what makes the Queen in chess strictly better than either the Rook or the Bishop. The Queen can only make moves that either the Rook or the Bishop can do, but having the choice to do either is what gives her her power.

 

Consider one more card:

Forked Bolt – Deals 2 damage to target creature or player, OR 1 damage to each of 2 such targets.

This card is also considered above-par, and has the same mana requirements as the other three. It's arguably weaker than Lightning Bolt, but not by much. Why? The player of this card has the additional choice of splitting the damage.

American Psychometric

Eight discrete bars, tastefully separated from each other, presenting themselves as a bar graph of distinct categories instead of a histogram of continuous values. Labelled axes showing popularity values for each car colour on a brushed-steel background. My god, it even has the "other" category!

The Research Potential of Twitch TV

What is Twitch?

Twitch is a live video streaming service marketed to video game and e-sports enthusiasts. It arose from a community of dedicated competitive head-to-head gamers and "speedrunners", who complete single-player games as fast as they can. Twitch, formerly Justin.tv, allows players to upload a live video feed of their screen, perhaps with some overlays like a timer or a camera of the player's face or hands. Twitch earns its revenue much as traditional television does, with commercials that intersperse the play. For most players, who have 0-10 viewers at any given time, these show up for 15-30 seconds upon beginning to watch a channel. Proven high-tier players (proven in terms of viewership, not in-game achievement) can enter into profit-sharing agreements with Twitch, and may play additional commercials to generate revenue for themselves and for Twitch. Many games have predictable moments of downtime, such as the restart of a speedrun attempt or a long loading screen, when an advertisement doesn't detract from the experience. These high-tier players can also offer monthly subscriptions that give subscribed viewers small cosmetic benefits and the removal of ads.


Why Twitch is a big deal, and why it's different.

Recently Twitch was purchased by Amazon for nearly $1 billion USD (or $1000 million USD if you're using European notation). Aside from being the web's everything store, Amazon also sells a portfolio of large-scale computing services, so Twitch wasn't an out-of-character purchase. They have the hardware means to support the growing operation of receiving, compressing (often into multiple formats for viewers at different bandwidth levels), and sending video streams from thousands of players to, at the moment, roughly half a million viewers.

That is a LOT of bandwidth, and the senders and receivers could be anywhere and could change at any time. Logistically, that makes what Twitch does a lot more impressive than the feat of providing traditional television to 500,000 simultaneous viewers. With traditional television, a few hundred channel streams can be sent to a local station, which can copy those streams as needed for the short distance from the local station to the viewer. Bandwidth on the main lines does not increase as the number of viewers increases for traditional television. YouTube does what it can to get similar cost advantages. YouTube is a one-to-many system built around a central repository of videos that are pre-uploaded. It has the advantage of knowing which videos are popular or becoming popular, and it can use that to reduce its bandwidth costs by storing copies of popular video files with local internet service providers. I assume Netflix has a similar arrangement. Even with live streaming of large events, mainline bandwidth can be saved by branching if demand can be predicted, such as with playoff hockey games and Olympic events - in that order.

Twitch, however, with its many autonomous content providers and dispersed viewers, cannot predict where enough viewers of a given channel will be to take advantage of a local provider the way traditional television can. They also can't store their main product, which is live, in a central repository. In short, that massive amount of bandwidth has to go through the internet in an ad-hoc fashion that must make the per-viewer costs much higher than competing entertainment. Now that Amazon has put colossal gobs of money behind Twitch, it surely has some ideas to reduce these costs. Predictability may have made YouTube more efficient, but what could be predictable about video game streamers?

"I'm tired of dating responsible, burly, musical chef-lumberjacks. What would really seduce me is a lecture on video compression and the power law." - Nobody. Ever. :(

The popularity of games follows a power law. Lots of things follow a power law. You may have heard of this as the 80-20 rule, such as "80% of alcohol is consumed by 20% of the drinkers". For television it would be "80% of the revenue comes from 20% of the shows". Sometimes it's called the 90-10 rule for similar reasons (as I've heard it for the relative frequency of words in a language), but the basic principle is the same: there are a few things (channels, games, drunks) that far surpass all the others.

For Twitch, this works for games. It's almost 2am on a weeknight in Vancouver, Canada as I write this, which is a lull for Twitch. Three games - DOTA 2, League of Legends, and Hearthstone: Heroes of Warcraft - have 63,000, 40,000, and 33,000 viewers respectively. There are three games with 5,000-10,000 viewers at present, twelve games running between 1,000 and 5,000 viewers, fifty games between 100 and 1,000 viewers, and hundreds of games with 1-99 people watching. The viewership of channels within a game also follows a power law. 70% of Hearthstone's 33,000 viewers are watching a single channel, 15% are watching the next two most popular channels, and so on.

Why would anyone care that viewers (and therefore, revenues and costs) follow the power law? Because it means that improving the efficiency of the streaming of only a handful of games can go a long way towards improving the efficiency of all the streaming that happens.

By improving efficiency, I'm referring specifically to reducing the number of bits of information that have to be routed across the internet to deliver a stream to a single viewer at a fixed quality. The most common way of doing this is with video compression codecs*. To upload and download something that's essentially thirty pictures per second, digital videos typically use codecs, which are like books of shorthand that are shared between the sender and the receiver. In text, you can think of the acronyms**, jargon, and in-jokes that you know (BC, GLM, Old Gregg) as your personal codec, because you can use them to communicate much more quickly with other people that know the same acronyms, et cetera.*** Computers do similar things with streaming videos: they exploit common knowledge to produce a smooth video without. having. to. send. the. entire. picture. for. every. frame. Among other things, codecs exploit the fact that most frames are similar to the ones just before them; most of the frame-to-frame changes are movement or animation. Even cuts within a scene share a colour palette, and there are usually transitions between scenes. This is why, when things get really fast-paced in a video, or if you're skipping around it wildly, the picture goes a little crazy and you'll see ghosts of previous things or strange colours and blocks.
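As a toy illustration of that last idea (this is not any real codec, just a sketch in R), imagine splitting two consecutive frames into 8x8 blocks and sending only the blocks that changed:

# Toy frame differencing: frames are grayscale matrices, and only changed
# blocks are "sent". Purely illustrative; real codecs are far more involved.
changed_blocks <- function(prev, cur, block = 8) {
  stopifnot(all(dim(prev) == dim(cur)))
  out <- list()
  for (i in seq(1, nrow(cur), by = block)) {
    for (j in seq(1, ncol(cur), by = block)) {
      rows <- i:min(i + block - 1, nrow(cur))
      cols <- j:min(j + block - 1, ncol(cur))
      if (!identical(prev[rows, cols], cur[rows, cols])) {
        out[[length(out) + 1]] <- list(row = i, col = j, pixels = cur[rows, cols])
      }
    }
  }
  out
}

frame1 <- matrix(0, 64, 64)
frame2 <- frame1
frame2[1:8, 9:16] <- 1                     # one small region changes between frames
length(changed_blocks(frame1, frame2))     # 1 block to send instead of the whole frame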

Twitch already uses codecs, but I imagine that the ones it currently uses are designed for general video, or at best for video games in general. However, we already established that Twitch is primarily used to stream a handful of games. Each of these games has its own patterns that could be used to make codecs that will work better for those specific games.

Here are two typical screen captures for a Hearthstone stream, taken half an hour apart. (From Hafu's channel: http://www.twitch.tv/itshafu Screencaps used without permission.)

This is a card game, so there are already a lot of static elements that a codec can use. Most of the outer rim is the same, save for a shading change, so a codec needs only send "no change in this area" each frame. Likewise for most of the centre board space. Several interface elements have changed little between the two pictures and have minimal changes from frame to frame. Aside from the shading change and the movement in the live camera, the biggest changes in the play arena are the decorations in the corners. You can't tell from the static images, but the ruby in the moai statue's eye in the left screen glints regularly, and the water in the waterfall burbles down in an animated loop. Likewise, the spiky zeppelin in the right image floats around in a fixed pattern, and the smoke from the hut slowly billows.

Tor Norretranders would call this "exformation".

If the codec were specifically designed around Hearthstone, it could recognize decoration elements like the glinting ruby and the billowing smoke. Then, instead of Twitch's servers having to send the slight changes to the ruby and the smoke over time, they could send a much simpler "carry on" message, as if that part of the image were static. The receiving end, knowing it had a ruby or some smoke, could fill in the animation in the video without having it described explicitly by the server. Since a large portion of the viewers of this stream have a copy of Hearthstone and play it themselves, the codec could even draw upon the art assets of the game to create the animations semi-autonomously.

Other object recognition could be done to leverage the sort of repetition that isn't found in general video, but is found in video games. The dark gems in the lower right corner of each image are repeated. With access to the art assets, Twitch could send "draw four [dark gems] here" instead of the much longer "draw the following thousands of blue and black pixels". Without the assets, a more general signal could be sent to draw the pixels for one gem, and simply a repeat command for the next three.
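As a toy version of that repeated-gem idea (invented for illustration; nothing here is what Twitch actually does), a sender that already shares the sprite with the receiver could scan the frame for exact copies and transmit coordinates instead of pixels:

# Toy sprite matching: find exact copies of a known sprite in a frame and emit
# "draw sprite at (row, col)" commands instead of raw pixels. Illustration only.
find_sprite <- function(frame, sprite) {
  sr <- nrow(sprite); sc <- ncol(sprite)
  hits <- list()
  for (i in 1:(nrow(frame) - sr + 1)) {
    for (j in 1:(ncol(frame) - sc + 1)) {
      if (all(frame[i:(i + sr - 1), j:(j + sc - 1)] == sprite)) {
        hits[[length(hits) + 1]] <- c(row = i, col = j)
      }
    }
  }
  hits
}

gem   <- matrix(c(0, 1, 1, 0), 2, 2)          # a tiny stand-in for the dark-gem art asset
frame <- matrix(0, 10, 20)
for (j in c(1, 4, 7, 11)) frame[5:6, j:(j + 1)] <- gem   # four copies of the gem
length(find_sprite(frame, gem))               # 4 short draw commands instead of raw pixels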

Finally, object recognition could be used as a graphical form of automobile amputation autocomplete. See that "Soul of the Forest" card? If you knew every card in the game, you could cover up the bottom 85% of that card and still recognize it. A server at Twitch, streaming this to the 2,400 viewers that this channel had, could save a lot of effort by recognizing that card and telling the viewers with art assets to draw that card in that position, rather than describing the card pixel-by-pixel to every viewer. It helps greatly that cards have many graphical characteristics in common, like the gem with the number in the upper left, which a server could use to recognize that the object in that portion of the screen is a card, where it should look for the rest of the card, and what else to look for.

Streaming is bandwidth intensive, and bandwidth has an electricity cost and a carbon footprint. Amazon could save a lot of money, provide a more reliable product, and offset carbon at no extra cost if it found some ways like these to take advantage of the predictable content that people are streaming on Twitch.

My conjecture is that Amazon is already working on such technology, which I'm calling Application Specific Video Codecs for now. But just in case they're not, Fabian, you know how to find me. ;)

To clarify, this is about more than just a codec for gaming; that's just the inspiration. Imagine a means of video streaming that's halfway between pre-rendering and live streaming. It could also be used for sports programs with consistent overlay formats, and for shows with fancy transitions that get used a lot (and that are typically hard to compress because they involve large movements).

It also seems like something a third party would develop and then sell the tailoring service to various streaming systems.


* I apologize for my inexact description of video compression and codecs. This is a blog post, not a textbook, and I am not yet an expert in this particular field.
** Acronyms and initialisms. Insufferable pedant.
*** et cetera. Something you should never end a sentence with. Also see "preposition".

Wednesday 1 October 2014

The Order of Data Contains Information, But How Much?


For some sampling designs, especially adaptive ones like those in Thompson and Seber (1996), the order of the values in the sample matters. Statistics derived from the tuple of observed values (3,10,4,9) may be different from statistics derived from the tuple (9,3,10,4). The order in the round parentheses implies the order in which these values were observed.
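As a toy illustration of how a statistic can depend on order (this is not the Thompson and Seber estimator, just an invented example in R), consider the mean of the values observed up to and including the first value of 9 or more, as might arise under a stopping rule:

# Toy order-dependent statistic: mean of the observations up to and including
# the first value >= 9. Invented purely to illustrate order dependence.
mean_until_large <- function(x, threshold = 9) {
  stop_at <- which(x >= threshold)[1]
  mean(x[1:stop_at])
}
mean_until_large(c(3, 10, 4, 9))   # mean of (3, 10) = 6.5
mean_until_large(c(9, 3, 10, 4))   # mean of (9)     = 9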

In other cases, like simple random sampling, only the observation values, and not their order, matter. In simple random sampling, the samples {3,10,4,9} and {9,3,10,4} are the same. The curly braces imply set notation, where the order doesn't matter.

So there is information embedded in the order of the data, but how much?