art with code


More extrapolations: India

India's purchasing power parity adjusted GDP looks to overtake the US in 12 - 22 years. The lower number of years is based on India's growth continuing to accelerate this decade in the same way it did during the last decade. The higher number of years is based on India growing at the lower 1993-2003 pace.

Let's say, India 2030, bigger economy than the US. Nice capstone for the fourth term of Trump.

As for the EU-28, India would overtake it in somewhere between 16 and 41 years (Europe's bigger and poorer than the US, so it's doing more catchup growth).

As for China, I remember calculating that India would be the bigger economy in 2080, based on India's higher population growth. The other factor is where the GDP/capita growth of these giant regions plateau, and that, my friend, is a more difficult thing to guess.

Tracing orbits

Planetary chisels, engraving their orbits in space-time.


Euro slump end in sight?

According to historical records, 2017 is the last year of the European slump, followed by a decade of rapid growth. "Historical records" here being the other times EU GDP has dropped below US GDP: in the early eighties and the late nineties. In both cases the slump lasted around 3 years.

Also according to historical records, EU GNI/capita is projected to equal the US in 22 years. (EU GNI/capita has been growing at 5.5% annually for the last 40 years, for the US that's around 3.9%.)

As always, extrapolations are silly, "this time's different", etc.



Acceleration Design Concepts

Continuing from the Project Proposal, here are some design concepts and notes that I jotted down in my phone notebook (Samsung Note, it's great; the last couple are Photoshop). Some of these are a bit cryptic, especially the last page, which is a scattering of random ideas, so let me do a quick briefing.

The core idea running through the design is avoiding having to model by typing in numbers. The layout would be based on snapping and grids to get clean alignments and consistent rhythms. For example, the default translation/scaling/rotation mode would be snapped to a grid. That way you can quickly block things out in a consistent fashion, and go off-grid later when the basic composition is solid.
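As a rough sketch of the snapped-transform idea (the function names and grid size here are mine, not from any actual design):

```javascript
// Snap a value to the nearest multiple of the grid step.
function snapToGrid(value, gridSize) {
  return Math.round(value / gridSize) * gridSize;
}

// Snap a position vector component-wise; rotations could similarly
// snap to, say, 15-degree steps.
function snapTransform(position, gridSize = 1) {
  return position.map(v => snapToGrid(v, gridSize));
}

console.log(snapTransform([3.7, 0.2, -1.4], 0.5)); // [3.5, 0, -1.5]
```

Blocking out on-grid first and going off-grid later is then just a matter of applying (or not applying) this to every drag.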

Another thing (that's not shown here) to speed up creating compositions would be repetition, randomization and symmetry tools. Throw a model into an array cloner, pick the shape of the array, tweak randomizer parameters for the array, set it symmetric along an axis: very little work and you get a symmetrical complex model. Add in a physics engine, and you can throw in a bunch of objects, clone, run physics and get something natural looking very quickly.
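A toy version of that cloner pipeline — clone along a line, jitter with a seeded randomizer, mirror for symmetry — might look like this (the API shape is invented for illustration):

```javascript
// Tiny seeded PRNG (mulberry32) so the randomized clones are reproducible.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Clone `count` copies along the x-axis, jitter positions, then mirror
// the whole array across the x = 0 plane for symmetry.
function cloneArray(count, spacing, jitter, seed = 1) {
  const rand = mulberry32(seed);
  const clones = [];
  for (let i = 0; i < count; i++) {
    const x = i * spacing + (rand() - 0.5) * jitter;
    clones.push({ x, y: 0, z: 0 });
  }
  return clones.concat(clones.map(c => ({ ...c, x: -c.x })));
}

const arr = cloneArray(3, 2, 0.3);
console.log(arr.length); // 6 clones: 3 originals + 3 mirrored
```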

As the concept behind the app is doing quick dailies, the default setup should already look good. A nice customizable skybox, animated clouds, good materials, and a classy intro camera pan. The camera would be based on real cameras in that you'd have aperture size, focal length and depth of field that work like you'd expect them to. The exposure would stay static over changes to camera params; you'd have an exposure slider to adjust it. The camera would have selectable "film stocks" to change the color tone, and post-pro glows, flares and vignetting.
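For the exposure-stays-static behavior, the tool would compensate brightness when camera params change. For the f-number, relative exposure goes as 1/N², so the compensating gain is just the ratio of the squares (this is standard photography math, not a spec of the tool itself):

```javascript
// Gain needed to keep image brightness constant when the aperture
// changes from baseFNumber to fNumber. Exposure ∝ 1/N².
function exposureGain(fNumber, baseFNumber = 2.8) {
  return (fNumber * fNumber) / (baseFNumber * baseFNumber);
}

// Stopping down from f/2.8 to f/5.6 loses two stops, so compensate 4x.
console.log(exposureGain(5.6)); // ≈ 4
```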

I was thinking of basing the workflow around kit-bashing. You'd have a library of commonly used objects and materials (e.g. landscapes, rocks, clouds, fire, water, smoke, wood, plants, metals and so on) and could drag them from the library to the scene and build something interesting-looking and polished very quickly. The inspiration for this is UE4 speed modeling videos like this.

The tool wouldn't have modeling features, but focus on importing pre-made models and mashing them together. This would make the tool (that's already sounding pretty intense) simpler to make. The focus for the tool in my mind is quickly building and animating WebGL scenes in a WYSIWYG fashion: you'd always see the final render quality and performance, and could work with that (instead of having to guess).

Been doing these this year

Acceleration Project Proposal

Here's another thing I made. I doubt I'll build it, so have at thee.


7 DEC 2016


Acceleration is an artist-oriented tool for quickly making good-looking animated 3D websites and VR experiences. Export the created animation timelines for use by developers.

Acceleration plugs a gap in the market between game engines, static scene editors and non-interactive 3D art tools: artist-driven creation of beautiful interactive 3D websites.

Acceleration is a tool for creating daily interactive animated artworks. See these two examples of Cinema4D daily renders (scenes made in a few hours, rendered, posted on Instagram) by Raw & Rendered and Beeple:


Now imagine that you could put interactive versions of those up on the web. Production value of a quickly made website would go way up and make high-end 3D sites - now restricted to agencies and major brands - an option for more companies and individuals.
Tool gap
Currently if you want to build an animated 3D scene, you either have to use a game engine or a lower-level 3D library. Building something that sits between the two extremes would be readily usable for digital agency work.
The marketing and feature sets for game engines are aimed at making games: if you want to make an art piece / 3D site / VR experience, they seem like the wrong tool. They’re difficult to learn, difficult to integrate into web sites, and come with all kinds of junk that you don’t need as an artist. Game engines often require programming to make interactive scenes, which makes graphic artists tune out and go back to making static content. Most stand-alone app interactive agency work is built using one of these engines, usually Unity or Unreal Engine.
Lower-level 3D libraries are even more difficult to learn and really require working with a developer. On the plus side, you get more native integration with web sites. Most 3D agency websites out there are built using these lower-level libraries, primarily three.js. The problem with lower-level libraries is the way they move the artist away from solving art problems, and instead put the developer in that position. The result is expensive programmer time wasted on substandard art made with bad tools, a disillusioned artist who sees their work screwed up by the programmer, and a disappointed client.
Non-interactive 3D content tools excel at giving artists easy-to-use ways to build and hone beautiful scenes and animations. Artists are using tools like Cinema4D, After Effects, Maya, Octane and Keyshot to quickly build, layout, light and model scenes that look amazing. But when they try to bring this content over to the web, it’s nearly impossible. Either you export videos or image sequences, and lose the 3D aspect of the work, or you work with a developer to bring the 3D scene into a 3D engine and then spend a large amount of artist time and dev effort to try and match the look of the rendered images. The economics don’t bear out fast exploration: that’s why artists do “dailies” in Cinema4D and Unreal Engine, but not in three.js.
  1. Tool for artists to make 3D animations to be used by front-end developers
  2. Tool for artists to make and share daily 3D animations
  3. Tool for artists to make and share daily 3D interactive experiences
Create WYSIWYG animations using F-curves, easing equations, dope sheet and 3D motion curves. Export the animation as JSON data. Play the animation data with a runtime library.
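The runtime side of that could be very small. Here's a hedged sketch of sampling an exported track at a given time — the JSON shape and names are assumptions, not a defined format:

```javascript
// A hypothetical exported clip: tracks of keyframes on named properties.
const clip = {
  duration: 2.0,
  tracks: [
    { target: "cube.position.x", keys: [{ t: 0, v: 0 }, { t: 2, v: 10 }] }
  ]
};

// Sample a track at `time`, clamping outside the key range and
// interpolating between the surrounding keys inside it.
function sampleTrack(track, time) {
  const keys = track.keys;
  if (time <= keys[0].t) return keys[0].v;
  if (time >= keys[keys.length - 1].t) return keys[keys.length - 1].v;
  for (let i = 0; i < keys.length - 1; i++) {
    const a = keys[i], b = keys[i + 1];
    if (time >= a.t && time <= b.t) {
      const u = (time - a.t) / (b.t - a.t); // linear; swap in an easing fn
      return a.v + (b.v - a.v) * u;
    }
  }
}

console.log(sampleTrack(clip.tracks[0], 1.0)); // 5 (halfway)
```

The runtime library would just do this for every track each frame and write the values onto scene objects.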
Making and sharing 3D animations
Save animation data and scene data into a cloud service or export it as a HTML file. Sequencing scenes to make cuts. Simple sharing to social media platforms to increase your reach. Feed algorithm that makes you work harder.
Import assets from industry-standard software. Beat-synchronize animation to music. Material editor with physically based materials. Rendering backends for interactive 3D, offline rendering.
3D websites
Behavior editor to create event listeners and hook them up to handlers. Multiple timelines with tweens between current state and new timeline start. Nested timelines to create re-usable clips that can run independently of the main timeline.
Create links from objects to other scenes and timelines.
  1. Initial prototype
A four-view scene editor, featuring a timeline with a single-object dopesheet and f-curves, online at . It showcases technical feasibility and how much a tool like this would help in making smooth professional-looking animations.

  2. Mock-ups and market research
Photoshopped a mockup GUI of the proposed tool by slapping together bits and pieces of Cinema4D, new UI designs, and screenshots of the prototype. Posted the mockup on Twitter to test the waters.
Based on 90 likes and 16 retweets, plus numerous “Yes, please!” replies from front-end devs and agency folks, there seems to be some demand for a tool like this. The actual parameters of the demand are still very fuzzy.

Should I start developing this three.js animation editor? Anyone wanna use it?

@ilmarihei awesome! Yes!
Dani Valldosera, Develop lead & Front end developer at Dimo Visual Creatives -
@ilmarihei Yes! 👏
Joe Branton, Grow Digital Agency -
@ilmarihei yes! that’s incredible.
Niall Thompson, Co-founder and head of Web & Interactive -
@ilmarihei please!
Edan Kwan, Co-founder / Creative Technologist at @wearekuva -
@ilmarihei YESYESYES!
Ricardo Cabello, Mr.doob, creator of Three.js -
@ilmarihei @mrdoob Yes! That looks superb!
Octavector, Web designer / Illustrator -
@ilmarihei YES PLS
Vanessa Yuenn, Javascript Developer at Inc. -
トキオZBMAN, Developer at a Bangkok digital agency -
@ilmarihei omg yes!
Iván Uru, Web Developer and Digital Artist in México -
@ilmarihei yes please!
Adam Sauer, Electrical Engineer building telecom systems and a 3D productivity app.
  3. Build a client team
Find motion graphics designer to work part-time as the client on the project, using the tool to build an animated site scenario. That’ll keep the UI honest and useful, and produce marketable example content.
  4. Secure income for the project build
The second prototype build is likely to take a month. Expanding the second prototype to a more production-ready version would take around three months. These figures are based on a single person working alone on everything. To fund the development, I’d need to secure around $15,000 funding for the second prototype and $30,000 for the first production version and demo content.

What's going on?

I published a bunch of post drafts from the past ten years. Of varied quality.


Filezoo, the plan for month two

Okay, so I have this quite feature-complete, if unpolished, memory hog of a file manager. Where to go from here? What are my goals?

The goals I have for Filezoo are: 1) taking care of casual file management and 2) looking good.

Putting the second part aside, since it's just a matter of putting in months of back-breaking, err, finger-callousing graphics work, let's focus on casual file management.

Casual file management is about having an always-on non-intrusive file manager at your beck and call, in a place you can summon it from with the flick of a wrist. I.e. in the bottom right corner of my desktop.

A casual file manager doesn't necessarily do every single thing imaginable, but draws a line between stuff you want to do every day and stuff that you think might be nice to have if you were the most awesome secret agent, file managing around the filesystems of your adversaries. The major difference there being that while, yes, the second category is totally awesome and full of great ideas and groundbreaking UI work, you'll end up doing all that stuff on the shell anyhow.

So, a casual file manager should do the file managery stuff and leave the rest to the shell. Which, by the way, is also a great way to cut down on the amount of work and expectations. "It's just a casual file manager, it doesn't need to have split views and all that other crazy shit! Use the shell, dude!"

Things that file managers are better at than the shell: presenting a clickable list of the filesystem tree, selecting files by clicking, showing thumbnails, looking pretty. Things that the shell is better at: doing stuff to lists of files, opening files in the program you want.

Type with your face

[From mid-2013]

My grandma had a stroke and was very much paralyzed. She could understand what people said and move her eyes. But she couldn't control her tongue so her speech was guttural sounds that you couldn't understand. And she couldn't move her hands. She couldn't swallow because of the paralysis and shriveled blood vessels made IV infeasible. I wanted to find some way for her to communicate before she died of dehydration. 

Couldn't find any gaze-controlled keyboards that would work with just a webcam. 

I made a face-controlled keyboard with the Google+ Hangouts API, check it out: . I'd like it to use gaze tracking but the API doesn't have that. So you need to move your nose to type.

Calibrated for an 11" MacBook Air, use the buttons at the bottom to recalibrate. Turn your head so that your nose faces M and press the "left"-button. Turn to face R and press "right". Repeat for C - top and Z - bottom.

To type a letter, look at it and turn your head to move the cursor on top of it. Keep the cursor on the letter until it changes color completely. The letters appear in the text box at the bottom.
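The dwell-to-type logic described above could look roughly like this — a key is "pressed" once the cursor has stayed on it for a full dwell time. The timing value is a guess, not what the app actually used:

```javascript
// Dwell typer: call update(key, now) every frame with the key under the
// cursor (or null) and the current time in ms. Returns the typed text.
function makeDwellTyper(dwellMs = 1000) {
  let currentKey = null, enteredAt = 0, typed = "";
  return {
    update(key, now) {
      if (key !== currentKey) {
        // Cursor moved to a new key (or off the keys): restart the timer.
        currentKey = key;
        enteredAt = now;
      } else if (key !== null && now - enteredAt >= dwellMs) {
        typed += key;          // dwell complete: type the letter
        currentKey = null;     // holding repeats only after another full dwell
      }
      return typed;
    }
  };
}

const typer = makeDwellTyper(1000);
typer.update("H", 0);
console.log(typer.update("H", 1000)); // "H"
```

The color-change animation on the key is then just a progress bar for `(now - enteredAt) / dwellMs`.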

Viral lessons from ideologies

  • Promise benefits. Make them non-cashable. "If you sign up, you'll get a reward after you die!"

  • Promise damages. Make them non-cashable. "Anyone who is not signed up, will be punished after they die! Also if you're not following the EULA, you will be punished after you die!"

  • Think big when promising non-cashable benefits and damages. You don't have to cash them so you can promise anything you want. "If you sign up, you'll be in a state of eternal bliss and happiness and will be able to do anything you want after you die!", "After they die, all people who don't sign up will be tortured forever and will never be happy!"

  • Promise immediate damages if you can. "Breaking the EULA is punishable by death!", "Anyone who lives here must sign up or die!"

  • What immediate benefits you give can be very small. "You can talk with people inside the site after you sign up.", "You'll get a free subscription to our email newsletter."

  • Make your EULA exclusive. "You can't sign up anywhere else after you sign up here."

  • Make recruiting new users the number one tenet. "Your mission is to go and get everyone else to sign up. Once everyone is signed up, you'll get a reward after you die, even if you're already dead."

  • Make it easy to sign up. "If your parents were signed up, you're signed up by default.", "You only need to say one sentence to sign up.", "If you were born in this area, you're already signed up."

  • Make leaving hard. "If you leave, you'll be killed." and "It's illegal to have any contact with people who have left." and "It is not possible to leave." (try quitting your citizenship for an example of that.)

  • Continuously tell the signed up people that they're signed up and reinforce their identity as signed up people.

  • Have a derogatory term for people who are not signed up.

  • Profess to be tapped into infinite wisdom and capability. You don't have to cash it and can claim that the receivers of the wisdom do not understand it. "The Founder of the site is the omnipotent creator, ruler and maintainer of the entire universe. The Founder knows everything. The Founder is very busy. When things go well for you, it is because the Founder is personally helping you. The Founder's words are difficult to understand, but our EULA department interprets them for you."

Thought experiment on autoparallelizing loops

One thought I've been having these days is whether you could make JavaScript faster by executing loops in parallel. JavaScript feels like a pretty good language for autoparallelization as it has no pointers and no shared-state concurrency: you know if two objects share memory, and the variables in your thread of execution can't suddenly change (i.e. no external threads mucking with your data). So proving that a loop can be parallelized should be easier than in C, and if you can prove that a loop can be parallelized, you can go ahead and do it with reasonable confidence that the results will be the same as for serial execution. And JS has another dubious advantage as well: it's slow. A slow language is less likely to run into memory bandwidth bottlenecks (assuming that it's slow because it's doing more boilerplate computation), so it should benefit from parallelization more easily than a language that's already memory-bound.

On a high level, you'd keep track of the input variables and output variables of a block of code, then figure out if any of the inputs are also outputs. If not, you've got a pure block and can execute it in parallel. If the outputs overlap (each iteration is writing to a variable outside the block) but that output is not used as an input, use the last iteration's value for it.
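In JavaScript terms, the two cases above look like this (toy loops, not actual analyzer output):

```javascript
const a = [1, 2, 3, 4];

// Pure block: the body reads only a[i] and writes only out[i], so no
// iteration depends on another and the loop is a parallelizable map.
const out = new Array(a.length);
for (let i = 0; i < a.length; i++) {
  out[i] = a[i] * 2;
}

// Output overlap without input overlap: `last` is written every iteration
// but never read, so a parallel version just keeps the final iteration's value.
let last;
for (let i = 0; i < a.length; i++) {
  last = a[i] * 2;
}

console.log(out, last); // [2, 4, 6, 8] 8
```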

If the inputs and outputs overlap, you've got a reduce tree. The shape of the tree depends on the associativity of the input->output -function. If it's associative, e.g. step = step + myValue, you can make any shape of tree that you like. If it's not associative, e.g. step = step / myValue, the tree is sequential. You can still execute the computation of myValue in parallel, but need to reduce the result of step for each block instance before it can proceed. This is a bit complex though. A simple heuristic would be to execute fully associative blocks in parallel and everything else sequentially.
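Here's a serial stand-in for the reduce tree, pairing values level by level the way parallel workers would. `treeReduce` is my own name and it assumes the op is associative; with `+` it matches a serial reduce, with `/` it would not:

```javascript
// Pairwise tree reduction: combine adjacent elements until one remains.
// Each level's pairs are independent, so a parallel runtime could
// evaluate them concurrently.
function treeReduce(values, op) {
  if (values.length === 1) return values[0];
  const next = [];
  for (let i = 0; i < values.length; i += 2) {
    next.push(i + 1 < values.length ? op(values[i], values[i + 1]) : values[i]);
  }
  return treeReduce(next, op);
}

const add = (x, y) => x + y;
const nums = [1, 2, 3, 4, 5, 6, 7, 8];
console.log(treeReduce(nums, add)); // 36
console.log(nums.reduce(add));      // 36, same as the serial reduce
```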

How to implement... first, detect loops. Second, try to prove that output values are independent of each other (for starters, deal with output[i] = myValue where i is incremented by one on each iteration and terminates at a known value). Third, estimate serial loop cost vs parallel loop cost + thread creation overhead. If serial cost > parallel cost + overhead, turn the loop into a parallel one.

If output values depend on each other but you can prove that the reduce operation is associative (start by handling the case of one number output written to by all threads, with + as reduce op), estimate the reduce tree cost vs serial reduce cost. Pick minimum cost from parallel map + serial reduce, parallel map + parallel reduce and serial map + serial reduce.

Durr... maybe some maps could be turned into a sequence of loop-wide vector ops (e.g. a[i] = b[i] + c[i] * d[i] => a = b + c * d), which could then be split into parallel SIMD chunks and loop-fused back into idx = thread_id*thread_block_size+i; a_simd[idx] = b_simd[idx] + c_simd[idx] * d_simd[idx].
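Here's that fused loop carved into fixed-size chunks, the way a SIMD/worker split would partition the index range. The chunk size is arbitrary, and real parallelism would put each chunk on a worker; this just shows the independent-chunk structure:

```javascript
// a[i] = b[i] + c[i] * d[i], processed chunk by chunk. Each chunk touches
// a disjoint index range, so chunks could run concurrently.
function fusedMulAdd(a, b, c, d, chunkSize = 4) {
  for (let start = 0; start < a.length; start += chunkSize) {
    const end = Math.min(start + chunkSize, a.length);
    for (let i = start; i < end; i++) {
      a[i] = b[i] + c[i] * d[i];
    }
  }
  return a;
}

const n = 6;
const b = Float32Array.from([1, 1, 1, 1, 1, 1]);
const c = Float32Array.from([2, 2, 2, 2, 2, 2]);
const d = Float32Array.from([0, 1, 2, 3, 4, 5]);
console.log(fusedMulAdd(new Float32Array(n), b, c, d)); // 1, 3, 5, 7, 9, 11
```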

Decision making, part N

(Warning: kooky stuff ahead)

Continuing on the "what's a good decision making system"-thread, here's a vague and incomplete idea that I've been turning over. A system of governance finds a problem to solve, generates a solution to it and implements it. To find a problem, it needs to know about it, which requires information about the state of the governed system and the ability to filter the information to recognize problems. To generate a solution, it needs to generate and evolve several different solutions and pick the best one. To implement the solution, it needs traction in the governed system.

In abstract: sensory information -> problem filter -> problem broadcast -> solution generation -> solution filter -> implementation plan -> implementation broadcast -> implementation.

Gathering sensory information is a [streaming] parallel map pass. The problem filter is a pattern recognizer, maybe a reduction tree of some sort. The problem broadcast makes the problem known to solution generators. The solution generation is another parallel map pass, and the solution filter is a tournament. The implementation plan generation is similarly map-reduce, followed by the broadcast to implementers who then get to work.
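Just to make the data flow concrete, here's the pipeline above as literal map/filter/reduce over placeholder data. None of this models a real system of governance; it only shows the shape of the stages:

```javascript
// Sensory information (made-up observations).
const sensors = [
  { issue: "potholes", severity: 7 },
  { issue: "parade", severity: 1 }
];

// Problem filter: pattern-recognize what counts as a problem.
const problems = sensors.filter(s => s.severity > 5);

// Solution generation: a map pass producing candidate solutions.
const solutions = problems.map(p => [
  { plan: `patch ${p.issue}`, cost: 3 },
  { plan: `rebuild road (${p.issue})`, cost: 9 }
]);

// Solution filter: a tournament (here, minimum cost wins).
const chosen = solutions.map(options =>
  options.reduce((best, s) => (s.cost < best.cost ? s : best)));

console.log(chosen.map(c => c.plan)); // ["patch potholes"]
```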

It's not quite as simple though, as each step requires continuous feedback loops to optimize the implementation. Some parts of the problem are only found out at implementation time and the solution and plan need to evolve with the problem.

One anecdote from AI is that the quality of your algorithm is secondary to the amount of data you have. So you want the map passes to gather as much data as possible and have a reduction network on top to do the filtering. The quality of a working reduction network is less important than the width of the gather pass. And I guess the reduction network functions better the larger the share of the population it involves.

In sports the best results are within an order of magnitude of average results. Maybe the same is true for intellectual pursuits: the world's best dictator may work as well as or better than a parallelized council of ten average ministers, but a lot worse than a couple hundred average ministers, never mind a few dozen million.

Traction. For an implementation to actually get done, there needs to be buy-in among the implementers. For that the implementers need to be involved in figuring out the problem, solution and the implementation plan. To fix a problem, you need to know what problem you are fixing, otherwise you're just doing random pointless things and can't evolve the solution. Implementation is yet another of those things that benefits a lot from parallelization.

What do reduction networks and voting have to do with each other? Each filtering step needs a decision to be made, decisions need to be informed and informed decisions need a wide base of decision makers to provide the information. So, uh, grab a big part of the population, run the selection by them, go with the majority? Or is there a better way to get the information from the population, get the things that really matter and use that to do the selection?

The problem with small governments is that the smaller a government, the easier it is to bias. Bribery, threats, cronyism, nepotism, lobbying, you name it. Heck, just paying the decision makers an above-average salary is enough to bias the decisions. The problem with large governments is that you're sampling a much noisier pool. Uninformed people are easier to sway with negotiation skill? How does that differ from swaying a small amount of a bit differently uninformed people (i.e. MPs)? The republic battle-cry is "against mob rule!", but is it just a smaller mob that rules in a republic? Does a system that uses a small amount of elected lawyers do a better job at solving problems than a system that uses the whole population?

How do you filter out flagrantly anti-minority decisions? What's the threshold in the ratio between majority advantage and minority disadvantage? How do the current systems guard against that? Make the decision making body small enough to be outnumbered by the relevant minorities? But they also have guards and all this force boosting going on... demonstrations by thousands seem to have very little effect even on single parties, much less the whole government.

(Yes, there is a threshold in majority:minority-decisions: murder is outlawed, no? Much to the chagrin of the murder society. More controversial are decisions such as not providing street signs in every language of the world. It would be good to have that, and it is making the life more difficult for a significant part of the population, but currently the benefit is too low compared to the cost. So we compromise by having English signs at the airports, Swedish signs at most places in the south and west, Russian signs at shops in border towns, Japanese signs at Helsinki design shops and so on.)

The anti-democracy strawman usually goes like this: Suppose you have a vote that devolves into a nasty argument. In the next session, the winners of the previous vote propose hanging the losers of the previous vote. Continue until you have only two people left. Now, why don't we see that in parliaments? Surely the ruling party votes to have the opposition parties and their supporters shot. All the way until you have two MPs left, one stabs the other and declares himself emperor. .. Oh wait, that does actually happen. How do you avoid this kind of thing?

How do governments go wrong? By wrong here I mean something like "does not implement policies in the interest of the population". In other words, the governmental idea of good policy diverges from the population's idea of good policy. Or is it from the population's benefit? Does good policy do good, or is it merely something seen to be good? How do you pass "bitter pill" policies where everyone making the decision will take a short-term loss for a long-term gain? Same way as we do now?

The goal of a system of governance is to implement policies that are beneficial to the population. When a system benefits a sub-group disproportionally, the system is biased. When a system is generating policies worse than best known, it is uninformed. When a system can't implement the generated policies effectively, it lacks traction. A system of governance should strive to be unbiased, informed and popular.

To be unbiased, the system should be unbiased from the start and the cost of biasing the system should be high enough to be prohibitive. For the system to be unbiased, the individual actors in the system need to be close to an unmodified representative sample of the population. For the cost of biasing to be prohibitive, the number of individual actors in the system times the average cost of biasing an actor should be as high as possible.

To be informed, the system needs information. The more relevant information the system has, the better decisions it can make. Find fixable problems, find good solutions, find good implementation plans, refine through attempts at implementation. Each step requires a lot of sensory data and processing. To maximize the sensory input of the system, you need to maximize the number of sensors times the power of the sensor. Similarly, the processing power of the system is the number of processors times the power of each processor.

What would I like to see in a programming language?

If I were given a new programming language & runtime, what would I like to see? First of all, what would I like to achieve with the language? Programs that work. Programs that work even on your computer. Programs that use limited computing resources efficiently. Programs that are fast to develop and easy to maintain.

So, portability, correctness, performance, modularity and understandability.

Portability is one of those super hard things. Either you end up in cross-compiling hell or runtime downloading hell. Not to mention library dependency hell, OS incompatibility hell and hardware incompatibility hell. Ruling over all these hells stands Adimarchus, the Prince of Madness, Destroyer of Programmers, Defender of the Platform, Duke of Endianness, Grand Vizier of GUI Toolkits, &c. &c.

...Screw portability.

Correctness then. The main thing with correctness is removing the ability to write broken code. And because that's difficult to achieve in the general case, the second main thing is the ability to discern the quality of a piece of code. The less code you write, the fewer bugs you'll have. The more you know about your code, the fewer bugs you'll have.

The compiler should catch what errors it can and inform the programmer about the quality of their code. Testing and logging should be integral parts of the language. There should be a proof system to prove correctness of functions (but one that's easy to use and understand...) The compiler should lint the code. The language should have built-in hooks for code review and test coverage. The compiler should generate automated tests for functions and figure out their behavior and complexity. I/O mocking with fault injection should be a part of the language.

The standard library should have fast implementations of common algorithms and data structures so that the programmer doesn't have to roll their own buggy versions.

Speed, efficient memory use, no runtime pauses to screw up animations, automatic memory management, static type system, purely functional core. Testing, logging and profiling as a part of the language. Some sort of proof system? Code review hooks in compiler? Localized source code?


Filter Bubbles

Filter bubbles are the latest trend in destroying the world. Let's take a look at how one might construct this WMD of our time and enter a MAD detente with other social media aggregators.

First steps first, what is it that we're actually building here? A filter bubble is a way to organize media items into disjoint sets. A sort of sports team approach to reality: our filter bubble picks these items, the others' filter bubble picks those items. If a media item appears in multiple filter bubbles, it's called MSM (or mainstream media).

How would you construct a filter bubble? Take YouTube videos as an example. You might have a recommendation system that chooses which videos to suggest for watching next, because more binge-watching equals more ad views on your network, increasing the amount of ad money coming your way, which lets you buy out competition and become the sole survivor. At the same time, time spent on YouTube equals less time spent on other ad networks, which makes YouTube ads more valuable and further increases the amount of ad money coming your way. So, recommendation system for binge viewing it is!

Suppose you watch a video about the color purple. The recommendation system would flag you as someone who's interested in this kind of stuff (hey, you watched one.) So it'd go and check other people who also watched that video and try to find some commonalities. Maybe 60% of them also checked out a video about the color red. The red video would score high on "watch next". Suppose that 10% of the purple-watchers also gave a thumbs down to a video about the color blue. The recommendation system would avoid showing you the blue video because you might not like it.
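A toy version of that co-watch scoring, with invented numbers and field names: score each candidate by the fraction of the current video's watchers who also watched it, minus a penalty for the fraction who disliked it.

```javascript
// Rank "watch next" candidates. coWatched / coDisliked are the fractions
// of the current video's audience who watched / thumbed-down each
// candidate; the penalty weight is made up.
function scoreNext(candidates, dislikePenalty = 3) {
  return candidates
    .map(v => ({ ...v, score: v.coWatched - dislikePenalty * v.coDisliked }))
    .sort((x, y) => y.score - x.score);
}

const ranked = scoreNext([
  { id: "red",         coWatched: 0.6, coDisliked: 0.0 },
  { id: "blue",        coWatched: 0.3, coDisliked: 0.1 },
  { id: "extreme-red", coWatched: 0.4, coDisliked: 0.0 }
]);
console.log(ranked.map(v => v.id)); // ["red", "extreme-red", "blue"]
```

Note how the dislike penalty is what buries the blue video: it's exactly the mechanism that later walls off the bubble.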

So you go and watch the red video. Now, red video viewers might have very strong opinions about blue, and 20% of them vote down blue videos. Some of them also voted down purple videos. The recommendation system would now know not to show you blue videos under any circumstance, and steer away from purple videos as well. On the other hand, the red video viewers quite liked some extremist red videos that dove deep into the esoteric minutiae of the color red. Might be something that pops up on your watch next list, then.

You go and watch an extreme red video. Suppose the extreme red viewers have started disliking more mainstream red videos, not to mention their great dislike for blue and purple videos. Now the recommendation system avoids showing you blue, purple and mainstream red, and populates your watch next list with the purest shade of extreme red.

Welcome to the filter bubble.


The extreme red videos here are an example of an attractor. If you think of the recommendation system as a vector field that guides the viewer in the direction of the recommendation vectors, the extreme red topic would form a sort of a black hole. Topics around it have recommendation vectors that point towards extreme red, but extreme red doesn't have recommendation vectors that point out of it. Once you enter the topics that surround extreme red, there's a high likelihood that you get sucked into it. If you don't get sucked into extreme red, the company would regard that as a failing of their recommendation system and devote time and effort to improve its capability to suck you towards extreme red.

Attractors are special topics. Special in that they make people inside the attractor pull more people into the attractor and prevent their escape. Otherwise they'd be more like regular popular topics: you get drawn into a popular topic, but there's always an escape route towards another popular topic. To make an attractor, the content in the attractor needs to promote behavior that blocks escape from the attractor. For example, an extreme red video that says that mainstream red, purple and blue are all paid shills plotting to destroy the world would call its viewers to vote down other views.
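The attractor dynamics above can be sketched as a random walk on a toy topic graph. Everything here is made up for illustration (the topic names and transition choices are not from any real recommendation system): "extreme red" only recommends itself, so it's an absorbing state.

```python
import random

# Hypothetical "watch next" graph. Each topic maps to the videos the
# recommender might serve next. "extreme_red" only points at itself --
# an absorbing attractor with no recommendation vectors leading out.
WATCH_NEXT = {
    "purple":      ["red", "purple"],
    "red":         ["extreme_red", "red", "purple"],
    "extreme_red": ["extreme_red"],
}

def watch_session(start, steps, rng):
    """Follow recommendations for a number of videos, return the final topic."""
    topic = start
    for _ in range(steps):
        topic = rng.choice(WATCH_NEXT[topic])
    return topic

rng = random.Random(42)
finals = [watch_session("purple", 50, rng) for _ in range(1000)]
# Nearly every walk ends up stuck in the attractor.
print(finals.count("extreme_red") / len(finals))
```

Even though each individual recommendation looks harmless, almost every long session ends in the absorbing topic, because there are paths in but no paths out.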

Breaking filter bubbles

If there's an attractor in your recommendation vector field, scramble it. Mark that topic as something where all the watch next links go to far away places or even to places preferentially disliked (i.e. stuff that the group dislikes more than the whole population does) by the people in the attractor. Reduce the ad payout to content near attractors. Decrease the recommendation weighting of attractor neighborhoods so that escape is more likely.

Create legislation to warn people of viral attractors. Require explicit user consent to apply binge-inducing user engagement systems. Ban binge-inducing products from public spaces and require binge-inducing sites to post warning signs with pictures and cautionary tales of addicts who had their lives ruined by Skinner boxes.


Exchange rates

If you're looking at nominal GDP figures for the last few years, you'll notice a weird thing. Most of the world seems to be in a slump. Canada's GDP has decreased by 20%, UK's GDP has decreased by 15%, Germany's GDP is down by 10%, Japan, same story. Even fast-growing economies like South Korea, Armenia, Vietnam and India have stalled at their 2014 levels.

There are a few exceptions though. The US is still growing as normal. China and Hong Kong as well, though China's had a slight dip in its growth rate. Not to be outdone, Grenada's growing at a good clip, ditto for other East Caribbean states.

What's going on? Here's a graph that explains things:

The USD has appreciated like crazy against other currencies over the last couple of years. If your currency is the USD or fixed to the USD, everything seems to be as usual: economic growth is steady, things are normal, all is well. Imports from other currency areas are cheaper, but your exports are getting more expensive.

If your currency is tracking the US dollar, but isn't fully fixed to it, you'll see something like China. A slowdown in growth, exports get more expensive, imports are a bit cheaper. Enough to get some downcast news articles going on.

If your currency is floating free against the dollar, the sky is falling. Your GDP has just crashed by 20%, your economy is shrinking like a dried grape, the good times are behind you and all that's left is a grim meathook future where you hunt cockroaches and scavenge carrion left behind by the radioactive mutant wolves. On the plus side, your exports are cheap and popular.
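The mechanics are just arithmetic. Here's a toy calculation with invented numbers: an economy that keeps growing in its own currency still "shrinks" 20% in nominal USD terms if the dollar appreciates enough.

```python
# Illustrative numbers only, not real data for any country.
gdp_local_2014 = 1000.0   # billions, in local currency
gdp_local_2016 = 1040.0   # the economy actually grew ~4% over two years
usd_rate_2014 = 1.00      # local currency units per USD
usd_rate_2016 = 1.30      # the USD appreciated 30% against the local currency

gdp_usd_2014 = gdp_local_2014 / usd_rate_2014
gdp_usd_2016 = gdp_local_2016 / usd_rate_2016
change = gdp_usd_2016 / gdp_usd_2014 - 1.0
print(f"{change:.1%}")  # -20.0% -- a growing economy "crashes" in USD terms
```

Nothing about the underlying economy changed; only the unit of measurement did.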

But, well, this has happened before. Here's the chart from 2000-2002:

And here's what happened over the next two years.

What goes up, must come down. Another example of this was around the 2008 crash. First the USD depreciated 25% against the euro over two years. Then the sub-prime crisis hit, and the USD appreciated 25% over two years.

Exchange rates go up and down. There are usually some reasons for why, but in the long run reversion to mean takes over. If the USD gets too expensive due to policy differences, either the US changes its policy to make the USD cheaper, or other economies change their policies to make their currencies more expensive.

Even if there is some real "the entire US economy figured out how to 1.5x the productivity of a person, therefore the rest of the world is doomed"-thing going on, it's not a lasting effect. The rest of the world is going to do the same thing. If you've got super good 3D printers and robots and AI and automated manufacturing and design to make the 300 million people in the US as productive as 1.4 billion Chinese, what's to prevent the Chinese from using the same tech and becoming as productive as 6 billion non-augmented Chinese? If you force-multiply people, you force-multiply people.


How's the EU economy doing (part 2)

Europe's not doing so great. Eastern Europe is 10x poorer than the US, Bulgaria is even poorer than China by nominal GDP per capita. Even rich countries in Europe are poor compared to the US, with just 70% of the nominal GDP per capita.

Wait, what? Didn't you just write yesterday that everything is fine, don't worry? Welcome to the land of currency exchange rates. The US dollar is valued at levels unseen since the early 2000s, so suddenly everyone else gets thrown into the poorhouse. A few years earlier, the dollar was valued a third lower, so the story was that the EU was an economic juggernaut, eclipsing the US economy. Now the USD/EUR exchange rate is nearly even, and the story is that the US is an unstoppable economic powerhouse, and the EU's economic policy is one of total and utter failure.

What's going on then? Exchange rate fluctuations? One day you're rich, the next you're poor, the day after tomorrow you're rich again.

There are a couple of historical analogies to this situation, both helpfully involving the US as one of the economies. First time around, there was the Soviet Union, a serious economic competitor to the US. The USSR had a population 20% larger than the US, but it was culturally fragmented and started off significantly poorer, so it never really reached the total economic bulk of the US before getting sideswiped into third place by Japan in the late 1980s. Japan was clocking massive annual growth rates at that point and overtook the Soviet Union despite having only half the population. And Japan kept on growing. At the peak of the Japanese bubble, Japan's GDP per capita was higher than that of the US, and there was all sorts of crazy talk of the Japanese buying up the entire United States and becoming the largest economy in the world by 2005. Then the bubble burst and Japan was pushed to third place by the new-born European Monetary Union.

So, is this history repeating itself? Is the EU in a valuation bubble, which burst during the euro crisis, and now the Union is going to disintegrate and disappear from the world stage, much like the USSR and Japan? While the situation is somewhat similar to the Soviet one, as both the EU and the USSR are culturally and politically fragmented economies that started off poorer than the US, there are some crucial differences. First off, the population of the EU is about 50% larger than that of the US, making the EU harder to overtake without veering into bubble territory. Second, a significant chunk of the EU is made up of historically rich countries that should know how to manage an economy. Third, EU GDP per capita has never exceeded that of the US, so it's less likely that we're experiencing a wild valuation bubble bursting. The euro crisis did involve a valuation bubble, but it was confined to the newly-rich, fast-growing Southern Europe and Ireland.

Taken together, these points make it less likely that we're witnessing a USSR / Japan -style situation where the other economy crashes, the US recovers from a slump, and returns to being the biggest economy. In fact, it feels a bit like the Japanese situation, but with the US in Japan's position. You've got a tightly integrated economy with a smaller population overtaking a more loosely-bound economy with a significantly larger population.

At the same time, China has passed both the US and EU to become the largest economy by GDP PPP (and a nominal number one in a few years). At 3x the population size of the EU and a faster-growing economy, China's going to be number one with a comfortable margin for a good while.

With that in mind, you could also think of this as a larger-scale re-enactment of the 1990s. In that scenario, the US would pass the EU in GDP to retake the number two position after China. The resulting shock would cause the EU to disintegrate USSR-style while the US surfs an asset bubble. In a few years, some "let's create a counterweight to China" economic union comes online, overtakes the US and the US asset bubble bursts.

So you've got a weird situation. On one hand, the US and EU are nominally the number 1 and 2 economies in the world, waiting to be passed by China. On the other, they're numbers 3 and 2 by PPP. This is playing out in both empires crumbling in various ways, instead of the more usual "number 3 disintegrates as number 1 and 2 turn on it." Brexit pulling the UK out of the EU, Canada-EU trade deal pulling Canada towards Europe, US talk about breaking up NAFTA, SE Asia pivoting away from the US, EU neighbors dropping plans to join the Union, California fringe movements lobbying for Calexit, and a Eurasian pivot towards China.

How's the EU economy doing?

Things are going well in Europe. The difference in GDP per capita (PPP) between rich and poor EU countries is shrinking. The average EU GDP per capita (41k) is somewhere around South Korea (39k) and Japan (39k).

The high-GDP EU countries are grouped around 45-50,000 PPP dollars per capita, with a few outliers. Starting from the top, Luxembourg (102k) and Ireland (69k) are a bit special: both are tax havens, which inflates their figures significantly by routing a lot of money through the economy with very little sticking around. Up next is the Netherlands (51k), also a bit tax havenish. Then we get to the rest of the list, composed of Sweden (50k), Austria (48k), Germany (48k) and Denmark (47k). Belgium (45k) rounds out the list.

The medium-GDP EU countries are in the 30-45,000 PPP dollar range, and make up the majority of the EU. First, we've (still) got the UK (43k), nearly tied with neighboring France (42k) and far-flung Finland (42k). Malta (38k), Spain (36k) and Italy (36k) form another closely knit group. Rounding out the 30k+ group, we've got Cyprus (34k), Czech Republic (33k), Slovenia (32k), Slovakia (31k), Lithuania (30k) and Estonia (30k).

We can (and probably have to in a few years) extend the middle-income group with some fast-growing countries falling just short of the 30k divider: Portugal (29k), Poland (28k), Hungary (27k), Greece (27k) and Latvia (26k).

The sub-25k category is composed of new member states: newcomer Croatia (22k) leads, and fast-growing Romania (22k) and Bulgaria (20k) finish the list.

To understand this number mess a bit better, let's do a historical comparison. Bulgaria is now at a similar GDP per capita level as the UK was in 1994 (or Sweden in 1991). Poland and Greece are like the UK was in 2001. Czech Republic is like the UK was in 2005. Italy and Spain are like the UK was in 2010. Today's Germany is like the UK might be in 2020. The Netherlands is like the UK in 2026. The EU as a whole is at a similar GDP per capita level as the US was in 2004.

Ten years ago in 2006, the difference in GDP per capita between the Netherlands and Bulgaria was 3.2x. Now the difference has come down to 2.55x, as Bulgaria's economy has caught up with the rest of the Union.
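A back-of-the-envelope calculation from those two snapshots gives the implied convergence pace. This is a sketch assuming the gap keeps shrinking at a steady exponential rate, which is a big assumption:

```python
import math

# The two data points from the text: Netherlands vs Bulgaria GDP per capita.
ratio_2006 = 3.2
ratio_2016 = 2.55
years = 10

# Annual rate at which the gap shrank, assuming steady exponential convergence.
rate = (ratio_2016 / ratio_2006) ** (1 / years) - 1   # roughly -2.2% a year

# Years from 2016 until the ratio would hit 1.0 at that pace.
years_to_parity = math.log(ratio_2016) / -math.log(1 + rate)
print(round(rate * 100, 1), round(years_to_parity))
```

At the 2006-2016 pace, Bulgaria would catch the Netherlands sometime around the late 2050s. Extrapolations are silly, etc., but it puts the 2.55x figure in perspective.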

Compared to the US, the EU economy is oscillating at the usual 68-70% range per capita. The exceptional performance of the US economy is weird as usual: it's hanging out with petrostates, tax havens and finance-driven city-states while having a large population. How does that happen? Is it real? How to replicate it? Who knows.


Brush stroke blending

Brush stroke blending is somewhat of a black art. Which is a shame, since Photoshop's been doing it for more than 15 years now.

Lemme try and explain.

Suppose you've got a pressure-sensitive drawing tablet where the pressure controls the opacity of the brush. If you do a low-pressure stroke, you'd like to have a flat surface of color with roughly the same alpha everywhere (say, alpha 0.5 for half pressure). If you use the usual source-over blend, each of the brush stroke segments would add to the alpha since it's src.a + (1-src.a) * dst.a. As the brush stroke intersects with itself, the alpha builds up all the way to 1. Which is not what you want if you're trying to paint an alpha 0.5 surface.
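To make the buildup concrete, here's the source-over alpha recurrence from above, iterated in Python for a half-pressure stroke crossing itself five times:

```python
# Repeatedly compositing an alpha-0.5 stamp with plain source-over:
# a = src.a + (1 - src.a) * dst.a
# The alpha climbs toward 1.0, which is why a half-pressure stroke
# gets darker wherever it overlaps itself.
a = 0.0
for _ in range(5):
    a = 0.5 + (1.0 - 0.5) * a
print(a)  # 0.96875 -- way past the intended 0.5
```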

For a brush stroke with fixed alpha, this would be no problem, you could just vary the blend opacity of the stroke layer. But if you want to have varying alpha inside the stroke layer, you need a different blend. What I was using for hard-edged brushes (and what my Drawmore Touch prototype is using) is MAX blend for the alpha: max(src.a, dst.a). If an alpha 0.5 brush is stamped over a previous alpha 0.5 brush stroke pixel, the result is going to be alpha 0.5. This prevents the stroke layer from accumulating opacity above the brush opacity and makes it possible to do smooth surfaces using pressure-controlled opacity.

But it's broken for soft brushes. With soft brushes you'd like to paint a smooth surface with a soft edge. If you do alpha max, the brush intersections have cross-shaped blending artifacts and it's very difficult to do a smooth surface (you can try with the Drawmore thing: drag left on the brush 100% control to make it 0% hardness, then try to paint a smooth surface. No can do.) GIMP & Krita probably suffer from this too; Photoshop does something more magical.

What I've got in ShaderPaint is my latest attempt at solving this. It does brush stamping and alpha max mixed with source-over clamped to current stamp alpha.

void strokeBlend(vec4 src, float srcA, vec4 dst, out vec4 color) {
    // Source-over blend for non-premultiplied alpha.
    color.a = src.a + (1.0-src.a)*dst.a;
    color.rgb = src.rgb*src.a + dst.rgb*dst.a*(1.0-src.a);
    color.rgb /= color.a;

    // Saturate color alpha to brush stamp max alpha.
    // For hard brushes this should be roughly the same as max(dst.a, src.a).
    // For soft brushes, the stroke accumulates alpha up to the brush stamp alpha,
    // which results in a flat stroke area with a smooth edge.
    if (color.a > srcA) {
        color.a = max(dst.a, srcA);
    }
}
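As a sanity check on the alpha behavior, here's a Python port of the same blend logic (a per-pixel sketch, not the shader itself), stamping an alpha-0.5 brush over its own stroke repeatedly:

```python
def stroke_blend(src_rgb, src_a, stamp_a, dst_rgb, dst_a):
    """Python port of the strokeBlend logic: source-over for
    non-premultiplied alpha, saturated to the stamp's max alpha."""
    out_a = src_a + (1.0 - src_a) * dst_a
    out_rgb = [s * src_a + d * dst_a * (1.0 - src_a)
               for s, d in zip(src_rgb, dst_rgb)]
    out_rgb = [c / out_a for c in out_rgb]
    # Saturate to the brush stamp's max alpha.
    if out_a > stamp_a:
        out_a = max(dst_a, stamp_a)
    return out_rgb, out_a

# A hard alpha-0.5 red brush crossing its own stroke ten times:
# the stroke layer stays at alpha 0.5 instead of piling up toward 1.0.
rgb, a = [1.0, 0.0, 0.0], 0.0
for _ in range(10):
    rgb, a = stroke_blend([1.0, 0.0, 0.0], 0.5, 0.5, rgb, a)
print(a)  # 0.5
```

For a soft brush, src_a would be modulated by the falloff per pixel while stamp_a stays at the stroke opacity, which is what lets the edge stay smooth while the interior saturates.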

The brush stamping shader is pretty simple. You pass it the last stamp position and the current mouse position (and use them to calculate the last stamp position for the next line segment). The shader then steps along the vector from the last stamp to the mouse position, offsetting by the brush spacing. For each of the brush stamp points, it accumulates brush color with the in-stroke blending function.

strokeDirection = normalize(strokeDirection);
float stampSeparation = brushRadius * 0.5;
float stampCount = floor(strokeLength / stampSeparation + 0.5);

for (float i = 1.0; i < 200.0; i++) { // Max 200 stamps per segment.
    if (i > stampCount) { // Break once we're done with the stamps.
        break;
    }

    // Distance from circular brush.
    float d = length(fragCoord - (lastPoint + strokeDirection*i*stampSeparation)) / brushRadius;

    if (d < 1.0) { // The pixel is inside the brush stamp.
        vec4 src = currentColor;
        // Create a soft border for the brush.
        src.a *= smoothstep(1.0, hardness * max(0.1, 1.0 - (2.0 / brushRadius)), d);
        strokeBlend(src, opacity, fragColor, fragColor); // Blend the brush stamp into the stroke layer.
    }
}

Dunno if there's a nicer solution, given the messiness of the blending function.


Easy 3D on the web

Problem: can't do 3D graphics on the web. Solution: WebGL. New problem: no, I mean, I want to put this 3D model onto a web page. Solution: Sketchfab / Three.js / Babylon.js / ShaderToy. New new problem: I need to download libraries and code stuff or host the files on a SaaS or develop a third brain to model using modulo groups of signed distance fields moving along Lissajous curves.

Could easy 3D be part of the web platform? And how? With images, the solution was simple. The usual image file is a 2D rectangle of static pixels and all you really needed to figure out was how to lay out, size and composite it with regard to the rest of the web page. When you step outside of simple static 2D images, all hell breaks loose.

Animated GIFs have playback timing, which isn't so easy to figure out, so animated GIFs are pretty broken. Videos need playback controls with volume and scrubbing, potential subtitle tracks, and a bunch of codecs on both the video and audio sides, so they were pretty broken as well. (Then YouTube started using HTML5 videos because mobiles don't do Flash. And magically the video issues were fixed~)

SVGs also need to handle input events and have morphed from "umm, put this in an embed and it'll maybe work" to their current (pretty awesome!) state where an SVG can be used as a static image, an animated image, embedded document and an inline document. Something for everyone!

Displaying a 3D model on a web page is a bit like mixing SVG with video elements. You'd like to have controls to turn the model around - it is 3D after all. And you'd like to have the model animate - again, it's 3D and 3D's good for animations. It'd also be nice to have the model be interactive. I mean, we were promised a 3D data matrix by a bunch of fiction writers in the 80s, and that's going to require some serious animated 3D model clicking action (to be fair, they also promised us a thermonuclear war, but let's not go there right now.)

So. 3D model. Web page. <3d src="castle_in_the_sky.3d" onclick="document.querySelector('#gate').classList.add('open')"> Right?

What file format do you standardize on? How do you load in textures and geometry? How do you control the camera? How do you do shaders for the materials? How do you handle user interaction? What are the limits of this 3D model format? How do you make popular 3D packages output this file format (plus animations, plus semantic model for interactivity)? How do you compress the thing and progressively stream it?



I wrote a small painting program on ShaderToy, using the new multipass rendering feature. It's called ShaderPaint and it was a lot of fun to write.

The fun part about writing programs in ShaderToy is the programming model it imposes on you. Think of it as small pixel-size chunks of memory wired together in a directed graph. Each chunk of memory runs a small program that updates its contents. The chunks are wired together so that a chunk can read in the contents of chunks that it's interested in. The read gives you the contents on the last time step, so there's no concurrent crosstalk happening.

This is... rather different. Your usual programming model is all about a single big program, executing on a CPU, reading and writing to a massive pool of memory, modifying anything, anywhere, at any time. To go from that to ShaderToy's memory-centric approach is a big shift in perspective. Instead of thinking in terms of "The program is going to do this and then it's going to do that and then it's going to increment that counter over there.", you start to think like "If the program is running on this memory location, do this. If the program is running on that memory location, do that. If the program is running on the counter's memory location, increment the value." You go from having a single megaprogram to an army of small programs, each assigned to a memory location.
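One way to get a feel for the model is a tiny Python sketch (my analogy, not ShaderToy code): double-buffered cells, each with its own little program that only ever reads the previous time step, so there's no crosstalk within a step.

```python
# Each memory cell has its own program. A program reads the *old* state
# and returns the cell's new value; the whole state updates at once
# (double buffering), just like ShaderToy's multipass buffers.
def step(cells, programs):
    return [program(cells) for program in programs]

# Three cells: a counter, a one-step-delayed copy, and a sum of the two.
programs = [
    lambda old: old[0] + 1,        # "if running on the counter, increment"
    lambda old: old[0],            # trails the counter by one time step
    lambda old: old[0] + old[1],   # sums the previous values of the others
]

cells = [0, 0, 0]
for _ in range(3):
    cells = step(cells, programs)
print(cells)  # [3, 2, 3]
```

Note how cell 1 lags the counter by a step: reads always see last frame's values, which is exactly the "no concurrent crosstalk" property.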

In the figure above, I've sketched the data flow of an output pixel. First, the UI inputs coming from the shader uniforms modify the UI state layer, which is read by the stroke layer to draw a brush stroke on it. The stroke layer is then composited with the draw layer when the brush stroke has ended. The final step is to composite the stroke layer and draw layer onto the output canvas, and draw the UI controls on it, based on the values on the UI state layer.

About Me

Built art installations, web sites, graphics libraries, web browsers, mobile apps, desktop apps, media player themes, many nutty prototypes, much bad code, much bad art.

Have freelanced for Verizon, Google, Mozilla, Warner Bros, Sony Pictures, Yahoo!, Microsoft, Valve Software, TDK Electronics.

Ex-Chrome Developer Relations.