art with code

2017-07-14

Acceleration, 2

Oh, yeah, right. I was working on this.

But I got distracted by adding features to my path tracing Shadertoy. So. I've got pictures if nothing else. Pictures of the same scene to test bidirectional path tracing, bokeh, diffraction, etc. There you go.

Tune in next week for more of .. something?

2017-07-03

Acceleration, 1

Working on Acceleration.

It's not fast going, but it's going bit by bit. I currently have color pickers, auto-keyframing, save, load, and hi(gher)-quality still render creation, on top of the very visually oriented animation editor. There used to be a 4-view for moving things about, but it felt clunky and the shader implementation wasn't great, so it's dormant for now.

Now I've been working on two workstreams: 1) an event-handling dataflow graph and 2) rendering research. The rendering research is going towards, uh, realtime bidirectional path tracing. Which might kill the whole thing due to "I don't know how to make an acceleration structure for triangle models", but at least I'll get cool screenshots out of it.

Event-handling dataflow graph. It's one of those things. You know. You think it'll just be some "on click, set variable Y to 20" thing. And then you think about it and end up with some sort of loosely bound, lazily evaluated array-language execution graph with a query language for selecting objects. And then you start thinking "How would I build shaders with this?", "Could you run this in parallel?", "Should I compile this down into WebAssembly?"

In a word: utmost care must be taken to avoid rabbit holes that lead to endless destruction in the fiery magma caves under the Earth's crust.

Anyway. The event graph nodes. To execute a node, you first evaluate all its inputs. To evaluate an input, you need to find the object referred to by the input and resolve its value. Why? Passing objects by reference feels brittle. Like. Say I've got a node with an input and I want to pass that input to another node (say, I want to modify the scale of the clicked object: OnClick(obj) -> ModifyScale(obj)). If I pass it by reference, the two nodes need to point to the same object, so that when OnClick's input's value changes, ModifyScale's input's value changes as well. And how do you draw it? How do you draw a line from OnClick's input to ModifyScale's input? You need to know that they are the same object, referred to from two different places, and figure out the coordinates for those two places. So either a value carries a reference to its render model, so that you can figure out where it's located, or the value is defined as a loosely bound address that's resolved at runtime: "OnClick.inputs.Object" -> obj = graph.objects["OnClick"]; input = obj.inputs["Object"]; point = obj.renderModel.inputs["Object"].connectorPoint;
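
A minimal TypeScript sketch of that loose-bound resolution (the type names and graph shape here are my assumptions for illustration, not the actual implementation):

interface Point { x: number; y: number; }
interface Connector { connectorPoint: Point; }
interface RenderModel { inputs: { [name: string]: Connector }; }
interface GraphNode {
  inputs: { [name: string]: unknown };
  renderModel: RenderModel;
}
interface Graph { objects: { [name: string]: GraphNode }; }

// Resolve a loose-bound address like "OnClick.inputs.Object" at runtime.
function resolveInput(graph: Graph, addr: string) {
  const [nodeName, , inputName] = addr.split('.'); // "OnClick", "inputs", "Object"
  const obj = graph.objects[nodeName];
  return {
    value: obj.inputs[inputName],
    // Where the connector is drawn, so a connection line can start/end here:
    point: obj.renderModel.inputs[inputName].connectorPoint,
  };
}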

Node {
  renderModel: Model,            // how and where the node is drawn
  func: Function,                // the operation this node performs
  inputs: {string: Value, ...},  // named input values
  outputs: {string: Value, ...}, // named output values
  futures: [Node],               // array because of if-then-else/switch-statements
  futureIndex: int               // which future to follow
}

Maybe this is .. workable?
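
And to make "execute a node, evaluate its inputs first" concrete, a sketch of the evaluation loop under the same assumptions, reusing resolveInput from above (error handling and cycle detection omitted):

// Runnable-ish counterpart of the Node shape above (assumed, not actual).
interface RuntimeNode {
  func: (args: Record<string, unknown>) => Record<string, unknown>;
  inputs: Record<string, string>;  // loose-bound addresses, e.g. "OnClick.inputs.Object"
  outputs: Record<string, unknown>;
  futures: RuntimeNode[];          // branches for if-then-else/switch
  futureIndex: number;             // which future to follow on this run
}

function executeNode(graph: Graph, node: RuntimeNode): void {
  // Evaluate inputs: resolve each loose-bound address to its current value.
  const args: Record<string, unknown> = {};
  for (const [name, addr] of Object.entries(node.inputs)) {
    args[name] = resolveInput(graph, addr).value;
  }
  // Run the node's function and store its outputs.
  node.outputs = node.func(args);
  // Follow the selected future; a branching node sets futureIndex first.
  const next = node.futures[node.futureIndex];
  if (next) executeNode(graph, next);
}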

On the rendering research side of things, I'm considering a few options. SDFs? Raytraced geometry? Simple primitives and/or triangle soup? Path tracing with a procedural environment map as the main light source? In realtime? A progressive renderer for high-quality stills. HTML elements overlaid on top of the 3D scene. Fancy SDF text that scales to millions of letters in realtime? 3D text meshes? Images, video, particles, what? What's the goal here? Build animations in 15 minutes. Make animation timelines that compose. A renderer for making cool-looking interactives with a unique look.

Right. Anyhow, rendering goals: nice motion blur, a shiny CG look, high-quality stills, depth of field, glowy blooms, volumetrics. All of which point towards "just path trace it". That'll impose definite limitations on what kinds of scenes work alright. Maybe that's fine? The underlying timeline + event graph stuff should be generic enough to plug in a Three.js renderer. I wonder about the transformation widgets, 3D animation paths, and other "way easier to rasterize" stuff, though. So, rasterize those on top of the scene. The path tracer can write to the depth buffer with its primary rays too. Hybrid renderer!
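
For the depth-buffer half of that hybrid idea, the key piece is mapping a primary ray's hit to the window-space depth a rasterizer would produce, so widgets drawn on top depth-test correctly. A sketch, assuming a standard OpenGL-style perspective projection (the function is mine, not from the project):

// Map a primary hit's eye-space depth (distance along the camera's
// forward axis, not the ray length) to window depth in [0, 1],
// matching an OpenGL-style perspective projection with the given
// near/far planes and the default depth range.
function rayHitToWindowDepth(zEye: number, near: number, far: number): number {
  // NDC z in [-1, 1] for a point at eye-space depth zEye.
  const zNdc = (far + near) / (far - near) - (2 * far * near) / ((far - near) * zEye);
  // Default glDepthRange: [-1, 1] NDC -> [0, 1] window depth.
  return 0.5 * zNdc + 0.5;
}

Writing that value to gl_FragDepth (WebGL2) from the path tracer's primary hit should let rasterized widget geometry depth-test against the traced scene.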

It's complex. Do it piece by piece. Make it simpler until it's possible.

Part 2 on the 10th of July. Goal: get the event graph prototype working.
