Software is Art

I watched Jack Dorsey’s “Golden Gate Speech” today. I was seriously impressed with him, and in particular his view of software products as the output of designers.

Software is Art. I’m not just talking about designing pretty web pages either. I’m talking about code, whether it’s MC68HC11 Assembly, Dylan, Java, or even C#. I’ve struggled my entire life with the frustration of being traditionally uncreative. While I can draw a little, I’m pretty poor at just about every other traditionally creative outlet. The cards are kind of stacked against me. I’m red-green color blind. I’m tone deaf. I can’t seem to keep a beat. I do have good balance on two wheels, but when’s the last time you heard someone say, “that was an inspired way to get down that singletrack today!”

I have a huge appreciation for human creativity and a tremendous amount of respect for artists. I don’t always like their art, but I’m always inspired by their bravery. Actually, envious would describe it better. I’m envious of their ability to think beyond what they observe and to find beauty beneath the obvious. Moreover, I’m envious of their ability to deeply understand that beauty and to materialize it through their pieces in a way that helps others understand it as well.

Macs Make Anyone Feel Creative

I switched to Macs about six years ago. I had a really talented and inspirational designer friend, Lea, who “showed me the way”. I was the typical PC guy who loved to tinker and solve problems. I didn’t realize that I wasn’t really using my computers as much as I was constantly fixing them. I finally decided to rid myself of the Microsoft plague at home; a boycott prompted by the premature release of the Xbox 360. At that point, I had not decided if I was going to go the Linux or the OS X route.

Lea and I discussed the merits of Macs several times. She couldn’t really break through to me, largely because of the language barrier. We were both geeks, but very different kinds of geeks. She ended the discussion one day with a seemingly absurd yet wonderfully insightful comment.

“Dave, sometimes you just need to be surrounded by beautiful things.” – Lea

Using a Mac made me feel more traditionally creative, but in all actuality I’m still not a very good photographer or videographer. However, my Mac did open me up to the wonderful world of design. Apple products are both wonderful examples of design as well as superconductors for creativity.

Software Design as Art

Early on in my career, I started to use really abstract terms in code reviews. I sounded more like an art critic to my peers than a software engineer. I would typically describe code as elegant, balanced, structured, aesthetically pleasing, and inspired. These intangible descriptions usually fell on deaf ears, and often led me to think I was another wanker with a crappy computer science vocabulary. What I didn’t realize was that I was solidifying my early intuition that software design is a highly creative pursuit, especially if you pursue software design with as much passion as I did.

Computer Science is an amazing field since it’s largely a made up affair. The realm in which our minds play is rarely observable (blinky LEDs maybe) and is always built upon previously invented constructs that are themselves recondite. They’re always inspired by natural occurrences, but then they’re distilled in order to be made useful. I mean, has anyone ever seen a B-Tree in nature? Fractals maybe, but not a fully-balanced B-Tree. And why would dining philosophers need so many forks to eat? Besides, who shares forks anyhow?

Why Software Design Is Important

I mentored an intern five years ago. I really liked the guy, but I was worried for him. Our company was in the middle of a heavy period of outsourcing. All of the typical entry level engineering jobs like testing, bug fixing, and even implementation were being sent overseas. What kind of work was I going to find for this guy if he accepted a position with us?

I spent a lot of time thinking about the perils of outsourcing. It was primarily a result of our labor-intensive waterfall software development process. We had too few efficient tools, and so we spent lots of engineering hours writing and rewriting documents in Microsoft Word and even Framemaker. We couldn’t get anything done, but our army of overnight elves was making quick work of everything, albeit with a reduction in quality. I was even worrying about my own job to some degree.

I quickly realized though, that my domestic colleagues and I were still vital. We were far more creative and our innovation was unmatched. America’s frontier mentality and self-reliance makes us extremely good entrepreneurs and very good software designers. This intern was no exception. I took him out for coffee one day, away from the cubicles and fluorescent lights. We sat down in comfy chairs with other presumptuously creative types around us and discussed the merits of software design. I gave him case after case of poorly designed code leading to bugs, misunderstandings, and maintenance nightmares. We talked about software design working hand-in-hand with wonderful UI design.

Hopefully he took it to heart. I hope he’s not hitting a roadblock in his current job and thinking of jumping off for an MBA with the hopes of landing a biz-dev job like so many of my other friends.

Best Buy iFails with Customers

I was still on the fence about replacing my iPad with an iPad 2. I was honestly holding out for an Android tablet with a Qualcomm MSM8660 chip. When the iPad 2 was announced, I realized that the competition was once again set back by 9-12 months. At 4:15 pm I headed to the Best Buy in the Brier Creek area of North Raleigh, NC.

Best Buy Fail

This particular Best Buy is somewhat tucked away, so I figured the line wouldn’t be horrible. When I walked in, there was a group of people off to the right and then a line through the center of the store. I figured the line was in two parts due to its length. I joined the line at about 15 deep, with another 20 or so in the first group ahead of us.

I guess I was mildly embarrassed to be in line for an iPad 2, so I didn’t ask any blue-shirts for the status. Neither did anyone else. They just filed in behind me, eventually growing in total number to around 60-70.

Blue-shirts came through the line exactly twice. The first time was to offer everyone a Best Buy credit card. The second time was at 5:15 pm. The forward line had been milling about, but our line had not moved. The blue-shirt was letting us know that he “did not know how many iPads they had in stock”, but that we could reserve one for $100 in the event that they ran out.

I declined the offer, but before he could walk away, I asked him what was taking so long. He said their procedure had some kinks, but things were moving now. They had “tickets” to pass out and… “Tickets?” I asked. We were not offered tickets. We had no idea what was going on. “So if we didn’t get a ticket we are likely not getting an iPad?” I asked. He regretfully confirmed.

Apple Wouldn’t Pull That

We walked out, shocked that Best Buy would keep us there in line without offering an explanation, only a credit card application. This was a dramatically different experience from that of my friends in the line at the Apple Store at South Point. We were exchanging tweets the whole time. When I told them that we were screwed, they took my order and picked one up for me. So yes, I waited in line for an iPad 2; I just received it later that night at their place. Thank you, Jessica and Roger.

Murdoch Might Just Break Into My Daily Routine

I’m a typical Gen X geek when it comes to news consumption. I get my news through online outlets only. Easily 3/4 of that news is through what I refer to as semi-pro blogs and the rest is through sites of traditional media companies. I don’t read local newspapers, at all. I don’t watch the news on television. I even eschew local radio for satellite radio (save for our local NPR affiliate on occasion). I subscribe to one magazine (Roadracing World)…and I’m riddled with guilt over the paper it’s printed on.

Longing for the Old Days?

However, like many other Gen X’ers, I still fondly remember getting the comic section from my parents’ Sunday paper and hiding away with it. I’ve spent many mornings sharing coffee and a doughnut with my grandmother over the morning paper. I’m pretty sure she still watches the local news too, and then promptly switches back to Fox News. :(

I have a morning routine myself; it just involves flying through my myriad RSS feeds and trying to consume as much as I can. I feel I have to stay on top of them to make sure I’m staying relevant amongst my geek cohorts. And even though these articles are merely byte-sized, I can’t even seem to retain them. I’m constantly half-quoting articles that themselves only half cite their sources. How many times have you read an article that is effectively a layperson’s weak attempt at drawing a popular conclusion from a scientific study that the scientists themselves refuse to reach conclusions about? It’s sloppy, pseudo-journalism.

Micro-Attention Spans

The Internet and mobile computing have made us more plugged-in than ever. This leads to a barrage of interruptions that has wrecked our attention spans; though you might argue that MTV started it. Or was it the remote control that allowed us to channel surf during commercials?

The constant connection and micro-attention spans ultimately mean one thing to publishers: There is no time that’s more important than this very instant. They have to deliver their content quickly, and make that content just as quickly consumable.

Semi-pro blogs have mastered this. They publish numerous articles every hour of every day. They are short and often devoid of much human interpretation. A screenshot and a short quip is often all that’s needed…oh and of course there’s the requisite “[via JoesBlog via TechMunchismo via SomeGuysAss]”.

I’m Not Saying Blogs Are Evil

In defense of semi-pro blogs, the larger players are often staffed by journalists with traditional publishing experience. This has led to a great improvement in their practices and credibility. These blogs provide an undeniably great service too. Their light-weight style of journalism is efficient and somewhat reckless, but they’re breaking stories and scooping the old guard. I think it’s pretty amazing every time I see a traditional news outlet reporting on stories that broke in blogs, and citing the blogs.

While I am being critical of semi-pro blogs, I’m not trying to paint them as some sort of scourge on civilization. What I am saying is that I’m looking for deeper, slower, and slightly more responsible news reporting. I don’t need to stay up-to-the-minute. Sometimes, I want a few more details, maybe some backstory. And I don’t mean I want to search myself for all of the past blog posts on a topic. This is where I’m starting to miss a daily newspaper. They publish daily, and spend days, even weeks on articles. They go out and hit the streets, not just the tubes.

Enter The Daily

The Daily could be just the crutch that helps keep traditional, quality journalism alive. I read RSS feeds on my iPad every morning. I’ve tried a few news apps, but none held my attention. So far, The Daily has good, deep writing, while still keeping things brief enough that an issue holds your attention. The longer articles hold your attention too, because they mix in enough distractions such as slideshows, in-page video, animated panoramic photos, and audio clips.

Their mixed-media approach works well to provide those breaks, but it also enriches the experience. For instance, Friday’s (Feb. 4, 2011) edition had an article on Egypt that talked about the surprising organization of the protesters. It talked about how they maintained a central office, patrolled looking for Mubarak supporters (“thugs”), and were seemingly humanely interrogating them to gain intelligence on their movements. They even had a doctor on site to take care of their detainees. I’m sure that their interrogation practices are far from simple Q&A, but I was surprised at how well the protesters were focused on the public relations aspect of their efforts. They’re quite careful to ensure that they’re viewed as the good guys, not falling back on the harsh tactics that the secret police have reportedly used to subjugate such unrest previously.

While that level of reporting was certainly deep, the thing that set this article apart was the embedded audio commentary from the reporter himself. The tone of his voice was enough to instantly discern the tone of his article and to remove any chance of misinterpreting it. Furthermore, it provided an insight into the emotion of the situation and the humanity of the protesters that was then reinforced through details in the article, such as their practice of protecting the detainees in the central office by using a human chain to shield against less civilized elements of the protests.

But Will I Subscribe?

The first two weeks are free. I think I’ll subscribe for another month or two after that, but the jury is still out on whether I’ll make the $40 yearly commitment. As much as I’m pulling for professional newspaper journalism to survive and morph into something more current, I’m a little worried about getting so much of my news from one source. I mean, Rupert’s the same guy that owns Fox News after all. :)

What I really hope is that The Daily’s format is duplicated by other news outlets. Actually, I’d love to have a single, standards-based newspaper reader app which can download issues from a variety of papers.

Office Charging Stations: Breaking Ground

I’ve mentioned before that the facilities manager at my office is a true-blue believer in EVs. He’s had a long career working with industrial electric motors, and understands them to their very core. He’s really supported me and the Enertia from day one. He’s even put up with its charging fans blowing right outside of his office inside our shipping and receiving area. He’s dead set on getting a Nissan Leaf too, because it’s got the range to suit his commuting needs.

They’re Here

He’s been giving me progress reports on the company’s initiative to install Coulomb Charging Stations at work. There have been some delays with the contractors, but I’m happy to say that they’ve broken ground this week. From the looks of it, we should have five posts serving ten spots with Level 1 and Level 2 charging.

Progress for Day 1

They made a little more progress on day two. There are trenches behind the ledges and some electrical utility boxes installed. The boxes are kind of ugly, so I hope they do something to disguise them. The last thing that I want to hear is people condemning them because they’re ugly. As it is, the location is already taking up exterior spaces where the car worshiping d-bags double park their cars like it’s some sort of Grease-era car show.

Progress from Day 2

I can’t wait to see them in operation. From what I’ve been told, they’ll be open to the public too. So anyone with a ChargePass Card (like me) can use them. I’m not sure if that policy will be permanent, but I can’t imagine that there will be too many non-employees using them. When they go online, hopefully they’ll show up on Coulomb’s Awesome Webapp.

Of course, when they do go online, it means the end of my indoor parking. Oh well. :)

COMP 770 Program 4: 3D Rasterizer

Download the source (c++ w/ XCode project): Program4.tar.gz
Download the binary (Intel Mac OS X): rasterizer.gz

I know this is a naive thing to type, but after finishing this program I kind of feel like I just implemented OpenGL minus shaders. :) My approach was to get the scene parsing implemented first and then to get the GL Preview feature working. This allowed me to very quickly set up my light and camera and then to see my goal. Then I started in on my raster pipeline.

Features

Here’s a quick list of the features for my Raster Pipeline:

  • Moveable camera
  • Moveable light
  • Orthographic Projection
  • Perspective Projection
  • Per-Vertex Color
  • Wireframe
  • Flat Shading
  • Gouraud Shading
  • Phong Shading
  • Full Phong lighting with light intensity falloff
  • Configurable (on/off) backface culling
  • Configurable (on/off) cheap clipping
  • Efficient span-based triangle fill
  • z-buffer with epsilon for z-fighting resolution
  • Timing instrumentation
  • PNG output

And here are the features supported in the OpenGL Preview mode:

  • Moveable camera
  • Moveable light
  • Orthographic Projection
  • Perspective Projection
  • Per-Vertex Color
  • Wireframe
  • Flat Shading
  • Smooth (Gouraud?) Shading

Shading

After doing two raytracing assignments, I really doubted that rasterizing would hold a candle to raytracing in terms of aesthetics. I was stunned when I saw how good the OpenGL preview looked, so I really wanted to dive into shading. I ended up implementing wireframes, flat shading, Gouraud shading and Phong shading.

Challenges

I then started in on my own raster pipeline and stumbled through a myriad of problems with my transformations. In particular, the projection transformations were troublesome. I tried to implement them in a way similar to the class notes, but I wasn’t getting the results that I was looking for…or any results. I kept segfaulting. I turned to the text, and found that it did a great job explaining both orthographic and perspective projection transformations.

Clamping and normals were also a problem for me. Interestingly enough, once you fix one clamping or normal bug, you tend to clamp and normalize everything. The clamping problem was worst with my color calculations. Specular highlights produce some very illuminated pixels. I ended up bleeding past 1.0 on several of the channels, which caused several rainbow effects. Additionally, when I was calculating barycentric coordinates, floating point errors led to scenarios where the coordinates were being returned beyond [0.0,1.0]. Normally this would mean that the point was off of the triangle, but I was attempting to calculate for pixels that were known to be on the triangle.
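
For the curious, the fixes were mundane. Here’s a minimal sketch of the defensive clamping I ended up with (Color and the helper names here are illustrative stand-ins, not my actual code):

#include <algorithm>

struct Color { float r, g, b; };

// Clamp a value to [0,1]. Specular highlights can push a color channel
// past 1.0, and clamping before the framebuffer write avoids the
// rainbow wrap-around effects.
inline float clamp01(float v) {
    return std::min(std::max(v, 0.0f), 1.0f);
}

inline Color clampColor(Color c) {
    Color out = { clamp01(c.r), clamp01(c.g), clamp01(c.b) };
    return out;
}

// Floating point error can return barycentric coordinates slightly
// outside [0,1] for pixels known to be inside the triangle; clamp them
// before interpolating colors or normals.
inline float clampBarycentric(float w) {
    return clamp01(w);
}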

Normals were by far the most difficult problem, or at least the toughest one I had to solve. My specular highlights were causing a grid pattern along the edges of triangles. I fought it for two days. My problem resulted from normals interpolated between two vertices on the edges. They were not unit length, and so they skewed the effect of the specular highlights when I calculated the dot product with the half vector. Normalizing them fixed the problem.
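
The fix is a one-liner once you see it. A sketch, with illustrative Vec3 helpers standing in for my actual math code:

#include <cmath>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

inline Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    Vec3 out = { v.x / len, v.y / len, v.z / len };
    return out;
}

// A barycentric blend of unit vertex normals is not unit length in
// general, which skews the (n . h) term in the specular calculation.
// Re-normalizing after interpolation removed the grid pattern.
inline Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                              float w0, float w1, float w2) {
    Vec3 n = { w0 * n0.x + w1 * n1.x + w2 * n2.x,
               w0 * n0.y + w1 * n1.y + w2 * n2.y,
               w0 * n0.z + w1 * n1.z + w2 * n2.z };
    return normalize(n);
}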

Optimizations

Backface culling was a really straightforward optimization to make. To implement it, I added a check right before the viewing and projection transformations. The check involved computing the dot product of each of the triangle’s normals with the viewing vector. If none of those normals were visible, then the entire triangle was back-facing and could be culled. It yielded a significant speedup on Andrew’s dragon model, as the timings below and the sketch after them show.

rasterizer --projection persp -0.1 0.1 -0.0 0.2 3.0 7.0 --camera 0 0 5 0 1 0 --light 0.1 0.1 0.1 --nocull scenes/dragon.txt
Render scene: 1287.702000 ms

versus

rasterizer --projection persp -0.1 0.1 -0.0 0.2 3.0 7.0 --camera 0 0 5 0 1 0 --light 0.1 0.1 0.1 scenes/dragon.txt
Render scene: 708.403000 ms
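
The test itself is tiny. Here’s a sketch of the check described above, using the same illustrative Vec3/dot() helpers as the earlier snippets and assuming toCamera points from the surface toward the eye:

// If no vertex normal faces the camera, the whole triangle is
// back-facing and can be culled before the viewing and projection
// transformations are applied.
bool isBackFacing(const Vec3 n[3], const Vec3& toCamera) {
    return dot(n[0], toCamera) <= 0.0f &&
           dot(n[1], toCamera) <= 0.0f &&
           dot(n[2], toCamera) <= 0.0f;
}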

I really wanted to implement full clipping, but I found out that “cheap clipping” is pretty effective by itself. The first step is to add a check that a pixel is in the viewport before calculating the color for it. Calculating color is pretty expensive, so this eliminated a lot of cost. The next step was to use Cohen-Sutherland clipping to determine when a line or triangle was completely outside of the viewport. I didn’t do a thorough test either. I did the simple bitwise AND operation on the bit codes for each point and rejected the triangle if the result was not zero. This means that some of the corner cases were missed.

By cheating like this, I was able to avoid a lot of triangles without having to implement the clipping of individual triangles into separate polygons. This meant that I was still rasterizing parts of triangles that were outside of the viewport, but at least with my check above I wasn’t calculating the color for them. The results were rather satisfactory, especially compared to the cost of implementing full clipping.

rasterizer --camera 0 0 5 0 1 0 --projection persp -0.1 0.1 -0.1 0.1 3.0 7.0 //zoom_in --noclip scenes/beethoven.txt
Render scene: 414.369000 ms

was reduced to

rasterizer --camera 0 0 5 0 1 0 --projection persp -0.1 0.1 -0.1 0.1 3.0 7.0 //zoom_in --output img/beethven_clipped.png scenes/beethoven.txt
Render scene: 310.444000 ms
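
For reference, the trivial reject boils down to a few bitwise operations. A sketch (the outcode constants and helper are illustrative, not my exact code):

enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

// Cohen-Sutherland outcode: which viewport edges is this point outside of?
unsigned outcode(float x, float y,
                 float xmin, float ymin, float xmax, float ymax) {
    unsigned code = 0;
    if (x < xmin) code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin) code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

// If the bitwise AND of all three vertex outcodes is non-zero, every
// vertex is outside the same viewport edge and the triangle is rejected
// outright. Corner cases slip through, as noted above, but their
// off-screen pixels are skipped by the in-viewport check.
bool triviallyOutside(unsigned c0, unsigned c1, unsigned c2) {
    return (c0 & c1 & c2) != 0;
}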

Although a span-based triangle fill was pointed out as an opportunity for extra credit, it was really the most straightforward way to implement this for triangles, since they’re convex. At one point in my career, I did a lot of 2D raster graphics work for J2ME cellphones. Most of our displays were optimized to receive data in rows, so I attacked this problem the same way. I found the topmost vertex. I then started drawing each leg using the midpoint line algorithm. Each time I placed a pixel which changed y, I added it to an edge list. When I reached the end of a leg, I switched to the third segment…unless that leg was already horizontal. I then went back and drew horizontal lines from one edge list to the other. Since this was the only triangle fill algorithm I used, I didn’t get any timing numbers for comparison.
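
A compressed sketch of the idea (my real version walks the legs with the midpoint line algorithm; here plain interpolation stands in, and putPixel() is an assumed framebuffer call):

#include <vector>
#include <climits>
#include <cstdlib>
#include <algorithm>

void putPixel(int x, int y);  // assumed framebuffer write

// Record the leftmost/rightmost x reached on each scanline of an edge.
static void walkEdge(int ax, int ay, int bx, int by, int yMin,
                     std::vector<int>& left, std::vector<int>& right) {
    int steps = std::max(std::abs(bx - ax), std::abs(by - ay));
    for (int i = 0; i <= steps; ++i) {
        int x = ax + (steps ? (bx - ax) * i / steps : 0);
        int y = ay + (steps ? (by - ay) * i / steps : 0);
        left[y - yMin] = std::min(left[y - yMin], x);
        right[y - yMin] = std::max(right[y - yMin], x);
    }
}

void fillTriangle(int x0, int y0, int x1, int y1, int x2, int y2) {
    int yMin = std::min(y0, std::min(y1, y2));
    int yMax = std::max(y0, std::max(y1, y2));
    std::vector<int> left(yMax - yMin + 1, INT_MAX);   // per-row edge lists
    std::vector<int> right(yMax - yMin + 1, INT_MIN);

    walkEdge(x0, y0, x1, y1, yMin, left, right);
    walkEdge(x1, y1, x2, y2, yMin, left, right);
    walkEdge(x2, y2, x0, y0, yMin, left, right);

    // Fill horizontal spans between the recorded edges, row by row.
    for (int y = yMin; y <= yMax; ++y)
        for (int x = left[y - yMin]; x <= right[y - yMin]; ++x)
            putPixel(x, y);
}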

The use of a Z-Buffer to determine the rendering order is so genius in its simplicity that I didn’t even consider any other ways to implement it. So this is another scenario where I didn’t try an alternate method for comparison. However, I was able to throw in a small improvement that resolved the z-fighting example that I threw at it. When determining whether to paint over another pixel, I checked that the new pixel was closer to the camera by a margin, epsilon. I set epsilon to 0.000001. It resolved my test model without causing any visible changes to the other models. My testing certainly wasn’t extensive, and so I’m sure that it would fail on scenarios where a camera with a very narrow FOV caused massive magnification. Perhaps in that situation, I could use a dynamic epsilon that is calculated based on the camera’s FOV.
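
The epsilon check itself amounts to one comparison (the names here are illustrative):

const float kZEpsilon = 0.000001f;

// Only paint over an existing pixel when the new fragment is closer by
// more than epsilon; near-coplanar (z-fighting) fragments then keep the
// first writer instead of flickering on floating point noise.
bool depthTestPasses(float newZ, float storedZ) {
    return newZ < storedZ - kZEpsilon;
}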

Remaining Images

Here are the remaining renderings of the models provided, including Andrew’s dragon model from the Stanford 3D Scan Repository.

Mental Focus: An Argument for Modal UIs

For an Operating System / Window Manager Engineer, focus usually means the application in the foreground. The application with focus is receiving keyboard and mouse events. On some systems, only the application with focus can make sounds. Furthermore, the applications without focus may be running at a lower priority, thus receiving less compute time.

Modal / Full Screen UI

In the mobile space, this question of focus is rather straightforward. Displays are so small that the window manager will display the application with focus on the entire screen. Although I think it’s a bit of a misnomer, more and more people are referring to such a scheme as a modal UI. These modal, or full-screen, UIs have been getting a lot of news lately. Steve Jobs announced that full-screen apps are going to play a more serious role in Mac OS X Lion.

Apple Aperture for Mac in Full Screen

I was a little apprehensive, fearing that he was going to dumb down my Mac desktop user experience. I gained more confidence in the idea when I thought about all of the [semi]-pro apps that I use on my Mac that already had full screen modes. I always figured that those apps were full screen to give creative professionals the maximum amount of real estate. Now, I actually think it has more to do with minimizing distraction and allowing for better mental focus.

Full Screen Equals Full Mental Focus

This point hit me late last night. I bought an iPad yesterday. I bought it primarily for leisure computing. I found that my MacBook Pro was constantly in the middle of 2-3 school/geek projects. I tend to just leave things open when I’m in the middle of them. I feel it encourages me to pick back up more easily. What it actually does is stress me out and distract me. I couldn’t even enjoy a cup of coffee and read RSS feeds without wanting to touch up some OpenCL. My idea for the iPad was to get away from a desk and relax a little. I could ignore all of those open projects and relax for a few minutes.

Papers for iPad

That lasted about an hour last night before I found myself downloading class notes and sitting at the kitchen table with a beer for some late night studying. It was really effective too. When you’re working in a modal UI, all you can do is what’s in focus. And if you turn off status updates, you won’t even be bothered by incoming emails, tweets, calendar notifications, etc. I was easily able to stay on task, only briefly popping over to another browser window to look things up.

Apple Might Be On To Something

I’m definitely going to dwell on this some more and make some personal observations about my usage, but I think Steve might be on to something. We’ve long known that multi-tasking hits a point of diminishing returns after two or three tasks. I personally struggle with the constant context switching. Having a modal UI might help me focus on the task at hand, whether it’s studying, coding, or relaxing.

BTW, Google Reader Play is an absolute joy on the iPad. Too bad it doesn’t use my feeds. :(

COMP 770 Program 3: Ray Tracing Part 2

Download Project Source/Scenes/Mac-Binaries: Program3.tar.gz

Overview

For the first part of the Ray Tracing project, I added quite a few extra features. One of those extra features was the recursive calculation of Specular Reflection, Dielectric Reflection, and Dielectric Transmission. I considered myself pretty lucky, considering that this feature is one of the two features that we were required to add for this second part. However, I wasn’t quite in the clear. Let’s just say that I had a looser understanding of ray tracing than I thought.

New Features

Many of the features of my Ray Tracer were implemented in the first part. That list can be seen in the first project’s post. The following are new features:

KD Tree

  • Mid-Axis Partitioning
  • SAH Partitioning
  • Cost-Based Termination of leaf nodes for both
  • Recursive KD Tree Traversal
  • KD Tree Printing in Debug Builds

Miscellaneous

  • Fixed Ray Tracing bugs
  • Dramatically Improved Ray Tracing Performance
  • Interactive Ray Shooting in Debug Builds

Default Configuration

When you launch my Ray Tracer, the following are the defaults that are used unless you specify otherwise.

  • Dimensions: 500×500
  • Sampling: 4 x 4 Adaptive Jittered Supersampling
  • Ray Casting: Blinn-Phong Lighting with an Ambient Factor
  • Ray Tracing: Specular Reflections, Dielectric Reflections, and Dielectric Transmission supported through recursion terminated based on a contribution threshold
  • Multi-Processing: Uses multiple CPU cores through OpenMP
  • KD Tree: Built using SAH cost analysis to determine best split and when to terminate branches

Ray Tracing

The optics involved with refraction are not very intuitive to me. Initially, I thought that an image seen through a glass sphere would be reduced, but instead it’s actually magnified. I had a rather serious bug when calculating refraction. I was refracting my rays with a dot product of the ray and surface normal with the wrong sign. Correcting that made a tremendous difference.
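
For anyone else fighting signs, here’s a sketch of the refraction direction calculation as I understand it, reusing the illustrative Vec3/dot() helpers from the snippets above. d is the unit incident direction, n is the unit normal facing the incoming ray (so dot(d, n) < 0), and eta is the ratio of the indices of refraction n1/n2:

#include <cmath>

// Returns false on total internal reflection.
bool refract(const Vec3& d, const Vec3& n, float eta, Vec3& t) {
    float cosI = -dot(d, n);              // the sign I originally got wrong
    float sin2T = eta * eta * (1.0f - cosI * cosI);
    if (sin2T > 1.0f)
        return false;                     // total internal reflection
    float cosT = std::sqrt(1.0f - sin2T);
    float k = eta * cosI - cosT;          // t = eta*d + (eta*cosI - cosT)*n
    Vec3 out = { eta * d.x + k * n.x,
                 eta * d.y + k * n.y,
                 eta * d.z + k * n.z };
    t = out;
    return true;
}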

Once I started rendering the scene with 16 spheres, I started to realize that I had some serious additive errors in the calculation of the transmissive component. There were two reasons for this. Firstly, I was calculating the reflective and transmissive components inside of the loop that calculated the Phong shading for each light source. Fixing this corrected several of the bright spots, and it led to a significant speed improvement.

Secondly, I was calculating the Phong shading and reflective components for illumination points inside of a sphere. This scenario arose whenever I was calculating the color for a refracted ray transmitted through a sphere. The refracted ray would intersect the other side of the sphere on the inside, and at that point I should have only been calculating the transmissive component. Making this change also led to a dramatic speedup.
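
Structurally, the first fix boils down to the following shape. Everything here is a stand-in for my actual interfaces; the point is just what lives inside versus outside the light loop:

// Per-light Phong terms stay inside the loop; the recursive reflective
// and transmissive contributions are view-dependent, not light-dependent,
// so they are computed exactly once per hit, outside the loop.
Color shade(const Hit& hit, const Ray& ray, int depth) {
    Color c = ambientTerm(hit);
    for (size_t i = 0; i < lights.size(); ++i)
        c = addColor(c, phongTerm(hit, lights[i], ray));      // per-light only

    c = addColor(c, traceReflection(hit, ray, depth + 1));    // once per hit
    c = addColor(c, traceTransmission(hit, ray, depth + 1));  // once per hit
    return c;
}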

KD Tree

My KD Tree was actually a deceleration structure for much of the project. I had several issues when creating the tree, as well as the traversal. I started by creating a KD Tree that simply divided the space at the midpoint of the split axis.

The first issue that I had was that I was creating a new bounding box around the primitives in each of the newly split subspaces. This was very problematic, because it created overlapping spaces whenever a single primitive was shared between both spaces. Once I drew a clear delineation between space partitioning and bounding volume hierarchies, I was able to clean up my KD Tree and I saw fewer artifacts.

My next hurdle was understanding how to traverse the tree correctly and to fix the remaining artifacts. Initially, while I was trying to learn how KD Trees work, I was only considering an algorithm where the ray always intersected the outermost bounding box from the outside. This was fundamentally flawed for several reasons. First off, the ground sphere made the bounding box really large, which could put the ray origin inside of it, though that didn’t create too many artifacts for me. The second major instance of rays originating inside of the outermost bounding box was the rays used when calculating shadows, reflections, and transmissions. Through some digging, I found a very helpful post that illustrated the different cases that have to be handled when traversing a KD Tree.

His diagrams were very helpful. They were so implanted in my brain that I ended up adopting his algorithm completely from the code that he posted. He still missed two cases that I initially had some trouble finding. I ended up implementing a special, interactive debug feature that let me use the mouse to point at a pixel in the viewing window and render that single view ray. I would use it by rendering the entire scene, setting a breakpoint, then clicking on the pixel that I needed to test. This was invaluable in finding the remaining artifacts as well as some ray tracing issues.

At this point, my KD Tree still proved to be more of a deceleration structure. I fired up a profiler, and found a number of slowdowns related to mallocs when operating on C++ vectors. I reduced my use of vectors and passed them by reference throughout the KD Tree traversal. This brought significant gains, but my KD Tree was still slower.

The next step was to implement a smarter space partitioning scheme based on comparing the Surface Area Heuristic of each new subspace. This cost calculation was also critical to determining when to make a leaf node. I had a few bugs that led to excessive node duplication. Once I sorted those out, I finally got the gains I was hoping for. My resulting tree for the more complicated scene was 11 nodes deep, and contained several empty leaf nodes.
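
The heuristic is simple to state. A sketch of the cost comparison (the cost weights are illustrative; the surface areas come from the axis-aligned boxes of the parent node and the two candidate children):

const float kTraverseCost  = 1.0f;   // illustrative weights
const float kIntersectCost = 4.0f;

// SAH: the expected cost of a split is the traversal cost plus each
// child's intersection work, weighted by the probability (surface area
// ratio) that a ray passing through the parent also enters that child.
float sahCost(float saParent, float saLeft, float saRight,
              int numLeft, int numRight) {
    return kTraverseCost
         + (saLeft / saParent) * numLeft * kIntersectCost
         + (saRight / saParent) * numRight * kIntersectCost;
}

// Terminate into a leaf when no candidate split beats the cost of just
// intersecting everything in the current node.
bool shouldMakeLeaf(float bestSplitCost, int numPrims) {
    return bestSplitCost >= numPrims * kIntersectCost;
}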

16 Sphere Scene Without a KD Tree

Size: 500x500
Total Primitive Intersection Checks: 223084940
Total Node Traversals: 0
Total Render: 29.763917 seconds

16 Sphere Scene With KD Tree

Size: 500x500
Build KD Tree: 0.000580 seconds
Total Primitive Intersection Checks: 55731412
Total Node Traversals: 161224507
Total Render: 24.878577 seconds

I reduced the number of primitive intersection checks from roughly 223 million to 56 million, at the cost of about 161 million node traversals. The time savings wasn’t as dramatic as I had hoped, likely because my algorithm still used functional recursion instead of maintaining a smaller stack inside of a loop. For my final project, I’m likely going to go stackless altogether since I’ll be using the GPU too.

However, I really started to notice gains when I upped the complexity of the scene. I created a model of 140 reflective spheres arranged into a tightly-packed pyramid.

140 Sphere Scene Without KD Tree

Size: 500x500
Total Primitive Intersection Checks: 1016735174
Total Node Traversals: 0
Total Render: 103.413575 seconds

140 Sphere Scene With KD Tree

Size: 500x500
Build KD Tree: 0.091852 seconds
Total Primitive Intersection Checks: 230486448
Total Node Traversals: 161218665
Total Render: 38.800175 seconds

Sample Images

First 2500+ Miles on the Enertia

Finally turned 2500 miles on the Enertia.

I haven’t posted about my Enertia for a while. At first, I feared that the novelty had worn off. I really haven’t been riding it much…until this last weekend. And with that fresh seat time, my enthusiasm for the Enertia picked right back up where it left off. Coincidentally, I passed the 2500 mile mark too.

Extra Leg on the Commute

A few changes in my circumstances have led to my lessened use of the Enertia. Firstly, I’m commuting from the office to school two days a week. Parking on campus is a nightmare. You basically have to park in a commuter lot and hop a bus in.

However…when I ride a motorcycle in, I can park right next to my building. This is exactly the time savings I was looking for to reduce my time away from the office, so I’ve been happily riding a motorcycle on those days. Unfortunately I haven’t found a place to charge the Enertia on campus. Furthermore, in the spirit of saving time, I take the interstate. All of this means that I ride my V-Strom gasser instead of the Enertia. :(

Weekend Passenger

I’ve reduced my Enertia riding on the weekend too, which is a shame, because the Enertia is perfect for running errands around home. I’ve got a roommate now, and we do a lot of things together. Unfortunately there’s no room on the Enertia for a passenger.

Empulse?

My changed circumstances have highlighted the Enertia’s range and capacity issues and affected its utility somewhat. At the same time though, riding my bulky, stinky, and loud V-Strom has made me appreciate the Enertia even more. It’s a bit of a conundrum.

Enter the Empulse. This bike could directly solve two of my three problems. I should easily be able to commute on it, even on days when I’m on campus. Even though I can’t easily charge on campus, the extended range will mean that I likely won’t need to. Furthermore, the liquid-cooled motor means that I’ll be able to sustain highway speeds on the Interstate and avoid taking a circuitous route at lower speeds. This will prove to be a huge time saver.

Unfortunately, there still isn’t room for a passenger. But riding two-up is for old folks anyway…except for the time I took two laps at Jennings GP with Jason Pridmore. We definitely didn’t lap like old folks. Although I nearly lost control of my bowels like a grandpa.