The Computational Network Toolkit (CNTK) is a general-purpose, C++-based machine learning language and toolkit for models that can be described as a computational network. The language accepts a high-level description of an arbitrary neural network and automatically generates BLAS code for efficient evaluation. We describe the algorithm that computes gradients automatically given the network, and prove its correctness. We also provide a low-cost automatic learning rate selection algorithm and show that it works well in practice.
Brian Guenter, Dong Yu, Adam Eversole, Oleksii Kuchaiev. OPT 2013. Stochastic Gradient Descent Algorithm in the Computational Network Toolkit
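The automatic gradient computation described above can be illustrated with a minimal reverse-mode sketch over a computational graph. This is a toy analogue, not CNTK's actual implementation; the node structure and operator set are illustrative assumptions.

```python
# Minimal sketch of reverse-mode gradient computation over a computational
# network. Toy analogue only; node layout and ops are illustrative.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs = op, list(inputs)
        self.value, self.grad = value, 0.0

def forward(order):
    # Evaluate nodes in topological order.
    for n in order:
        if n.op == "add":
            n.value = n.inputs[0].value + n.inputs[1].value
        elif n.op == "mul":
            n.value = n.inputs[0].value * n.inputs[1].value

def backward(order):
    # Accumulate gradients in reverse topological order.
    for n in order:
        n.grad = 0.0
    order[-1].grad = 1.0            # seed: d(output)/d(output) = 1
    for n in reversed(order):
        if n.op == "add":
            n.inputs[0].grad += n.grad
            n.inputs[1].grad += n.grad
        elif n.op == "mul":
            n.inputs[0].grad += n.grad * n.inputs[1].value
            n.inputs[1].grad += n.grad * n.inputs[0].value

# f(x, y) = (x + y) * x  ->  df/dx = 2x + y, df/dy = x
x = Node("leaf", value=3.0)
y = Node("leaf", value=2.0)
s = Node("add", [x, y])
f = Node("mul", [s, x])
order = [x, y, s, f]
forward(order)
backward(order)
print(f.value, x.grad, y.grad)      # 15.0 8.0 3.0
```

The key property is that each node's gradient is accumulated from all its consumers before it propagates to its own inputs, which is what makes a single reverse sweep sufficient.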
The human eye captures high resolution only in a small central region, called the fovea, and much lower resolution elsewhere. A 3D rendering system can exploit this by tracking the viewer's gaze and rendering only the central part of the field of view at full resolution. However, a naive implementation doesn't work because the low spatial sampling in the periphery causes unacceptable aliasing. Reducing the aliasing using conventional methods requires increasing the spatial sampling rate, so nothing is gained. This paper describes a system and a set of antialiasing algorithms that allow low sampling rates in the periphery without introducing unacceptable aliasing. The foveated renderer uses approximately 4x-5x less computation to render images of comparable quality. User studies verify the effectiveness of the method.
Brian Guenter, Mark Finch, Steven Drucker, Desney Tan, John Snyder. Siggraph Asia 2012. Foveated 3D Rendering
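The 4x-5x savings figure can be made concrete with a back-of-the-envelope pixel count for nested eccentricity layers rendered at falling resolution. The layer fractions and scale factors below are illustrative assumptions, not the paper's measured parameters.

```python
# Hedged sketch: estimate pixel savings from rendering nested eccentricity
# layers at decreasing resolution. Layer sizes/scales are made-up numbers.

def foveated_pixels(full_res, layers):
    """layers: (fraction_of_screen, resolution_scale) per nested layer."""
    w, h = full_res
    return sum(int(w * frac * scale) * int(h * frac * scale)
               for frac, scale in layers)

full = 1920 * 1080
fov = foveated_pixels((1920, 1080),
                      [(0.2, 1.0),     # inner layer: small, full resolution
                       (0.6, 0.5),     # middle layer: half resolution
                       (1.0, 0.25)])   # outer layer: full screen, quarter res
print(full / fov)                      # roughly 5x fewer pixels shaded
```

The saving comes from the outer layers covering most of the screen at a small fraction of full resolution; the antialiasing work in the paper is what makes those low rates acceptable.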
All lenses have optical aberrations which reduce image sharpness. These aberrations can be reduced by deconvolving an image using the lens point spread function (PSF). However, fully measuring a PSF is laborious. Alternatively, one can simulate the PSF if the lens model is known. However, due to manufacturing tolerances lenses differ subtly from their models, so often a simulated PSF is a poor match to measured data. We present an algorithm that uses a PSF measurement at a single depth to calibrate the nominal lens model to the measured PSF. The fitted model can then be used to compute the PSF for any desired setting of lens parameters at any scene depth, without additional measurements or calibration. The fitted model gives deconvolution results comparable to measurement but is much more compact and requires hundreds of times fewer calibration images.
Shih, Y., Guenter, B., and Joshi, N. Image Enhancement Using Calibrated Lens Simulations. ECCV 2012. Lens Fitting.
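The calibrate-then-predict idea can be sketched in miniature: fit a parametric PSF model to a single measurement, then reuse the fitted parameter elsewhere. The real paper fits a full lens model; here a 1-D Gaussian stands in, and every name below is an illustrative assumption.

```python
# Hedged sketch: fit a toy parametric PSF model to one "measured" PSF.
# A 1-D Gaussian stands in for the paper's full lens model.
import math

def gaussian_psf(sigma, n=21):
    c = n // 2
    psf = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(n)]
    s = sum(psf)
    return [v / s for v in psf]          # normalize to unit energy

def fit_sigma(measured, candidates):
    # Pick the model parameter minimizing squared error to the measurement.
    def err(sig):
        sim = gaussian_psf(sig, len(measured))
        return sum((a - b) ** 2 for a, b in zip(sim, measured))
    return min(candidates, key=err)

# Synthesize a "measured" PSF from an unknown true sigma, then recover it.
measured = gaussian_psf(2.5)
fitted = fit_sigma(measured, [s / 10 for s in range(10, 50)])
print(fitted)                            # 2.5
```

Once the parameter is fitted, the model can generate PSFs for settings that were never measured, which is the source of the hundreds-fold reduction in calibration images.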
Derivatives arise frequently in graphics applications. To meet this need we have added symbolic differentiation as a built-in language feature in the HLSL shading language, available in the June 2010 DirectX SDK release (a much improved version is in the Windows 8 SDK preview). The symbolic derivative is computed at compile time so it is available in all types of shaders (geometry, pixel, vertex, etc.). The algorithm for computing the symbolic derivative is simple and has reasonable compilation and run-time overhead. The latest preview implementation is described in this paper, Symbolic Differentiation On The GPU, which also has several detailed example HLSL programs. This is joint work with Mark Finch and John Rapp. Tutorials on this work were presented at GDC 2011 by Kev Gee ("Direct 3D 11: Symbolic Derivatives and HLSL") and at GameFest 2011 by Mark Finch: Symbolic Differentiation in HLSL.
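Compile-time symbolic differentiation of the kind described above can be shown with a toy expression-tree differentiator. This is a Python analogue for illustration, not the HLSL compiler's implementation.

```python
# Toy sketch of compile-time symbolic differentiation over an expression
# tree (nested tuples). Illustrative only, not the HLSL compiler.

def d(expr, var):
    """Return the symbolic derivative of expr with respect to var."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == "+":
        return ("+", d(a, var), d(b, var))
    if op == "*":                        # product rule
        return ("+", ("*", d(a, var), b), ("*", a, d(b, var)))
    raise ValueError(op)

def ev(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    return ev(a, env) + ev(b, env) if op == "+" else ev(a, env) * ev(b, env)

# f(x) = x*x + 3*x  ->  f'(x) = 2x + 3;  at x = 2, f'(2) = 7
f = ("+", ("*", "x", "x"), ("*", 3, "x"))
print(ev(d(f, "x"), {"x": 2}))           # 7
```

Because the derivative is built once from the expression tree, it can be emitted as ordinary shader code with no run-time differentiation cost, which is what makes the feature usable in any shader stage.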
Download the source code examples here . The source code includes an interactive editor for generating procedural geometric models, and several DirectX examples showing how to use the symbolic differentiation feature to generate the geometry and texture detail at runtime.
Videos for HLSL symbolic differentiation. These show procedural surfaces and textures defined using symbolic differentiation. Automatic level of detail
The download includes the D* executable and full source code for the following: basic D* programming examples, Lagrangian dynamics, and a basic interactive geometric modeling tool. This is unsupported code but if you find bugs or have feature requests send email to firstname.lastname@example.org. Download here: DStarDownload
Video with example real time dynamics simulations: Dynamics Video
There is a book that describes the dynamics algorithms, as well as simple procedural modeling techniques. You will probably want the book if you are downloading the code, since it has more detailed documentation of the software than is available in the download. You can get the book here: Symbolic Dynamics and Geometry: Using D* in Graphics and Game Programming
Efficient Symbolic Differentiation
D* generates symbolic derivatives which can be thousands of times faster than those generated by Mathematica or automatic differentiation. Functions with densely interconnected expression graphs, which arise in applications such as dynamics, spacetime optimization (also known as the optimal control problem), and PRT, can be difficult to efficiently differentiate using existing symbolic or automatic differentiation techniques. The D* algorithm computes efficient symbolic derivatives for these functions by symbolically executing the expression graph at compile time to eliminate common subexpressions and by exploiting the special nature of the graph that represents the derivative of a function. This graph has a sum of products form; the new algorithm computes a factorization of this derivative graph along with an efficient grouping of product terms into subexpressions. For the test suite problems D* generates symbolic derivatives which are up to 4.6×10^3 times faster than those computed by the symbolic math program Mathematica and up to 2.2×10^5 times faster than the non-symbolic automatic differentiation program CppAD. In some cases the D* derivatives rival the best manually derived solutions.
The paper on this website is a revised and expanded version of the paper I presented at SIGGRAPH 2007. I will be updating it as I find the time. (most recently updated 11/1/2007)
In a modern data center the cost of power, for computers and air conditioning, can be more than the cost of the computer hardware. Modern computers have a variety of power states with different power vs. response time tradeoffs: off, sleep, hibernate, etc. With thousands of computers in a typical data center it is challenging to determine what power state each computer should be in at any moment in order to minimize power while maximizing responsiveness. I developed an algorithm which breaks the problem into two pieces: predicting future demand and determining power state transitions to minimize power while meeting demand in the best way. Any prediction scheme can be used but in our first implementation we used simple linear prediction. The optimal power state transitions are computed with linear programming. In the general case this is an integer, rather than a linear, programming problem, but a novel representation of the system allows linear programming to be used, while guaranteeing integer results. This makes the algorithm very fast even for data centers with tens of thousands of computers. Our evaluation on three very different data center workloads shows that the energy savings are close to optimal, saving 96%-99.5% of the maximum possible. This paper was presented at IEEE INFOCOM 2011 (Brian Guenter, Navendu Jain and Charles Williams). Managing Cost, Performance, and Reliability Tradeoffs for Energy-Aware Server Provisioning
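The provisioning problem has a simple shape worth sketching: choose how many servers to keep on in each interval so that predicted demand is met, trading running cost against the cost of power-state transitions. The paper solves this with a linear program whose structure guarantees integer answers; the toy below uses dynamic programming instead, with made-up costs, purely to illustrate the tradeoff.

```python
# Hedged sketch of the provisioning tradeoff: meet per-interval demand at
# minimum energy, charging for power-state transitions. The paper uses an
# LP with guaranteed-integer structure; this toy uses DP and made-up costs.

def provision(demand, max_servers, run_cost=1.0, switch_cost=0.5):
    best = {0: 0.0}                      # start with all servers off
    for d in demand:
        nxt = {}
        for k in range(d, max_servers + 1):   # must cover demand d
            nxt[k] = min(best[j] + switch_cost * abs(k - j) + run_cost * k
                         for j in best)
        best = nxt
    return min(best.values())

# Demand rises then falls; 3 servers available.
print(provision([1, 3, 1], 3))           # 7.5
```

Even this toy shows why transition costs matter: if switching were free, the answer would simply track demand exactly, but with a switching charge it can be cheaper to leave servers on through a brief dip.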
Exact Procedural CSG Modeling for Real Time Graphics
Generative CSG models, while having the desirable characteristics of compactness and resolution independence, have never been used for real time rendering because no algorithms existed which could both maintain their compact representation and render them efficiently at run time. The key difficulty in doing this was finding a compact, exact representation of the implicit curve of intersection that arises from CSG operations. The primary contribution of this paper is a new algorithm for finding a piecewise parametric representation for this intersection curve. The parametric representation is compact and exact to the limits of precision of floating point arithmetic. Arbitrary points on the intersection curve can be efficiently evaluated at run time which allows triangulation density to be adapted dynamically. Using this representation we have made complex procedural objects that have a memory footprint of just 7-11 KBytes, which render at approximately 20 million triangles/sec on an NVidia 6800 GPU.
Brian Guenter and Marcel Gavriliu
Making Faces (published in SIGGRAPH 98) with Cindy Grimm, Henrique Malvar, Daniel Wood, and Frederic Pighin
In the video the actress is reading from a script designed to provide maximal phonetic coverage (we were planning to do automatic lip sync as a follow-on research project but never got around to it). Hence the funky monologue.
Real-time, Photo-realistic, Physically Based Rendering of Fine Scale Human Skin Structure
A. Haro, B. Guenter, and I. Essa, Proceedings 12th Eurographics Workshop on Rendering, London, England, June 2001
Stephen R. Marschner,
Brian Guenter, and Sashi Raghupathy
11th Eurographics Rendering Workshop (2000)
Lossless Compression of Computer-Generated Animation Frames (published in Transactions on Graphics, October 97) with Hee Cheol Yun, and Russell M. Mersereau
ACM Transactions on Graphics, v.16, no. 4, October 1997, pp. 359-396
Through an unfortunate sequence of events the original files have been, ahem, misplaced. You'll have to go to the ACM web site to look at this paper.
Quadrature Prefiltering for High Quality Antialiasing (published in Transactions on Graphics, October 96) with Jack Tumblin
Efficient Generation of Motion Transitions using Space-time Constraints (published in SIGGRAPH 96) with Charles F. Rose, Bobby Bodenheimer, Michael F. Cohen. wmv video (14 MBytes)
Motion Compensated Compression of Computer Animation Frames (published in SIGGRAPH 93) with Hee Cheol Yun, Russell M. Mersereau
Motion Compensated Noise Reduction For Computer Animation
word document 128KBytes
postscript document 60KBytes
This is a movie made by my graduate students in 1993 when I was an assistant professor at Georgia Tech. I hadn't looked at this video for more than 15 years. It holds up remarkably well and I'm still impressed by the incredible job they did. My contribution was negligible even though I'm listed as a producer in the credits -- the students deserve all the credit. Unfortunately we made some sort of scaling error when we rendered the frames so the entire film came out much darker than we intended (the producer is supposed to make sure that things like this don't happen). When it was shown at the electronic theater the projector was also very dim so it was almost impossible to see what was going on. A few batlike people who could see in the dark complimented us on the film but my students were still terribly disappointed that after all their hard work no one had really seen their film. Now you can see it in all its original glory, and by using the brightness and contrast controls in Windows Media Player you can largely eliminate the darkness problem. wmv video (31 MBytes) wmv video (4.5 MBytes)