Eurographics Workshop on Natural Phenomena (2007)
D. Ebert, S. Mérillou (Editors)
Eulerian Motion Blur

Doyub Kim† and HyeongSeok Ko‡

Seoul National University
Abstract
This paper describes a motion blur technique which can be applied to rendering fluid simulations that are carried out in the Eulerian framework. Existing motion blur techniques can be applied to rigid bodies, deformable solids, clothes, and several other kinds of objects, and produce satisfactory results. As there is no specific reason to discriminate fluids from the above objects, one may consider applying an existing motion blur technique to render fluids. However, here we show that existing motion blur techniques are intended for simulations carried out in the Lagrangian framework, and are not suited to Eulerian simulations. We then propose a new motion blur technique that is suitable for rendering Eulerian simulations.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism
1. Introduction
Motion blur is essential for producing high-quality animations. The frame rate of most films and videos is either 24 or 30 Hz, whereas human vision is reported to be sensitive up to 60 Hz [Wan95, CJ02]. Due to the lower frame rate of film and video, when each frame is drawn as a simple instantaneous sampling of the dynamic phenomena, artifacts such as temporal strobing can occur. The graphics community has long been aware of this problem, and several motion blur techniques have been proposed to solve it.

Fluids are often important elements of a dynamic scene, and for the artifact-free production of such a scene, fluids need to be rendered with motion blur. Since the graphics field already has an abundance of motion blur techniques, one may consider applying existing techniques to the motion blur of fluids. Unfortunately, existing techniques do not produce satisfactory results. This paper describes why the existing solutions do not work for fluids and how to modify existing motion blur techniques to make them applicable to fluids.

Motion blur techniques developed so far are intended for
† kim@graphics.snu.ac.kr
‡ ko@graphics.snu.ac.kr
Figure 1: A motion-blurred image (left) produced with the algorithm presented in this paper and an unblurred image (right). A slice of water falls along the wall, hits the logo, and creates a splash. To factor out the effects caused by the transparent material, we rendered the water as opaque.
© The Eurographics Association 2007.
rendering simulations that are performed in the Lagrangian framework.† We will call this type of motion blur technique Lagrangian motion blur (LMB). The majority of objects encountered in 3D graphics scenes (including rigid bodies, articulated figures, deformable solids, and clothes) are simulated in the Lagrangian framework; thus their motion blur can be readily rendered with LMB.

Simulation of fluids, however, is often carried out in the Eulerian framework. Considering the high quality and broad applicability of LMB, and considering there is no specific reason to discriminate fluids from other 3D objects, one may consider employing LMB for rendering fluids. An interesting finding of this paper is that LMB is not suitable for rendering the results generated by an Eulerian simulation. So far no algorithm has been proposed that can properly render motion blur of fluids that are simulated in the Eulerian framework. In this paper, we explain why Lagrangian motion blur should not be used for rendering Eulerian simulations. Insight obtained during this process led us to develop a simple step that can be added to existing motion blur techniques to produce motion blur techniques that are applicable to Eulerian simulations (i.e., Eulerian motion blur (EMB)).
2. Previous Work
Motion blur was first introduced to the computer graphics field by Korein and Badler [KB83], and Potmesil and Chakravarty [PC83]. Korein and Badler proposed a method that works on an analytically parameterized motion and creates a continuous motion blur. Potmesil and Chakravarty proposed another method that creates continuous motion blur by taking the image-space convolution of the object with its moving path. We will classify this sort of motion blur technique as analytic methods.

The next class of motion blur we introduce is the temporal supersampling methods. Korein and Badler [KB83] proposed another method that renders and accumulates whole (not partial) images of the object at several supersampled instants, resulting in a superimposed look of the object. The distributed ray tracing work of Cook et al. [CPC84] brought improved motion blur results. Their method successfully increased the continuity of the motion blur by retrieving pixel values from randomly sampled instants in time. Recently, Cammarano and Jensen [CJ02] extended this temporal supersampling method to simulate motion-blurred
† We note the intrinsic differences of the physical quantities used in the Lagrangian and Eulerian frameworks. In the Lagrangian framework, the simulator deals with the quantities carried by the moving objects (e.g., the position, velocity, and acceleration of the objects). In the Eulerian framework, on the other hand, the domain is discretized into grids and the simulator deals with the quantities observed from fixed 3D positions (e.g., the velocity and density of the fluid at a fixed grid point).
global illumination and caustics using ray tracing and photon mapping.

The third class of motion blur is known as image-based methods. Max and Lerner [ML85] proposed an algorithm to achieve a motion blur effect by considering the motion on the image plane. Brostow and Essa [BE01] also proposed an entirely image-based method which can create motion blur from stop motion or raw video image sequences. These methods are suited to cases where the 3D motion is not available or the motion is already rendered. A more complete survey of motion blur techniques can be found in Sung et al. [SPW02].

We assume in this work that the 3D data of the fluid at every frame are available, but the data are not given in a parameterized form. Therefore the temporal supersampling method seems to fit the situation, and in this paper we develop a motion blur technique based on it.

Realistic rendering of fluids has been studied as well as fluid simulation itself in the graphics community. Fedkiw et al. [FSJ01] visualized smoke simulation using a Monte Carlo ray-tracing algorithm with photon mapping, and Nguyen et al. [NFJ02] presented a technique based on Monte Carlo ray tracing for rendering fire simulations. Techniques for rendering liquids were also developed by Enright et al. [EMF02]. However, motion blur was not considered in those studies. Müller et al. [MCG03] used blobby-style rendering for visualizing water represented with particles, and their method was subsequently improved by Zhu and Bridson [ZB05] to produce smoother surfaces. For the visualization of Lagrangian particles, Guan and Mueller [GM04] proposed point-based surface rendering with motion blur. Guendelman et al. [GSLF05] and Losasso et al. [LIG06] attempted to include the rendering of the escaped level-set particles to create the impression of water sprays.

Motion blur of Eulerian simulations has rarely been mentioned or practiced before; to our knowledge, there have been only two reports on motion blur of Eulerian simulations in computer graphics thus far. In rendering water simulation, Enright et al. [EMF02] mentioned that a simple interpolation between two signed distance volumes can be applied in order to find the ray and water surface intersection. A few years later, Zhu and Bridson [ZB05] pointed out that this method will destroy surface features that move further than their width in one frame.
3. Computing Motion Blur
The basic principle of motion blur is to add up the radiance contributions over time, which can be expressed as

$$L_p = \int_{t} \int_{A} L(\mathbf{x}, \omega, t)\, s(\mathbf{x}, \omega, t)\, g(\mathbf{x})\, dA(\mathbf{x})\, dt, \qquad (1)$$
Figure 2: Motion blur with temporal supersampling.
where $g()$ is the filter function, $s()$ represents the shutter exposure, and $L()$ is the radiance contribution from the ray [CJ02]. The above principle applies to both Lagrangian and Eulerian motion blur. In the equation, $\mathbf{x}$ is the place where the movement of objects enters the motion blur; for the evaluation of $\mathbf{x}$, the locations of the objects at arbitrary (supersampled) moments need to be estimated, which forms a core part of motion blur.

For the development of a motion blur technique based on temporal supersampling, we use Monte Carlo integration. It computes the integral in Equation (1) by accumulating the evaluations of the integrand at supersampled instants.

More specifically, imagine the situation shown in Figure 2(a), in which a ball is moving horizontally. Suppose that we have to create a blurred image for frame $t_n$. Let $\eta$ be the shutter speed. For each pixel, we associate a time sample picked within the interval $[t_n - \eta/2,\, t_n + \eta/2]$; the samples are taken from both past and future. Figure 2(b) shows that, for example, the time samples $t_1$, $t_2$, $t_3$, and $t_4$ (which do not need to be in chronological order) are associated with the four pixels in a row.

For each pixel, we now shoot the ray at the associated time, test for intersection, and estimate the radiance contribution. Shooting a ray at a certain time and testing for intersection implies that the location of the objects at that time should be estimated. Figure 2(c) shows the object locations at $t_1$, $t_2$, $t_3$, and $t_4$. In this particular example, only the ray shot at $t_3$ hits the moving object. Figure 2(d) shows the final result. Figure 2(e) shows an image produced with an actual ray tracer. Usually multiple rays are shot for each pixel for better results (Figure 2(f)), which can be easily done by associating multiple time samples with a pixel.
4. Lagrangian Motion Blur
LMB is used for rendering objects that have explicit surfaces such as rigid bodies, deformable solids, and clothes. The core part of the LMB approach is to compute, from the given 3D data of each frame, ray–object intersections at arbitrary supersampled instants. In order to do this, the location of the surface at an arbitrary moment has to be estimated. In LMB, the estimation is done by taking the time-interpolation of the vertices of the two involved frames; when the positions $x(t_n)$ and $x(t_{n+1})$ of the vertices at $t_n$ and $t_{n+1}$ are given, the estimated position $x_L(\tau)$ at supersampled time $\tau$ is calculated by

$$x_L(\tau) = \frac{\tau - t_n}{t_{n+1} - t_n}\, x(t_{n+1}) + \frac{t_{n+1} - \tau}{t_{n+1} - t_n}\, x(t_n). \qquad (2)$$

We now briefly consider the physical meaning of this estimation by rearranging Equation (2) into the form
$$x_L(\tau) = x(t_n) + (\tau - t_n)\, \frac{x(t_{n+1}) - x(t_n)}{t_{n+1} - t_n}. \qquad (3)$$

This equation shows that the estimation is the result of assuming the movement was made with a constant velocity $(x(t_{n+1}) - x(t_n))/(t_{n+1} - t_n)$. However, how valid is this assumption? Movement of any object with nonzero mass has the tendency to continue its motion, and thus has an inertial component. When specific information is not available, calculation of the object position based on this inertial movement turns out to give quite a good estimation in many cases, judging from images rendered using LMB. The error of the estimation is proportional to the acceleration.
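Concretely, Equation (2) amounts to a per-vertex linear interpolation between the two frame positions. The function below is an illustrative sketch; the list-of-coordinates representation is an assumption for the example, not the paper's data structure.

```python
def x_lagrangian(tau, t_n, t_n1, x_n, x_n1):
    """Estimate a vertex position at supersampled time tau via Eq. (2):
    linear interpolation between the frame positions x(t_n) and x(t_n+1),
    i.e. a constant-velocity (inertial) assumption.
    Vertices are plain [x, y, z] coordinate lists (illustrative only)."""
    w = (tau - t_n) / (t_n1 - t_n)
    return [(1.0 - w) * a + w * b for a, b in zip(x_n, x_n1)]

# A vertex moving from (0, 0, 0) to (1, 2, 0) between frames: halfway in
# time, the estimate is halfway in space.
mid = x_lagrangian(0.5, 0.0, 1.0, [0.0, 0.0, 0.0], [1.0, 2.0, 0.0])
# -> [0.5, 1.0, 0.0]
```

At each sampled instant $\tau$, the ray is intersected against the surface whose vertices are placed by this interpolation.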
5. Eulerian Motion Blur
In developing Eulerian motion blur, we assume that the simulation result for each frame is given in the form of 3D grid data. The grid data consist of the level-set (or density) and velocity fields. As in Lagrangian motion blur, it is necessary to know how a ray traverses the fluid at an arbitrary supersampled instant. However, rendering Eulerian simulations needs a different type of information: instead of the ray–surface intersection, the required information is the level-set (in the case of water) or density (in the case of smoke) values at the cell corners of all the cells the ray passes.‡

‡ When the fluid has a clear boundary, as is the case for water, the surface can be extracted from an Eulerian simulation using the marching cubes algorithm [LC87]. In such a case, rendering can be done with ray–surface intersections. However, this approach is not applicable to surfaceless fluids such as smoke, which do not have
Figure 3: Characterization of the level-set change in a simple example: (a) the snapshot at $t_n$, (b) the snapshot at $t_{n+1}$, (c) the situation at $t_n + 0.4$, (d) the level-set changes.
5.1. Why Time-Interpolation Does Not Work
Since the grid data are available only at the frames, we must somehow estimate the level-set values at an arbitrary time sample $\tau$. For the estimation, Enright et al. [EMF02] presented a method which interpolates the level-set data between two frames. Note that this is the same as the LMB-style estimation. An LMB-style solution would be to make the estimation with

$$\phi_{TI}(\tau) = \phi(t_n) + (\tau - t_n)\, \frac{\phi(t_{n+1}) - \phi(t_n)}{t_{n+1} - t_n}. \qquad (4)$$

Contrary to expectation, the above estimation gives incorrect results. Imagine the simple case shown in Figure 3, in which a spherical ball of water is making a pure translational movement along the horizontal direction at a constant velocity. Figures 3(a) and 3(b) show two snapshots taken at
$t_n$ and $t_{n+1}$, respectively. At the marked grid point, the level-set values are $\phi(t_n) = 0.58$ and $\phi(t_{n+1}) = 2.54$. The question is what the level-set value $\phi(\tau)$ would be at $\tau = t_n + 0.4$ at that position. Since the fluid movement is analytically known in this example, we can find the exact location of the water ball at $\tau$, as shown in Figure 3(c). At $\tau$, the marked position comes within the body of fluid; therefore $\phi(\tau)$ has a negative value. In fact, we can find the trajectory of $\phi(t)$ for the duration $[t_n, t_{n+1}]$, which is plotted as a solid curve in Figure 3(d). On the other hand, the time-interpolated result is $\phi_{TI}(\tau) = 1.36$, which is far from what has happened. The variation of $\phi_{TI}(t)$ within the duration follows a straight line and is plotted with a dashed line in Figure 3(d). Here, we note that (1) the time-interpolation gives an incorrect result even in such a simple, non-violent, analytically verifiable case; (2) the error is remarkable; and (3) the error is not related to the grid resolution.

‡ (continued) a distinct boundary. Even for cases where surface extraction is possible (as in the water case), when the topology changes over frames, LMB is difficult because finding the vertex correspondence is a non-trivial process.

Figure 4: Estimation of the level-set values for Eulerian motion blur. The grid points marked with ?s are the locations whose level-set values must be estimated. The short solid arrows at those points represent the estimated velocity $u(\mathbf{x}, \tau)$.

We now investigate why the time-interpolation gives such an incorrect result. When specific information about the movement is not available, exploiting the inertial component of the movement works quite well. The reason the LMB method works so well for Lagrangian simulations can be attributed to the fact that the LMB estimation of object location exploits inertia. We can adopt this idea of exploiting inertia in the development of Eulerian motion blur. A question that arises here is whether the time-interpolation $\phi_{TI}$ is exploiting the inertia.

It is critical to understand that it cannot be assumed that the level-set/density change at a grid point will continue to happen at the current rate. The space in which the fluid experiences inertia in the conventional sense is the 3D space. The inertial movement of the fluid in 3D space is reflected to the level-set field by updating the level set according to the equation

$$\frac{\partial \phi}{\partial t} + u \cdot \nabla \phi = 0. \qquad (5)$$

This equation states that the level set should be advected in the direction $u$ at the rate $\|u\|$.
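The failure of time-interpolation, and the remedy that Section 5.2 below develops, can be checked numerically. The following is an illustrative 1D sketch, not the paper's code; the radius and velocity are assumptions chosen so that the frame values reproduce $\phi(t_n) = 0.58$ and $\phi(t_{n+1}) = 2.54$ from Figure 3 (with $t_n = 0$, $t_{n+1} = 1$).

```python
# A ball of radius r translates at constant velocity; phi(t) is the signed
# distance from a fixed grid point (placed at the origin) to the ball
# surface. Geometry below is an assumption chosen to match Figure 3.
r = 1.0
x0 = -(r + 0.58)        # ball-center position at t_n
v = 2.0 * r + 3.12      # chosen so that phi(t_{n+1}) = 2.54

def phi_exact(t):
    """Exact level-set value observed at the grid point at time t."""
    return abs(x0 + v * t) - r

def phi_ti(tau):
    """Eq. (4): time-interpolation of the two frame values."""
    return phi_exact(0.0) + tau * (phi_exact(1.0) - phi_exact(0.0))

def phi_adv(tau):
    """Advection-based estimate: sample the frame-t_n field at the position
    backtracked along the velocity. For this constant-velocity translation
    the result is exact."""
    back = 0.0 - tau * v         # backtracked position of the grid point
    return abs(back - x0) - r    # phi(., t_n) evaluated at `back`

tau = 0.4
# phi_exact(tau) < 0: the grid point is inside the ball at tau.
# phi_ti(tau) = 1.364: the interpolated value misses even the sign.
# phi_adv(tau) == phi_exact(tau): backtracking recovers the exact value.
```

The interpolated value not only has a large error but the wrong sign, so a ray marcher would classify the point as outside the water, exactly the failure described above.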
5.2. Proposed Method
For the Eulerian motion blur to exploit the inertial movement of fluids, therefore, we propose that the estimation of the level-set values at arbitrary supersampled instants be based on level-set advection, rather than on time-interpolation of the level-set values. More specifically, we propose to estimate the level-set value $\phi_E(\mathbf{x}, \tau)$ at a 3D position $\mathbf{x}$ and a supersampled time $\tau$ with the semi-Lagrangian advection [Sta99, SC91]

$$\phi_E(\mathbf{x}, \tau) = \phi\big(\mathbf{x} - (\tau - t_n) \cdot u(\mathbf{x}, \tau),\; t_n\big). \qquad (6)$$
This equation states that $\phi_E(\mathbf{x}, \tau)$ takes the level-set value of $t_n$ at the backtracked position $\mathbf{x} - (\tau - t_n) \cdot u(\mathbf{x}, \tau)$. In the