Equations of Motion
In order to model hydrodynamic surface waves, I am following a general model for interactive water graphics based on Bernoulli’s equation. This approach, along with the basic equations I will use, is the one outlined in Tessendorf’s paper ‘Interactive Water Surfaces’ [1]. The general algorithm he develops is what I will implement, and I will try to supplement his derivation rather than directly list his steps. The equation of motion used is a linearized Bernoulli’s equation, shown below [1].
$$\frac{\partial^{2} h(x,y,t)}{\partial t^{2}} + \alpha \frac{\partial h(x,y,t)}{\partial t} = -g\sqrt{-\nabla^{2}}\, h(x,y,t)$$
The first two terms will be solved using rather straightforward finite difference methods. The square-root-of-Laplacian term on the RHS will be solved in a similar, yet more complex, manner: with a convolution kernel.
Convolution Kernel as a Linear Operator
Written more explicitly, the calculus of interest is as follows
$$-g\sqrt{-\nabla^{2}}\, h(x,y,t) \equiv -g\sqrt{-\frac{\partial^{2}}{\partial x^{2}} - \frac{\partial^{2}}{\partial y^{2}}}\; h(x,y,t)$$
This term, with the strange negative root of a Laplacian operator, apparently accounts for conservation of mass and a gravitational restoring force. Here g is the gravitational constant, while h is our 2D, time-dependent height map. I would be interested to see how this strange operator appears, but we’ve got more pressing issues. To numerically calculate this quantity efficiently we are going to need a convolution kernel, and if, like me, you don’t do image processing on a regular basis, you may need a bit of background to swallow how they work. The following explanation of convolution follows very closely this lecture/paper on image processing [2].
So let’s start with some hopefully familiar ground, the good ol’ standard definition of a partial derivative:
$$\frac{\partial F(x,y)}{\partial x} = \lim_{\Delta x \rightarrow 0} \frac{F(x+\Delta x, y) - F(x,y)}{\Delta x}$$
Numerically we could do this several ways; the most common are simple two-point methods and higher-order methods using several weighted points. We will see that one advantage of using convolution kernels is that changing the numerical method is simple and the general code is very generic. Anyhow, we could estimate this partial derivative roughly as:
$$\frac{\partial F}{\partial x} = F(i+1,j) - F(i,j)$$
where i and j are integer indices corresponding to x = iΔx, y = jΔy with Δx = 1, AKA a (forward) finite difference. Similarly, we could do a centered finite difference
$$\frac{\partial F}{\partial x} = \frac{F(i+1,j) - F(i-1,j)}{2}$$
Or we could get even fancier and do a five-point method
$$\frac{\partial F}{\partial x} = \frac{-F(i+2,j) + 8F(i+1,j) - 8F(i-1,j) + F(i-2,j)}{12}$$
Now we can begin to see that this could be rewritten as a second function modifying, or convolving, our function F. This convolution function can, as Tessendorf points out, perform any linear operator on F. But let’s not get too far ahead; let’s rewrite this five-point stencil method as
$$\frac{\partial F}{\partial x} = -\frac{F(i+2,j)}{12} + \frac{8F(i+1,j)}{12} - \frac{8F(i-1,j)}{12} + \frac{F(i-2,j)}{12}$$
or
$$\frac{\partial F}{\partial x} = \sum_{k=-2}^{2} G(k)\, F(i+k,j)$$
where G(k) is simply a function defining the coefficients above: [1/12, -8/12, 0, 8/12, -1/12] for k = -2 to 2. In this context, the weighting function G, together with the finite span it operates over (k = -2 to 2 in this case), is known as the kernel.
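To make this concrete, here is a minimal sketch of that kernel sum in Python/NumPy (my own illustration; the names d_dx and f are not from [1] or [2], and the grid spacing is assumed to be 1):

```python
import numpy as np

# Kernel weights G(k) for k = -2..2, matching the five-point coefficients above.
G = np.array([1/12, -8/12, 0.0, 8/12, -1/12])

def d_dx(f):
    """Estimate dF/dx on a unit-spaced grid via the kernel sum over k = -2..2."""
    df = np.zeros_like(f)
    for i in range(2, len(f) - 2):  # skip boundary points where the stencil overhangs
        df[i] = sum(G[k + 2] * f[i + k] for k in range(-2, 3))
    return df

# Quick check: the derivative of sin(x) should come out close to cos(x).
x = np.linspace(0.0, 2 * np.pi, 200)
f = np.sin(x)
df = d_dx(f) / (x[1] - x[0])  # divide by dx since the stencil assumes unit spacing
```

Swapping in a different finite difference scheme is now just a matter of changing the array G, which is the genericity advantage mentioned above.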
Okay, so what about that comparatively nasty square root of a negative 2D Laplacian? Well, Tessendorf of course applies an appropriate convolution kernel. How he chose this convolution function was initially a mystery to me, similar to how the exact coefficients in the five-point stencil method were chosen (those, at least, can be derived by Taylor-expanding F and solving for the weights that cancel the low-order error terms). For the kernel itself, my unverified guess is that since the square root of the negative Laplacian acts as multiplication by the radial wavenumber q in Fourier space, G is essentially the inverse 2D Fourier transform of q with a Gaussian factor added for convergence; written in radial coordinates, that gives exactly the q² exp(-σq²) J₀(qr) terms below. If anyone has a cleaner explanation, do let me know. Anyhow, here we go. Tessendorf writes the operator acting on h(x,y,t) as
$$\sqrt{-\nabla^{2}}\, h(x,y,t) = \sum_{k=-P}^{P} \sum_{l=-P}^{P} G(k,l)\, h(i+k,j+l,t)$$

where

$$G(k,l) = \frac{1}{G_{0}} \sum_{n=1}^{N} q_{n}^{2}\, e^{-\sigma q_{n}^{2}}\, J_{0}(q_{n} r)$$

J₀ is a Bessel function of the first kind (order zero), N is some large integer at which the series is truncated, sigma is chosen so that the series is convergent (Tessendorf uses 1), the q_n are evenly spaced samples q_n = nΔq, and r is simply the distance from the center of the kernel,

$$r = \sqrt{k^{2} + l^{2}}$$

The normalization constant, G₀, is given by

$$G_{0} = \sum_{n=1}^{N} q_{n}^{2}\, e^{-\sigma q_{n}^{2}}$$
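Here is how I would precompute the kernel from those formulas (a sketch only: sigma = 1 as stated, but the kernel half-width P, the number of terms N, and the spacing dq are my own guesses, not values prescribed in the text):

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order zero

P = 6         # kernel half-width: k and l run from -P to P (assumed)
N = 10000     # number of terms kept in the truncated series (assumed)
sigma = 1.0   # Gaussian damping so the series converges; Tessendorf uses 1
dq = 0.001    # spacing of the q_n samples, q_n = n * dq (assumed)

def series(r):
    """Evaluate the unnormalized sum over n of q_n^2 exp(-sigma q_n^2) J0(q_n r)."""
    qn = np.arange(1, N + 1) * dq
    return np.sum(qn**2 * np.exp(-sigma * qn**2) * j0(qn * r))

G0 = series(0.0)  # normalization constant: J0(0) = 1, so this is the G0 sum above

# Tabulate G(k, l) once; it only ever needs to be evaluated at grid offsets.
G = np.empty((2 * P + 1, 2 * P + 1))
for k in range(-P, P + 1):
    for l in range(-P, P + 1):
        G[k + P, l + P] = series(np.hypot(k, l)) / G0
```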
With this ‘convolution kernel’ now known and somewhat demystified (at least for me), we can move on to solving the full differential equation. There is a bit of a numerical trick in the paper in which Tessendorf exploits the symmetry of the convolution kernel to drastically reduce the number of operations.
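I won’t reproduce the paper’s exact loop, but the idea is easy to see: since G(k,l) depends only on r, mirrored taps share a single weight, so several reflected height samples can share one multiply. A rough sketch under my own naming (it assumes i and j are at least P cells from the array edge):

```python
def convolve_symmetric(h, G, i, j, P):
    """Kernel sum at (i, j), grouping taps that share a weight by symmetry of G."""
    total = G[P, P] * h[i, j]  # the center tap, k = l = 0
    for k in range(1, P + 1):
        # the four on-axis taps at distance k all share the weight G(k, 0)
        total += G[k + P, P] * (h[i + k, j] + h[i - k, j] +
                                h[i, j + k] + h[i, j - k])
    for k in range(1, P + 1):
        for l in range(1, P + 1):
            # the four quadrant taps (+-k, +-l) all share the weight G(k, l)
            total += G[k + P, l + P] * (h[i + k, j + l] + h[i - k, j + l] +
                                        h[i + k, j - l] + h[i - k, j - l])
    return total
```

This cuts the multiplies from (2P+1)² per grid point to roughly a quarter of that, at the cost of nothing but bookkeeping.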
Finite Difference – Acceleration and Velocity Terms
Now, on the left-hand side of the linearized Bernoulli’s equation we have our vertical acceleration and a damping term, both derivatives in time. The terms are as follows
$$\frac{\partial^{2} h(x,y,t)}{\partial t^{2}} + \alpha \frac{\partial h(x,y,t)}{\partial t}$$
For the damping term on the right, we can simply write
$$\alpha \frac{\partial h(x,y,t)}{\partial t} = \alpha \left( \frac{h(x,y,t+\Delta t) - h(x,y,t)}{\Delta t} \right)$$
The second derivative on the left is a bit more complicated. Following [1], we will use a second-order central difference.
$$\frac{\partial^{2} h(x,y,t)}{\partial t^{2}} = \frac{h(x,y,t+\Delta t) - 2h(x,y,t) + h(x,y,t-\Delta t)}{\Delta t^{2}}$$
Altogether, our numerical differential equation looks like
$$\frac{h(x,y,t+\Delta t) - 2h(x,y,t) + h(x,y,t-\Delta t)}{\Delta t^{2}} + \alpha \left( \frac{h(x,y,t+\Delta t) - h(x,y,t)}{\Delta t} \right) = -g \sum_{k=-P}^{P} \sum_{l=-P}^{P} G(k,l)\, h(i+k,j+l,t)$$
In order to advance our solution to the next time step, or rather in order to propagate the wave, we need to solve for h(x,y,t+Δt). Multiplying through by Δt² and collecting terms gives
$$h(x,y,t+\Delta t)\left(1 + \Delta t\,\alpha\right) - h(x,y,t)\left(2 + \Delta t\,\alpha\right) + h(x,y,t-\Delta t) = -g\Delta t^{2} \sum_{k=-P}^{P} \sum_{l=-P}^{P} G(k,l)\, h(i+k,j+l,t)$$

Dividing through by (1 + Δtα) then isolates the next height value:
$$h(x,y,t+\Delta t) = h(x,y,t)\,\frac{2 + \Delta t\,\alpha}{1 + \Delta t\,\alpha} - h(x,y,t-\Delta t)\,\frac{1}{1 + \Delta t\,\alpha} - \frac{g\Delta t^{2}}{1 + \Delta t\,\alpha} \sum_{k=-P}^{P} \sum_{l=-P}^{P} G(k,l)\, h(i+k,j+l,t)$$
Someone who is really paying attention (I would be super surprised) would notice that my final equation above differs slightly from [1]: I have 2 + dt alpha, while [1] has 2 - dt alpha in the first term on the RHS. I may be wrong, but I double- and triple-checked this and got the same result each time. My best guess is that the difference comes down to how the damping derivative is discretized: the forward difference I used above naturally yields 2 + dt alpha, while sampling that derivative backward in time flips the sign of the dt alpha contribution in that coefficient. I would follow [1] if I were you and use 2 - dt alpha, but I have to see this thing through and try +.
So that’s it in terms of math; now we just need to write some code!
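To close things out, here is a minimal sketch of the propagation step implementing the final update equation above (my own Python/NumPy translation: alpha, dt, and the grid size are illustrative values, G is the kernel array precomputed in the earlier sketch, and it keeps my 2 + dt alpha coefficient, so swap in 2 - dt alpha if you want to follow [1] exactly):

```python
import numpy as np
from scipy.signal import convolve2d

g = 9.8       # gravitational constant
alpha = 0.3   # damping coefficient (illustrative value)
dt = 0.03     # time step (illustrative value)

def propagate(h, h_prev, G):
    """Advance the height field one step: returns h at t + dt.

    h, h_prev are the height maps at t and t - dt; G is the
    (2P+1) x (2P+1) kernel from the precomputation sketch above.
    """
    # The double sum over G(k,l) h(i+k, j+l); convolve2d flips the kernel,
    # but G(k,l) = G(-k,-l), so the flip is harmless here.
    vert = convolve2d(h, G, mode="same", boundary="fill")
    denom = 1.0 + dt * alpha
    return (h * (2.0 + dt * alpha) / denom
            - h_prev / denom
            - (g * dt**2 / denom) * vert)

# Usage: drop a disturbance in the middle and let it ring.
h = np.zeros((128, 128))
h[64, 64] = 1.0
h_prev = h.copy()
for _ in range(100):
    h, h_prev = propagate(h, h_prev, G), h
```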
[1] Jerry Tessendorf, ‘Interactive Water Surfaces’.
[2] Carlo Tomasi, ‘Convolution, Smoothing, and Image Derivatives’, https://www.cs.duke.edu/courses/spring03/cps296.1/handouts/Image%20Processing.pdf