Computational Physics Solver in OpenCL and LuaJIT

Donate via Stripe

About:

hydro-cl-lua is a GPU-driven collection of computational physics schemes, integrators, and equations all housed under one framework.

It uses LuaJIT as the scripting language because of the performance and memory advantages that Lua and LuaJIT have over comparable languages (Python, JavaScript, etc.).

It uses OpenCL for the GPU code.

hydro-cl-lua has been maintained and developed by me, Christopher Moore.

Features:

  • script-generated OpenCL GPGPU code, regenerated on the fly as soon as you change GUI options (see the code-generation sketch after this list)
  • automatic tensor index representation of equations / symbolic differentiation (via symmath project)
  • 1D, 2D, 3D simulations and visualizations
  • solvers are usually a Roe scheme, though some implementations vary
  • various flux limiters
  • PLM for certain solvers
  • various boundary conditions
  • integrators: Forward Euler, several Runge-Kutta and Runge-Kutta TVD, and implicit linearized GMRES on the GPU
  • GUI-driven everything: no more restarting the program to switch solvers or initial conditions.
  • Euler equations from Toro's book (with some modifications for curvilinear coordinate systems)
  • Maxwell equations from Trangenstein's book with Poisson solver constraints
  • Maxwell equations with GLM from 2000 Munz
  • ideal MHD from Stone et al 2008
  • two-fluid electron/ion plasma model from 2014 Abgrall, Kumar
  • SRHD from Marti & Muller 2008
  • GRHD from Font 2008
  • numerical relativity via Bona-Masso formalism described in Alcubierre 1997 and Alcubierre's 2008 book
  • numerical relativity via finite difference BSSNOK (Baumgarte & Shapiro 2010)
  • self-gravitation for some schemes (Euler equations)
  • Z4c from Cao, Hilditch 2011
  • nonlinear Schrödinger equation from Colliander, Simpson, Sulem, "Numerical Simulations of the Energy-Supercritical Nonlinear Schrödinger Equation", 2010
  • Modular system. Integrators go in the 'int' folder, schemes in the 'solver' folder, and equations in the 'eqn' folder. The idea is that you can mix and match as you choose, provided the functionality matches up (see the configuration sketch after this list).
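
To make the mix-and-match idea concrete, here is a minimal configuration sketch. Every name in it (the table fields, the 'euler' / 'roe' / integrator strings, the idea of returning a config table) is an illustrative assumption rather than the project's exact API; check the repository's run/config scripts for the real entry point and option names.

```lua
-- Hypothetical configuration sketch: pick one module from each folder.
-- Field names and string values are illustrative, not the actual API.
return {
	eqn = 'euler',                      -- equation set, from the 'eqn' folder
	solver = 'roe',                     -- scheme, from the 'solver' folder
	integrator = 'Runge-Kutta 4, TVD',  -- time integrator, from the 'int' folder
	fluxLimiter = 'superbee',
	dim = 2,
	gridSize = {256, 256},
	boundary = {xmin='mirror', xmax='mirror', ymin='mirror', ymax='mirror'},
	initCond = 'Sod',
}
```

Any combination should work as long as the pieces are compatible, e.g. a finite-volume scheme paired with an equation that provides the flux and eigensystem it expects.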
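
As a rough illustration of the "script-generated OpenCL" point above: Lua assembles the kernel source as a string from the currently selected equation and options, and recompiles it whenever a relevant GUI option changes. The toy sketch below only uses plain string formatting; the function, type, and kernel names are made up for illustration, and the real project uses its own templating and OpenCL-binding layers.

```lua
-- Toy sketch of script-generated OpenCL: build kernel source text from the
-- current configuration, then hand it to an OpenCL compiler.
-- 'numStates' stands in for equation-specific code generation.
local function buildKernelSource(numStates)
	return ([[
typedef struct { float s[%d]; } cons_t;

kernel void calcFlux(
	global cons_t* fluxBuf,
	global const cons_t* UBuf
) {
	int i = get_global_id(0);
	// equation-specific flux code would be generated and inserted here
}
]]):format(numStates)
end

local source = buildKernelSource(5)  -- e.g. 5 conserved variables for 3D Euler
print(source)  -- compiling it would go through an OpenCL binding, omitted here
```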

Example Videos:

Rotating Black Hole / Ergosphere Formation

3D Rotating Black Hole

3D Alcubierre warp bubble

3D Alcubierre warp bubble

Dependencies

my other libraries:

Project History:

HydroJS was the first.

Then Hydro, a C++/multithread version of HydroJS.

Then HydroGPU, an OpenCL version of Hydro.

Then I added Lua script config.

Then the Lua got out of hand until the C++ was doing nothing but managing strings.

But C++ is much worse at strings than scripting languages are.

Now this project, hydro-cl-lua, pushes the middleman (C++) out completely.

Somewhere in the middle I also developed a simple gravitational wave demo, which likewise got out of hand; it covers a lot of the same equations, but implemented in pure Lua (no GPU / FFI-CPU support).

Also along the way I developed an unstructured mesh solver in pure Lua and in pure C++ before porting their capabilities into this framework.

TODO:

  • ADM3D with shift as a hyperbolic conservation law system
  • ADM3D (and BSSNOK, and any other GR solver) for minimal-distortion elliptical shift solved as a Poisson equation -- which doesn't require extra time-iterating variables.
  • GR horizon tracking / moving puncture
  • FOBSSN would be nice. Something with the analytic stability of BSSN and the algorithmic stability of finite-volume.
  • Z4 2008 Alcubierre implementation (I have the eigenvectors in 'numerical relativity codegen/run.lua') vs 2008 Yano (in 'numerical relativity codegen/verify 2008 yano')
  • Implement eigen-stuff code in SRHD so that PLM can work
  • PLM for Euler-Burgers
  • Any kind of slope extrapolation for finite-difference solvers? (BSSN and Z4c)
  • PPM
  • Better divergence removal. Multigrid on the GPU?
  • Finish GLM-(ideal)MHD ... especially implement the source term as a second step with an exp(dt) (which I'm not doing at the moment)
  • Rename mhd to ideal-mhd
  • Calculate and implement source terms for curvilinear coordinate systems (working on a tool to do this)
  • Get two-fluid-separate EMHD working, so I can update the glm-maxwell part with an implicit solver and the ion and electron parts with an explicit solver
  • Two-fluid plasma EMHD combined has numerical issues: the ion wavespeeds surpass the speed of light
  • Currently seeing errors when two solvers run simultaneously ... which makes EM+HD difficult
  • Add HLLD solver
  • Finish implementing Navier-Stokes, compressible & incompressible
  • BSSN connections based on difference with grid coordinate system
  • Test out the GR+HD solvers
  • Add source terms to GRHD -- or at least plugins for 'gr-hd-separate' to fill in from the NR solver
  • Finish the GR+EM solver
  • Add EM+GR+HD by winning
  • Change the vector field display from immediate mode to buffered geometry, and use geometry shaders if they're available
  • Coroutines for iterative solvers, so they don't stall the app execution?
  • RHD W error in >1 dimension
  • GR flat space simulations make an initial wave. But shouldn't flat space be stable?
  • Right now I'm implementing WENO similar to the 1996 Jiang-Shu paper: (1) calculate the interface flux, (2) WENO-reconstruct the interface flux, (3) finite-volume integrate. There are alternative uses of WENO (using PLM or flux limiters to find the initial interface flux values, using WENO on the left and right states and then applying a flux solver (HLL, Roe, etc.), and so on). Try these out? (A reconstruction sketch follows this list.)
  • Also, right now I am performing the PLM slope extrapolation in a separate kernel. Not for WENO. Combining kernels seems to run faster. Maybe I should just inline the PLM stuff into the same kernel?
  • Cram as much into a single kernel as possible. More inlining, more redundant calculations. This seems to run faster than separate solvers and separate buffers. Best would be a way to automate the inlining.
  • BUGS:
    • the names of the different solvers in the GUI used to include the flux name, but not anymore
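
For reference, the WENO item above follows the classic fifth-order reconstruction of Jiang & Shu (1996). Below is a small CPU-side Lua sketch of that reconstruction for a single left-biased interface value, using the standard smoothness indicators and ideal weights (1/10, 6/10, 3/10). It only illustrates the formula; the project's actual version lives in generated OpenCL.

```lua
-- WENO5 (Jiang & Shu 1996): left-biased reconstruction of the i+1/2 interface
-- value from the five cell averages f[i-2] .. f[i+2].
local function weno5(fm2, fm1, f0, fp1, fp2)
	local eps = 1e-6

	-- 3rd-order candidate reconstructions on the three sub-stencils
	local p0 = (2*fm2 - 7*fm1 + 11*f0 ) / 6
	local p1 = ( -fm1 + 5*f0  +  2*fp1) / 6
	local p2 = (2*f0  + 5*fp1 -    fp2) / 6

	-- Jiang-Shu smoothness indicators
	local b0 = 13/12*(fm2 - 2*fm1 + f0 )^2 + 1/4*(fm2 - 4*fm1 + 3*f0 )^2
	local b1 = 13/12*(fm1 - 2*f0  + fp1)^2 + 1/4*(fm1 - fp1)^2
	local b2 = 13/12*(f0  - 2*fp1 + fp2)^2 + 1/4*(3*f0 - 4*fp1 + fp2)^2

	-- nonlinear weights built from the ideal weights 1/10, 6/10, 3/10
	local a0 = 0.1 / (eps + b0)^2
	local a1 = 0.6 / (eps + b1)^2
	local a2 = 0.3 / (eps + b2)^2

	return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
end

-- cell averages of x^2 on [0,1]..[4,5]; the reconstruction recovers the exact
-- interface value 3^2 = 9 (up to floating point) since the data is smooth
print(weno5(1/3, 7/3, 19/3, 37/3, 61/3))
```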

Sources:
