Testing a graphical physics simulation involves checking the values of the calculations at each step of the simulation. The most straightforward way to write tests for one is probably to define a file format for the simulation's output, which your test script then reads back in. Here's an example session using the test oracle for my class:
```
$ ./simulation -s test02.xml -o binary_output.bin
FOSSSim message: Saving simulation to: binary_output.bin
Progress: --------------------------------------------------
          .................................................
FOSSSim message: Simulation complete at time 3. Exiting.
FOSSSim message: Saved simulation to file 'binary_output.bin'.
```
```
$ oracle -s test02.xml -i binary_output.bin
Scene: assets/t2m1/VertexEdgeTests/test02.xml
Integrator: Implicit Euler
Collision Handling: Simple Collision Handling
Description: Two twirling batons collide in mid-air.
FOSSSim message: Benchmarking simulation in: binary_output.bin
Progress: --------------------------------------------------
          ...........
Collision Detection Error: Missed collision: Particle 3 - Edge 0 with normal = -0.291687 0.20956
Collision Detection Error: Missed collision: Particle 3 - Edge 0 with normal = -0.22631 0.150577
Collision Detection Error: Missed collision: Particle 3 - Edge 0 with normal = -0.154823 0.0951871
Superfluous collision: Particle 1 - Edge 1 with normal = 0.197274 -0.235839
Collision Detection Error: Missed collision: Particle 3 - Edge 0 with normal = -0.159027 0.0882626
Collision Detection Error: Missed collision: Particle 3 - Edge 0 with normal = -0.154444 0.0818601
.Collision Detection Error: Collision with wrong normal: Particle 3 - Edge 0 with normal = -0.101047 0.322061
  should be Particle 3 - Edge 0 with normal = -0.143955 0.072883
.....................................
FOSSSim message: Simulation complete at time 3. Exiting.
FOSSSim message: Closing benchmarked simulation file 'binary_output.bin'.
Accumulated Position Residual: 0        Passed.
Accumulated Velocity Residual: 60.6793  Failed.
Maximum Position Residual: 0            Passed.
Maximum Velocity Residual: 11.4964      Failed.
Collisions Passed: No.
Overall success: Failed.
FOSSSim message: Saved residual to residual.txt.
```
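As a sketch of what such a test script can look like, here's a minimal Python version of the oracle's core loop. The binary layout (four little-endian doubles per particle per frame) and the function names are my own invention for illustration, not FOSSSim's actual format:

```python
import struct

FMT = "<4d"  # x, y, vx, vy per particle (hypothetical layout)
SIZE = struct.calcsize(FMT)

def save_frames(path, frames):
    """Dump every frame's particle states to a binary file."""
    with open(path, "wb") as f:
        for frame in frames:
            for particle in frame:
                f.write(struct.pack(FMT, *particle))

def check(path, reference, tol=1e-6):
    """Oracle-style comparison: read the recording back and track the
    accumulated and maximum residuals against a reference run."""
    accumulated = maximum = 0.0
    with open(path, "rb") as f:
        for frame in reference:
            for expected in frame:
                actual = struct.unpack(FMT, f.read(SIZE))
                residual = sum((a - e) ** 2
                               for a, e in zip(actual, expected)) ** 0.5
                accumulated += residual
                maximum = max(maximum, residual)
    return accumulated <= tol and maximum <= tol
```

A recording saved with `save_frames` checks out against its own reference data; perturbing any value past the tolerance fails the comparison, which is essentially what the residual lines in the oracle's report are measuring.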
Because the simulation and the test oracle are both compiled C++ programs, this gives us instant feedback. It's easy to imagine a CI server like Travis (or the more ambitious Hydra, which I've yet to figure out) compiling and running the simulation.
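A CI setup for this could be quite small. The following `.travis.yml` is a hypothetical sketch (the `make` targets are assumptions about the project's build, not its real ones), mirroring the session above:

```yaml
# Hypothetical Travis CI config: build the simulation and its oracle,
# then replay a scene and benchmark it against a known-good recording.
language: cpp
script:
  - make simulation oracle
  - ./simulation -s test02.xml -o binary_output.bin
  - ./oracle -s test02.xml -i binary_output.bin
```

Since the oracle exits after printing its pass/fail report, the build goes red whenever the residuals or collision checks fail.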
Can we learn anything from this approach that's applicable to testing Django applications, or to web development in general? Is there any work we're doing that would be simplified by using a file to represent the results of our code? And is it useful to think of the XML "scene" file as a representation of a particular setup of a Django application? Test fixtures are commonplace and have a specific meaning in Django: JSON files used to populate the database. We use factories instead of fixtures because they're more flexible.
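To make that fixture/factory contrast concrete, here's a plain-Python sketch; no Django or factory_boy involved, and the model and field names are invented for illustration:

```python
import json

# A fixture is frozen data: every test that loads it
# gets exactly the same rows.
FIXTURE = json.loads("""
[{"model": "auth.user", "pk": 1,
  "fields": {"username": "alice", "is_staff": false}}]
""")

def user_factory(**overrides):
    """A factory builds a record on demand from defaults, so each
    test states only the attributes it actually cares about."""
    record = {"username": "someuser", "is_staff": False}
    record.update(overrides)
    return record

frozen = FIXTURE[0]["fields"]        # always alice, always non-staff
admin = user_factory(is_staff=True)  # one relevant detail per test
```

The flexibility the factory buys is exactly that `overrides` parameter: a new test case means a new call with different arguments, not a new JSON file.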
I don't really have answers to these questions. Testing Django models and views is well understood, and we have the practice down. I'm asking only to see whether we can speed up test runs, a pervasive issue when testing Django. Maybe the solution isn't some new approach to testing. Instead, maybe I should just try preserving the test database between test runs so it doesn't have to be set up each time.
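Django's test runner already supports exactly this: the `--keepdb` flag (added in Django 1.8) preserves the test database between runs, skipping creation and migrations on subsequent invocations. The app label below is a placeholder:

```shell
# Keep the test database between runs; after the first run,
# database creation and migrations are skipped.
python manage.py test --keepdb

# Composes with the usual options, e.g. limiting to one app
# ("myapp" is a placeholder, not a real app in this post):
python manage.py test myapp --keepdb
```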