Differentiable Physics#
Differentiable physics simulation utilities
This module provides advanced techniques for differentiable physics simulation:
- Smooth Contact Approximations: Replace hard constraints with smooth barrier functions that maintain gradient flow
- Implicit Differentiation: Compute gradients through iterative solvers using the implicit function theorem without storing the full computation graph
- Gradient Checkpointing: Memory-efficient backpropagation through long simulations by recomputing forward passes instead of storing all intermediate states
- Learnable Materials: Material models with parameters that can be optimized via gradient descent
All components are designed to preserve gradients for automatic differentiation while maintaining numerical stability.
- class diffsim.diff_physics.DifferentiableBarrierContact(barrier_stiffness=10000.0, barrier_width=0.01)[source]#
Bases: object
Smooth, differentiable contact model using barrier functions
This class implements smooth contact forces that maintain gradient flow for automatic differentiation. Instead of hard constraints or projections that introduce discontinuities, it uses smooth barrier potential functions that activate near contact.
The barrier potential is:
\[b(d) = -(d - \hat{d})^2 \log(d/\hat{d}) \quad \text{for } 0 < d < \hat{d}\]
where \(d\) is the distance to the surface and \(\hat{d}\) is the barrier activation distance. The potential smoothly increases as \(d \to 0\), preventing penetration while maintaining \(C^2\) continuity. A sketch of the potential follows this entry.
- __init__(barrier_stiffness=10000.0, barrier_width=0.01)[source]#
- Parameters:
barrier_stiffness – controls contact force magnitude
barrier_width – distance at which barrier activates
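To make the barrier concrete, here is a minimal sketch of the potential and of obtaining the contact force by automatic differentiation. The function name barrier_potential and the clamping detail are illustrative assumptions, not the library's implementation; the defaults mirror barrier_stiffness and barrier_width above.

import torch

def barrier_potential(d, d_hat=0.01, kappa=1e4):
    # Smooth barrier: active only for 0 < d < d_hat, identically zero otherwise.
    # The clamp keeps the log finite near d = 0; torch.where evaluates both
    # branches, so the inactive branch must also be NaN-free.
    d_safe = torch.clamp(d, min=1e-12)
    b = -(d_safe - d_hat) ** 2 * torch.log(d_safe / d_hat)
    return kappa * torch.where(d < d_hat, b, torch.zeros_like(b))

# The contact force is the negative gradient of the barrier energy:
d = torch.tensor([0.005], requires_grad=True)
force = -torch.autograd.grad(barrier_potential(d).sum(), d)[0]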
- class diffsim.diff_physics.ImplicitDifferentiation[source]#
Bases: object
Implicit differentiation for backward Euler and Newton solvers
Instead of differentiating through all Newton iterations, use the implicit function theorem:
\[\text{if } F(x^*, p) = 0, \text{ then } \frac{\partial x^*}{\partial p} = -\left(\frac{\partial F}{\partial x}\right)^{-1} \frac{\partial F}{\partial p}\]
This avoids storing the full computation graph. A sketch of the idea follows this entry.
- static implicit_backward(forward_fn, x_star, params, atol=1e-06)[source]#
Compute gradients using implicit differentiation
- Parameters:
forward_fn – function that computes residual F(x, params) = 0
x_star – solution where F(x_star, params) = 0
params – parameters to differentiate wrt
atol – tolerance for linear solve
- Returns:
solution with proper gradients attached
- Return type:
x_star
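As an illustration of the theorem in code, the following is a minimal sketch that re-attaches implicit gradients via a single differentiable Newton step around the converged solution. The helper name reattach_implicit and the dense Jacobian solve are assumptions made for clarity; a practical implementation would use a matrix-free linear solve such as conjugate_gradient below.

import torch

def reattach_implicit(F, x_star, params):
    # Illustrative sketch, not the library source. Assumes F(x, params)
    # returns a 1-D residual with the same shape as x, and that x_star
    # already satisfies F(x_star, params) ~= 0.
    x0 = x_star.detach()
    # Residual at the converged solution: numerically ~0, but it carries
    # the dependence on params through autograd.
    residual = F(x0, params)
    # Dense Jacobian dF/dx (params detached so J itself carries no grad).
    J = torch.autograd.functional.jacobian(lambda x: F(x, params.detach()), x0)
    # One Newton step: value ~= x_star, gradient = -(dF/dx)^{-1} dF/dp.
    return x0 - torch.linalg.solve(J, residual)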
- diffsim.diff_physics.conjugate_gradient(A, b, x0=None, max_iters=100, atol=1e-06)[source]#
Solve Ax = b using conjugate gradient
Jacobian-free: A is a function that computes matrix-vector products
- Parameters:
A – function that computes A @ v
b – right-hand side
x0 – initial guess
max_iters – maximum iterations
atol – absolute tolerance
- Returns:
solution
- Return type:
x
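For reference, a textbook matrix-free CG loop matching the documented signature might look like the sketch below; the exact stopping rule is an assumption.

import torch

def conjugate_gradient(A, b, x0=None, max_iters=100, atol=1e-6):
    # Illustrative sketch, not the library source. A is a callable that
    # computes A @ v for a symmetric positive-definite operator.
    x = torch.zeros_like(b) if x0 is None else x0.clone()
    r = b - A(x)                  # initial residual
    p = r.clone()                 # initial search direction
    rs_old = torch.dot(r, r)
    for _ in range(max_iters):
        if torch.sqrt(rs_old) < atol:
            break
        Ap = A(p)
        alpha = rs_old / torch.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.dot(r, r)
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: solve a diagonal system without ever forming the matrix.
diag = torch.tensor([4.0, 9.0, 16.0])
x = conjugate_gradient(lambda v: diag * v, torch.ones(3))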
- class diffsim.diff_physics.CheckpointedRollout[source]#
Bases: object
Memory-efficient rollout using gradient checkpointing
Instead of storing all intermediate states, recompute them during the backward pass. This trades computation for memory; a sketch follows this entry.
- static rollout(step_fn, state0, num_steps, checkpoint_every=10)[source]#
Perform rollout with checkpointing
- Parameters:
step_fn – function that computes next state: s_{t+1} = step_fn(s_t)
state0 – initial state
num_steps – number of steps
checkpoint_every – save state every N steps
- Returns:
list of states [s_0, s_1, …, s_T]
- Return type:
trajectory
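A minimal sketch of this pattern built on torch.utils.checkpoint is shown below. The segmenting scheme is an assumption based on checkpoint_every; the memory savings come from not storing step_fn's internal activations, which are recomputed segment by segment during backward.

import torch
from torch.utils.checkpoint import checkpoint

def rollout(step_fn, state0, num_steps, checkpoint_every=10):
    # Illustrative sketch, not the library source.
    def run_segment(state, n):
        states = []
        for _ in range(n):
            state = step_fn(state)
            states.append(state)
        return tuple(states)

    trajectory = [state0]
    state, done = state0, 0
    while done < num_steps:
        n = min(checkpoint_every, num_steps - done)
        # use_reentrant=False supports non-tensor args and is the recommended mode.
        seg = checkpoint(run_segment, state, n, use_reentrant=False)
        trajectory.extend(seg)
        state = seg[-1]
        done += n
    return trajectory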
- class diffsim.diff_physics.DifferentiableMaterial(youngs_modulus, poissons_ratio, requires_grad=True)[source]#
Bases: Module
Material model with learnable parameters
This class wraps material properties as PyTorch parameters, enabling gradient-based optimization of material constants. It implements the Stable Neo-Hookean energy density with learnable Young’s modulus \(E\) and Poisson’s ratio \(\nu\).
The energy density is:
\[\Psi(\mathbf{F}) = \frac{\mu}{2}(I_C - 3) - \mu \log J + \frac{\lambda}{2}(J-1)^2\]
where \(\mu\) and \(\lambda\) are computed from \(E\) and \(\nu\), \(I_C = \operatorname{tr}(\mathbf{F}^T\mathbf{F})\) is the first invariant, and \(J = \det \mathbf{F}\).
- E#
Learnable Young’s modulus
- Type:
torch.nn.Parameter
- nu#
Learnable Poisson’s ratio
- Type:
torch.nn.Parameter
Example
>>> material = DifferentiableMaterial(1e5, 0.4, requires_grad=True)
>>> optimizer = torch.optim.Adam(material.parameters(), lr=1e3)
>>> # Run simulation and optimize material.E and material.nu
- __init__(youngs_modulus, poissons_ratio, requires_grad=True)[source]#
- Parameters:
youngs_modulus – (scalar or per-element) Young’s modulus
poissons_ratio – (scalar or per-element) Poisson’s ratio
requires_grad – whether to track gradients
- property mu#
Lamé parameter μ (differentiable)
- property lam#
Lamé parameter λ (differentiable)
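The properties above follow the standard conversion from engineering constants to Lamé parameters, \(\mu = E / (2(1+\nu))\) and \(\lambda = E\nu / ((1+\nu)(1-2\nu))\). A minimal sketch of the pattern, with an assumed energy_density helper evaluating \(\Psi\) on a batch of deformation gradients (class and method names are illustrative, not the library source):

import torch
import torch.nn as nn

class DifferentiableMaterialSketch(nn.Module):
    def __init__(self, youngs_modulus, poissons_ratio, requires_grad=True):
        super().__init__()
        # Material constants stored as parameters so optimizers can update them.
        self.E = nn.Parameter(torch.as_tensor(float(youngs_modulus)),
                              requires_grad=requires_grad)
        self.nu = nn.Parameter(torch.as_tensor(float(poissons_ratio)),
                               requires_grad=requires_grad)

    @property
    def mu(self):
        # mu = E / (2 (1 + nu))
        return self.E / (2.0 * (1.0 + self.nu))

    @property
    def lam(self):
        # lambda = E nu / ((1 + nu)(1 - 2 nu))
        return self.E * self.nu / ((1.0 + self.nu) * (1.0 - 2.0 * self.nu))

    def energy_density(self, F):
        # Psi(F) = mu/2 (I_C - 3) - mu log J + lam/2 (J - 1)^2,
        # for deformation gradients F of shape (..., 3, 3).
        J = torch.linalg.det(F)
        I_C = (F * F).sum(dim=(-2, -1))   # tr(F^T F)
        return (0.5 * self.mu * (I_C - 3)
                - self.mu * torch.log(J)
                + 0.5 * self.lam * (J - 1) ** 2)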
- class diffsim.diff_physics.SpatiallyVaryingMaterial(num_elements, base_youngs=100000.0, base_poisson=0.4)[source]#
Bases: Module
Material with spatially varying properties
This class allows each element in the mesh to have independent material properties, enabling optimization of heterogeneous material distributions. Young’s modulus is parameterized in log-space to ensure positivity:
\[E_i = \exp(\log E_i)\]
This is particularly useful for:
- Inverse material identification problems
- Topology optimization
- Functionally graded material design
- Material distribution learning
- log_E#
Log-space Young’s modulus per element \((M,)\)
- Type:
torch.nn.Parameter
- nu#
Poisson’s ratio per element \((M,)\)
- Type:
torch.nn.Parameter
Example
>>> material = SpatiallyVaryingMaterial(mesh.num_elements, 1e5, 0.4)
>>> optimizer = torch.optim.Adam([material.log_E], lr=0.01)
>>> # Optimize spatial distribution of stiffness
- __init__(num_elements, base_youngs=100000.0, base_poisson=0.4)[source]#
- Parameters:
num_elements – number of elements in mesh
base_youngs – base Young’s modulus
base_poisson – base Poisson’s ratio
- property E#
Young’s modulus (always positive via exp)
- property mu#
Per-element μ
- property lam#
Per-element λ
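The essential trick here is the log-space parameterization. A minimal, self-contained sketch of it (the class name is illustrative): a gradient step on log_E rescales E multiplicatively, so the modulus can never be driven negative.

import math
import torch
import torch.nn as nn

class LogSpaceYoungsModulus(nn.Module):
    def __init__(self, num_elements, base_youngs=1e5):
        super().__init__()
        # E = exp(log_E) is positive by construction for any value of log_E.
        self.log_E = nn.Parameter(torch.full((num_elements,), math.log(base_youngs)))

    @property
    def E(self):
        return torch.exp(self.log_E)

# An unconstrained gradient step on log_E keeps every modulus positive:
m = LogSpaceYoungsModulus(num_elements=4)
loss = m.E.sum()                  # stand-in for a simulation loss
loss.backward()
with torch.no_grad():
    m.log_E -= 0.01 * m.log_E.grad
assert (m.E > 0).all()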