fire
- torch_sim.optimizers.fire(model, *, dt_max=1.0, dt_start=0.1, n_min=5, f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99)
Initialize a batched FIRE optimization.
Creates an optimizer that performs FIRE (Fast Inertial Relaxation Engine) optimization on atomic positions.
- Parameters:
model (Module) – Model that computes energies, forces, and stress
dt_max (float) – Maximum allowed timestep
dt_start (float) – Initial timestep
n_min (int) – Minimum steps before timestep increase
f_inc (float) – Factor for timestep increase when power is positive
f_dec (float) – Factor for timestep decrease when power is negative
alpha_start (float) – Initial velocity mixing parameter
f_alpha (float) – Factor for mixing parameter decrease
- Returns:
- A pair of functions:
An initialization function that creates a FireState
An update function that performs one FIRE optimization step
- Return type:
Notes
- FIRE is generally more efficient than standard gradient descent for atomic structure optimization.
- The algorithm adaptively adjusts the timestep and velocity-mixing parameter based on the dot product of the forces and velocities.
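The adaptive scheme described above can be sketched for a single (non-batched) system. The function below, its signature, and the semi-implicit Euler integrator are illustrative assumptions for clarity, not the torch_sim implementation; only the parameter names and the power-based timestep/mixing updates mirror the documented arguments:

```python
import numpy as np

def fire_minimize(x0, grad, *, dt_start=0.1, dt_max=1.0, n_min=5,
                  f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99,
                  max_steps=2000, f_tol=1e-6):
    """Toy single-system FIRE minimizer (illustration, not torch_sim)."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    dt, alpha, n_pos = dt_start, alpha_start, 0
    for _ in range(max_steps):
        f = -grad(x)                     # forces are minus the gradient
        if np.linalg.norm(f) < f_tol:    # converged on the force norm
            break
        # Semi-implicit Euler MD step
        v = v + dt * f
        x = x + dt * v
        power = np.dot(f, v)             # P = F . v
        if power > 0:                    # moving downhill
            n_pos += 1
            f_norm = np.linalg.norm(f)
            if f_norm > 0:
                # steer the velocity toward the force direction
                v = (1 - alpha) * v + alpha * np.linalg.norm(v) * f / f_norm
            if n_pos > n_min:            # accelerate after n_min good steps
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                            # moving uphill: freeze and restart
            n_pos = 0
            dt *= f_dec
            alpha = alpha_start
            v = np.zeros_like(v)
    return x
```

For example, on the quadratic bowl with gradient `lambda x: x`, `fire_minimize` drives the position toward the origin, speeding up while the power stays positive and resetting the velocity whenever a step moves uphill.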