fire
- torch_sim.optimizers.fire(model, *, dt_max=1.0, dt_start=0.1, n_min=5, f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99, max_step=0.2, md_flavor=ase_fire_key)
Initialize a batched FIRE optimization.
Creates an optimizer that relaxes atomic positions using the FIRE (Fast Inertial Relaxation Engine) algorithm.
- Parameters:
model (Module) – Model that computes energies, forces, and stress
dt_max (float) – Maximum allowed timestep
dt_start (float) – Initial timestep
n_min (int) – Minimum steps before timestep increase
f_inc (float) – Factor for timestep increase when power is positive
f_dec (float) – Factor for timestep decrease when power is negative
alpha_start (float) – Initial velocity mixing parameter
f_alpha (float) – Factor for mixing parameter decrease
max_step (float) – Maximum distance an atom can move per iteration (default: 0.2). Only used when md_flavor="ase_fire".
md_flavor ("vv_fire" | "ase_fire") – Optimization flavor. Default is "ase_fire".
- Returns:
Initialization function that creates a FireState.
Update function (either vv_fire_step or ase_fire_step) that performs one FIRE optimization step.
- Return type:
tuple[Callable, Callable]
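A minimal usage sketch (not taken from the torch-sim documentation): it assumes model is a compatible force-providing model and state is an existing simulation state, that the initialization function accepts that state, and that the returned FireState exposes a forces tensor usable for a convergence check.

from torch_sim.optimizers import fire

# Sketch only: `model` and `state` are assumed to already exist.
init_fn, step_fn = fire(model, md_flavor="ase_fire")

opt_state = init_fn(state)              # creates a FireState
for _ in range(500):                    # illustrative step cap
    opt_state = step_fn(opt_state)      # one FIRE optimization step
    if opt_state.forces.abs().max() < 0.02:  # assumed convergence criterion
        break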
Notes
md_flavor="vv_fire" follows the original paper closely, including integration with Velocity Verlet steps. See https://doi.org/10.1103/PhysRevLett.97.170201 and https://github.com/Radical-AI/torch-sim/issues/90#issuecomment-2826179997 for details.
md_flavor="ase_fire" mimics the implementation in ASE, which differs slightly in the update steps and does not explicitly use atomic masses in the velocity update step. See https://gitlab.com/ase/ase/-/blob/66963e6e38/ase/optimize/fire.py#L164-214 for details.
FIRE is generally more efficient than plain gradient descent for atomic structure optimization.
The algorithm adaptively adjusts the timestep and the velocity-mixing parameter based on the power P = F · v (the dot product of forces and velocities), as sketched below.
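For intuition, here is a single-system sketch of that adaptive rule, following the original paper rather than torch-sim's batched implementation; the function name is hypothetical and the defaults mirror the parameters documented above.

import torch

def fire_update(v, f, dt, alpha, n_pos, *, dt_max=1.0, n_min=5,
                f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99):
    # Illustrative sketch of the FIRE adaptive rule (not torch-sim code).
    # v, f: (n_atoms, 3) velocity and force tensors for one system.
    power = (f * v).sum()  # P = F . v
    if power > 0:
        # Downhill: steer the velocity toward the force direction.
        v = (1 - alpha) * v + alpha * v.norm() * f / f.norm()
        n_pos += 1
        if n_pos > n_min:  # downhill for more than n_min steps: speed up
            dt = min(dt * f_inc, dt_max)
            alpha *= f_alpha
    else:
        # Uphill: stop, shrink the timestep, reset the mixing parameter.
        v = torch.zeros_like(v)
        n_pos = 0
        dt *= f_dec
        alpha = alpha_start
    return v, dt, alpha, n_pos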