unit_cell_gradient_descent¶
- torch_sim.optimizers.unit_cell_gradient_descent(model, *, positions_lr=0.01, cell_lr=0.1, cell_factor=None, hydrostatic_strain=False, constant_volume=False, scalar_pressure=0.0)[source]¶
Initialize a batched gradient descent optimization with unit cell parameters.
Creates an optimizer that performs gradient descent on both atomic positions and unit cell parameters for multiple systems in parallel. Supports constraints on cell deformation and applied external pressure.
This optimizer extends standard gradient descent to simultaneously optimize both atomic coordinates and unit cell parameters based on forces and stress computed by the provided model.
- Parameters:
model (Module) – Model that computes energies, forces, and stress.
positions_lr (float) – Learning rate for atomic positions optimization. Default is 0.01.
cell_lr (float) – Learning rate for unit cell optimization. Default is 0.1.
cell_factor (float | Tensor | None) – Scaling factor for cell optimization. If None, defaults to the number of atoms per batch.
hydrostatic_strain (bool) – Whether to only allow hydrostatic deformation (isotropic scaling). Default is False.
constant_volume (bool) – Whether to maintain constant volume during optimization. Default is False.
scalar_pressure (float) – Applied external pressure in GPa. Default is 0.0.
- Returns:
A pair of functions:
- Initialization function that creates a BatchedUnitCellGDState
- Update function that performs one gradient descent step with cell optimization
- Return type:
tuple[Callable, Callable]
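A minimal usage sketch of the returned pair. Here model stands for any torch_sim model that exposes energies, forces, and stress, and sim_state for a batched system state prepared elsewhere in your pipeline; both are placeholders, not part of this function's signature, and the step count is arbitrary.

```python
from torch_sim.optimizers import unit_cell_gradient_descent

# `model` and `sim_state` are placeholders for a stress-capable torch_sim
# model and a batched system state prepared elsewhere.
init_fn, update_fn = unit_cell_gradient_descent(
    model,
    positions_lr=0.01,    # step size for atomic positions
    cell_lr=0.1,          # step size for the unit cell
    cell_factor=None,     # None -> scale by number of atoms per batch
    scalar_pressure=0.0,  # external pressure in GPa
)

state = init_fn(sim_state)  # creates a BatchedUnitCellGDState
for _ in range(100):        # fixed step budget; add your own convergence test
    state = update_fn(state)
```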
Notes
- To fix the cell and only optimize atomic positions, set both constant_volume=True and hydrostatic_strain=True (see the sketch after these notes).
- The cell_factor parameter controls the relative scale of atomic vs. cell optimization.
- Larger values for positions_lr and cell_lr can speed up convergence but may cause instability in the optimization process.
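A sketch of the fixed-cell setup from the first note, reusing the placeholder model from the example above. With both constraints active, the cell cannot change volume or shape, so each update moves only the atomic positions.

```python
from torch_sim.optimizers import unit_cell_gradient_descent

# Combining both constraints pins the cell completely: constant_volume
# forbids volume changes, hydrostatic_strain restricts any remaining
# deformation to isotropic scaling, which the volume constraint then blocks.
init_fn, update_fn = unit_cell_gradient_descent(
    model,                    # placeholder model, as above
    constant_volume=True,
    hydrostatic_strain=True,
)
```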