gradient_descent

torch_sim.optimizers.gradient_descent(model, *, lr=0.01)

Initialize a batched gradient descent optimization.

Creates an optimizer that performs standard gradient descent on atomic positions for multiple systems in parallel. Atomic positions are updated using forces computed by the provided model; the simulation cell is not optimized by this optimizer.

Parameters:
  • model (Module) – Model that computes energies and forces

  • lr (Tensor | float) – Learning rate(s) for optimization. Can be a single float applied to all batches or a tensor with shape [n_batches] for batch-specific rates

Returns:

A pair of functions:
  • Initialization function that creates the initial BatchedGDState

  • Update function that performs one gradient descent step

Return type:

tuple
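
A minimal usage sketch of the returned pair of functions. Here lj_model stands in for any model that computes energies and forces, and initial_state for a batched system state prepared elsewhere; both names are placeholders and not part of this API.

    import torch_sim

    # `lj_model` and `initial_state` are assumed placeholders: a model that
    # computes energies and forces, and a batched state for the systems to relax.
    init_fn, update_fn = torch_sim.optimizers.gradient_descent(lj_model, lr=0.01)

    state = init_fn(initial_state)   # creates the initial BatchedGDState
    for _ in range(200):             # run a fixed number of gradient descent steps
        state = update_fn(state)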

Notes

The learning rate controls the step size during optimization. Larger values can speed up convergence but may cause instability in the optimization process.
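
As described in the lr parameter above, the learning rate may also be supplied per batch. A small sketch under that assumption, using hypothetical values and the same placeholder model as above:

    import torch
    import torch_sim

    # Hypothetical per-batch learning rates for three batched systems,
    # shaped [n_batches] as described in the `lr` parameter.
    per_batch_lr = torch.tensor([0.01, 0.005, 0.02])
    init_fn, update_fn = torch_sim.optimizers.gradient_descent(lj_model, lr=per_batch_lr)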