gradient_descent
- torch_sim.optimizers.gradient_descent(model, *, lr=0.01)
Initialize a batched gradient descent optimization.
Creates an optimizer that performs standard gradient descent on atomic positions for multiple systems in parallel. The optimizer updates atomic positions based on forces computed by the provided model. The cell is not optimized with this optimizer.
- Parameters:
  - model – Model that computes forces on the atomic positions of each system in the batch.
  - lr – Learning rate that scales each position update. Default: 0.01.
- Returns:
  A pair of functions:
  - Initialization function that creates the initial BatchedGDState
  - Update function that performs one gradient descent step
- Return type:
  tuple[Callable, Callable]
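A minimal usage sketch of the returned pair, assuming a force-computing `model` and a batched `initial_state` have already been constructed (those names, the loop length, and the exact init-function signature are illustrative assumptions, not part of the documented API):

```python
from torch_sim.optimizers import gradient_descent

# Assumption: `model` computes forces for the batched systems and
# `initial_state` holds their atomic positions.
init_fn, update_fn = gradient_descent(model, lr=0.01)

state = init_fn(initial_state)   # create the initial BatchedGDState
for _ in range(100):             # fixed step count, purely illustrative
    state = update_fn(state)     # one gradient descent step on positions
```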
Notes
The learning rate controls the step size during optimization. Larger values can speed up convergence but may cause instability in the optimization process.
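For intuition, a single step displaces the positions along the forces (the negative energy gradient) scaled by the learning rate; below is a minimal standalone sketch of that rule in plain PyTorch, not the library's internal implementation:

```python
import torch

def gd_position_step(
    positions: torch.Tensor, forces: torch.Tensor, lr: float = 0.01
) -> torch.Tensor:
    """One gradient descent step on atomic positions.

    Forces are the negative gradient of the energy, so moving along them
    by ``lr`` lowers the energy; a larger ``lr`` takes larger, riskier steps.
    """
    return positions + lr * forces
```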