January 17, 2017 (v1) Publication
The adjoint mode of Algorithmic Differentiation (AD) is particularly attractive for computing gradients. However, this mode needs to use the intermediate values of the original simulation in reverse order, at a cost that increases with the length of the simulation. AD research looks for strategies to reduce this cost, for instance by taking...
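To make the memory issue concrete, here is a minimal sketch of a tape-based adjoint of a time-stepping loop (the `step`/`step_adj` functions and their toy dynamics are illustrative assumptions, not taken from the publication): the forward sweep stores every intermediate state, and the reverse sweep consumes them in the opposite order, so storage grows with the number of steps.

```python
def step(x, p):
    # Illustrative nonlinear time step (assumed for this sketch)
    return x + 0.01 * p * x * (1.0 - x)

def step_adj(x, p, xbar):
    # Adjoint of one step: propagates xbar backward, returns the d/dp contribution
    dx = 1.0 + 0.01 * p * (1.0 - 2.0 * x)
    dp = 0.01 * x * (1.0 - x)
    return xbar * dx, xbar * dp

def gradient(p, x0=0.1, nsteps=1000):
    stack = []                 # tape of intermediate states
    x = x0
    for _ in range(nsteps):    # forward sweep: store every state
        stack.append(x)
        x = step(x, p)
    xbar, pbar = 1.0, 0.0      # seed: derivative of the final state w.r.t. itself
    while stack:               # reverse sweep: states are needed in reverse order
        x = stack.pop()
        xbar, dpbar = step_adj(x, p, xbar)
        pbar += dpbar
    return pbar                # d(final state)/dp

print(gradient(2.0))
```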
-
June 2016 (v1) Conference paper
Checkpointing is a classical technique to mitigate the overhead of adjoint Algorithmic Differentiation (AD). In the context of source transformation AD with the Store-All approach, checkpointing reduces the peak memory consumption of the adjoint, at the cost of duplicate runs of selected pieces of the code. Checkpointing is vital for long...
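As a rough illustration of this trade-off, the sketch below (same illustrative `step`/`step_adj` dynamics as above, repeated so the snippet is self-contained; not the publication's actual scheme) replaces "store every state" with "store one checkpoint per segment and recompute each segment just before reversing it", trading duplicate runs for a lower memory peak.

```python
def step(x, p):                       # illustrative dynamics (assumed)
    return x + 0.01 * p * x * (1.0 - x)

def step_adj(x, p, xbar):             # adjoint of one step (assumed)
    dx = 1.0 + 0.01 * p * (1.0 - 2.0 * x)
    dp = 0.01 * x * (1.0 - x)
    return xbar * dx, xbar * dp

def gradient_ckp(p, x0=0.1, nsteps=1000, seg=100):
    # Forward sweep: keep only one checkpoint per segment of `seg` steps.
    checkpoints = []
    x = x0
    for i in range(nsteps):
        if i % seg == 0:
            checkpoints.append(x)
        x = step(x, p)
    xbar, pbar = 1.0, 0.0
    # Reverse sweep, last segment first: rerun the segment from its checkpoint
    # (duplicate run) to rebuild a short local tape, then pop that tape.
    for s in reversed(range(len(checkpoints))):
        x = checkpoints[s]
        tape = []
        for _ in range(s * seg, min((s + 1) * seg, nsteps)):
            tape.append(x)
            x = step(x, p)
        while tape:
            x = tape.pop()
            xbar, dpbar = step_adj(x, p, xbar)
            pbar += dpbar
    return pbar

# Peak storage: ~ nsteps/seg checkpoints + seg taped states, instead of
# nsteps states with plain Store-All; in exchange, every step runs twice.
print(gradient_ckp(2.0))
```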
-
February 22, 2016 (v1) Report
Checkpointing is a classical technique to mitigate the overhead of adjoint Algorithmic Differentiation (AD). In the context of source transformation AD with the Store-All approach, checkpointing reduces the peak memory consumption of the adjoint, at the cost of duplicate runs of selected pieces of the code. Checkpointing is vital for long...
-
July 2016 (v1) Conference paper
Checkpointing is a classical strategy to reduce the peak memory consumption of the adjoint. It is vital for codes with long run times, which is the case for most MPI parallel applications. However, for MPI codes checkpointing has always been addressed by ad hoc hand manipulation of the differentiated code, with no formal assurance of...
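One way to see the difficulty is the informal rule, sketched below with mpi4py, that a piece of code selected for checkpointing should contain both ends of every communication it takes part in, so that re-running it during the reverse sweep replays only matched messages. The halo-exchange `step` is an illustrative assumption; this is not the formal criterion the paper works toward.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def step(x, p):
    # Illustrative local update followed by a halo exchange whose send and
    # matching receive both happen inside this function ("communication-closed").
    x = x + 0.01 * p * x * (1.0 - x)
    left, right = (rank - 1) % size, (rank + 1) % size
    halo = comm.sendrecv(x, dest=right, source=left)
    return 0.5 * (x + halo)

# A checkpointed segment made of whole calls to `step` can be re-executed
# during the reverse sweep: every message it sends is also received inside
# the segment. If instead the segment contained a send whose matching receive
# lies outside it, the duplicate run would emit an unmatched message and
# break the communication pattern; ruling this out systematically is what a
# formal checkpointing discipline for adjoint MPI codes has to provide.
x = float(rank)
for _ in range(10):
    x = step(x, 2.0)
```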
-
September 14, 2015 (v1) Conference paper
Efficient Algorithmic Differentiation of Fixed-Point loops requires a specific strategy to avoid an explosion of memory requirements. Among the strategies documented in the literature, we have selected the one introduced by B. Christianson. This method features original mechanisms such as repeated access to the trajectory stack or duplicated...
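As background on the kind of scheme involved, here is a toy sketch of the two-phase adjoint idea attributed to Christianson: only the converged state is differentiated, and the adjoint is itself obtained by a fixed-point iteration around that state, so memory does not grow with the number of forward iterations. The map `phi`, its hand-coded partial derivatives, and all names are illustrative assumptions; the actual method also involves mechanisms (such as repeated access to the trajectory stack) that this sketch does not show.

```python
import math

def phi(x, p):
    # Illustrative contraction; the forward loop converges to x* = p + 0.5*sin(x*)
    return p + 0.5 * math.sin(x)

def solve(p, tol=1e-12):
    # Forward phase: iterate x <- phi(x, p) to convergence.
    x = 0.0
    while abs(phi(x, p) - x) > tol:
        x = phi(x, p)
    return x

def adjoint(p, xstar, xbar=1.0, tol=1e-12):
    # Adjoint phase: iterate w <- xbar + (dphi/dx at x*) * w to convergence,
    # then return pbar = (dphi/dp at x*) * w. Only the converged state x* is needed.
    dphi_dx = 0.5 * math.cos(xstar)   # partial derivative w.r.t. the state, at x*
    dphi_dp = 1.0                     # partial derivative w.r.t. the parameter
    w = 0.0
    while abs(xbar + dphi_dx * w - w) > tol:
        w = xbar + dphi_dx * w
    return dphi_dp * w

p = 0.7
xstar = solve(p)
print(adjoint(p, xstar))                              # d x*/d p
eps = 1e-6
print((solve(p + eps) - solve(p - eps)) / (2 * eps))  # finite-difference check
```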