Memory and Performance Issues with starting_solution in PyBaMM for RL Environment

Hello,

I’m trying to use PyBaMM to build a reinforcement learning environment, but I’ve encountered an issue. To allow the current to change in real-time based on the values provided by the agent, I adopted the following approach: continuously creating experiments and simulations, and using starting_solution to ensure the next simulation continues from the previous result.

I want to account for aging in the loop, so this cycle continues until the SOH drops to 0.8. However, possibly due to the nature of the starting_solution feature, data from every previous cycle is retained, so the program eventually runs out of memory and slows down significantly. How can I resolve this?

The code is like this:
def step(self, action):
    self.current = action[0]
    experiment = pybamm.Experiment([f"Charge at {self.current} C for {self.dt} seconds"])
    self.simulation = pybamm.Simulation(
        self.model,
        parameter_values=self.param,
        experiment=experiment,
        solver=self.solver,
    )
    self.start = self.simulation.solve(starting_solution=self.start, calc_esoh=False)

I noticed that each new self.start holds a reference to the previous self.start, so every call to solve appends to an ever-growing chain of solutions in Python's memory. Is that intentional?
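For what it's worth, this growth pattern can be reproduced without PyBaMM at all. In the sketch below, `Solution` and `starting_solution` are plain-Python stand-ins for the real objects, not PyBaMM's API; the point is only that a chain of back-references keeps the whole history reachable:

```python
# Hypothetical sketch (plain Python, not PyBaMM) of why chaining solutions
# this way grows without bound: each new "solution" stores a reference to
# the one it started from, so the entire history stays alive.
class Solution:
    def __init__(self, data, starting_solution=None):
        self.data = data                            # this step's results
        self.starting_solution = starting_solution  # link to all prior steps

start = None
for _ in range(1000):
    start = Solution(data=[0.0] * 1000, starting_solution=start)

# Walking the chain shows every one of the 1000 steps is still reachable:
depth = 0
node = start
while node is not None:
    depth += 1
    node = node.starting_solution
print(depth)  # 1000
```

Because `start` always points at the newest link and each link points at its predecessor, the garbage collector can never free the older solutions.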


Thanks for your response!
Yes, this was intentional: I wanted each simulation cycle to continue from the previous result so the current can be adjusted in real time. But as the cycles accumulate, memory usage grows until the program crashes, and the slowdown becomes severe.
What I want is real-time current adjustment (via the agent) on the same battery, but this implementation isn't viable.

Hi @squirrel, can you try replacing

 self.start = self.simulation.solve(starting_solution=self.start, calc_esoh=False)

with

sol = self.simulation.solve(starting_solution=self.start, calc_esoh=False)
self.start = sol.sub_solutions[-1]
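To illustrate why this helps, here is a hedged plain-Python sketch; `Solution`, `SubSolution`, and `solve` below are mock stand-ins rather than PyBaMM's actual classes. Keeping only the last sub-solution drops the back-reference to the accumulated history, so earlier cycles become garbage-collectable:

```python
# Mock sketch: a full solve result links back to its starting_solution,
# while each sub-solution is a standalone slice with no history link.
class SubSolution:
    def __init__(self, data):
        self.data = data  # just this step's results

class Solution:
    def __init__(self, data, starting_solution=None):
        self.starting_solution = starting_solution  # chain to full history
        self.sub_solutions = [SubSolution(data)]    # per-step slices

def solve(starting_solution):
    return Solution([0.0] * 1000, starting_solution)

start = None
for _ in range(1000):
    # Keep only the newest slice; the chained Solution becomes garbage.
    start = solve(start).sub_solutions[-1]

print(hasattr(start, "starting_solution"))  # False: no chain retained
```

Since the object carried into the next step no longer references its predecessors, memory stays roughly constant per step instead of growing with the number of cycles.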

Thank you for your reply! This solution works perfectly - I really appreciate your help!