I am running the following code in order to see how the degradation of the cell is affected by different drive cycles. My drive cycle has spiky currents, and I have found that the CasADi solver fails unless I use a small timestep, which is unreasonably small for the other steps. I have tried the IDAKLU solver, and while this solved that issue, it seemed to cause another one.
import pybamm
import numpy as np
import time
ExperimentFolder = "D:/Stack Exchange"
model = pybamm.lithium_ion.DFN()
### This is a Sinusoidal Discharge Current using the below values
DCOffset = 5
ACAmplitude = 5
ACFrequency = 0.01
NumberOfCycles = 10000
Resolution = 100 # This is the number of points per cycle
timeseries = np.linspace(0, 1 / ACFrequency * NumberOfCycles, NumberOfCycles * Resolution)
current = DCOffset + ACAmplitude * (np.sin(2 * np.pi * timeseries * ACFrequency))
Discharge_Cycle = np.column_stack([timeseries, current])
Steps = [
    pybamm.step.current(Discharge_Cycle, termination="2.5 V"),
    "Charge at 0 A for 5 minutes",
    "Charge at 10 A until 4.2 V",
    "Hold at 4.2 V until 50 mA",
    "Charge at 0 A for 5 minutes"
]
experiment = pybamm.Experiment(Steps)
Number_Of_Cycles = 500
for i in range(Number_Of_Cycles):
    Start_Time = time.time()
    # sim = pybamm.Simulation(model, experiment=experiment, solver=pybamm.IDAKLUSolver())
    sim = pybamm.Simulation(
        model,
        experiment=experiment,
        solver=pybamm.CasadiSolver(mode="safe", extra_options_setup={"max_step_size": 1 / (ACFrequency * Resolution)}),
    )
    solution = sim.solve()
    # Carry the final state of this cycle over as the initial condition of the next cycle
    model = model.set_initial_conditions_from(solution.last_state, inplace=False)
    solution.save(f"{ExperimentFolder}/Cycle_{i}.pkl")
    Stop_Time = time.time()
    print(f"Performed Cycle {i} out of {Number_Of_Cycles}, taking {Stop_Time - Start_Time}s to solve the last cycle")
Here you can see the CasADi solver:
However, when using the IDAKLU solver there seems to be a memory leak, although the simulation runs a lot more smoothly:
Any help as to what I am doing wrong would be much appreciated (or other ways of accomplishing what I am trying to do).
@Nicholas_Budenberg representing the sinusoidal step as a symbolic PyBaMM expression using the symbolic time variable pybamm.t can be more efficient. Building on @martinjrobins’ suggestion:
tf = 1 / ACFrequency * NumberOfCycles
current = DCOffset + ACAmplitude * (np.sin(2 * np.pi * pybamm.t * ACFrequency))
Number_Of_Cycles = 500
Steps = [
    pybamm.step.current(current, termination="2.5 V", duration=tf),
    "Charge at 0 A for 5 minutes",
    "Charge at 10 A until 4.2 V",
    "Hold at 4.2 V until 50 mA",
    "Charge at 0 A for 5 minutes"
] * Number_Of_Cycles
experiment = pybamm.Experiment(Steps)
This runs about 10x faster than the previous version on my machine (tested for 10 cycles).
As a side note, you can see the simulation progress by raising PyBaMM's logging level.
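For example, a one-line snippet (the "NOTICE" level is a suggestion; other levels such as "INFO" also work):
import pybamm

# At the "NOTICE" level PyBaMM logs each experiment cycle/step as it is solved
pybamm.set_logging_level("NOTICE")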
Thank you very much for your timely reply. I have run the code with your speed-up and can certainly notice it; however, I am running into another error:
Where might I be able to understand more efficiency techniques like this one?
Also, one of the reasons I was running long experiments broken up into cycles like this was so that I was able to “look into” the experiment as it was running (before, I was letting it run for a few days, as the duty cycle had frequencies around 100 Hz). Can you think of any way I might be able to do this using this method?
I have tried this and it indeed doesn’t use up as much RAM. However, when performing experiments like this I am unable to “look into” how the simulation is progressing by loading one of the already-completed files. Is there a way to do this using this method?
The key thing is to move the creation of the pybamm.Simulation object outside the loop. You can then do something similar to what you originally had, saving the solution incrementally. See this example on resuming a simulation solve: Simulating long experiments — PyBaMM v25.1.1 Manual
i.e. use sim.solve(starting_solution=sol) rather than what you had originally, as the model used by the simulation will no longer be model but rather an internally built model
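A minimal sketch of how that loop could look, assuming Steps here is a single cycle (i.e. without the * Number_Of_Cycles multiplication) and reusing the names from the earlier posts; the solver choice and file naming are just illustrative:
import pybamm

experiment = pybamm.Experiment(Steps)  # one cycle per solve call in this sketch
# Build the Simulation once, outside the loop, so its internal built model is reused
sim = pybamm.Simulation(model, experiment=experiment, solver=pybamm.IDAKLUSolver())

sol = None
for i in range(Number_Of_Cycles):
    # Each call continues from the end of the previous (cumulative) solution
    sol = sim.solve(starting_solution=sol)
    # Save after every cycle so progress can be inspected while later cycles run
    sol.save(f"{ExperimentFolder}/Cycle_{i}.pkl")
    print(f"Finished cycle {i + 1} of {Number_Of_Cycles}")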