Parallelism – How do I parallelize this simulation in Python?

I have a Dungeons & Dragons 5th Edition simulation game. Players, enemies, etc. are represented as Python objects. I have a Simulation class, and the key part of it is here:

def run(self, n=10000):
    """
    Run the Encounter n times, then print the results
    """
    self.reset_stats()
    for _ in range(n):
        self._encounter.run()
        self.collect_stats()
        self._encounter.reset()
    self.calculate_aggregate_stats(n)
    self.print_aggregate_stats()

Each Simulation has its own Encounter instance. When you run the simulation, it runs the encounter multiple times. After each run of the encounter, it collects stats (a dictionary stored in an instance variable of the Simulation) and then resets the encounter, because the encounter and the objects it contains are modified over the course of a run. Once all the encounter runs are done, it calculates the aggregate statistics (basically the average of all the statistics in self._stats, which is why n has to be passed in).
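For context, here is a heavily simplified, self-contained sketch of the two classes as just described; the real classes simulate full D&D 5e combat, so the stat names and method bodies here are only placeholders, and the run() method shown above would also live on Simulation:

import random


class Encounter:
    """Stand-in for the real encounter, which simulates a full combat."""

    def __init__(self):
        self.rounds = 0

    def run(self):
        # Placeholder for the actual combat loop, which mutates players/enemies.
        self.rounds = random.randint(1, 10)

    def reset(self):
        self.rounds = 0


class Simulation:
    def __init__(self, encounter):
        self._encounter = encounter
        self._stats = []

    def reset_stats(self):
        self._stats = []

    def collect_stats(self):
        # The real version records a dict with many statistics per run.
        self._stats.append({"rounds": self._encounter.rounds})

    def calculate_aggregate_stats(self, n):
        self._aggregate = {"rounds": sum(s["rounds"] for s in self._stats) / n}

    def print_aggregate_stats(self):
        print(self._aggregate)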

I want to parallelize this code so that the n iterations of the loop are distributed across multiple threads/processes. If that is necessary to avoid race conditions, I could simply give each run its own copy of the encounter and change self._stats from an instance variable to a local variable (and adjust calculate_aggregate_stats accordingly); see the sketch below.
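Concretely, I imagine pulling the loop body out into a pure function that works on its own copy of the encounter and returns the stats for one run (copy.deepcopy and the name run_once are just my guesses at how that would look):

import copy


def run_once(encounter):
    """Run a single encounter on a private copy and return its stats dict."""
    enc = copy.deepcopy(encounter)  # avoid mutating the shared template encounter
    enc.run()
    return {"rounds": enc.rounds}   # placeholder for the real stats dict

With that shape, every iteration is independent of the others, so as far as I can tell there is no shared state left to race on.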

How do I actually do this in parallel? I have experience with MPI, OpenMP, and Pthreads in C and C++, but I don't understand how parallel processing works in Python. I suspect I should use multiprocessing rather than multithreading because of the GIL, but how do I set up multiprocessing here? My rough attempt is shown below.
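This attempt is based on reading the multiprocessing docs and reuses the run_once helper sketched above; I have no idea whether Pool is the right tool here, or whether pickling the encounter for every task is too expensive:

from multiprocessing import Pool


def run_parallel(simulation, n=10000, workers=4):
    """Hypothetical parallel replacement for Simulation.run()."""
    simulation.reset_stats()
    with Pool(processes=workers) as pool:
        # Each task receives a pickled copy of the encounter in a worker process
        # and sends back one stats dict; this assumes the encounter is picklable.
        simulation._stats = pool.map(run_once, [simulation._encounter] * n)
    simulation.calculate_aggregate_stats(n)
    simulation.print_aggregate_stats()


if __name__ == "__main__":  # guard required by multiprocessing on spawn platforms
    run_parallel(Simulation(Encounter()))

Is this the idiomatic way to do it, or should I be chunking the work (e.g. one task per worker that runs n // workers encounters) instead of submitting one task per run?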

(If this question is a better fit for another site (e.g. Stack Overflow), please let me know; I just assumed it is a design question that belongs here.)