python - multiprocessing | catching an exiting interpreter


I'm working on an evolutionary computing problem, implemented with the excellent ecspy module. The fitness value I'm using is derived from a fairly complex dynamics simulation. The thing is, I don't want to take the approach of making the simulation bomb-proof; that's pretty useless, since the evolutionary process will come up with situations the simulation engine was never built to solve. And constraining the generator to return only scenes the simulation can solve just moves the problem into the generator's constraints.

So the approach is simple: if a simulation takes too long, or crashes, well, I'll leave that candidate to Darwin's mercy.

I'm using the multiprocessing module to evaluate the fitness of the candidates. How can I catch a segfaulting interpreter, or kill it after a given number of seconds?
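For context, the evaluation is set up along these lines. This is only a rough sketch; fitness, run_simulation and candidates are placeholder names for my own code, not anything from ecspy:

    import multiprocessing

    def fitness(candidate):
        # placeholder: run the dynamics simulation and score the outcome
        return run_simulation(candidate)

    if __name__ == "__main__":
        pool = multiprocessing.Pool()
        # a worker that segfaults mid-simulation can leave this map waiting forever
        scores = pool.map(fitness, candidates)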

Many thanks in advance,

-jf

Use subprocess to "wrap" a Python interpreter inside your Python script.

  • Start a Python interpreter that runs the real work.

  • Start a clock.

  • Wait until the clock runs out or the child process crashes.

The easy, lazy way is to poll the subprocess periodically to see if it's dead yet. Yes, it's "busy waiting", but it's simple to implement and has a relatively low resource cost if you don't need instant notification when the subprocess finishes.

    import subprocess
    import time

    TIMEOUT = 60.0  # timeout interval in seconds; pick whatever suits your simulations

    real_work = subprocess.Popen(["python", "the_real_work.py"])
    start = time.time()
    status = real_work.poll()
    while time.time() - start < TIMEOUT and status is None:
        time.sleep(10.0)
        status = real_work.poll()
    if status is None:
        # still running after the timeout: kill it
        real_work.kill()

Something like that might work out. Note that it has a race condition: if the child happens to exit right at the timeout interval, the kill can fail.
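One way to paper over that race, as a rough sketch reusing the real_work handle from above: poll one last time just before killing, and tolerate the kill failing because the child already went away.

    # re-check just before killing, and tolerate the child disappearing in between
    if real_work.poll() is None:
        try:
            real_work.kill()
        except OSError:
            pass  # the child exited between the poll and the kill
    real_work.wait()  # reap the child and collect its exit status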

