Abaqus / CAE Python Multiprocessing

I am using a commercial application called Abaqus / CAE 1 with a built-in Python 2.6 interpreter and API. I have developed a long-running script that I am trying to split into concurrent, independent tasks using the Python multiprocessing module. However, once spawned, the processes simply freeze.

The script itself uses various objects and methods available only through the native Abaqus cae module, which can only be loaded when Python is run bundled with Abaqus / CAE; Abaqus then executes my script with Python's execfile.

To try to get multiprocessing working, I ran a script that avoids accessing any Abaqus objects and instead just performs the computation and writes the result to a file 2. This way I can run the same script from a regular system Python as well as from the Python bundled with Abaqus.

The code example below works as expected when run from the command line using either of the following:

C:\some\path>python multi.py         # <-- Using system Python
C:\some\path>abaqus python multi.py  # <-- Using Python bundled with Abaqus

This spawns new processes and each one runs a function and writes the result to a file as expected. However, when called from within the Abaqus / CAE Python environment using:

abaqus cae noGUI=multi.py

Abaqus then starts, automatically imports its own modules, and executes my file using:

execfile("multi.py", __main__.__dict__)

where the global namespace argument __main__.__dict__ is configured by Abaqus. Abaqus then checks out the licenses for each process successfully, spawns the new processes, and ... that's it. The processes are created, but they all hang and do nothing. There are no error messages.
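
For what it's worth, the standard library does provide a hook for embedded interpreters: on Windows, multiprocessing launches children by re-running sys.executable, and multiprocessing.set_executable() can override that. The diagnostic below is only a sketch, not a verified fix, and the Abaqus interpreter path in the comment is a guess for a typical installation:

# Diagnostic sketch (Python 2.6 stdlib only): inspect what multiprocessing
# would use to spawn child processes when run inside Abaqus / CAE.
import sys
import multiprocessing

print 'sys.executable =', sys.executable  # likely the Abaqus launcher, not python.exe

# Windows-only stdlib hook intended for embedded interpreters. The path is
# hypothetical -- point it at the python.exe shipped with your Abaqus install.
# multiprocessing.set_executable(r'C:\SIMULIA\Abaqus\6.14-1\tools\python.exe')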

What can cause the freeze, and how can I fix it? Do I need to set an environment variable? Are there other commercial systems that use a similar procedure that I could learn from or emulate?

Please note that any solution must be available in the Python 2.6 standard library.

System Information: Windows 10 64-bit, Python 2.6, Abaqus / CAE 6.12 or 6.14

Script test example:

# multi.py
import multiprocessing
import time

def fib(n):
    # Iteratively compute the n-th Fibonacci number.
    a, b = 0, 1
    for i in range(n):
        a, b = a + b, a
    return a

def workerfunc(num):
    fname = ''.join(('worker_', str(num), '.txt'))
    with open(fname, 'w') as f:
        f.write('Starting Worker {0}\n'.format(num))
        count = 0
        while count < 1000:  # <-- Repeat a bunch of times.
            count += 1
            a = fib(20)
        line = ''.join((str(a), '\n'))
        f.write(line)
        f.write('End Worker {0}\n'.format(num))

if __name__ == '__main__':
    jobs = []
    for i in range(2):       # <-- Setting the number of processes manually
        p = multiprocessing.Process(target=workerfunc, args=(i,))
        jobs.append(p)
        print 'starting', p
        p.start()
        print 'done starting', p
    for j in jobs:
        print 'joining', j
        j.join()
        print 'done joining', j

1 Well-known finite element analysis package

2 The script is a mixture of a fairly standard Python fib() function and examples from PyMOTW.



2 answers


I need to write an answer as I cannot comment yet.

What I can think of as a reason is that Python's multiprocessing spawns a whole new process with its own memory. So if you create an object in your script and then start a new process, that new process contains a copy of the memory, and you have two objects that can go in opposite directions. When something from Abaqus is present in the original Python process (which I suspect), it is copied as well, and that copy could cause this behavior.

As a solution, I think you could extend Python with C (which can use multiple cores within a single process) and use threads there.
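
A minimal sketch of that idea using only the 2.6 standard library: ctypes releases the GIL around foreign function calls, so several Python threads can keep separate cores busy within one process. The mylib.dll and heavy_compute names below are hypothetical placeholders for a C library you would compile yourself.

# Sketch: CPU-bound work in a hypothetical compiled C library, driven by threads.
import ctypes
import threading

lib = ctypes.CDLL('mylib.dll')                 # hypothetical compiled C library
lib.heavy_compute.argtypes = [ctypes.c_int]
lib.heavy_compute.restype = ctypes.c_long

def worker(n, results, idx):
    results[idx] = lib.heavy_compute(n)        # GIL is released during the call

results = [None] * 2
threads = [threading.Thread(target=worker, args=(20, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print results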


Just wanted to say that I ran into this exact problem. My solution at the moment is to split up my scripts. This might work for you if you are trying to run parameter sweeps on a given model, perform geometric variations on the same model, etc.

First, I create scripts to perform each part of my modeling process:

  • Creating an input file using CAE / Python.
  • Extracting the data I want and putting it in a text file.

With these created, I use text replacement to quickly generate N Python scripts of each type, one for each set of discrete parameters I'm interested in.
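
The generation step can be as simple as filling in a template. This is just a sketch: template.py, the %(thickness)s placeholder, and the parameter values are illustrative, not my actual scripts.

# Sketch of the text-replacement step (Python 2.6 stdlib only).
params = [1.0, 2.0, 3.0]                  # illustrative sweep values

template = open('template.py').read()     # script containing %(thickness)s markers
for i, value in enumerate(params):
    with open('model_%03d.py' % i, 'w') as f:
        f.write(template % {'thickness': value})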

Then I wrote a parallel processing tool in Python to call multiple Abaqus instances as subprocesses. It does the following (sketched after this list):

  • Call CAE via subprocess.call, once per model-generation script. The tool lets you choose how many instances run at once so that you don't take every license on the server.

  • Run the Abaqus solver on the generated models in the same way, with parameters for the cores per job and the total number of cores to use.

  • Extract the data using the same approach as in step 1.

There is a bit of overhead in re-checking CAE licenses when building the models, but in my testing it is far outweighed by the ability to generate 10+ input files at the same time.
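
Here is a rough sketch of that driver. I mention subprocess.call above, but the sketch uses subprocess.Popen so several instances can actually run concurrently; the model_*.py names and the limit of 4 are just illustrations.

# Sketch: run generated CAE scripts with a cap on concurrent instances.
import glob
import subprocess
import time

MAX_RUNNING = 4                            # assumption: tune to your license pool
running = []

for script in sorted(glob.glob('model_*.py')):
    while len(running) >= MAX_RUNNING:     # wait until a slot frees up
        running = [p for p in running if p.poll() is None]
        time.sleep(5)
    running.append(subprocess.Popen('abaqus cae noGUI=%s' % script,
                                    shell=True))

for p in running:                          # wait for the last jobs to finish
    p.wait()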

I can put some scripts on GitHub if you think the above process would be useful for your application.

Cheers, Nathan
