GC3Libs is a Python package for controlling the life-cycle of a Grid or batch computational job.
GC3Libs provides services for submitting computational jobs to Grids and batch systems, controlling their execution, persisting job information, and retrieving the final output.
GC3Libs takes an application-oriented approach to batch computing. A generic Application class provides the basic operations for controlling remote computations, but different Application subclasses can expose adapted interfaces, focusing on the most relevant aspects of the application being represented.
This document is the technical reference for the GC3Libs programming model, aimed at programmers who want to use GC3Libs to implement computational workflows in Python.
The Overview section presents the main concepts behind GC3Libs programming.
The GC3Libs modules section is a comprehensive list of all the modules, classes and functions comprising GC3Libs; its content is automatically generated from the docstrings in the source code.
GC3Libs takes an application-oriented approach to asynchronous computing. A generic Application class provides the basic operations for controlling remote computations and fetching a result; client code should derive specialized sub-classes to deal with a particular application, and to perform any application-specific pre- and post-processing.
The generic procedure for performing computations with GC3Libs is the following:
- Client code creates an instance of an Application sub-class.
- Asynchronous computation is started by submitting the application object; this associates the application with an actual (possibly remote) computational job.
- Client code can monitor the state of the computational job; state handlers are called on the application object as the state changes.
- When the job is done, the final output is retrieved and a post-processing method is invoked on the application object.
At this point, results of the computation are available and can be used by the calling program.
The Application class (and its sub-classes) allows client code to control the above process by:
Specifying the characteristics (computer program to run, input/output files, memory/CPU/duration requirements, etc.) of the corresponding computational job. This is done by passing suitable values to the Application constructor. See the Application constructor documentation for more info.
Providing methods to control the “life-cycle” of the associated computational job: start, check execution state, stop, retrieve a snapshot of the output files. There are actually two different interfaces for this, detailed below:
A passive interface: a Core or an Engine object is used to start/stop/monitor jobs associated with the given application. For instance:
a = GamessApplication(...)
# create a `Core` object; only one instance is needed
g = Core(...)
# start the remote computation
g.submit(a)
# periodically monitor job execution
g.update_job_state(a)
# retrieve output when the job is done
g.fetch_output(a)
The passive interface gives client code full control over the lifecycle of the job, but cannot support some use cases (e.g., automatic application re-start).
As you can see from the above example, the passive interface is implemented by methods in the Core and Engine classes (they implement the same interface). See those classes' documentation for more details.
An active interface: this requires that the Application object be attached to a Core or Engine instance:
a = GamessApplication(...)
# create a `Core` object; only one instance is needed
g = Core(...)
# tell application to use the active interface
a.attach(g)
# start the remote computation
a.submit()
# periodically monitor job execution
a.update_job_state()
# retrieve output when the job is done
a.fetch_output()
With the active interface, application objects can support automated restart and similar use-cases.
When an Engine object is used instead of a Core one, the job life-cycle is automatically managed, providing a fully asynchronous way of executing computations.
The active interface is implemented by the Task class and all its descendants (including Application).
Providing “state transition methods” that are called when a change in the job execution state is detected; those methods can implement application-specific behavior, like restarting the computational job with changed input if the allotted duration has expired but the computation has not finished. In particular, a postprocess method is called when the final output of an application is available locally for processing.
The set of “state transition methods” currently implemented by the Application class are: new(), submitted(), running(), stopped(), terminated() and postprocess(). Each method is called when the execution state of an application object changes to the corresponding state; see each method’s documentation for exact information.
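For illustration, here is a minimal sketch of an Application sub-class that overrides two of these handlers. The class name, program path and file names are invented for the example, and the constructor keyword parameters follow the names used elsewhere in this document (see the Application constructor documentation for the authoritative list):

from gc3libs import Application

class WordCountApplication(Application):
    """Hypothetical example: count lines of one input file remotely."""

    def __init__(self, input_file, **kw):
        Application.__init__(
            self,
            executable='/usr/bin/wc',     # program to run (example path)
            arguments=['-l', input_file],
            inputs=[input_file],          # copied to the execution node
            outputs=[],                   # nothing to copy back in this sketch
            **kw)

    def running(self):
        # called when the job starts executing on the (possibly remote) resource
        print("job is now running")

    def postprocess(self, dir):
        # called when the final output has been fetched into local directory `dir`
        print("output available in directory %s" % dir)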
In addition, GC3Libs provides collection classes that expose the same job-control and state-transition interfaces described above, allowing one to control a set of applications as a single whole. Collections can be nested (i.e., a collection can hold a mix of Application and TaskCollection objects), so that workflows can be implemented by composing collection objects.
Note that the term computational job (or just job, for short) is used here in quite a general sense, to mean any kind of computation that can happen independently of the main thread of the calling program. GC3Libs currently provides the means to execute a job as a separate process on the same computer, or as a batch job on a remote computational cluster.
An Application can be regarded as an abstraction of an independent asynchronous computation, i.e., a GC3Libs’ Application behaves much like an independent UNIX process (but it can actually run on a separate remote computer). Indeed, GC3Libs’ Application objects mimic the POSIX process model: Applications are started by a parent process, run independently of it, and need to have their final exit code and output reaped by the calling process.
The following table makes the correspondence between POSIX processes and GC3Libs’ Application objects explicit.
os module function | Core function | purpose
---|---|---
exec | Core.submit | start new job
kill(..., SIGTERM) | Core.kill | terminate executing job
wait(..., WNOHANG) | Core.update_job_state | get job status
(no equivalent) | Core.fetch_output | retrieve output
Note
POSIX encodes process termination information in the “return code”, which can be parsed through os.WEXITSTATUS, os.WIFSIGNALED, os.WTERMSIG and related library calls.
Likewise, GC3Libs provides each Application object with an execution.returncode attribute, which is a valid POSIX “return code”. Client code can therefore use os.WEXITSTATUS and relatives to inspect it; convenience attributes execution.signal and execution.exitcode are available for direct access to the parts of the return code. See Run.returncode() for more information.
However, GC3Libs has to deal with error conditions that are not catered for by the POSIX process model: for instance, execution of an application may fail because of an error connecting to the remote execution cluster.
To this end, GC3Libs encodes information about abnormal job termination using a set of pseudo-signal codes in a job’s execution.returncode attribute: i.e., if termination of a job is due to some grid/batch system/middleware error, then os.WIFSIGNALED(app.execution.returncode) will be True and the signal code (as returned by os.WTERMSIG(app.execution.returncode)) will be one of those listed in the Run.Signals documentation.
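For example, termination information can be inspected with the standard os functions. This is only a sketch; app stands for any Application object whose job has already reached TERMINATED state:

import os

rc = app.execution.returncode
if os.WIFSIGNALED(rc):
    # abnormal termination: either a real UNIX signal, or one of the
    # GC3Libs pseudo-signals listed in the Run.Signals documentation
    print("job terminated by signal %d" % os.WTERMSIG(rc))
else:
    print("job exited with code %d" % os.WEXITSTATUS(rc))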
At any given moment, a GC3Libs job is in any one of a set of pre-defined states, listed in the table below. The job state is always available in the .execution.state instance property of any Application or Task object; see Run.state() for detailed information.
GC3Libs’ Job state | purpose | can change to |
---|---|---|
NEW | Job has not yet been submitted/started (i.e., gsub not called) | SUBMITTED (by gsub) |
SUBMITTED | Job has been sent to execution resource | RUNNING, STOPPED |
STOPPED | Trap state: job needs manual intervention (either user- or sysadmin-level) to resume normal execution | TERMINATED (by gkill), SUBMITTED (by miracle) |
RUNNING | Job is executing on remote resource | TERMINATED |
TERMINATED | Job execution is finished (correctly or not) and will not be resumed | None: final state |
When an Application object is first created, its .execution.state attribute is assigned the state NEW. After a successful start (via Core.submit() or similar), it is transitioned to state SUBMITTED. Further transitions to RUNNING or STOPPED or TERMINATED state happen completely independently of the creator program: the Core.update_job_state() call provides updates on the status of a job. (Somewhat like the POSIX wait(..., WNOHANG) system call, except that GC3Libs provides explicit RUNNING and STOPPED states, instead of encoding them into the return value.)
The STOPPED state is a kind of generic “run time error” state: a job can get into the STOPPED state if its execution is stopped (e.g., a SIGSTOP is sent to the remote process) or delayed indefinitely (e.g., the remote batch system puts the job “on hold”). There is no way a job can get out of the STOPPED state automatically: all transitions from the STOPPED state require manual intervention, either by the submitting user (e.g., cancel the job), or by the remote systems administrator (e.g., by releasing the hold).
The TERMINATED state is the final state of a job: once a job reaches it, it cannot get back to any other state. Jobs reach TERMINATED state regardless of their exit code, or even if a system failure occurred during remote execution; actually, jobs can reach the TERMINATED status even if they didn’t run at all!
A job that is not in the NEW or TERMINATED state is said to be a “live” job.
One of the purposes of GC3Libs is to provide an abstraction layer that frees client code from dealing with the details of job execution on a possibly remote cluster. For this to work, it is necessary to specify job characteristics and requirements, so that the GC3Libs scheduler can select an appropriate computational resource for executing the job.
The GC3Libs Application class provides a way to describe computational job characteristics (program to run, input and output files, memory/duration requirements, etc.), loosely patterned after ARC’s xRSL language.
The description of the computational job is done through keyword parameters to the Application constructor; see its documentation for details. Changes in the job characteristics after an Application object has been constructed are not currently supported.
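As a minimal sketch, such a description might look like the following; the program path, file names and resource figures are invented, and the exact set of accepted keyword parameters is given in the Application constructor documentation:

from gc3libs import Application

app = Application(
    executable='/usr/local/bin/simulate',  # program to run (example path)
    arguments=['--steps', '100'],          # command-line arguments
    inputs=['settings.cfg'],               # copied to the execution node
    outputs=['results.dat'],               # copied back after execution
    requested_cores=1)                     # assumed name for the CPU requirement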
Support for running a generic application with the GC3Libs. The following parameters are required to create an Application instance:
inputs: files that will be copied from the local computer to the remote execution node before execution starts.
There are two possible ways of specifying the inputs parameter:
- It can be a Python dictionary: keys are local file paths, values are remote file names.
- It can be a Python list: each item in the list should be a pair (local_file_name, remote_file_name); a single string file_name is allowed as a shortcut and will result in both local_file_name and remote_file_name being equal. If an absolute path name is specified as remote_file_name, then an InvalidArgument exception is thrown.
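For instance, both forms below describe the same upload (the file names are invented for illustration):

# dictionary form: keys are local paths, values are remote file names
inputs = {'/home/user/data/mol.inp': 'mol.inp'}

# list form: items are (local_file_name, remote_file_name) pairs;
# a single string is a shortcut for using the same name on both sides
inputs = [('/home/user/data/mol.inp', 'mol.inp'), 'extra.dat']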
outputs: list of files that will be copied from the remote execution node back to the local computer after execution has completed.
There are two possible ways of specifying the outputs parameter:
The following optional parameters may be additionally specified as keyword arguments and will be given special treatment by the Application class logic:
Any other keyword arguments will be set as instance attributes, but otherwise ignored by the Application constructor.
After successful construction, an Application object is guaranteed to have the following instance attributes:
Return a string, suitable for invoking the application from a UNIX shell command-line.
The default implementation just concatenates executable and arguments separating them with whitespace; this is hardly correct for any application, so you should override this method in derived classes to provide appropriate invocation templates.
Invocation of Core.fetch_output() on this object failed; ex is the Exception that describes the error.
If this method returns an exception object, that is raised as a result of the Core.fetch_output(), otherwise the return value is ignored and Core.fetch_output returns None.
Default is to return ex unchanged; override in derived classes to change this behavior.
Called when the job state is (re)set to NEW.
Note this will not be called when the application object is created, rather if the state is reset to NEW after it has already been submitted.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the final output of the job has been retrieved to local directory dir.
The default implementation does nothing, override in derived classes to implement additional behavior.
Get an SGE qsub command-line invocation for submitting an instance of this application. Return a pair (cmd, script), where cmd is the command to run to submit an instance of this application to the SGE batch system, and script (if not None) is written to a new file, whose name is then appended to cmd.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
The default implementation just prefixes any output from the cmdline method with an SGE qsub invocation of the form qsub -cwd -S /bin/sh + resource limits. Note that there is no generic way of requesting a certain number of cores in SGE: it all depends on the installed parallel environment, and these are totally under control of the local sysadmin; therefore, any request for cores is ignored and a warning is logged.
Override this method in application-specific classes to provide appropriate invocation templates.
Called when the job state transitions to RUNNING, i.e., the job has been successfully started on a (possibly) remote resource.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to STOPPED, i.e., the job has been remotely suspended for an unknown reason and cannot automatically resume execution.
The default implementation does nothing, override in derived classes to implement additional behavior.
Invocation of Core.submit() on this object failed; exs is a list of Exception objects, one for each attempted submission.
If this method returns an exception object, that is raised as a result of the Core.submit(), otherwise the return value is ignored and Core.submit returns None.
Default is to always return None; override in derived classes to change this behavior.
Called when the job state transitions to SUBMITTED, i.e., the job has been successfully sent to a (possibly) remote execution resource and is now waiting to be scheduled.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to TERMINATED, i.e., the job has finished execution (with whatever exit status, see returncode) and its execution cannot resume.
The default implementation does nothing, override in derived classes to implement additional behavior.
Invocation of Core.update_job_state() on this object failed; ex is the Exception that describes the error.
If this method returns an exception object, that is raised as a result of the Core.update_job_state(), otherwise the return value is ignored and Core.update_job_state returns None.
Note that returning an Exception instance interrupts the normal flow of Core.update_job_state: in particular, the execution state is not updated and state transition methods will not be called.
Default is to return None; override in derived classes to change this behavior.
Return a string containing an xRSL sequence, suitable for submitting an instance of this application through ARC’s ngsub command.
The default implementation produces xRSL content based on the construction parameters; you should override this method to produce xRSL tailored to your application.
A specialized dict-like object that keeps information about the execution state of an Application instance.
A Run object is guaranteed to have the following attributes:
- log
- A gc3libs.utils.Log instance, recording human-readable text messages on events in this job’s history.
- info
- A simplified interface for reading/writing messages to Run.log. Reading from the info attribute returns the last message appended to log. Writing into info appends a message to log.
- timestamp
- Dictionary, recording the most recent timestamp when a certain state was reached. Timestamps are given as UNIX epochs.
For properties state, signal and returncode, see the respective documentation.
Run objects support attribute lookup by both the [...] and the . syntax; see gc3libs.utils.Struct for examples.
A simplified interface for reading/writing entries into log.
Setting the info attribute appends a message to the log:
>>> j1.info = 'a message'
>>> j1.info = 'a second message'
Getting the value of the info attribute returns the last message entered in the log:
>>> j1.info
'a second message'
The returncode attribute of this job object encodes the Run termination status in a manner compatible with the POSIX termination status as implemented by os.WIFSIGNALED and os.WIFEXITED.
However, in contrast with POSIX usage, the exitcode and the signal parts can both be significant: this happens when a Grid middleware error occurs after the application has successfully completed its execution. In other words, os.WEXITSTATUS(job.returncode) is meaningful iff os.WTERMSIG(job.returncode) is 0 or one of the pseudo-signals listed in Run.Signals.
Run.exitcode and Run.signal are combined to form the return code 16-bit integer as follows (the convention appears to be obeyed on every known system):
Bit | Encodes...
---|---
0..6 | signal number
7 | 1 if program dumped core
8..15 | exit code
Note: the “core dump bit” is always 0 here.
Setting the returncode property sets exitcode and signal; you can either assign a (signal, exitcode) pair to returncode, or set returncode to an integer from which the correct exitcode and signal attribute values are extracted:
>>> j = Run()
>>> j.returncode = (42, 56)
>>> j.signal
42
>>> j.exitcode
56
>>> j.returncode = 137
>>> j.signal
9
>>> j.exitcode
0
See also Run.exitcode and Run.signal.
The “signal number” part of a Run.returncode, see os.WTERMSIG for details.
The “signal number” is a 7-bit integer value in the range 0..127; value 0 is used to mean that no signal has been received during the application runtime (i.e., the application terminated by calling exit()).
The value represents either a real UNIX system signal, or a “fake” one that GC3Libs uses to represent Grid middleware errors (see Run.Signals).
The state a Run is in.
The value of Run.state must always be a value from the Run.State enumeration, i.e., one of the following values.
Run.State value | purpose | can change to |
---|---|---|
NEW | Job has not yet been submitted/started (i.e., gsub not called) | SUBMITTED (by gsub) |
SUBMITTED | Job has been sent to execution resource | RUNNING, STOPPED |
STOPPED | Trap state: job needs manual intervention (either user- or sysadmin-level) to resume normal execution | TERMINATED (by gkill), SUBMITTED (by miracle) |
RUNNING | Job is executing on remote resource | TERMINATED |
TERMINATED | Job execution is finished (correctly or not) and will not be resumed | None: final state |
When a Run object is first created, it is assigned the state NEW. After a successful invocation of Core.submit(), it is transitioned to state SUBMITTED. Further transitions to RUNNING or STOPPED or TERMINATED state happen completely independently of the creator program; the Core.update_job_state() call provides updates on the status of a job.
The STOPPED state is a kind of generic “run time error” state: a job can get into the STOPPED state if its execution is stopped (e.g., a SIGSTOP is sent to the remote process) or delayed indefinitely (e.g., the remote batch system puts the job “on hold”). There is no way a job can get out of the STOPPED state automatically: all transitions from the STOPPED state require manual intervention, either by the submitting user (e.g., cancel the job), or by the remote systems administrator (e.g., by releasing the hold).
The TERMINATED state is the final state of a job: once a job reaches it, it cannot get back to any other state. Jobs reach TERMINATED state regardless of their exit code, or even if a system failure occurred during remote execution; actually, jobs can reach the TERMINATED status even if they didn’t run at all, for example, in case of a fatal failure during the submission step.
Mix-in class implementing a facade for job control.
A Task can be described as an “active” job, in the sense that all job control is done through methods on the Task instance itself; contrast this with operating on Application objects through a Core or Engine instance.
The following pseudo-code is an example of the usage of the Task interface for controlling a job. Assume that GamessApplication is inheriting from Task (and it actually is):
t = GamessApplication(input_file)
t.submit()
# ... do other stuff
t.update()
# ... take decisions based on t.execution.state
t.wait() # blocks until task is terminated
Each Task object has an execution attribute: it is an instance of class Run, created anew for each Task, and at any given time it reflects the current status of the associated remote job. In particular, execution.state can be checked for the current task status.
Retrieve the outputs of the computational job associated with this task into directory output_dir, or, if that is None, into the directory whose path is stored in instance attribute .output_dir.
See gc3libs.Core.fetch_output() for a full explanation.
Returns: path to the directory where the job output has been collected.
Terminate the computational job associated with this task.
See gc3libs.Core.kill() for a full explanation.
Download size bytes (at offset offset from the start) from the associated job standard output or error stream, and write them into a local file. Return a file-like object from which the downloaded contents can be read.
See gc3libs.Core.peek() for a full explanation.
Advance the associated job through all states of a regular lifecycle. In detail:
- If execution.state is NEW, the associated job is started.
- The state is updated until it reaches TERMINATED.
- Output is collected and the final returncode is returned.
An exception Task.Error is raised if the job hits state STOPPED or UNKNOWN during the state-update step above.
When the job reaches TERMINATED state, the output is retrieved, and the return code (stored also in .returncode) is returned; if the job is not yet in TERMINATED state, calling progress returns None.
Raises: Task.UnexpectedStateError if the associated job goes into state STOPPED or UNKNOWN.
Returns: the job's final returncode, or None if the execution state is not TERMINATED.
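A minimal polling sketch using progress, reusing the task t from the example above (the polling interval is arbitrary):

import time

rc = t.progress()
while rc is None:       # job has not reached TERMINATED state yet
    time.sleep(60)
    rc = t.progress()   # may raise Task.UnexpectedStateError, see above
print("final return code: %d" % rc)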
Block until the associated job has reached TERMINATED state, then return the job’s return code. Note that this does not automatically fetch the output.
Parameter: interval (integer): poll the job state every this number of seconds.
Configure the gc3.gc3libs logger.
Arguments level, format and datefmt set the corresponding arguments in the logging.basicConfig() call.
If a user configuration file exists in file NAME.log.conf in the Default.RCDIR directory (usually ~/.gc3), it is read and used for more advanced configuration; if it does not exist, then a sample one is created.
Top-level interface to Grid functionality.
Submit tasks in a collection, and update their state until a terminal state is reached. Specifically:
- tasks in NEW state are submitted;
- the state of tasks in SUBMITTED, RUNNING or STOPPED state is updated;
- when a task reaches TERMINATED state, its output is downloaded.
The behavior of Engine instances can be further customized by setting the following instance attributes:
- can_submit
- Boolean value: if False, no task will be submitted.
- can_retrieve
- Boolean value: if False, no output will ever be retrieved.
- max_in_flight
- If >0, limit the number of tasks in SUBMITTED or RUNNING state: if the number of tasks in SUBMITTED, RUNNING or STOPPED state is greater than max_in_flight, then no new submissions will be attempted.
- max_submitted
- If >0, limit the number of tasks in SUBMITTED state: if the number of tasks in SUBMITTED state is greater than max_submitted, then no new submissions will be attempted.
- output_dir
- Base directory for job output; if not None, each task’s results will be downloaded in a subdirectory named after the task’s permanent_id.
- fetch_output_overwrites
- Default value to pass as the overwrite argument to Core.fetch_output() when retrieving results of a terminated task.
Any of the above can also be set by passing a keyword argument to the constructor:
>>> e = Engine(can_submit=False)
>>> e.can_submit
False
Update state of all registered tasks and take appropriate action. Specifically:
- tasks in NEW state are submitted;
- the state of tasks in SUBMITTED, RUNNING or STOPPED state is updated;
- when a task reaches TERMINATED state, its output is downloaded.
The max_in_flight and max_submitted limits (if >0) are taken into account when attempting submission of tasks.
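A typical driver loop could then look like the following sketch. The Engine construction is omitted (see the Engine constructor documentation), the add and progress method names are assumptions of this example, and the polling interval is arbitrary:

import time

# `engine` is an Engine instance built from your configuration;
# `apps` is a list of Application/Task objects
for app in apps:
    engine.add(app)     # assumed registration method

while True:
    engine.progress()   # submit NEW tasks, update states, fetch output
    # Run.State labels compare equal to their string value, see the Enum class
    if all(app.execution.state == 'TERMINATED' for app in apps):
        break
    time.sleep(30)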
Return a dictionary mapping each state name into the count of jobs in that state. In addition, the following keys are defined:
Warning
This module is deprecated and will be removed in a future release. Do not depend on it.
Facade to store and retrieve Job information from permanent storage.
Save and load objects in a given directory. Uses Python’s standard pickle module to serialize objects onto files.
All objects are saved as files in the given directory (default: gc3libs.Default.JOBS_DIR). The file name is the object ID.
If an object contains references to other Persistable objects, these are saved in the file they would have been saved to if the save method had been called on them in the first place, and only an “external reference” is saved in the pickled container. This ensures that: (1) only one copy of a shared object is ever saved, and (2) any shared reference to Persistable objects is correctly restored when restoring the container.
The default idfactory assigns object IDs by appending a sequential number to the class name; see class Id for details.
The protocol argument specifies the pickle protocol to use (default: pickle protocol 0). See the pickle module documentation for details.
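A short usage sketch, assuming the class is gc3libs.persistence.FilesystemStore (the directory path is invented; save follows the Store interface described below, and load is assumed to be its retrieval counterpart):

from gc3libs.persistence import FilesystemStore

store = FilesystemStore('/tmp/gc3libs.jobs')   # directory holding one file per object
job_id = store.save(app)      # returns the unique ID assigned to `app`

# ... later, possibly in a different run of the program ...
app_again = store.load(job_id)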
Return list of IDs of saved Job objects.
This is an optional method; classes that do not implement it should raise a NotImplementedError exception.
An automatically-generated “unique identifier” (a string-like object). Object identifiers are temporally unique: no identifier will (ever) be re-used, even in different invocations of the program.
The unique job identifier has the form “PREFIX.XXX” where “XXX” is a decimal number, and “PREFIX” defaults to the object class name but can be overridden in the Id constructor.
Two object IDs can be compared iff they have the same prefix, in which case the result of the comparison is the same as comparing the two sequence numbers.
A mix-in class to mark that an object should be persisted by its ID.
Any instance of this class is saved as an “external reference” when a container holding a reference to it is saved.
Interface for storing and retrieving objects on permanent storage.
Each save operation returns a unique “ID”; each ID is a Python string value, which is guaranteed to be temporally unique, i.e., no two save operations in the same persistent store can result in the same IDs being assigned to different objects. The “ID” is also stored in the instance attribute _id.
Any Python object can be stored, provided it meets the following conditions:
- it can be pickled with Python’s standard module pickle.
- the instance attribute persistent_id is reserved for use by the Store class: it should not be set or altered by other parts of the code.
Return list of IDs of saved Job objects.
This is an optional method; classes that do not implement it should raise a NotImplementedError exception.
Support for running a generic application with the GC3Libs.
Return an instance of the specific application class associated with tag. Example:
>>> app = get('gamess')
>>> isinstance(app, GamessApplication)
True
The returned object is always an instance of a sub-class of Application:
>>> isinstance(app, Application)
True
Specialized support for computational jobs running GAMESS-US.
Specialized Application object to submit computational jobs running GAMESS-US.
The only required parameter for construction is the input file name; any other argument names an additional input file, that is added to the Application.inputs list, but not otherwise treated specially.
Any other keyword parameter that is valid in the Application class can be used here as well, with the exception of input and output. Note that a GAMESS-US job is always submitted with join = True, therefore any stderr setting is ignored.
Specialized support for computational jobs running programs in the Rosetta suite.
Specialized Application object to submit one run of a single application in the Rosetta suite.
Required parameters for construction:
- application: name of the Rosetta application to call (e.g., “docking_protocol” or “relax”)
- inputs: a dict instance, keys are Rosetta -in:file:* options, values are the (local) path names of the corresponding files. (Example: inputs={"-in:file:s":"1brs.pdb"})
- outputs: list of output file names to fetch after Rosetta has finished running.
Optional parameters:
- flags_file: path to a local file containing additional flags for controlling Rosetta invocation; if None, a local configuration file will be used.
- database: (local) path to the Rosetta DB; if this is not specified, then it is assumed that the correct location will be available at the remote execution site as environment variable ROSETTA_DB_LOCATION
- arguments: If present, they will be appended to the Rosetta application command line.
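A construction sketch; the class name and import path are assumed from this module's description, and the file names are invented:

from gc3libs.application.rosetta import RosettaApplication

app = RosettaApplication(
    application='docking_protocol',
    inputs={'-in:file:s': '1brs.pdb'},
    outputs=['docking_scores.sc'],
    flags_file='docking.flags')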
Specialized Application class for executing a single run of the Rosetta “docking_protocol” application.
Currently used in the grosetta app.
Authentication support for the GC3Libs.
Authentication support for accessing resources through the SSH protocol.
Authentication support with Grid proxy certificates.
Interface to different resource management systems for the GC3Libs.
Base class for interfacing with a computing resource.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on; be cautious.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), then fetch the contents of the remote file from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Job control on ARC0 resources.
Manage jobs through the ARC middleware.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on; be cautious.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), then fetch the contents of the remote file from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the ARC job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
The mapping of ARC job statuses to Run.State is as follows:
ARC job status | Run.State
---|---
ACCEPTED | SUBMITTED
ACCEPTING | SUBMITTED
SUBMITTING | SUBMITTED
PREPARING | SUBMITTED
PREPARED | SUBMITTED
INLRMS:Q | SUBMITTED
INLRMS:R | RUNNING
INLRMS:O | RUNNING
INLRMS:E | RUNNING
INLRMS:X | RUNNING
INLRMS:S | STOPPED
INLRMS:H | STOPPED
FINISHING | RUNNING
EXECUTED | RUNNING
FINISHED | TERMINATED
CANCELING | TERMINATED
FINISHED | TERMINATED
KILLED | TERMINATED
FAILED | TERMINATED
DELETED | TERMINATED
Any other ARC job status is mapped to Run.State.UNKNOWN. In particular, querying a job ID that is not found in the ARC information system will result in UNKNOWN state, as will querying a job that has just been submitted and has not yet found its way to the infosys.
Job control on SGE clusters (possibly connecting to the front-end via SSH).
Job control on SGE clusters (possibly by connecting via SSH to a submit node).
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on; be cautious.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), then fetch the contents of the remote file from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Compute the number of total, free, and used/reserved slots from the output of SGE’s qstat -F.
Return a dictionary instance, mapping each host name into a dictionary instance, mapping the strings total, available, and unavailable to (respectively) the total number of slots on the host, the number of free slots on the host, and the number of used+reserved slots on the host.
Cluster-wide totals are associated with key global.
Note: the ‘available slots’ computation carried out by this function is unreliable: there is indeed no notion of a ‘global’ or even ‘per-host’ number of ‘free’ slots in SGE. Slot numbers can be computed per-queue, but a host can belong to different queues at the same time; therefore the number of ‘free’ slots available to a job actually depends on the queue it is submitted to. Since SGE does not force users to submit explicitly to a queue, but rather encourages use of a sort of ‘implicit’ routing queue, there is no way to compute the number of free slots, as this entirely depends on how local policies will map a job to the available queues.
Parse SGE’s qstat output (as contained in string qstat_output) and return a quadruple (R, Q, r, q) where:
- R is the total number of running jobs in the SGE cell (from any user);
- Q is the total number of queued jobs in the SGE cell (from any user);
- r is the number of running jobs submitted by user whoami;
- q is the number of queued jobs submitted by user whoami
The Transport class hierarchy provides an abstraction layer to execute commands and copy/move files irrespective of whether the destination is the local computer or a remote front-end that we access via SSH.
Generic Python programming utility functions.
This module collects general utility functions, not specifically related to GC3Libs. A good rule of thumb for determining if a function or class belongs in here is the following: place it in this module if you could copy its code into the sources of a different project and it would not stop working.
A generic enumeration class. Inspired by: http://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python/2182437#2182437 with some more syntactic sugar added.
An Enum class must be instantiated with a list of strings, which make up the enumeration “labels”:
>>> Animal = Enum('CAT', 'DOG')
Each label is available as an instance attribute, evaluating to itself:
>>> Animal.DOG
'DOG'
>>> Animal.CAT == 'CAT'
True
As a consequence, you can test for presence of an enumeration label by string value:
>>> 'DOG' in Animal
True
Finally, enumeration labels can also be iterated upon:
>>> for a in Animal: print a
DOG
CAT
A list of messages with timestamps and (optional) tags.
The append method should be used to add a message to the Log:
>>> L = Log()
>>> L.append('first message')
>>> L.append('second one')
The last method returns the text of the last message appended:
>>> L.last()
'second one'
Iterating over a Log instance returns message texts in the temporal order they were added to the list:
>>> for msg in L: print(msg)
first message
second one
Append a message to this Log.
The message is timestamped with the time at the moment of the call.
The optional tags argument is a sequence of strings. Tags are recorded together with the message and may be used to filter log messages given a set of labels. (This feature is not yet implemented.)
An object that is greater-than any other object.
>>> x = PlusInfinity()
>>> x > 1
True
>>> 1 < x
True
>>> 1245632479102509834570124871023487235987634518745 < x
True
>>> x > sys.maxint
True
>>> x < sys.maxint
False
>>> sys.maxint < x
True
PlusInfinity objects are actually larger than any given Python object:
>>> x > 'azz'
True
>>> x > object()
True
Note that PlusInfinity is a singleton, therefore you always get the same instance when calling the class constructor:
>>> x = PlusInfinity()
>>> y = PlusInfinity()
>>> x is y
True
Relational operators try to return the correct value when comparing PlusInfinity to itself:
>>> x < y
False
>>> x <= y
True
>>> x == y
True
>>> x >= y
True
>>> x > y
False
Derived classes of Singleton can have only one instance in the running Python interpreter.
>>> x = Singleton()
>>> y = Singleton()
>>> x is y
True
A dict-like object, whose keys can be accessed with the usual ‘[...]’ lookup syntax, or with the ‘.’ get attribute syntax.
Examples:
>>> a = Struct()
>>> a['x'] = 1
>>> a.x
1
>>> a.y = 2
>>> a['y']
2
Values can also be initially set by specifying them as keyword arguments to the constructor:
>>> a = Struct(z=3)
>>> a['z']
3
>>> a.z
3
Ensure that configuration file filename exists; possibly copying it from the specified template_filename.
Return True if a file with the specified name exists in the configuration directory. If not, try to copy the template file over and then return False; in case the copy operation fails, a NoConfigurationFile exception is raised.
If parameter filename is not an absolute path, it is interpreted as relative to gc3libs.Default.RCDIR; if template_filename is None, then it is assumed to be the same as filename.
Return the first element of sequence or iterator seq. Raise TypeError if the argument does not implement either of the two interfaces.
Examples:
>>> s = [0, 1, 2]
>>> first(s)
0
>>> s = {'a':1, 'b':2, 'c':3}
>>> first(sorted(s.keys()))
'a'
Return the contents of template, substituting all occurrences of Python formatting directives ‘%(key)s’ with the corresponding values taken from dictionary kw.
If template is an object providing a read() method, that is used to gather the template contents; else, if a file named template exists, the template contents are read from it; otherwise, template is treated like a string providing the template contents itself.
Return if_true if argument test evaluates to True; return if_false otherwise.
This is just a workaround for Python 2.4 lack of the conditional assignment operator:
>>> a = 1
>>> b = ifelse(a, "yes", "no"); print b
yes
>>> b = ifelse(not a, 'yay', 'nope'); print b
nope
Print dictionary instance D in a YAML-like format. Each output line consists of:
- indent spaces,
- the key name,
- a colon character :,
- the associated value.
If the total line length exceeds width, the value is printed on the next line, indented by further step spaces; a value of 0 for width disables this line wrapping.
Optional argument only_keys can be a callable that must return True when called with keys that should be printed, or a list of key names to print.
Dictionary instances appearing as values are processed recursively (up to maxdepth nesting). Each nested instance is printed indented step spaces from the enclosing dictionary.
Return a positive integer, whose value is guaranteed to be monotonically increasing across different invocations of this function, and also across separate instances of the calling program.
Example:
>>> n = progressive_number()
>>> m = progressive_number()
>>> m > n
True
If you specify a positive integer as argument, then a list of monotonically increasing numbers is returned. For example:
>>> ls = progressive_number(5)
>>> len(ls)
5
In other words, progressive_number(N) is equivalent to:
nums = [ progressive_number() for n in range(N) ]
only more efficient, because it has to obtain and release the lock only once.
After every invocation of this function, the last returned number is stored into the file ~/.gc3/next_id.txt.
Note: as file-level locking is used to serialize access to the counter file, this function may block (default timeout: 30 seconds) while trying to acquire the lock, or raise a LockTimeout exception if this fails.
Function decorator: sets the docstring of the following function to the one of referenced_fn.
Intended usage is for setting docstrings on methods redefined in derived classes, so that they inherit the docstring from the corresponding abstract method in the base class.
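A minimal sketch of how such a decorator can be written and used; the name same_docstring_as and the example classes are illustrative only:

def same_docstring_as(referenced_fn):
    """Decorator factory: copy the docstring of `referenced_fn` onto the decorated function."""
    def decorate(fn):
        fn.__doc__ = referenced_fn.__doc__
        return fn
    return decorate

class BaseLRMS(object):
    def free(self, app):
        """Free up any remote resources used for the execution of `app`."""

class ExampleLRMS(BaseLRMS):
    @same_docstring_as(BaseLRMS.free)
    def free(self, app):
        pass   # inherits the docstring from `BaseLRMS.free`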
Convert string s to an integer number of bytes. Suffixes like ‘KB’, ‘MB’, ‘GB’ (up to ‘YB’), with or without the trailing ‘B’, are allowed and properly accounted for. Case is ignored in suffixes.
Examples:
>>> to_bytes('12')
12
>>> to_bytes('12B')
12
>>> to_bytes('12KB')
12000
>>> to_bytes('1G')
1000000000
Binary units ‘KiB’, ‘MiB’ etc. are also accepted:
>>> to_bytes('1KiB')
1024
>>> to_bytes('1MiB')
1048576
Warning
This module is deprecated and will be removed in a future release. Do not depend on it.
A specialized dictionary for representing computational resource characteristics.
Resource objects are dictionaries, comprised of the following keys.
Statically provided, i.e., specified at construction time and changed never after:
attribute | type
---|---
arc_ldap | string
auth | string
frontend | string
gamess_location | string
max_cores_per_job | int *
max_memory_per_core | int *
max_walltime | int *
name | string *
ncores | int
type | int *
Starred attributes are required for object construction.
Simple-minded scheduling for GC3Libs.
Warning
This module is deprecated and will be removed in a future release. Do not depend on it.
A specialized dict class.
Implementation of the core command-line front-ends.
Permanently remove jobs from local and remote storage.
In normal operation, only jobs that are in a terminal status can be removed; if you want to force gclean to remove a job that is not in any one of those states, add the -f option to the command line.
If a job description cannot be successfully read, the corresponding job will not be deleted; use the -f option to force removal of a job regardless.
Retrieve output files of a job.
Output files can only be retrieved once a job has reached the ‘RUNNING’ state; this command will print an error message if no output files are available.
Output files can be retrieved multiple times until a job reaches ‘TERMINATED’ state: after that, the remote storage will be released once the output files have been fetched.
Print detailed information about a job.
A complete dump of all the information known about jobs listed on the command line is printed; this will only make sense if you know GC3Libs internals.
Cancel a submitted job. Given a list of jobs, try to cancel each one of them; exit with code 0 if all jobs were cancelled successfully, and 1 if some job was not.
The command will print an error message if a job cannot be canceled because it’s in NEW or TERMINATED state, or if some other error occurred.
Report a failed job to the GC3Libs developers.
This command will not likely work on any machine other than the ones directly controlled by GC3 sysadmins, so just don’t use it and send an email to gc3pie@googlegroups.com describing your problem instead.
Resubmit an already-submitted job with (possibly) different parameters.
If you resubmit a job that is not in terminal state, the existing job is canceled before re-submission.
Submit an application job. Option arguments set computational job requirements. Interpretation of positional arguments varies with the application being submitted; the application name is always the first non-option argument.
Currently supported applications are:
- gamess: Each positional argument (after the application name) is the path to an input file; the first one is the GAMESS ‘.inp’ file and is required.
- rosetta: The first positional argument is the name of the Rosetta application/protocol to run (e.g., minirosetta.static or docking_protocol); after that comes the path to the flags file; remaining positional arguments are paths to input files (at least one must be provided). A list of output files may additionally be specified after the list of input files, separated from it by a ':' character.