GC3Libs is a Python package for controlling the life cycle of Grid or batch computational jobs.
GC3Libs provides services for submitting computational jobs to Grids and batch systems, controlling their execution, persisting job information, and retrieving the final output.
GC3Libs takes an application-oriented approach to batch computing. A generic Application class provides the basic operations for controlling remote computations, but different Application subclasses can expose adapted interfaces, focusing on the most relevant aspects of the application being represented.
Support for running a generic application with the GC3Libs. The following parameters are required to create an Application instance:
Files that will be copied to the remote execution node before execution starts.
There are two possible ways of specifying the inputs parameter:
It can be a Python dictionary: keys are local file paths or URLs, values are remote file names.
It can be a Python list: each item in the list should be a pair (source, remote_file_name): the source can be a local file or a URL; remote_file_name is the path (relative to the execution directory) where source will be downloaded. If remote_file_name is an absolute path, an InvalidArgument error is raised.
A single string file_name is allowed instead of the pair and results in the local file file_name being copied to file_name on the remote host.
Files and directories that will be copied from the remote execution node back to the local computer (or a network-accessible server) after execution has completed. Directories are copied recursively.
There are three possible ways of specifying the outputs parameter:
It can be a Python dictionary: keys are remote file or directory paths (relative to the execution directory), values are corresponding local names.
It can be a Python list: each item in the list should be a pair (remote_file_name, destination): the destination can be a local file or a URL; remote_file_name is the path (relative to the execution directory) that will be uploaded to destination. If remote_file_name is an absolute path, an InvalidArgument error is raised.
A single string file_name is allowed instead of the pair and results in the remote file file_name being copied to file_name on the local host.
The constant gc3libs.ANY_OUTPUT which instructs GC3Libs to copy every file in the remote execution directory back to the local output path (as specified by the output_dir attribute).
Note that no errors will be raised if an output file is not present. Override the terminated() method if you need to raise errors in reaction to this kind of failure.
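For concreteness, here is a hedged sketch of constructing an Application with inputs and outputs given as dictionaries; all paths are illustrative, and the exact constructor signature (in particular whether the executable is a separate parameter) may differ between GC3Pie versions:

app = Application(
    executable='/usr/bin/wc',                     # program to run (hypothetical)
    arguments=['-l', 'data.txt'],                 # arguments after the command name
    inputs={'/home/user/data.txt': 'data.txt'},   # local path -> remote file name
    outputs={'counts.txt': 'counts.txt'},         # remote path -> local file name
    output_dir='./wc-results',                    # where outputs are downloaded
)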
The following optional parameters may be additionally specified as keyword arguments and will be given special treatment by the Application class logic:
Any other keyword arguments will be set as instance attributes, but otherwise ignored by the Application constructor.
After successful construction, an Application object is guaranteed to have the following instance attributes:
a Run instance; its state attribute is initially set to NEW (this attribute is actually inherited from Task)
A name for applications of this class.
This string is used as a prefix for configuration items related to this application in configured resources. For example, if the application_name is foo, then the application interface code in GC3Pie might search for foo_cmd, foo_extra_args, etc. See qsub_sge() for an actual example.
Get an LSF bsub command-line invocation for submitting an instance of this application. Return a pair (cmd_argv, app_argv), where cmd_argv is a list containing the argv-vector of the command to run to submit an instance of this application to the LSF batch system, and app_argv is the argv-vector to use when invoking the application.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
The default implementation just prefixes any output from the cmdline method with an LSF bsub invocation of the form bsub -cwd . -L /bin/sh + resource limits.
Override this method in application-specific classes to provide appropriate invocation templates and/or add resource-specific submission options.
Return list of command-line arguments for invoking the application.
This is exactly the argv-vector of the application process: the application command name is included as first item (index 0) of the list, further items are command-line arguments.
Hence, to get a UNIX shell command-line, just concatenate the elements of the list, separating them with spaces.
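For instance (values hypothetical, and the exact cmdline signature may vary between GC3Pie versions):

argv = app.cmdline(resource)      # e.g. ['grep', '-c', 'TERMINATED', 'log.txt']
shell_command = ' '.join(argv)    # 'grep -c TERMINATED log.txt'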
Return a list of compatible resources.
Invocation of Core.fetch_output() on this object failed; ex is the Exception that describes the error.
If this method returns an exception object, that is raised as a result of the Core.fetch_output(), otherwise the return value is ignored and Core.fetch_output returns None.
Default is to return ex unchanged; override in derived classes to change this behavior.
Similar to qsub_sge(), but for the PBS/TORQUE resource manager.
Get an SGE qsub command-line invocation for submitting an instance of this application.
Return a pair (cmd_argv, app_argv). Both cmd_argv and app_argv are argv-lists: the command name is included as first item (index 0) of the list, further items are command-line arguments; cmd_argv is the argv-list for the submission command (excluding the actual application command part); app_argv is the argv-list for invoking the application. By overriding this method, one can add further resource-specific options at the end of the cmd_argv argv-list.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
The default implementation just prefixes any output from the cmdline method with an SGE qsub invocation of the form qsub -cwd -S /bin/sh + resource limits. Note that there is no generic way of requesting a certain number of cores in SGE: it all depends on the installed parallel environment, and these are totally under control of the local sysadmin; therefore, any request for cores is ignored and a warning is logged.
Override this method in application-specific classes to provide appropriate invocation templates and/or add different submission options.
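A hedged sketch of such an override, which delegates to the default implementation and appends one extra qsub option (the option, and the exact method signature, are assumptions that may differ between GC3Pie versions):

class MyApp(Application):
    def qsub_sge(self, resource, **extra_args):
        # start from the default `qsub -cwd -S /bin/sh ...` invocation
        cmd_argv, app_argv = Application.qsub_sge(self, resource, **extra_args)
        # append a resource-specific submission option (hypothetical)
        cmd_argv += ['-l', 'h_vmem=4G']
        return (cmd_argv, app_argv)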
Sort the given resources in order of preference.
By default, less-loaded resources come first; see _cmp_resources.
Get a SLURM sbatch command-line invocation for submitting an instance of this application.
Return a pair (cmd_argv, app_argv). Both cmd_argv and app_argv are argv-lists: the command name is included as first item (index 0) of the list, further items are command-line arguments; cmd_argv is the argv-list for the submission command (excluding the actual application command part); app_argv is the argv-list for invoking the application. By overriding this method, one can add further resource-specific options at the end of the cmd_argv argv-list.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
Override this method in application-specific classes to provide appropriate invocation templates and/or add different submission options.
Invocation of Core.submit() on this object failed; exs is a list of Exception objects, one for each attempted submission.
If this method returns an exception object, that is raised as a result of the Core.submit(), otherwise the return value is ignored and Core.submit returns None.
Default is to always return the first exception in the list (on the assumption that it is the root of all exceptions or that at least it refers to the preferred resource). Override in derived classes to change this behavior.
Handle exceptions that occurred during a Core.update_job_state call.
If this method returns an exception object, that exception is processed in Core.update_job_state() instead of the original one. Any other return value is ignored and Core.update_job_state proceeds as if no exception had happened.
Argument ex is the exception that was raised by the backend during job state update.
Default is to return ex unchanged; override in derived classes to change this behavior.
Return a string containing an xRSL sequence, suitable for submitting an instance of this application through ARC’s ngsub command.
The default implementation produces XRSL content based on the construction parameters; you should override this method to produce XRSL tailored to your application.
Warning
ARClib SWIG bindings cannot resolve the overloaded constructor if the xRSL string argument is a Python ‘unicode’ object; if you override this method, force the result to be a ‘str’!
A namespace for all constants and default values used in the GC3Libs package.
Only update the status of ARC resources every this many seconds.
Consider a submitted job lost if it does not show up in the information system within this duration.
Proxy validity threshold in seconds. If the proxy expires within this threshold, it will be marked as needing renewal.
A specialized dict-like object that keeps information about the execution state of an Application instance.
A Run object is guaranteed to have the following attributes:
- log: A gc3libs.utils.History instance, recording human-readable text messages on events in this job's history.
- info: A simplified interface for reading/writing messages to Run.log. Reading from the info attribute returns the last message appended to log; writing into info appends a message to log.
- timestamp: Dictionary, recording the most recent timestamp when a certain state was reached. Timestamps are given as UNIX epochs.
For properties state, signal and returncode, see the respective documentation.
Run objects support attribute lookup by both the [...] and the . syntax; see gc3libs.utils.Struct for examples.
Processor architectures, for use as values in the requested_architecture field of the Application class constructor.
The following values are currently defined:
The “exit code” part of a Run.returncode, see os.WEXITSTATUS. This is an 8-bit integer, whose meaning is entirely application-specific. (However, the value 255 is often used to mean that an error has occurred and the application could not end its execution normally.)
Return True if the Run state matches any of the given names.
In addition to the states from Run.State, the two additional names ok and failed are also accepted, with the following meaning:
A simplified interface for reading/writing entries into history.
Setting the info attribute appends a message to the log:
>>> j1 = Run()
>>> j1.info = 'a message'
>>> j1.info = 'a second message'
Getting the value of the info attribute returns the last message entered in the log:
>>> j1.info
u'a second message ...'
The returncode attribute of this job object encodes the Run termination status in a manner compatible with the POSIX termination status as implemented by os.WIFSIGNALED and os.WIFEXITED.
However, in contrast with POSIX usage, the exitcode and the signal part can both be significant: this covers the case where a Grid middleware error happens after the application has successfully completed its execution. In other words, os.WEXITSTATUS(returncode) is meaningful iff os.WTERMSIG(returncode) is 0 or one of the pseudo-signals listed in Run.Signals.
Run.exitcode and Run.signal are combined to form the return code 16-bit integer as follows (the convention appears to be obeyed on every known system):
Bit | Encodes...
---|---
0..7 | signal number
8 | 1 if program dumped core
9..16 | exit code
Note: the “core dump bit” is always 0 here.
Setting the returncode property sets exitcode and signal; you can either assign a (signal, exitcode) pair to returncode, or set returncode to an integer from which the correct exitcode and signal attribute values are extracted:
>>> j = Run()
>>> j.returncode = (42, 56)
>>> j.signal
42
>>> j.exitcode
56
>>> j.returncode = 137
>>> j.signal
9
>>> j.exitcode
0
See also Run.exitcode and Run.signal.
Convert a shell exit code to a POSIX process return code.
A POSIX shell represents the return code of the last-run program within its exit code as follows: if the program was terminated by signal N, the shell exits with code 128+N; otherwise, the shell exits with the program's own exit code.
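The following sketch illustrates the convention just described; it is not GC3Libs' actual implementation:

def shell_exit_to_returncode(rc):
    # exit codes above 128 conventionally mean "killed by signal rc - 128"
    if rc > 128:
        return (rc - 128, 0)    # (signal, exitcode)
    # otherwise the program exited normally with exit code `rc`
    return (0, rc)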
The “signal number” part of a Run.returncode, see os.WTERMSIG for details.
The “signal number” is a 7-bit integer value in the range 0..127; value 0 is used to mean that no signal has been received during the application runtime (i.e., the application terminated by calling exit()).
The value represents either a real UNIX system signal, or a “fake” one that GC3Libs uses to represent Grid middleware errors (see Run.Signals).
The state a Run is in.
The value of Run.state must always be a value from the Run.State enumeration, i.e., one of the following values.
Run.State value | Purpose | Can change to
---|---|---
NEW | Job has not yet been submitted/started (i.e., gsub not called) | SUBMITTED (by gsub)
SUBMITTED | Job has been sent to execution resource | RUNNING, STOPPED
STOPPED | Trap state: job needs manual intervention (either user- or sysadmin-level) to resume normal execution | TERMINATING (by gkill), SUBMITTED (by miracle)
RUNNING | Job is executing on remote resource | TERMINATING
TERMINATING | Job has finished execution on remote resource; output not yet retrieved | TERMINATED
TERMINATED | Job execution is finished (correctly or not) and will not be resumed; output has been retrieved | None: final state
When a Run object is first created, it is assigned the state NEW. After a successful invocation of Core.submit(), it is transitioned to state SUBMITTED. Further transitions to RUNNING, STOPPED, or TERMINATED state happen completely independently of the creator program; the Core.update_job_state() call provides updates on the status of a job.
The STOPPED state is a kind of generic “run time error” state: a job can get into the STOPPED state if its execution is stopped (e.g., a SIGSTOP is sent to the remote process) or delayed indefinitely (e.g., the remote batch system puts the job “on hold”). There is no way a job can get out of the STOPPED state automatically: all transitions from the STOPPED state require manual intervention, either by the submitting user (e.g., cancel the job), or by the remote systems administrator (e.g., by releasing the hold).
The TERMINATED state is the final state of a job: once a job reaches it, it cannot get back to any other state. Jobs reach TERMINATED state regardless of their exit code, or even if a system failure occurred during remote execution; actually, jobs can reach the TERMINATED status even if they didn’t run at all, for example, in case of a fatal failure during the submission step.
Mix-in class implementing a facade for job control.
A Task can be described as an “active” job, in the sense that all job control is done through methods on the Task instance itself; contrast this with operating on Application objects through a Core or Engine instance.
The following pseudo-code is an example of the usage of the Task interface for controlling a job. Assume that GamessApplication is inheriting from Task (it actually is):
t = GamessApplication(input_file)
t.submit()
# ... do other stuff
t.update_state()
# ... take decisions based on t.execution.state
t.wait() # blocks until task is terminated
Each Task object has an execution attribute: it is an instance of class Run, and at any given time it reflects the current status of the associated remote job. In particular, execution.state can be checked for the current task status.
After successful initialization, a Task instance will have the following attributes:
Use the given Grid interface for operations on the job associated with this task.
Remove any reference to the current grid interface. After this, calling any method other than attach() results in an exception TaskDetachedFromGridError being thrown.
Retrieve the outputs of the computational job associated with this task into directory output_dir, or, if that is None, into the directory whose path is stored in instance attribute .output_dir.
If the execution state is TERMINATING, transition the state to TERMINATED (which runs the appropriate hook).
See gc3libs.Core.fetch_output() for a full explanation.
Returns: Path to the directory where the job output has been collected.
Release any remote resources associated with this task.
See gc3libs.Core.free() for a full explanation.
Terminate the computational job associated with this task.
See gc3libs.Core.kill() for a full explanation.
Called when the job state is (re)set to NEW.
Note that this is not called when the application object is created; rather, it is called when the state is reset to NEW after the task has already been submitted.
The default implementation does nothing, override in derived classes to implement additional behavior.
Download size bytes (at offset offset from the start) from the associated job standard output or error stream, and write them into a local file. Return a file-like object from which the downloaded contents can be read.
See gc3libs.Core.peek() for a full explanation.
Advance the associated job through all states of a regular lifecycle. In detail:
- If execution.state is NEW, the associated job is started.
- The state is updated until it reaches TERMINATED
- Output is collected and the final returncode is returned.
An exception TaskError is raised if the job hits state STOPPED or UNKNOWN during an update in phase 2.
When the job reaches TERMINATING state, the output is retrieved; if this operation is successfull, state is advanced to TERMINATED.
Once the job reaches TERMINATED state, the return code (also stored in .returncode) is returned; if the job is not yet in TERMINATED state, calling progress returns None.
Raises: exception UnexpectedStateError if the associated job goes into state STOPPED or UNKNOWN.
Returns: final returncode, or None if the execution state is not TERMINATED.
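For example, a simple driver loop built on progress might look like this (the 10-second polling interval is arbitrary):

import time
from gc3libs import Run

while task.execution.state != Run.State.TERMINATED:
    task.progress()    # submit, update state, or fetch output as appropriate
    time.sleep(10)
print('final returncode:', task.returncode)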
Called when the job state transitions to RUNNING, i.e., the job has been successfully started on a (possibly) remote resource.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to STOPPED, i.e., the job has been remotely suspended for an unknown reason and cannot automatically resume execution.
The default implementation does nothing, override in derived classes to implement additional behavior.
Start the computational job associated with this Task instance.
Called when the job state transitions to SUBMITTED, i.e., the job has been successfully sent to a (possibly) remote execution resource and is now waiting to be scheduled.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to TERMINATED, i.e., the job has finished execution (with whatever exit status, see returncode) and the final output has been retrieved.
The location where the final output has been stored is available in attribute self.output_dir.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to TERMINATING, i.e., the remote job has finished execution (with whatever exit status, see returncode) but output has not yet been retrieved.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to UNKNOWN, i.e., the job has not been updated for a certain period of time thus it is placed in UNKNOWN state.
There are two possible ways out of this state: 1) at the next update cycle, the job status is updated from the remote server; 2) override this method to implement application-specific logic for dealing with this case.
The default implementation does nothing, override in derived classes to implement additional behavior.
In-place update of the execution state of the computational job associated with this Task. After successful completion, .execution.state will contain the new state.
After the job has reached the TERMINATING state, the following attributes are also set:
The execution backend may set additional attributes; the exact name and format of these additional attributes is backend-specific. However, you can easily identify the backend-specific attributes because their name is prefixed with the (lowercased) backend name; for instance, the PbsLrms backend sets attributes pbs_queue, pbs_end_time, etc.
Block until the associated job has reached TERMINATED state, then return the job’s return code. Note that this does not automatically fetch the output.
Parameters: interval (integer) – Poll job state every this number of seconds.
Configure the gc3.gc3libs logger.
Arguments level, format and datefmt set the corresponding arguments in the logging.basicConfig() call.
If a user configuration file exists in file NAME.log.conf in the Default.RCDIR directory (usually ~/.gc3), it is read and used for more advanced configuration; if it does not exist, then a sample one is created.
Prototype classes for GC3Libs-based scripts.
Classes implemented in this file provide common and recurring functionality for GC3Libs command-line utilities and scripts. User applications should implement their specific behavior by subclassing and overriding a few customization methods.
There are currently two public classes provided here:
Base class for GC3Utils scripts.
The default command line implemented is the following:
script [options] JOBID [JOBID ...]
By default, only the standard options -h/--help and -V/--version are considered; to add more, override setup_options(). To change default positional argument parsing, override setup_args().
Perform parsing of standard command-line options and call into parse_args() to do non-optional argument processing.
Setup standard command-line parsing.
GC3Utils scripts should probably override setup_args() and setup_options() to modify command-line parsing.
Set up command-line argument parsing.
The default command line parsing considers every argument as a job ID; actual processing of the IDs is done in parse_args()
Base class for grosetta/ggamess/gcodeml and like scripts. Implements a long-running script to submit and manage a large number of jobs grouped into a “session”.
The generic script implements a command line like the following:
PROG [options] INPUT [INPUT ...]
First, the script builds a list of input files by recursively scanning each of the given INPUT arguments for files matching the self.input_filename_pattern glob string (you can set it via a keyword argument to the ctor). To perform a different treatment of the command-line arguments, override the process_args() method.
Then, new jobs are added to the session, based on the results of the process_args() method above. For each tuple of items returned by process_args(), an instance of class self.application (which you can set by a keyword argument to the ctor) is created, passing it the tuple as init args, and added to the session.
The script finally proceeds to updating the status of all jobs in the session, submitting new ones and retrieving output as needed. When all jobs are done, the method done() is called, and its return value is used as the script’s exit code.
The script’s exitcode tracks job status, in the following way. The exitcode is a bitfield; only the 4 least-significant bits are used, with the following meaning:
Bit | Meaning
---|---
0 | Set if a fatal error occurred: the script could not complete
1 | Set if there are jobs in FAILED state
2 | Set if there are jobs in RUNNING or SUBMITTED state
3 | Set if there are jobs in NEW state
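A minimal sketch of such a script follows; the application class, the glob pattern, and the module name gdemo are hypothetical, and the keyword-argument spellings should be checked against your GC3Pie version:

import gc3libs
from gc3libs import Application
from gc3libs.cmdline import SessionBasedScript

class DemoApplication(Application):
    # hypothetical Application subclass: count lines of one input file
    def __init__(self, input_file, **extra_args):
        Application.__init__(
            self,
            executable='/usr/bin/wc',
            arguments=['-l', input_file],
            inputs=[input_file],
            outputs=gc3libs.ANY_OUTPUT,
            **extra_args)

class GDemoScript(SessionBasedScript):
    """Run DemoApplication on every '*.txt' file found in the INPUT paths."""
    version = '1.0'
    def __init__(self):
        SessionBasedScript.__init__(
            self,
            application=DemoApplication,       # class instantiated for each job
            input_filename_pattern='*.txt',    # glob used to scan INPUT arguments
        )

if __name__ == '__main__':
    import gdemo
    gdemo.GDemoScript().run()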
Hook executed after exit from the main loop.
This is called after the main loop has exited (for whatever reason), but before the session is finally saved and other connections are finalized.
Override in subclasses to plug any behavior here; the default implementation does nothing.
Hook executed before entering the scripts’ main loop.
This is the last chance to alter the script state as it will be seen by the main loop.
Override in subclasses to plug any behavior here; the default implementation does nothing.
Hook executed during each round of the main loop.
This is called from within the main loop, after progressing all tasks.
Override in subclasses to plug any behavior here; the default implementation does nothing.
Return a path to a directory, suitable for storing the output of a job (named after jobname). It is not required that the returned path points to an existing directory.
This is called by the default process_args() using self.params.output (i.e., the argument to the -o/--output option) as pathspec, and jobname and args exactly as returned by new_tasks()
The default implementation substitutes the following strings within pathspec:
- SESSION is replaced with the name of the current session (as specified by the -s/--session command-line option) with a suffix .out appended;
- NAME is replaced with jobname;
- DATE is replaced with the current date, in YYYY-MM-DD format;
- TIME is replaced with the current time, in HH:MM format.
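For instance (values hypothetical):

# with `-o 'SESSION/NAME'`, session 'run1' and jobname 'mol-17',
# a job started on 2013-05-01 gets the output directory:
#     run1.out/mol-17
# whereas `-o 'results/DATE'` would instead yield:
#     results/2013-05-01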
Return a ‘Controller’ object to be used for progressing tasks and getting statistics. In detail, a good ‘Controller’ object has to implement progress and stats methods with the same interface as gc3libs.core.Engine.
By the time this method is called (from _main()), the following instance attributes are already defined:
In addition, any other attribute created during initialization and command-line parsing is of course available.
Iterate over jobs that should be added to the current session. Each item yielded must have the form (jobname, cls, args, kwargs), where:
This method is called by the default process_args(), passing self.extra as the extra parameter.
The default implementation of this method scans the arguments on the command-line for files matching the glob pattern self.input_filename_pattern, and for each matching file returns a job name formed by the base name of the file (sans extension), the class given by self.application, and the full path to the input file as sole argument.
If self.instances_per_file and self.instances_per_job are set to a value other than 1, for each matching file N jobs are generated, where N is the quotient of self.instances_per_file by self.instances_per_job.
See also: process_args()
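A hedged sketch of a custom new_tasks override; self.params.args (the positional command-line arguments) and MyApplication are assumptions used only for illustration:

import os

class MyScript(SessionBasedScript):    # hypothetical subclass
    def new_tasks(self, extra):
        # yield one (jobname, cls, args, kwargs) tuple per input path
        for path in self.params.args:
            jobname = os.path.splitext(os.path.basename(path))[0]
            yield (jobname, MyApplication, [path], extra.copy())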
Perform parsing of standard command-line options and call into parse_args() to do non-optional argument processing.
Print a text summary of the session status to output. This is used to provide the “normal” output of the script; when the -l option is given, the output of the print_tasks_table function is appended.
Override this in subclasses to customize the report that you provide to users. By default, this prints a table with the count of tasks for each possible state.
The output argument is a file-like object, only the write method of which is used. The stats argument is a dictionary, mapping each possible Run.State to the count of tasks in that state; see Engine.stats for a detailed description.
Output a text table to stream output, giving details about tasks in the given states.
Optional second argument states restricts the listing to tasks that are in one of the specified states. By default, all task states are allowed. The states argument should be a list or a set of Run.State values.
Optional third argument only further restricts the listing to tasks that are instances of a subclass of only. By default, there is no restriction and all tasks are listed. The only argument can be a Python class or a tuple – anything, in fact, that you can pass as the second argument to the isinstance operator.
Process command-line positional arguments and set up the session accordingly. In particular, new jobs should be added to the session during the execution of this method: additions are not contemplated elsewhere.
This method is called by the standard _main() after loading or creating a session into self.session. New jobs should be appended to self.session and it is also permitted to remove existing ones.
The default implementation calls new_tasks() and adds to the session all jobs whose name does not clash with the jobname of an already existing task.
See also: new_tasks()
Setup standard command-line parsing.
GC3Libs scripts should probably override setup_args() to modify command-line parsing.
Set up command-line argument parsing.
The default command line parsing considers every argument as an (input) path name; processing of the given path names is done in parse_args()
This function raises an ArgumentTypeError if num is a negative integer (<0), and returns int(num) otherwise. num can be any object which can be converted to an int.
>>> nonnegative_int('1')
1
>>> nonnegative_int(1)
1
>>> nonnegative_int('-1')
Traceback (most recent call last):
...
ArgumentTypeError: '-1' is not a non-negative integer number.
>>> nonnegative_int(-1)
Traceback (most recent call last):
...
ArgumentTypeError: '-1' is not a non-negative integer number.
Please note that 0 and ‘-0’ are ok:
>>> nonnegative_int(0)
0
>>> nonnegative_int(-0)
0
>>> nonnegative_int('0')
0
>>> nonnegative_int('-0')
0
Floats are ok too:
>>> nonnegative_int(3.14)
3
>>> nonnegative_int(0.1)
0
>>> nonnegative_int('ThisWillRaiseAnException')
Traceback (most recent call last):
...
ArgumentTypeError: 'ThisWillRaiseAnException' is not a non-negative integer number.
This function raises an ArgumentTypeError if num is not a *strictly* positive integer (>0) and returns int(num) otherwise. num can be any object which can be converted to an int.
>>> positive_int('1')
1
>>> positive_int(1)
1
>>> positive_int('-1')
Traceback (most recent call last):
...
ArgumentTypeError: '-1' is not a positive integer number.
>>> positive_int(-1)
Traceback (most recent call last):
...
ArgumentTypeError: '-1' is not a positive integer number.
>>> positive_int(0)
Traceback (most recent call last):
...
ArgumentTypeError: '0' is not a positive integer number.
Floats are ok too:
>>> positive_int(3.14)
3
but take care that floats less than 1 will fail, since they truncate to 0:
>>> positive_int(0.1)
Traceback (most recent call last):
...
ArgumentTypeError: '0.1' is not a positive integer number.
Please note that 0 is NOT ok:
>>> positive_int(-0)
Traceback (most recent call last):
...
ArgumentTypeError: '0' is not a positive integer number.
>>> positive_int('0')
Traceback (most recent call last):
...
ArgumentTypeError: '0' is not a positive integer number.
>>> positive_int('-0')
Traceback (most recent call last):
...
ArgumentTypeError: '-0' is not a positive integer number.
Any string which cannot be converted to an integer will fail:
>>> positive_int('ThisWillRaiseAnException')
Traceback (most recent call last):
...
ArgumentTypeError: 'ThisWillRaiseAnException' is not a positive integer number.
Deal with GC3Pie configuration files.
In-memory representation of the GC3Pie configuration.
This class provides facilities for:
The constructor takes a list of files to load (locations) and a list of key=value pairs to provide defaults for the configuration. Both lists are optional and can be omitted, resulting in a configuration containing only GC3Pie default values.
Example 1: initialization from config file:
>>> import os
>>> example_cfgfile = os.path.join(os.path.dirname(__file__), 'etc/gc3pie.conf.example')
>>> cfg = Configuration(example_cfgfile)
>>> cfg.debug
'0'
Example 2: initialization from key=value list:
>>> cfg = Configuration(auto_enable_auth=False, foo=1, bar='baz')
>>> cfg.auto_enable_auth
False
>>> cfg.foo
1
>>> cfg.bar
'baz'
When both a configuration file and a key=value list are present, values in the configuration file override those in the key=value list:
>>> cfg = Configuration(example_cfgfile, debug=1)
>>> cfg.debug
'0'
Example 3: default initialization:
>>> cfg = Configuration()
>>> cfg.auto_enable_auth
True
The instance of gc3libs.authentication.Auth used to manage auth access for the resources.
This is a read-only attribute, created upon first access with the values set in self.auths and self.auto_enabled.
Merge settings from configuration files into this Configuration instance.
Environment variables and ~ references are expanded in the location file names.
If any of the specified files does not exist or cannot be read (for whatever reason), a message is logged but the error is ignored. However, a NoConfigurationFile exception is raised if none of the specified locations could be read.
Raises: gc3libs.exceptions.NoConfigurationFile if none of the specified files could be read.
Return factory for auth credentials configured in section [auth/name].
Make backend objects corresponding to the configured resources.
Return a dictionary, mapping the resource name (string) into the corresponding backend object.
By default, errors in constructing backends (e.g., due to a bad configuration) are silently ignored: the offending configuration is just dropped. This can be changed by setting the optional argument ignore_errors to False: in this case, an exception is raised whenever we fail to construct a backend.
Read configuration files and merge the settings into this Configuration object.
Contrary to load() (which see), the file name is taken literally and an error is raised if the file cannot be read for whatever reason.
Any parameter which is set in the configuration file's [DEFAULT] section, and whose name does not start with an underscore (_), defines an attribute in the current Configuration.
Warning
No type conversion is performed on values set this way - so they all end up being strings!
Raises: gc3libs.exceptions.ConfigurationError if the configuration file does not exist, cannot be read, is corrupt or has the wrong format.
Top-level interface to Grid functionality.
Core operations: submit, update state, retrieve (a snapshot of) output, cancel job.
Core operations are blocking, i.e., they return only after the operation has successfully completed, or an error has been detected.
Operations are always performed by a Core object. Core implements an overlay Grid on the resources specified in the configuration file.
This method is here just to allow Core and Engine objects to be used interchangeably. It’s effectively a no-op, as it makes no sense in the synchronous/blocking semantics implemented by Core.
Used to explicitly invoke the destructor on objects, e.g., LRMS backends.
Retrieve output into local directory app.output_dir; optional argument download_dir overrides this.
The download directory is created if it does not exist. If it already exists, and the optional argument overwrite is False (default), it is renamed with a .NUMBER suffix and a new empty one is created in its place. Otherwise, if overwrite is True, files are downloaded over the ones already present.
If the task is in TERMINATING state, the state is changed to TERMINATED, attribute output_dir is set to the absolute path to the directory where files were downloaded, and the terminated transition method is called on the app object.
Task output cannot be retrieved when app.execution is in one of the states NEW or SUBMITTED; an OutputNotAvailableError exception is thrown in these cases.
Raises: gc3libs.exceptions.OutputNotAvailableError if no output can be fetched from the remote job (e.g., the Application/Task object is in NEW or SUBMITTED state, indicating the remote job has not started running).
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
It is an error to call this method if app.execution.state is anything other than TERMINATED: an InvalidOperation exception will be raised in this case.
Raises: gc3libs.exceptions.InvalidOperation if app.execution.state differs from Run.State.TERMINATED.
Return list of resources configured into this Core instance.
Terminate a job.
Terminating a job in RUNNING, SUBMITTED, or STOPPED state entails canceling the job with the remote execution system; terminating a job in the NEW or TERMINATED state is a no-op.
Download size bytes (at offset bytes from the start) from the remote job standard output or error stream, and write them into a local file. Return file-like object from which the downloaded contents can be read.
If size is None (default), then snarf all available contents of the remote stream from offset to the end.
The only allowed values for the what argument are the strings ‘stdout’ and ‘stderr’, indicating that the relevant section of the job’s standard output resp. standard error should be downloaded.
This method is here just to allow Core and Engine objects to be used interchangeably. It’s effectively a no-op, as it makes no sense in the synchronous/blocking semantics implemented by Core.
Alter the configured list of resources, and retain only those that satisfy predicate match.
Argument match can be:
- either a function (or a generic callable) that is passed each Resource object in turn, and should return a boolean indicating whether the resource should be kept (True) or not (False);
- or it can be a string: only resources whose name matches (wildcards * and ? are allowed) are retained.
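For example (the max_cores attribute name is an assumption used for illustration):

core.select_resource('cloud*')                    # keep resources whose name matches a glob
core.select_resource(lambda r: r.max_cores >= 8)  # or keep by arbitrary predicate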
Submit a job running an instance of the given app. Upon successful submission, call the submitted method on the app object.
At the beginning of the submission process, the app.execution state is reset to NEW; if submission is successful, the task will be in SUBMITTED or RUNNING state when this call returns.
Raises: gc3libs.exceptions.InputFileError if an input file does not exist or cannot otherwise be read.
Update state of all applications passed in as arguments.
If keyword argument update_on_error is False (default), then application execution state is not changed in case a backend error happens; it is changed to UNKNOWN otherwise.
Note that if the state of a job changes, Run.state calls the appropriate handler method on the application/task object.
Raises: gc3libs.exceptions.InvalidArgument in case one of the passed Application or Task objects is invalid. This can stop updating the state of other objects in the argument list.
Raises: gc3libs.exceptions.ConfigurationError if the configuration of this Core object is invalid or otherwise inconsistent (e.g., a resource references a non-existing auth section).
Update the state of resources configured into this Core instance.
Each resource object in the returned list will have its updated attribute set to True if the update operation succeeded, or False if it failed.
Submit tasks in a collection, and update their state until a terminal state is reached. Specifically:
- tasks in NEW state are submitted;
- the state of tasks in SUBMITTED, RUNNING or STOPPED state is updated;
- when a task reaches TERMINATED state, its output is downloaded.
The behavior of Engine instances can be further customized by setting the following instance attributes:
- can_submit: Boolean value: if False, no task will be submitted.
- can_retrieve: Boolean value: if False, no output will ever be retrieved.
- max_in_flight: If >0, limit the number of tasks in SUBMITTED or RUNNING state: if the number of tasks in SUBMITTED, RUNNING or STOPPED state is greater than max_in_flight, then no new submissions will be attempted.
- max_submitted: If >0, limit the number of tasks in SUBMITTED state: if the number of tasks in SUBMITTED state is greater than max_submitted, then no new submissions will be attempted.
- output_dir: Base directory for job output; if not None, each task's results will be downloaded in a subdirectory named after the task's permanent_id.
- fetch_output_overwrites: Default value to pass as the overwrite argument to Core.fetch_output() when retrieving results of a terminated task.
Any of the above can also be set by passing a keyword argument to the constructor (assume g is a Core instance):
>>> e = Engine(g, can_submit=False)
>>> e.can_submit
False
Add task to the list of tasks managed by this Engine. Adding a task that has already been added to this Engine instance results in a no-op.
Explicitly call finalize methods on relevant objects, e.g., LRMS backends.
Proxy for Core.fetch_output (which see).
Proxy for Core.free, which see.
Schedule a task for killing on the next progress run.
Proxy for Core.peek (which see).
Update state of all registered tasks and take appropriate action. Specifically:
- tasks in NEW state are submitted;
- the state of tasks in SUBMITTED, RUNNING, STOPPED or UNKNOWN state is updated;
- when a task reaches TERMINATING state, its output is downloaded.
- tasks in TERMINATED status are simply ignored.
The max_in_flight and max_submitted limits (if >0) are taken into account when attempting submission of tasks.
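Putting it together, a typical driver loop looks like this (a sketch; the 'total' key of Engine.stats is an assumption based on the description below):

import time
from gc3libs import Run
from gc3libs.core import Engine

engine = Engine(core)    # `core` is an existing Core instance
engine.add(task)
while True:
    engine.progress()    # submit / update / retrieve, as appropriate
    stats = engine.stats()
    if stats[Run.State.TERMINATED] == stats['total']:
        break            # every managed task has terminated
    time.sleep(10)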
Remove a task from the list of tasks managed by this Engine.
Return a dictionary mapping each state name into the count of tasks in that state. In addition, the following keys are defined:
If the optional argument only is not None, tasks whose class is not contained in only are ignored.
Parameters: only (tuple) – Restrict counting to tasks of these classes.
Submit task at the next invocation of perform.
The task state is reset to NEW and then added to the collection of managed tasks.
Return list of current states of the given tasks. States will only be updated at the next invocation of progress; in particular, no state-change handlers are called as a result of calling this method.
Implementation of task collections.
Tasks can be grouped into collections, which are tasks themselves, therefore can be controlled (started/stopped/cancelled) like a single whole. Collection classes provided in this module implement the basic patterns of job group execution; they can be combined to form more complex workflows. Hook methods are provided so that derived classes can implement problem-specific job control policies.
A ParallelTaskCollection runs all of its tasks concurrently.
The collection state is set to TERMINATED once all tasks have reached the same terminal status.
Terminate all tasks in the collection, and set collection state to TERMINATED.
Try to advance all jobs in the collection to the next state in a normal lifecycle. Return list of task execution states.
Start all tasks in the collection.
Update state of all tasks in the collection.
Wrap a Task instance and re-submit it until a specified termination condition is met.
By default, the re-submission upon failure happens iff execution terminated with nonzero return code; the failed task is retried up to self.max_retries times (indefinitely if self.max_retries is 0).
Override the retry method to implement a different retry policy.
Note: the resubmission code is implemented in the terminated() method, so be sure to call it if you override it in derived classes.
Return True or False, depending on whether the failed task should be re-submitted or not.
The default behavior is to retry a task iff its execution terminated with nonzero returncode and the maximum retry limit has not been reached. If self.max_retries is 0, then the dependent task is retried indefinitely.
Override this method in subclasses to implement a different policy.
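For instance, one could retry only on a specific application exit code (the self.task and self.retried attribute names are assumptions used for illustration):

class MyRetryableTask(RetryableTask):
    def retry(self):
        # re-submit only if the wrapped task failed with exit code 99
        # and the retry budget has not been exhausted
        return (self.task.execution.exitcode == 99
                and self.retried < self.max_retries)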
Update the state of the dependent task, then resubmit it if it’s TERMINATED and self.retry() is True.
A SequentialTaskCollection runs its tasks one at a time.
After a task has completed, the next method is called with the index of the finished task in the self.tasks list; the return value of the next method is then made the collection execution.state. If the returned state is RUNNING, then the subsequent task is started, otherwise no action is performed.
The default next implementation just runs the tasks in the order they were given to the constructor, and sets the state to TERMINATED when all tasks have been run.
Stop execution of this sequence. Kill currently-running task (if any), then set collection state to TERMINATED.
Return the state or task to run when step number done is completed.
This method is called when a task is finished; the done argument contains the index number of the just-finished task into the self.tasks list. In other words, the task that just completed is available as self.tasks[done].
The return value from next can be either a task state (i.e., an instance of Run.State), or a valid index number for self.tasks. In the first case:
If instead the return value is a (nonnegative) number, then tasks in the sequence will be re-run starting from that index.
The default implementation runs tasks in the order they were given to the constructor, and sets the state to TERMINATED when all tasks have been run. This method can (and should) be overridden in derived classes to implement policies for serial job execution.
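A hedged sketch of such an override, which re-runs a failed step and otherwise proceeds in order (a real policy should also bound the number of re-runs):

from gc3libs import Run

class MySequence(SequentialTaskCollection):
    def next(self, done):
        if self.tasks[done].execution.returncode != 0:
            return done    # re-run the sequence starting from the failed step
        if done == len(self.tasks) - 1:
            return Run.State.TERMINATED    # last task done: stop the sequence
        return Run.State.RUNNING           # otherwise start the next task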
Sequentially advance tasks in the collection through all steps of a regular lifecycle. When the last task transitions to TERMINATED state, the collection’s state is set to TERMINATED as well and this method becomes a no-op. If during execution, any of the managed jobs gets into state STOPPED or UNKNOWN, then an exception Task.UnexpectedExecutionState is raised.
Start the current task in the collection.
Update state of the collection, based on the jobs’ statuses.
Simplified interface for creating a sequence of Tasks. This can be used when the number of Tasks to run is fixed and known at program writing time.
A StagedTaskCollection subclass should define methods stage0, stage1, ... up to stageN (for some arbitrary positive integer N). Each of these stageN must return a Task instance; the task returned by the stage0 method will be executed first, followed by the task returned by stage1, and so on. The sequence stops at the first N such that stageN is not defined.
The exit status of the whole sequence is the exit status of the last Task instance run. However, if any of the stageX methods returns an integer value instead of a Task instance, then the sequence stops and that number is used as the sequence exit code.
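A hedged sketch of a two-stage workflow; PrepareApp and AnalyzeApp are hypothetical Task subclasses:

class TwoStageWorkflow(StagedTaskCollection):
    def stage0(self):
        return PrepareApp()    # runs first
    def stage1(self):
        # stop the whole sequence with exit code 1 if stage 0 failed
        if self.tasks[0].execution.returncode != 0:
            return 1
        return AnalyzeApp()    # runs after stage 0 succeeded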
Base class for all task collections. A “task collection” is a group of tasks, that can be managed collectively as a single one.
A task collection implements the same interface as the Task class, so you can use a TaskCollection everywhere a Task is required. A task collection has a state attribute, which is an instance of gc3libs.Run.State; each concrete collection class decides how to deduce a collective state based on the individual task states.
Add a task to the collection.
Use the given Controller interface for operations on the job associated with this task.
Raise a gc3libs.exceptions.InvalidOperation error, as there is no meaningful semantics that can be defined for peek into a generic collection of tasks.
Remove a task from the collection.
Return a dictionary mapping each state name into the count of tasks in that state. In addition, the following keys are defined:
If the optional argument only is not None, tasks whose class is not contained in only are ignored.
Parameters: only (tuple) – Restrict counting to tasks of these classes.
Called when the job state transitions to TERMINATED, i.e., the job has finished execution (with whatever exit status, see returncode) and the final output has been retrieved.
Default implementation for TaskCollection is to set the exitcode to the maximum of the exit codes of its tasks.
Update the running state of all managed tasks.
Block until execution state reaches TERMINATED, then return a list of return codes. Note that this does not automatically fetch the output.
Parameters: interval (integer) – Poll job state every this number of seconds.
Exceptions specific to the gc3libs package.
In addition to the exceptions listed here, gc3libs functions try to use Python builtin exceptions with the same meaning they have in core Python, namely:
Raised when the description dumped from a given Application produces something that the LRMS backend cannot process. As an example, for ARC backends, this error is raised when parsing of the Application's xRSL fails.
Base class for Auth-related errors.
Should never be instantiated: create a specific error class describing the actual error condition.
Raised when the configuration file (or parts of it) could not be read/parsed. Also used to signal that a required parameter is missing or has an unknown/invalid value.
Base class for data staging and movement errors.
Should never be instantiated: create a specific error class describing the actual error condition.
Raised when a method (other than attach()) is called on a detached Task instance.
Raised by Application.__init__ if not all (local or remote) entries in the input or output files are distinct.
Base class for all error-level exceptions in GC3Pie.
Generally, this indicates a non-fatal error: depending on the nature of the task, steps could be taken to continue, but users must be aware that an error condition occurred, so the message is sent to the logs at the ERROR level.
Exceptions indicating an error condition after which the program cannot continue and should immediately stop, should use the FatalError base class.
A fatal error: execution cannot continue and program should report to user and then stop.
The message is sent to the logs at CRITICAL level when the exception is first constructed.
This is the base class for all fatal exceptions.
Raised when an input file is specified, which does not exist or cannot be read.
Raised when some function cannot fulfill its duties, for reasons that do not depend on the library client code. For instance, when a response string gotten from an external command cannot be parsed as expected.
Raised when the arguments passed to a function do not honor some required contract. For instance, either one of two optional arguments must be provided, but none of them was.
Raised when an operation is attempted, that is not considered valid according to the system state. For instance, trying to retrieve the output of a job that has not yet been submitted.
Raised to signal that no computational resource with the given name is defined in the configuration file.
A specialization of InvalidArgument for cases when the type of the passed argument does not match expectations.
Raised when a command is not provided all required arguments on the command line, or the arguments do not match the expected syntax.
Since the exception message is the last thing a user will see, try to be specific about what is wrong on the command line.
A specialization of InvalidArgument for cases when the type of the passed argument does not match expectations.
Raised upon errors loading a job from the persistent storage.
Raised when the configuration file cannot be read (e.g., does not exist or has wrong permissions), or cannot be parsed (e.g., is malformed).
Raised to signal that no resources are defined, or that none are compatible with the request.
Raised upon attempts to retrieve the output for jobs that are still in NEW or SUBMITTED state.
Raised when transient problems with copying data to or from the remote execution site occurred.
This error is considered to be transient (e.g., network connectivity interruption), so trying again at a later time could solve the problem.
Used to mark transient errors: retrying the same action at a later time could succeed.
This exception should never be instantiated: it is only to be used in except clauses to catch “try again” situations.
Generic error condition in a Task object.
Raised by Task.progress() when a job lands in STOPPED or TERMINATED state.
Raised when an operation is attempted on a task, which is unknown to the remote server or backend.
Raised when a job state is gotten from the Grid middleware, that is not handled by the GC3Libs code. Might actually mean that there is a version mismatch between GC3Libs and the Grid middleware used.
Raised when problems with copying data to or from the remote execution site occurred.
Used to mark permanent errors: there’s no point in retrying the same action at a later time, because it will yield the same error again.
This exception should never be instantiated: it is only to be used in except clauses to exclude “try again” situations.
A ‘session’ is a persistent collection of tasks.
Tasks added to the session are persistently recorded using an instance of gc3libs.persistence.Store. Stores can be shared among different sessions: each session knows which jobs it ‘owns’.
A session is associated with a directory, which holds all the data related to that session. Specifically, two files are always created in the session directory and used internally by this class:
The only argument needed to instantiate a session is the path of the directory; the directory name will be used as the identifier of the session itself. For example, the following code creates a temporary directory and the files mentioned above inside it:
>>> import tempfile; tmpdir = tempfile.mktemp(dir='.')
>>> session = Session(tmpdir)
>>> sorted(os.listdir(tmpdir))
['created', 'session_ids.txt', 'store.url']
When a Session object is created with a path argument pointing to an existing valid session, the index of jobs is automatically loaded into memory, and the store pointed to by the store.url file in the session directory will be used, disregarding the contents of the store_url argument.
In other words, the store_url argument is only used when creating a new session. If no store_url argument is passed (i.e., it has its default value), a Session object will instantiate and use a FileSystemStore store, keeping data in the jobs subdirectory of the session directory.
Methods add and remove are provided to manage the collection; the len() operator returns the number of tasks in the session; iteration over a session returns the tasks one by one:
>>> task1 = gc3libs.Task()
>>> id1 = session.add(task1)
>>> task2 = gc3libs.Task()
>>> id2 = session.add(task2)
>>> len(session)
2
>>> for t in session:
... print(type(t))
<class 'gc3libs.Task'>
<class 'gc3libs.Task'>
>>> session.remove(id1)
>>> len(session)
1
When passed the flush=False optional argument, methods add and remove do not update the session metadata: i.e., the tasks are added or removed from the store and the in-memory task list, but the updated task list is not saved back to disk. This is useful when making many changes in a row; call Session.flush to persist the full set of changes.
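For example, when adding many tasks in a row:

for task in many_tasks:
    session.add(task, flush=False)   # defer metadata writes
session.flush()                      # persist the updated task list once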
The Store object is anyway accessible in the store attribute of each Session instance:
>>> type(session.store)
<class 'gc3libs.persistence.filesystem.FilesystemStore'>
However, Session defines methods save and load as a convenient proxy to the corresponding Store methods:
>>> obj = gc3libs.persistence.Persistable()
>>> oid = session.save(obj)
>>> obj2 = session.load(oid)
>>> obj.persistent_id == obj2.persistent_id
True
The whole session data can be removed by using method destroy:
>>> session.destroy()
>>> os.path.exists(session.path)
False
Add a Task to the current session, save it to the associated persistent storage, and return the assigned persistent_id:
# create a new, empty session
>>> import tempfile; tmpdir = tempfile.mktemp(dir='.')
>>> session = Session(tmpdir)
>>> len(session)
0
# add a task to it
>>> task = gc3libs.Task()
>>> tid1 = session.add(task)
>>> len(session)
1
Duplicates are silently ignored: the same object can be added many times to the session, but gets the same ID each time:
# add the same task again
>>> tid2 = session.add(task)
>>> len(session)
1
>>> tid1 == tid2
True
# do cleanup
>>> session.destroy()
>>> os.path.exists(session.path)
False
Remove the session directory, and remove from the associated store all the tasks contained in this session.
Note
This will remove the associated task storage if and only if the storage is contained in the session directory!
Update session metadata.
Should be used after save/remove operations, to ensure that the session state and metadata are correctly persisted.
Remove task identified by task_id from the current session but not from the associated storage.
Return set of all task IDs belonging to this session.
Return set of names of tasks belonging to this session.
Load an object from persistent storage and return it.
This is just a convenience proxy for calling method load on the Store instance associated with this session.
Remove task identified by task_id from the current session and from the associated storage.
Save an object to the persistent storage and return persistent_id of the saved object.
This is just a convenience proxy for calling method save on the Store instance associated with this session.
The object is not added to the session, nor is session meta-data updated:
# create an empty session
>>> import tempfile; tmpdir = tempfile.mktemp(dir='.')
>>> session = Session(tmpdir)
>>> 0 == len(session)
True
# use `save` on an object
>>> obj = gc3libs.persistence.Persistable()
>>> oid = session.save(obj)
# session is still empty
>>> 0 == len(session)
True
# do cleanup
>>> session.destroy()
>>> os.path.exists(session.path)
False
Save all modified tasks to persistent storage.
Create a file named finished in the session directory. Its creation/modification time will be used to tell when the session finished.
Please note that Session does not know when a session is finished, so this method should be called by a SessionBasedScript class.
Create a file named created in the session directory. Its creation/modification time will be used to tell when the session started.
Facade to store and retrieve Job information from permanent storage.
This module saves Python objects using the pickle framework: thus, the Application subclass corresponding to a job must be already loaded (or at least import-able) in the Python interpreter for pickle to be able to ‘undump’ the object from its on-disk representation.
In other words, if you create a custom Application subclass in some client code, GC3Utils won’t be able to read job files created by this code, because the class definition is not available in GC3Utils.
The recommended simple workaround is for a stand-alone script to ‘import self’ and then use the fully qualified name to run the script. In other words, start your script with this boilerplate code:
if __name__ == '__main__':
    import myscriptname
    myscriptname.MyScript().run()
The rest of the script now runs as the myscriptname module, which does the trick!
Note
Of course, the myscriptname.py file must be in the search path of the Python interpreter, or GC3Utils will still complain!
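For concreteness, here is a minimal sketch of such a stand-alone script (file name and class name are hypothetical; SessionBasedScript is assumed to provide the run() entry point):
#!/usr/bin/env python
# myscriptname.py -- a hypothetical stand-alone GC3Pie script
import gc3libs.cmdline

class MyScript(gc3libs.cmdline.SessionBasedScript):
    """Toy script; any Application subclass it defines stays importable under the myscriptname module name."""
    version = '1.0'

if __name__ == '__main__':
    # re-import self, so tasks are pickled under the module name
    import myscriptname
    myscriptname.MyScript().run()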
Manipulation of quantities with attached units, with automatic conversion among compatible units.
For details and the discussion leading up to this, see: <http://code.google.com/p/gc3pie/issues/detail?id=47>
Represent the duration of a time lapse.
Construction of a duration can be done by parsing a string specification; several formats are accepted:
* A duration as an aggregate of days, hours, minutes and seconds::
>>> l3 = Duration('1day 4hours 9minutes 16seconds')
>>> l3.amount(Duration.s) # convert to seconds
101356
Any of the terms can be omitted (in which case it defaults to zero):
>>> l4 = Duration('1day 4hours 16seconds')
>>> l4 == l3 - Duration('9 minutes')
True
The unit names can be singular or plural, and any amount of space can be added between the time unit name and the associated amount:
>>> l5 = Duration('3 hour 42 minute')
>>> l6 = Duration('3 hours 42 minutes')
>>> l7 = Duration('3hours 42minutes')
>>> l5 == l6 == l7
True
Unit names can also be abbreviated using just the leading letter:
>>> l8 = Duration('3h 42m')
>>> l9 = Duration('3h42m')
>>> l8 == l9
True
The abbreviated formats HH:MM:SS and DD:HH:MM:SS are also accepted:
>>> # 1 hour + 1 minute + 1 second
>>> l1 = Duration('01:01:01')
>>> l1 == Duration('3661 s')
True
>>> # 1 day, 2 hours, 3 minutes, 4 seconds
>>> l2 = Duration('01:02:03:04')
>>> l2.amount(Duration.s)
93784
However, the formats HH:MM and MM:SS are rejected as ambiguous:
>>> # is this hours:minutes or minutes:seconds ?
>>> l0 = Duration('01:02')
Traceback (most recent call last):
...
ValueError: Duration '01:02' is ambiguous: use '1m 2s' for 1 minutes and 2 seconds, or '1h 2m' for 1 hours and 2 minutes.
Finally, you can specify a duration like any other quantity, as an integral amount of a given time unit:
>>> l1 = Duration('1 day')
>>> l2 = Duration('86400 s')
>>> l1 == l2
True
A new quantity can also be defined as a multiple of an existing one:
>>> an_hour = Duration('1 hour')
>>> a_day = 24 * an_hour
>>> a_day.amount(Duration.h)
24
The quantities Duration.hours, Duration.minutes and Duration.seconds (and their single-letter abbreviations h, m, s) are pre-defined with their obvious meaning.
Also module-level aliases hours, minutes and seconds (and the one-letter forms) are available:
>>> a_day1 = 24*hours
>>> a_day2 = 1440*minutes
>>> a_day3 = 86400*seconds
This allows for yet another way of constructing duration objects, i.e., by passing the amount and the unit separately to the constructor:
>>> a_day4 = Duration(24, hours)
Two durations are equal if they indicate the exact same amount in seconds:
>>> a_day1 == a_day2
True
>>> a_day1.amount(s)
86400
>>> a_day2.amount(s)
86400
>>> a_day == an_hour
False
>>> a_day.amount(minutes)
1440
>>> an_hour.amount(minutes)
60
Basic arithmetic is possible with durations:
>>> two_hours = an_hour + an_hour
>>> two_hours == 2*an_hour
True
>>> one_hour = two_hours - an_hour
>>> one_hour.amount(seconds)
3600
The to_str() method allows representing a duration as a string, and provides choice of the output format and unit. The format string should contain exactly two %-specifiers: the first one is used to format the numerical amount, and the second one to format the measurement unit name.
By default, the unit used originally for defining the quantity is used:
>>> an_hour.to_str('%d [%s]')
'1 [hour]'
This can be overridden by specifying an optional second argument unit:
>>> an_hour.to_str('%d [%s]', unit=Duration.m)
'60 [m]'
A third optional argument conv can set the numerical type to be used for conversion computations:
>>> an_hour.to_str('%.1f [%s]', unit=Duration.m, conv=float)
'60.0 [m]'
The default numerical type is int, which in particular implies that you get a null amount if the requested unit is larger than the quantity:
>>> an_hour.to_str('%d [%s]', unit=Duration.days)
'0 [days]'
Conversion to string uses the unit originally used for defining the quantity and the %g%s format:
>>> str(an_hour)
'1hour'
Convert a duration into a Python datetime.timedelta object.
This is useful for operating on Python’s datetime.time and datetime.date objects, which support addition and subtraction of datetime.timedelta values.
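For instance (a short sketch, assuming this conversion is exposed as a to_timedelta() method):
>>> from datetime import datetime
>>> start = datetime(2012, 1, 1, 12, 0, 0)
>>> start + Duration('2 hours').to_timedelta()
datetime.datetime(2012, 1, 1, 14, 0)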
Represent an amount of RAM.
Construction of a memory quantity can be done by parsing a string specification (amount followed by unit):
>>> byte = Memory('1 B')
>>> kilobyte = Memory('1 kB')
A new quantity can also be defined as a multiple of an existing one:
>>> a_thousand_kB = 1000*kilobyte
The base-10 units (up to TB, terabytes) and base-2 units (up to TiB, tebibytes) are available as attributes of the Memory class. This allows for a third way of constructing quantity objects, i.e., by passing the amount and the unit separately to the constructor:
>>> a_megabyte = Memory(1, Memory.MB)
>>> a_mibibyte = Memory(1, Memory.MiB)
>>> a_gigabyte = 1*Memory.GB
>>> a_gibibyte = 1*Memory.GiB
>>> two_terabytes = 2*Memory.TB
>>> two_tibibytes = 2*Memory.TiB
Two memory quantities are equal if they indicate the exact same amount in bytes:
>>> kilobyte == 1000*byte
True
>>> a_megabyte == a_mibibyte
False
>>> a_megabyte < a_mibibyte
True
>>> a_megabyte > a_gigabyte
False
Basic arithmetic is possible with memory quantities:
>>> two_bytes = byte + byte
>>> two_bytes == 2*byte
True
The to_str() method allows representing a quantity as a string, and provides choice of the output format and unit. The format string should contain exactly two %-specifiers: the first one is used to format the numerical amount, and the second one to format the measurement unit name.
By default, the unit used originally for defining the quantity is used:
>>> a_megabyte.to_str('%d [%s]')
'1 [MB]'
This can be overridden by specifying an optional second argument unit:
>>> a_megabyte.to_str('%d [%s]', unit=Memory.kB)
'1000 [kB]'
A third optional argument conv can set the numerical type to be used for conversion computations:
>>> a_megabyte.to_str('%g%s', unit=Memory.GB, conv=float)
'0.001GB'
The default numerical type is int, which in particular implies that you get a null amount if the requested unit is larger than the quantity:
>>> a_megabyte.to_str('%g%s', unit=Memory.GB, conv=int)
'0GB'
Conversion to string uses the unit originally used for defining the quantity and the %g%s format:
>>> str(a_megabyte)
'1MB'
Metaclass for creating quantity classes.
This factory creates subclasses of _Quantity and bootstraps the base unit.
The name of the base unit is given as argument to the metaclass instance:
>>> class Memory1(object):
...     __metaclass__ = Quantity('B')
...
>>> B = Memory1('1 B')
>>> print (2*B)
2B
Optional keyword arguments create additional units; the argument key gives the unit name, and its value gives the ratio of the new unit to the base unit. For example:
>>> class Memory2(object):
...     __metaclass__ = Quantity('B', kB=1000, MB=1000*1000)
...
>>> a_thousand_kB = Memory2('1000kB')
>>> MB = Memory2('1 MB')
>>> a_thousand_kB == MB
True
Note that the units (base and additional) are also available as class attributes for easier referencing in Python code:
>>> a_thousand_kB == Memory2.MB
True
Support and expansion of programmatic templates.
The module gc3libs.template allows creation of textual templates with a simple object-oriented programming interface: given a string with a list of substitutions (using the syntax of Python’s standard string.Template class), a set of replacements can be specified, and the gc3libs.template.expansions function will generate all possible texts coming from the same template. Templates can be nested, and expansions generated recursively.
A template object is a pair (obj, keywords). Methods are provided to substitute the keyword values into obj, and to iterate over expansions of the given keywords (optionally filtering the allowed combination of keyword values).
Second optional argument validator must be a function that accepts a set of keyword arguments, and returns True if the keyword combination is valid (can be expanded/substituted back into the template) or False if it should be discarded. The default validator passes any combination of keywords/values.
Iterate over all valid expansions of the templated object and the template keywords. Returned items are Template instances constructed with the expanded template object and a valid combination of keyword values.
Return result of interpolating the value of keywords into the template. Keyword arguments extra_args can be used to override keyword values passed to the constructor.
If the templated object provides a substitute method, then return the result of invoking it with the template keywords as keyword arguments. Otherwise, return the result of applying Python standard library’s string.Template.safe_substitute() on the string representation of the templated object.
Raise ValueError if the set of keywords/values is not valid according to the validator specified in the constructor.
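For illustration, a minimal sketch (assuming the constructor signature Template(obj, validator, **keywords) described above; the validator behavior shown is an assumption):
>>> t = Template('a=${n}', n=42)
>>> t.substitute()
'a=42'
>>> positive = lambda **kw: kw['n'] > 0
>>> t2 = Template('a=${n}', positive, n=[1, -1])
>>> list(expansions(t2))
[Template('a=${n}', n=1)]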
Iterate over all expansions of a given object, recursively expanding all templates found. How the expansions are actually computed, depends on the type of object being passed in the first argument obj:
If obj is a list, iterate over expansions of items in obj. (In particular, this flattens out nested lists.)
Example:
>>> L = [0, [2, 3]]
>>> list(expansions(L))
[0, 2, 3]
If obj is a dictionary, return dictionary formed by all combinations of a key k in obj with an expansion of the corresponding value obj[k]. Expansions are computed by recursively calling expansions(obj[k], **extra_args).
Example:
>>> D = {'a':1, 'b':[2,3]}
>>> list(expansions(D))
[{'a': 1, 'b': 2}, {'a': 1, 'b': 3}]
If obj is a tuple, iterate over all tuples formed by the expansion of every item in obj. (Each item t[i] is expanded by calling expansions(t[i], **extra_args).)
Example:
>>> T = (1, [2, 3])
>>> list(expansions(T))
[(1, 2), (1, 3)]
If obj is a Template class instance, then the returned values are the result of applying the template to the expansion of each of its keywords.
Example:
>>> T1 = Template("a=${n}", n=[0,1])
>>> list(expansions(T1))
[Template('a=${n}', n=0), Template('a=${n}', n=1)]
Note that keywords passed to the expand invocation override the ones used in template construction:
>>> T2 = Template("a=${n}")
>>> list(expansions(T2, n=[1,3]))
[Template('a=${n}', n=1), Template('a=${n}', n=3)]
>>> T3 = Template("a=${n}", n=[0,1])
>>> list(expansions(T3, n=[2,3]))
[Template('a=${n}', n=2), Template('a=${n}', n=3)]
Any other value is returned unchanged.
Example:
>>> V = 42
>>> list(expansions(V))
[42]
Utility classes and methods for dealing with URLs.
Represent a URL as a named-tuple object. This is an immutable object that cannot be changed after creation.
The following read-only attributes are defined on objects of class Url.
Attribute | Index | Value | Value if not present |
---|---|---|---|
scheme | 0 | URL scheme specifier | empty string |
netloc | 1 | Network location part | empty string |
path | 2 | Hierarchical path | empty string |
query | 3 | Query component | empty string |
hostname | 4 | Host name (lower case) | None |
port | 5 | Port number as integer (if present) | None |
username | 6 | User name | None |
password | 7 | Password | None |
There are two ways of constructing Url objects:
By passing a string urlstring:
>>> u = Url('http://www.example.org/data')
>>> u.scheme
'http'
>>> u.netloc
'www.example.org'
>>> u.path
'/data'
The default URL scheme is file:
>>> u = Url('/tmp/foo')
>>> u.scheme
'file'
>>> u.path
'/tmp/foo'
Please note that extra leading slashes ‘/’ are interpreted as the beginning of a network location:
>>> u = Url('//foo/bar')
>>> u.path
'/bar'
>>> u.netloc
'foo'
>>> Url('///foo/bar').path
'/foo/bar'
See RFC 3986 <http://tools.ietf.org/html/rfc3986> for details.
If force_abs is True (default), then the path attribute is made absolute, by calling os.path.abspath if necessary:
>>> u = Url('foo/bar', force_abs=True)
>>> os.path.isabs(u.path)
True
Otherwise, if force_abs is False, then the path attribute stores the passed string unchanged:
>>> u = Url('foo', force_abs=False)
>>> os.path.isabs(u.path)
False
>>> u.path
'foo'
Other keyword arguments can specify defaults for missing parts of the URL:
>>> u = Url('/tmp/foo', scheme='file', netloc='localhost')
>>> u.scheme
'file'
>>> u.netloc
'localhost'
>>> u.path
'/tmp/foo'
By passing keyword arguments only, to construct an Url object with exactly those values for the named fields:
>>> u = Url(scheme='http', netloc='www.example.org', path='/data')
In this form, the force_abs parameter is ignored.
See also: http://docs.python.org/library/urlparse.html#urlparse-result-object
Return a new Url, constructed by appending relpath to the path section of this URL.
Example:
>>> u0 = Url('http://www.example.org')
>>> u1 = u0.adjoin('data')
>>> str(u1)
'http://www.example.org/data'
>>> u2 = u1.adjoin('moredata')
>>> str(u2)
'http://www.example.org/data/moredata'
Even if relpath starts with /, it is still appended to the path in the base URL:
>>> u3 = u2.adjoin('/evenmore')
>>> str(u3)
'http://www.example.org/data/moredata/evenmore'
A dictionary class enforcing that all keys are URLs.
Strings and/or objects returned by urlparse can be used as keys. Setting a string key automatically translates it to a URL:
>>> d = UrlKeyDict()
>>> d['/tmp/foo'] = 1
>>> for k in d.keys(): print (type(k), k.path)
(<class '....Url'>, '/tmp/foo')
Retrieving the value associated with a key works with either the string or the Url form of the key:
>>> d['/tmp/foo']
1
>>> d[Url('/tmp/foo')]
1
Key lookup can likewise use either the string or the Url form:
>>> '/tmp/foo' in d
True
>>> Url('/tmp/foo') in d
True
>>> 'file:///tmp/foo' in d
True
>>> 'http://example.org' in d
False
Class UrlKeyDict supports initialization by copying items from another dict instance or from an iterable of (key, value) pairs:
>>> d1 = UrlKeyDict({ '/tmp/foo':'foo', '/tmp/bar':'bar' })
>>> d2 = UrlKeyDict([ ('/tmp/foo', 'foo'), ('/tmp/bar', 'bar') ])
>>> d1 == d2
True
Unlike dict, initialization from keyword arguments alone is not supported:
>>> d3 = UrlKeyDict(foo='foo', bar='bar')
Traceback (most recent call last):
...
TypeError: __init__() got an unexpected keyword argument 'foo'
An empty UrlKeyDict instance is returned by the constructor when called with no parameters:
>>> d0 = UrlKeyDict()
>>> len(d0)
0
If force_abs is True, then all paths are converted to absolute ones in the dictionary keys.
>>> d = UrlKeyDict(force_abs=True)
>>> d['foo'] = 1
>>> for k in d.keys(): print os.path.isabs(k.path)
True
>>> d = UrlKeyDict(force_abs=False)
>>> d['foo'] = 2
>>> for k in d.keys(): print os.path.isabs(k.path)
False
A dictionary class enforcing that all values are URLs.
Strings and/or objects returned by urlparse can be used as values. Setting a string value automatically translates it to a URL:
>>> d = UrlValueDict()
>>> d[1] = '/tmp/foo'
>>> d[2] = Url('file:///tmp/bar')
>>> for v in d.values(): print (type(v), v.path)
(<class '....Url'>, '/tmp/foo')
(<class '....Url'>, '/tmp/bar')
Retrieving the value associated with a key always returns the URL-type value, regardless of how it was set:
>>> repr(d[1])
"Url(scheme='file', netloc='', path='/tmp/foo', hostname=None, port=None, username=None, password=None)"
Class UrlValueDict supports initialization by any of the methods that work with a plain dict instance:
>>> d1 = UrlValueDict({ 'foo':'/tmp/foo', 'bar':'/tmp/bar' })
>>> d2 = UrlValueDict([ ('foo', '/tmp/foo'), ('bar', '/tmp/bar') ])
>>> d3 = UrlValueDict(foo='/tmp/foo', bar='/tmp/bar')
>>> d1 == d2
True
>>> d2 == d3
True
In particular, an empty UrlValueDict instance is returned by the constructor when called with no parameters:
>>> d0 = UrlValueDict()
>>> len(d0)
0
If force_abs is True, then all paths are converted to absolute ones in the dictionary values.
>>> d = UrlValueDict(force_abs=True)
>>> d[1] = 'foo'
>>> for v in d.values(): print os.path.isabs(v.path)
True
>>> d = UrlValueDict(force_abs=False)
>>> d[2] = 'foo'
>>> for v in d.values(): print os.path.isabs(v.path)
False
Generic Python programming utility functions.
This module collects general utility functions, not specifically related to GC3Libs. A good rule of thumb for determining if a function or class belongs in here is the following: place a function or class in this module if you could copy its code into the sources of a different project and it would not stop working.
A generic enumeration class. Inspired by: http://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python/2182437#2182437 with some more syntactic sugar added.
An Enum class must be instantiated with a list of strings, which make up the enumeration labels:
>>> Animal = Enum('CAT', 'DOG')
Each label is available as an instance attribute, evaluating to itself:
>>> Animal.DOG
'DOG'
>>> Animal.CAT == 'CAT'
True
As a consequence, you can test for presence of an enumeration label by string value:
>>> 'DOG' in Animal
True
Finally, enumeration labels can also be iterated upon:
>>> for a in Animal: print a
DOG
CAT
A list of messages with timestamps and (optional) tags.
The append method should be used to add a message to the History:
>>> L = History()
>>> L.append('first message')
>>> L.append('second one')
The last method returns the text of the last message appended, with its timestamp:
>>> L.last().startswith('second one at')
True
Iterating over a History instance returns message texts in the temporal order they were added to the list, with their timestamp:
>>> for msg in L: print(msg)
first message ...
Append a message to this History.
The message is timestamped with the time at the moment of the call.
The optional tags argument is a sequence of strings. Tags are recorded together with the message and may be used to filter log messages given a set of labels. (This feature is not yet implemented.)
Return a formatted message, appending to the message its timestamp in human readable format.
Return text of last message appended. If log is empty, return empty string.
An object that is greater-than any other object.
>>> import sys
>>> x = PlusInfinity()
>>> x > 1
True
>>> 1 < x
True
>>> 1245632479102509834570124871023487235987634518745 < x
True
>>> x > sys.maxint
True
>>> x < sys.maxint
False
>>> sys.maxint < x
True
PlusInfinity objects are actually larger than any given Python object:
>>> x > 'azz'
True
>>> x > object()
True
Note that PlusInfinity is a singleton, therefore you always get the same instance when calling the class constructor:
>>> x = PlusInfinity()
>>> y = PlusInfinity()
>>> x is y
True
Relational operators try to return the correct value when comparing PlusInfinity to itself:
>>> x < y
False
>>> x <= y
True
>>> x == y
True
>>> x >= y
True
>>> x > y
False
Derived classes of Singleton can have only one instance in the running Python interpreter.
>>> x = Singleton()
>>> y = Singleton()
>>> x is y
True
A dict-like object, whose keys can be accessed with the usual ‘[...]’ lookup syntax, or with the ‘.’ get attribute syntax.
Examples:
>>> a = Struct()
>>> a['x'] = 1
>>> a.x
1
>>> a.y = 2
>>> a['y']
2
Values can also be initially set by specifying them as keyword arguments to the constructor:
>>> a = Struct(z=3)
>>> a['z']
3
>>> a.z
3
Like dict instances, Struct objects have a copy method returning a shallow copy of the instance:
>>> b = a.copy()
>>> b.z
3
Return a (shallow) copy of this Struct instance.
Rename the filesystem entry at path by appending a unique numerical suffix; return new name.
For example,
>>> import tempfile
>>> path = tempfile.mkstemp()[1]
>>> path1 = backup(path)
>>> os.path.exists(path + '.~1~')
True
Re-create the file and make a second backup; this time the file will be renamed with a .~2~ extension:
>>> open(path, 'w').close()
>>> path2 = backup(path)
>>> os.path.exists(path + '.~2~')
True
(clean up the test files)
>>> os.remove(path+'.~1~')
>>> os.remove(path+'.~2~')
Return base name without the extension.
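For example (a sketch; the helper is assumed to strip only the last extension):
>>> basename_sans('/tmp/foo.txt')
'foo'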
Cache the result of a (nullary) method invocation for a given amount of time. Use as a decorator on object methods whose results are to be cached.
Store the result of the first invocation of the decorated method; if another invocation happens before lapse seconds have passed, return the cached value instead of calling the real function again. If a new call happens after the grace period has expired, call the real function and store the result in the cache.
Note: Do not use with methods that take keyword arguments, as they will be discarded! In addition, arguments are compared to elements in the cache by identity, so that invoking the same method with equal but distinct object will result in two separate copies of the result being computed and stored in the cache.
Cache results and timestamps are stored into the objects’ _cache_value and _cache_last_updated attributes, so the caches are destroyed with the object when it goes out of scope.
The working of the cached method can be demonstrated by the following simple code:
>>> class X(object):
...     def __init__(self):
...         self.times = 0
...     @cache_for(2)
...     def foo(self):
...         self.times += 1
...         return ("times effectively run: %d" % self.times)
>>> x = X()
>>> x.foo()
'times effectively run: 1'
>>> x.foo()
'times effectively run: 1'
>>> import time; time.sleep(3)
>>> x.foo()
'times effectively run: 2'
Concatenate the contents of all args into output. Both output and each of the args can be a file-like object or a string (indicating the path of a file to open).
If append is True, then output is opened in append-only mode; otherwise it is overwritten.
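A usage sketch (file names are hypothetical; passing output and append as keywords is assumed from the description above):
from gc3libs.utils import cat

# concatenate two input files into 'whole.txt', overwriting it
cat('part1.txt', 'part2.txt', output='whole.txt', append=False)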
Copy src to dst, descending it recursively if necessary.
Copy a file from src to dst; return True if the copy was actually made. If overwrite is False (default), an existing destination entry is left unchanged and False is returned.
If link is True, an attempt at hard-linking is done first; failing that, we copy the source file onto the destination one. Permission bits and modification times are copied as well.
If dst is a directory, a file with the same basename as src is created (or overwritten) in the directory specified.
Recursively copy an entire directory tree rooted at src. If overwrite is False (default), entries that already exist in the destination tree are left unchanged and not overwritten.
See also: shutil.copytree.
Return number of items in seq that match predicate. Argument predicate should be a callable that accepts one argument and returns a boolean.
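For example (a sketch assuming the argument order count(seq, predicate)):
>>> count([1, 2, 3, 4, 5], lambda n: n % 2 == 0)
2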
Decorator to define properties with a simplified syntax in Python 2.4. See http://code.activestate.com/recipes/410698-property-decorator-for-python-24/#c6 for details and examples.
Ensure that configuration file filename exists; possibly copying it from the specified template_filename.
Return True if a file with the specified name exists in the configuration directory. If not, try to copy the template file over and then return False; in case the copy operations fails, a NoConfigurationFile exception is raised.
The template_filename is always resolved relative to GC3Libs’ ‘package resource’ directory (i.e., the etc/ directory in the sources). If template_filename is None, then it is assumed to be the base name of filename.
Same as os.path.dirname but return . in case of path names with no directory component.
Return the first element of sequence or iterator seq. Raise TypeError if the argument does not implement either of the two interfaces.
Examples:
>>> s = [0, 1, 2]
>>> first(s)
0
>>> s = {'a':1, 'b':2, 'c':3}
>>> first(sorted(s.keys()))
'a'
Return the contents of template, substituting all occurrences of Python formatting directives ‘%(key)s’ with the corresponding values taken from dictionary extra_args.
If template is an object providing a read() method, that is used to gather the template contents; else, if a file named template exists, the template contents are read from it; otherwise, template is treated like a string providing the template contents itself.
Like Python’s getattr, but perform a recursive lookup if name contains any dots.
Return if_true if argument test evaluates to True, return if_false otherwise.
This is just a workaround for Python 2.4’s lack of the conditional expression operator:
>>> a = 1
>>> b = ifelse(a, "yes", "no"); print b
yes
>>> b = ifelse(not a, 'yay', 'nope'); print b
nope
Iterate over all values greater or equal than start and less than stop. (Or the reverse, if step < 0.)
Example:
>>> list(irange(1, 5))
[1, 2, 3, 4]
>>> list(irange(0, 8, 3))
[0, 3, 6]
>>> list(irange(8, 0, -2))
[8, 6, 4, 2]
Unlike the built-in range function, irange also accepts floating-point values:
>>> list(irange(0.0, 1.0, 0.5))
[0.0, 0.5]
Also unlike the built-in range, both start and stop have to be specified:
>>> irange(42)
Traceback (most recent call last):
...
TypeError: irange() takes at least 2 arguments (1 given)
Of course, a null step is not allowed:
>>> list(irange(1, 2, 0))
Traceback (most recent call last):
...
AssertionError: Null step in irange.
Lock the file at path. Raise a LockTimeout error if the lock cannot be acquired within timeout seconds.
Return a lock object that should be passed unchanged to the gc3libs.utils.unlock function.
If path points to a non-existent location, an empty file is created before attempting to lock (unless create is False). An attempt is made to remove the file in case an error happens.
See also: gc3libs.utils.unlock()
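A typical usage sketch (the lock file path is hypothetical):
from gc3libs.utils import lock, unlock

lk = lock('/tmp/example.lock', timeout=30)  # may raise LockTimeout
try:
    pass  # ... code that needs exclusive access to the file ...
finally:
    unlock(lk)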
Like os.makedirs, but does not throw an exception if path already exists.
Like os.makedirs, but if path already exists and is not empty, rename the existing one to a backup name (see the backup function).
Unlike os.makedirs, no exception is thrown if the directory already exists and is empty, but the target directory permissions are not altered to reflect mode.
Print dictionary instance D in a YAML-like format. Each output line consists of:
- indent spaces,
- the key name,
- a colon character :,
- the associated value.
If the total line length exceeds width, the value is printed on the next line, indented by further step spaces; a value of 0 for width disables this line wrapping.
Optional argument only_keys can be a callable that must return True when called with keys that should be printed, or a list of key names to print.
Dictionary instances appearing as values are processed recursively (up to maxdepth nesting). Each nested instance is printed indented step spaces from the enclosing dictionary.
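A sketch of the expected behavior (indentation depends on the step and indent arguments; the output shown is illustrative only):
D = {'a': 1, 'b': {'c': 2}}
prettyprint(D)
# illustrative output:
#   a: 1
#   b:
#       c: 2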
Return a positive integer, whose value is guaranteed to be monotonically increasing across different invocations of this function, and also across separate instances of the calling program.
Example:
(create a temporary directory to avoid bug #)
>>> import tempfile, os
>>> (fd, tmp) = tempfile.mkstemp()
>>> n = progressive_number(id_filename=tmp)
>>> m = progressive_number(id_filename=tmp)
>>> m > n
True
If you specify a positive integer as argument, then a list of monotonically increasing numbers is returned. For example:
>>> ls = progressive_number(5, id_filename=tmp)
>>> len(ls)
5
>>> os.remove(tmp)
In other words, progressive_number(N) is equivalent to:
nums = [ progressive_number() for n in range(N) ]
only more efficient, because it has to obtain and release the lock only once.
After every invocation of this function, the last returned number is stored into the file passed as argument id_filename. If the file does not exist, an attempt to create it is made before allocating an id; the method can raise an IOError or OSError if id_filename cannot be opened for writing.
Note: as file-level locking is used to serialize access to the counter file, this function may block (default timeout: 30 seconds) while trying to acquire the lock, or raise a LockTimeout exception if this fails.
Raises: LockTimeout, IOError, OSError.
Returns: A positive integer number, monotonically increasing with every call. A list of such numbers if argument qty is a positive integer.
Return the whole contents of the file at path as a single string.
Example:
>>> read_contents('/dev/null')
''
>>> import tempfile
>>> (fd, tmpfile) = tempfile.mkstemp()
>>> w = open(tmpfile, 'w')
>>> w.write('hey')
>>> w.close()
>>> read_contents(tmpfile)
'hey'
(If you run this test, remember to do cleanup afterwards)
>>> os.remove(tmpfile)
Return a string describing Python object obj.
Avoids calling any Python magic methods, so should be safe to use as a ‘last resort’ in implementation of __str__ and __repr__.
Function decorator: sets the docstring of the following function to the one of referenced_fn.
Intended usage is for setting docstrings on methods redefined in derived classes, so that they inherit the docstring from the corresponding abstract method in the base class.
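A usage sketch (class names are hypothetical; the decorator is assumed to be invoked as same_docstring_as(referenced_fn)):
class Base(object):
    def close(self):
        """Release any resources held by this object."""

class Derived(Base):
    @same_docstring_as(Base.close)
    def close(self):
        # the docstring is copied over from Base.close
        pass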
Like os.path.samefile but return False if either one of the paths does not exist.
Convert word to a Python boolean value and return it. The strings true, yes, on, 1 (with any capitalization and any amount of leading and trailing spaces) are recognized as meaning Python True:
>>> string_to_boolean('yes')
True
>>> string_to_boolean('Yes')
True
>>> string_to_boolean('YES')
True
>>> string_to_boolean(' 1 ')
True
>>> string_to_boolean('True')
True
>>> string_to_boolean('on')
True
Any other word is considered as boolean False:
>>> string_to_boolean('no')
False
>>> string_to_boolean('No')
False
>>> string_to_boolean('Nay!')
False
>>> string_to_boolean('woo-hoo')
False
This includes also the empty string and whitespace-only:
>>> string_to_boolean('')
False
>>> string_to_boolean(' ')
False
Iterate over lines in iterable and return each of them stripped of leading and trailing blanks.
Test for access to a path; if access is not granted, raise an instance of exception with an appropriate error message. This is a frontend to os.access(), which see for exact semantics and the meaning of path and mode.
If the test succeeds, True is returned:
>>> test_file('/bin/cat', os.F_OK)
True
>>> test_file('/bin/cat', os.R_OK)
True
>>> test_file('/bin/cat', os.X_OK)
True
>>> test_file('/tmp', os.X_OK)
True
However, if the test fails, then an exception is raised:
>>> test_file('/bin/cat', os.W_OK)
Traceback (most recent call last):
...
RuntimeError: Cannot write to file '/bin/cat'.
If the optional argument isdir is True, then additionally test that path points to a directory inode:
>>> test_file('/tmp', os.F_OK, isdir=True)
True
>>> test_file('/bin/cat', os.F_OK, isdir=True)
Traceback (most recent call last):
...
RuntimeError: Expected '/bin/cat' to be a directory, but it's not.
Convert string s to an integer number of bytes. Suffixes like ‘KB’, ‘MB’, ‘GB’ (up to ‘YB’), with or without the trailing ‘B’, are allowed and properly accounted for. Case is ignored in suffixes.
Examples:
>>> to_bytes('12')
12
>>> to_bytes('12B')
12
>>> to_bytes('12KB')
12000
>>> to_bytes('1G')
1000000000
Binary units ‘KiB’, ‘MiB’ etc. are also accepted:
>>> to_bytes('1KiB')
1024
>>> to_bytes('1MiB')
1048576
Iterate over all unique elements in sequence seq.
Distinct values are returned in a sorted fashion.
Release a previously-acquired lock.
Argument lock should be the return value of a previous gc3libs.utils.lock call.
See also: gc3libs.utils.lock()
Overwrite the contents of the file at path with the given data. If the file does not exist, it is created.
Example:
>>> import tempfile
>>> (fd, tmpfile) = tempfile.mkstemp()
>>> write_contents(tmpfile, 'big data here')
>>> read_contents(tmpfile)
'big data here'
(If you run this test, remember to clean up afterwards)
>>> os.remove(tmpfile)
Warning
This module is deprecated and will be removed in a future release. Do not depend on it.
A specialized dictionary for representing computational resource characteristics.
Resource objects are dictionaries, comprised of the following keys.
Statically provided, i.e., specified at construction time and never changed afterwards:
Attribute name | Type | Required? |
---|---|---|
arc_ldap | string | |
architecture | list of string | yes |
auth | string | yes |
frontend | string | yes |
gamess_location | string | |
max_cores_per_job | int | yes |
max_memory_per_core | int | yes |
max_walltime | int | yes |
name | string | yes |
ncores | int | |
type | int | yes |
Attributes marked yes in the Required? column are required for object construction.
Support for running a generic application with the GC3Libs.
Support for AppPot-hosted applications.
For more details about AppPot, visit: <http://apppot.googlecode.com>
Base class for AppPot-hosted applications. Provides the same interface as the base Application and runs the specified command in an AppPot instance.
In addition to the standard Application keyword arguments, the following ones can be given to steer the AppPot execution:
Specialized support for computational jobs running GAMESS-US.
Specialized AppPotApplication object to submit computational jobs running GAMESS-US.
This class makes no check or guarantee that a GAMESS-US executable will be available in the executing AppPot instance: the apppot_img and apppot_tag keyword arguments can be used to select the AppPot system image to run this application; see the AppPotApplication for information.
The __init__ construction interface is compatible with the one used in GamessApplication. The only required parameter for construction is the input file name; any other argument names an additional input file, that is added to the Application.inputs list, but not otherwise treated specially.
Any other keyword parameter that is valid in the Application class can be used here as well, with the exception of input and output. Note that a GAMESS-US job is always submitted with join = True, therefore any stderr setting is ignored.
Specialized Application object to submit computational jobs running GAMESS-US.
The only required parameter for construction is the input file name; subsequent positional arguments are additional input files, that are added to the Application.inputs list, but not otherwise treated specially.
The verno parameter is used to request a specific version of GAMESS-US; if the default value None is used, the default version of GAMESS-US at the executing site is run.
Any other keyword parameter that is valid in the Application class can be used here as well, with the exception of input and output. Note that a GAMESS-US job is always submitted with join = True, therefore any stderr setting is ignored.
Append to log the termination status line as extracted from the GAMESS ‘.out’ file.
The job exit code .execution.exitcode is (re)set according to the following table:
Exit code | Meaning |
---|---|
0 | the output file contains the string EXECUTION OF GAMESS TERMINATED normally |
1 | the output file contains the string EXECUTION OF GAMESS TERMINATED -ABNORMALLY- |
2 | the output file contains the string ddikick exited unexpectedly |
70 (os.EX_SOFTWARE) | the output file cannot be read or does not match any of the above patterns |
Specialized support for computational jobs running programs in the Rosetta suite.
Specialized Application object to submit one run of a single application in the Rosetta suite.
Required parameters for construction:
- application: name of the Rosetta application to call (e.g., “docking_protocol” or “relax”)
- inputs: a dict instance, keys are Rosetta -in:file:* options, values are the (local) path names of the corresponding files. (Example: inputs={"-in:file:s":"1brs.pdb"})
- outputs: list of output file names to fetch after Rosetta has finished running.
Optional parameters:
- flags_file: path to a local file containing additional flags for controlling Rosetta invocation; if None, a local configuration file will be used.
- database: (local) path to the Rosetta DB; if this is not specified, then it is assumed that the correct location will be available at the remote execution site as environment variable ROSETTA_DB_LOCATION
- arguments: If present, they will be appended to the Rosetta application command line.
Extract output files from the tar archive created by the ‘rosetta.sh’ script.
Specialized Application class for executing a single run of the Rosetta “docking_protocol” application.
Currently used in the gdocking app.
Authentication support for the GC3Libs.
A mish-mash of authorization functions.
This class actually serves the purposes of:
FIXME
There are several problems with this approach:
Add the specified keyword arguments as initialization parameters to all the configured auth classes.
Parameters that have already been specified are silently overwritten.
Return an instance of the Auth class corresponding to the given auth_name, or raise an exception if instantiating the same class produced an unrecoverable error in past calls.
Additional keyword arguments are passed unchanged to the class constructor and can override values specified at configuration time.
Instances are remembered for the lifetime of the program; if an instance of the given class is already present in the cache, that one is returned; otherwise, an instance is constructed with the given parameters.
Caution
The params keyword arguments are only used if a new instance is constructed and are silently ignored if the cached instance is returned.
Auth proxy to use when no auth is needed.
Authentication support for accessing resources through the SSH protocol.
Authentication support with Grid proxy certificates.
Interface to different resource management systems for the GC3Libs.
Base class for interfacing with a computing resource.
The following construction parameters are also set as instance attributes. All of them are mandatory, except auth.
Attribute name | Expected Type | Meaning |
---|---|---|
name | string | A unique identifier for this resource, used for generating error message. |
architecture | set of Run.Arch values | Should contain one entry per each architecture supported. Valid architecture values are constants in the gc3libs.Run.Arch class. |
auth | string | A gc3libs.authentication.Auth instance that will be used to access the computational resource associated with this backend. The default value None means that no authentication credentials are needed (e.g., access to the resource has been pre-authenticated, or is managed outside of GC3Pie). |
max_cores | int | Maximum number of CPU cores that GC3Pie can allocate on this resource. |
max_cores_per_job | int | Maximum number of CPU cores that GC3Pie can allocate on this resource for a single job. |
max_memory_per_core | Memory | Maximum memory that GC3Pie can allocate to jobs on this resource. The value is per core, so the actual amount allocated to a single job is the value of this entry multiplied by the number of cores requested by the job. |
max_walltime | Duration | Maximum wall-clock time that can be allotted to a single job running on this resource. |
The above should be considered immutable attributes: they are specified at construction time and never changed afterwards.
The following attributes are instead dynamically provided (i.e., defined by the get_resource_status() method or similar), thus can change over the lifetime of the object:
Attribute name | Type |
---|---|
free_slots | int |
user_run | int |
user_queued | int |
queued | int |
Decorator: mark a function as requiring authentication.
Each invocation of the decorated function causes a call to the get method of the authentication object (configured with the auth parameter to the class constructor).
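A sketch of the intended use in backend code (assuming the decorator is exposed as LRMS.authenticated):
class MyBackend(LRMS):
    @LRMS.authenticated
    def submit_job(self, app):
        # authentication has been refreshed by the decorator
        # before this body runs
        pass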
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Gracefully close any LRMS-dependent resources, e.g., the transport.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Update the status of the resource associated with this LRMS instance in-place. Return updated Resource object.
Retrieve job output files into local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), the contents of the remote file from offset to the end are fetched.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the remote job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
This module provides a generic BatchSystem class from which all batch-like backends should inherit.
Base class for backends dealing with a batch-queue system (e.g., PBS/TORQUE, Grid Engine, etc.)
This is an abstract class, that you should subclass in order to interface with a given batch queuing system. (Remember to call this class’ constructor in the derived class __init__ method.)
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Parse the output of the submission command. Regexp is provided by the caller.
Retrieve job output files into local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), the contents of the remote file from offset to the end are fetched.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
This method will create a remote directory to store job’s sandbox, and will copy the sandbox in there.
Query the state of the remote job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Map STDOUT/STDERR filenames (as recorded in Application.outputs) to commonly used default STDOUT/STDERR file names (e.g., <jobname>.o<jobid>).
Job control on ARC0 resources.
Manage jobs through the ARC middleware.
In addition to the attributes defined by the base LRMS class, the following are also set:
Attribute name | Type | Required? |
---|---|---|
arc_ldap | string | |
frontend | string | yes |
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Get dynamic information from the ARC infosystem and set attributes on the current object accordingly.
The following attributes are set:
Retrieve job output files into local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), the contents of the remote file from offset to the end are fetched.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the ARC0 job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
The mapping of ARC0 job statuses to Run.State is as follows:
ARC job status | Run.State |
---|---|
ACCEPTED | SUBMITTED |
ACCEPTING | SUBMITTED |
SUBMITTING | SUBMITTED |
PREPARING | SUBMITTED |
PREPARED | SUBMITTED |
INLRMS:Q | SUBMITTED |
INLRMS:R | RUNNING |
INLRMS:O | STOPPED |
INLRMS:E | STOPPED |
INLRMS:X | STOPPED |
INLRMS:S | STOPPED |
INLRMS:H | STOPPED |
FINISHING | RUNNING |
EXECUTED | RUNNING |
FINISHED | TERMINATING |
CANCELING | TERMINATING |
FAILED | TERMINATING |
KILLED | TERMINATED |
DELETED | TERMINATED |
Any other ARC job status is mapped to Run.State.UNKNOWN. In particular, querying a job ID that is not found in the ARC information system will result in UNKNOWN state, as will querying a job that has just been submitted and has not yet found its way to the infosys.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Job control using libarcclient. (Which can submit to all EMI-supported resources.)
Manage jobs through ARC’s libarcclient.
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Get dynamic information from the ARC infosystem and set attributes on the current object accordingly.
The following attributes are set:
Retrieve job output files into local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), the contents of the remote file from offset to the end are fetched.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the ARC job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
The mapping of ARC job statuses to Run.State is as follows:
ARC job status | Run.State |
---|---|
ACCEPTED | SUBMITTED |
SUBMITTING | SUBMITTED |
PREPARING | SUBMITTED |
QUEUING | SUBMITTED |
RUNNING | RUNNING |
FINISHING | RUNNING |
FINISHED | TERMINATING |
FAILED | TERMINATING |
KILLED | TERMINATED |
DELETED | TERMINATED |
HOLD | STOPPED |
OTHER | UNKNOWN |
Any other ARC job status is mapped to Run.State.UNKNOWN.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports. Return False otherwise.
Job control on LSF clusters (possibly connecting to the front-end via SSH).
Job control on LSF clusters (possibly by connecting via SSH to a submit node).
Get dynamic information out of the LSF subsystem and return the updated resource object (self). At least the following dynamic information is collected: total_queued, free_slots, user_running, user_queued.
Job control on SGE clusters (possibly connecting to the front-end via SSH).
Job control on SGE clusters (possibly by connecting via SSH to a submit node).
Update the status of the resource associated with this LRMS instance in-place. Return updated Resource object.
Compute the number of total, free, and used/reserved slots from the output of SGE’s qstat -F.
Return a dictionary instance, mapping each host name into a dictionary instance, mapping the strings total, available, and unavailable to (respectively) the total number of slots on the host, the number of free slots on the host, and the number of used+reserved slots on the host.
Cluster-wide totals are associated with key global.
Note: The ‘available slots’ computation carried out by this function is unreliable: there is indeed no notion of a ‘global’ or even ‘per-host’ number of ‘free’ slots in SGE. Slot numbers can be computed per-queue, but a host can belong in different queues at the same time; therefore the number of ‘free’ slots available to a job actually depends on the queue it is submitted to. Since SGE does not force users to submit explicitly to a queue, rather encourages use of a sort of ‘implicit’ routing queue, there is no way to compute the number of free slots, as this entirely depends on how local policies will map a job to the available queues.
Parse SGE’s qstat output (as contained in string qstat_output) and return a quadruple (R, Q, r, q), as sketched in the example after this list, where:
- R is the total number of running jobs in the SGE cell (from any user);
- Q is the total number of queued jobs in the SGE cell (from any user);
- r is the number of running jobs submitted by user whoami;
- q is the number of queued jobs submitted by user whoami
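A hypothetical usage sketch (function and variable names below are assumptions, not verbatim API):
# parse text captured from `qstat` and unpack the four counters
R, Q, r, q = count_jobs(qstat_output, 'jdoe')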
Parse SGE’s qhost -F output (as contained in string qhost_output) and return a dict instance, mapping each host name to its attributes.
Parse SGE’s qstat -F output (as contained in string qstat_output) and return a dict instance, mapping each queue name to its attributes.
Job control on PBS/Torque clusters (possibly connecting to the front-end via SSH).
Job control on PBS/Torque clusters (possibly by connecting via SSH to a submit node).
Update the status of the resource associated with this LRMS instance in-place. Return updated Resource object.
Parse PBS/Torque’s qstat output (as contained in string qstat_output) and return a quadruple (R, Q, r, q) where:
- R is the total number of running jobs in the PBS/Torque cell (from any user);
- Q is the total number of queued jobs in the PBS/Torque cell (from any user);
- r is the number of running jobs submitted by user whoami;
- q is the number of queued jobs submitted by user whoami
Run applications as local processes.
Execute an Application instance as a local process.
Construction of a ShellcmdLrms instance accepts some optional parameters, in addition to those taken by the base class LRMS.
Cancel a running job.
If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Gracefully close any LRMS-dependent resources, e.g., the transport.
Delete the temporary directory where a child process has run.
The temporary directory is removed with all its content, recursively.
Update the status of the resource associated with this LRMS instance in-place. Return updated Resource object.
Retrieve job output files into local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from remote file remote_filename and write them into local_file. If size is None (default), the contents of the remote file from offset to the end are fetched.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Run an Application instance as a local process.
See: LRMS.submit_job
Query the running status of the local process whose PID is stored into app.execution.lrms_jobid, and map the POSIX process status to GC3Libs Run.State.
Return False if any of the URLs in data_file_list cannot be handled by this backend.
The shellcmd backend can only handle file URLs.
The Transport class hierarchy provides an abstraction layer to execute commands and copy/move files irrespective of whether the destination is the local computer or a remote front-end that we access via SSH.
gc3utils - A simple command-line frontend to distributed resources
This is a generic front-end code; actual implementation of commands can be found in gc3utils/commands.py
Generic front-end function to invoke the commands in gc3utils/commands.py
Implementation of the core command-line front-ends.
Permanently remove jobs from local and remote storage.
In normal operation, only jobs that are in a terminal status can be removed; if you want to force gclean to remove a job that is not in any one of those states, add the -f option to the command line.
If a job description cannot be successfully read, the corresponding job will not be deleted; use the -f option to force removal of a job regardless.
Retrieve output files of a job.
Output files can only be retrieved once a job has reached the ‘RUNNING’ state; this command will print an error message if no output files are available.
Output files can be retrieved multiple times until a job reaches ‘TERMINATED’ state: after that, the remote storage will be released once the output files have been fetched.
Print detailed information about a job.
A complete dump of all the information known about jobs listed on the command line is printed; this will only make sense if you know GC3Libs internals.
Cancel a submitted job. Given a list of jobs, try to cancel each one of them; exit with code 0 if all jobs were cancelled successfully, and 1 if some job was not.
The command will print an error message if a job cannot be cancelled because it’s in NEW or TERMINATED state, or if some other error occurred.
Resubmit an already-submitted job with (possibly) different parameters.
If you resubmit a job that is not in terminal state, the existing job is canceled before re-submission.
Select job IDs based on specific criteria
Filter job_list based on the basenames of the jobs’ input or output files; return only the jobs whose files match both ifile and ofile.
Returns an updated job list.
Filter job_list by selecting only the jobs whose attribute attribute matches the supplied regexp.
Returns an updated job list.
Filter job_list by selecting only the jobs which match the desired state(s).
Returns an updated job list.
Filter job_list by submission date. Only jobs which have been submitted within the range specified on the command line will be selected.
Returns an updated job list.
List status of computational resources.
Override GC3UtilsScript setup method since we don’t need any session argument.
Override GC3UtilsScript setup_args method since we don’t operate on jobs but on resources.
Manage sessions
Called with subcommand abort.
This method will open the desired session and will kill all the jobs which belong to that session and remove them from the store.
Called with subcommand delete.
This method will first call abort and then remove the current session.
If the abort fails, or not all tasks can be removed from the session, the session is not deleted and the command exits with exit code 1.
Called when subcommand is list.
This method basically calls the command gstat -n -v -s SESSION.
Called when subcommand is log.
This method will print the history of the jobs in SESSION in a log-file fashion.
Print job state.
Display the last lines from a job’s standard output or error stream. Optionally, keep running and displaying the last part of the file as more lines are written to the given stream.
Override GC3UtilsScript setup_args method since we don’t operate on single jobs.