GC3Libs is a Python package for controlling the life-cycle of a Grid or batch computational job.
GC3Libs provides services for submitting computational jobs to Grids and batch systems, controlling their execution, persisting job information, and retrieving the final output.
GC3Libs takes an application-oriented approach to batch computing. A generic Application class provides the basic operations for controlling remote computations, but different Application subclasses can expose adapted interfaces, focusing on the most relevant aspects of the application being represented.
Support for running a generic application with GC3Libs. The following parameters are required to create an Application instance:
Files that will be copied to the remote execution node before execution starts.
There are two possible ways of specifying the inputs parameter:
It can be a Python dictionary: keys are local file paths or URLs, values are remote file names.
It can be a Python list: each item in the list should be a pair (source, remote_file_name): the source can be a local file or a URL; remote_file_name is the path (relative to the execution directory) where source will be downloaded. If remote_file_name is an absolute path, an InvalidArgument error is raised.
A single string file_name is allowed instead of the pair and results in the local file file_name being copied to file_name on the remote host.
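The three accepted forms can all be reduced to a single mapping from source to remote file name. The helper below is a hypothetical sketch of the documented contract, not the actual gc3libs implementation; in particular, mapping a bare string to its basename on the remote side is an assumption.

```python
# Hypothetical helper illustrating how the accepted `inputs` forms
# (dict, list of pairs, bare strings) could be normalized to a single
# {source: remote_file_name} mapping.  NOT the gc3libs implementation.
import os


class InvalidArgument(ValueError):
    """Stand-in for gc3libs.exceptions.InvalidArgument."""


def normalize_inputs(inputs):
    if isinstance(inputs, dict):
        items = list(inputs.items())
    else:
        items = []
        for entry in inputs:
            if isinstance(entry, str):
                # bare string: assume the remote name is the basename
                items.append((entry, os.path.basename(entry)))
            else:
                # pair (source, remote_file_name)
                items.append(tuple(entry))
    result = {}
    for source, remote in items:
        if os.path.isabs(remote):
            # documented behavior: absolute remote paths are rejected
            raise InvalidArgument(
                "Remote path '%s' must be relative" % remote)
        result[source] = remote
    return result
```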
Files and directories that will be copied from the remote execution node back to the local computer (or a network-accessible server) after execution has completed. Directories are copied recursively.
There are three possible ways of specifying the outputs parameter:
It can be a Python dictionary: keys are remote file or directory paths (relative to the execution directory), values are corresponding local names.
It can be a Python list: each item in the list should be a pair (remote_file_name, destination): the destination can be a local file or a URL; remote_file_name is the path (relative to the execution directory) that will be uploaded to destination. If remote_file_name is an absolute path, an InvalidArgument error is raised.
A single string file_name is allowed instead of the pair and results in the remote file file_name being copied to file_name on the local host.
The constant gc3libs.ANY_OUTPUT which instructs GC3Libs to copy every file in the remote execution directory back to the local output path (as specified by the output_dir attribute).
Note that no errors are raised if an output file is not present. Override the terminated() method to raise errors when reacting to this kind of failure.
The following optional parameters may be additionally specified as keyword arguments and will be given special treatment by the Application class logic:
Any other keyword arguments will be set as instance attributes, but otherwise ignored by the Application constructor.
After successful construction, an Application object is guaranteed to have the following instance attributes:
a Run instance; its state attribute is initially set to NEW (this attribute is actually inherited from Task)
A name for applications of this class.
This string is used as a prefix for configuration items related to this application in configured resources. For example, if the application_name is foo, then the application interface code in GC3Pie might search for foo_cmd, foo_extra_args, etc. See qsub_sge() for an actual example.
Get an LSF bsub command-line invocation for submitting an instance of this application. Return a pair (cmd_argv, app_argv), where cmd_argv is a list containing the argv-vector of the command to run to submit an instance of this application to the LSF batch system, and app_argv is the argv-vector to use when invoking the application.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
The default implementation just prefixes any output from the cmdline method with an LSF bsub invocation of the form bsub -cwd . -L /bin/sh + resource limits.
Override this method in application-specific classes to provide appropriate invocation templates and/or add resource-specific submission options.
Return list of command-line arguments for invoking the application.
This is exactly the argv-vector of the application process: the application command name is included as first item (index 0) of the list, further items are command-line arguments.
Hence, to get a UNIX shell command-line, just concatenate the elements of the list, separating them with spaces.
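As a minimal illustration of the argv-vector convention: item 0 is the command name, the remaining items are arguments, and joining with spaces yields a shell command line (the application name and arguments below are made up; real code should quote arguments, e.g. with shlex.quote).

```python
# Turn an argv-vector into a UNIX shell command line.
import shlex

argv = ['gamess', 'exam01.inp', '-n', '4']   # hypothetical argv-vector

# plain join, as described in the text:
shell_cmdline = ' '.join(argv)

# safer variant that quotes arguments containing spaces or metacharacters:
quoted_cmdline = ' '.join(shlex.quote(arg) for arg in argv)
```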
Return a list of compatible resources.
Invocation of Core.fetch_output() on this object failed; ex is the Exception that describes the error.
If this method returns an exception object, that is raised as a result of the Core.fetch_output(), otherwise the return value is ignored and Core.fetch_output returns None.
Default is to return ex unchanged; override in derived classes to change this behavior.
Similar to qsub_sge(), but for the PBS/TORQUE resource manager.
Get an SGE qsub command-line invocation for submitting an instance of this application.
Return a pair (cmd_argv, app_argv). Both cmd_argv and app_argv are argv-lists: the command name is included as first item (index 0) of the list, further items are command-line arguments; cmd_argv is the argv-list for the submission command (excluding the actual application command part); app_argv is the argv-list for invoking the application. By overriding this method, one can add further resource-specific options at the end of the cmd_argv argv-list.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
The default implementation just prefixes any output from the cmdline method with an SGE qsub invocation of the form qsub -cwd -S /bin/sh + resource limits. Note that there is no generic way of requesting a certain number of cores in SGE: it all depends on the installed parallel environment, and these are totally under control of the local sysadmin; therefore, any request for cores is ignored and a warning is logged.
Override this method in application-specific classes to provide appropriate invocation templates and/or add different submission options.
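The (cmd_argv, app_argv) pair described above can be sketched as follows. This is a simplified stand-in for the real method, not its implementation; the SGE resource names (h_rt, mem_free) are assumptions about how limits might be expressed.

```python
# Sketch of what a default qsub_sge() could produce: the `qsub -cwd -S
# /bin/sh` submission vector plus resource limits, paired with the
# application's own argv-vector.  NOT the actual gc3libs method.
def qsub_sge_sketch(app_argv, requested_walltime=None, requested_memory=None):
    cmd_argv = ['qsub', '-cwd', '-S', '/bin/sh']
    if requested_walltime:
        # assumed mapping: walltime in seconds -> SGE h_rt limit
        cmd_argv += ['-l', 'h_rt=%d' % requested_walltime]
    if requested_memory:
        # assumed mapping: memory in MB -> SGE mem_free request
        cmd_argv += ['-l', 'mem_free=%dM' % requested_memory]
    return (cmd_argv, app_argv)
```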
Sort the given resources in order of preference.
By default, less-loaded resources come first; see _cmp_resources.
Get a SLURM sbatch command-line invocation for submitting an instance of this application.
Return a pair (cmd_argv, app_argv). Both cmd_argv and app_argv are argv-lists: the command name is included as first item (index 0) of the list, further items are command-line arguments; cmd_argv is the argv-list for the submission command (excluding the actual application command part); app_argv is the argv-list for invoking the application. By overriding this method, one can add further resource-specific options at the end of the cmd_argv argv-list.
In the construction of the command-line invocation, one should assume that all the input files (as named in Application.inputs) have been copied to the current working directory, and that output files should be created in this same directory.
Override this method in application-specific classes to provide appropriate invocation templates and/or add different submission options.
Invocation of Core.submit() on this object failed; exs is a list of Exception objects, one for each attempted submission.
If this method returns an exception object, that is raised as a result of the Core.submit(), otherwise the return value is ignored and Core.submit returns None.
Default is to always return the first exception in the list (on the assumption that it is the root of all exceptions or that at least it refers to the preferred resource). Override in derived classes to change this behavior.
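The default policy and one possible override can be sketched as stand-alone functions (the "prefer a named error" policy below is a hypothetical example, not part of GC3Libs):

```python
# Documented default: return the first exception from the list of
# per-resource submission failures.
def submit_error_default(exs):
    return exs[0]


def submit_error_prefer_named(exs):
    # hypothetical override: prefer the first exception carrying a
    # non-empty message, falling back to the default choice
    for ex in exs:
        if str(ex):
            return ex
    return exs[0]
```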
Handle exceptions that occurred during a Core.update_job_state call.
If this method returns an exception object, that exception is processed in Core.update_job_state() instead of the original one. Any other return value is ignored and Core.update_job_state proceeds as if no exception had happened.
Argument ex is the exception that was raised by the backend during job state update.
Default is to return ex unchanged; override in derived classes to change this behavior.
Return a string containing an xRSL sequence, suitable for submitting an instance of this application through ARC’s ngsub command.
The default implementation produces XRSL content based on the construction parameters; you should override this method to produce XRSL tailored to your application.
Warning
ARClib SWIG bindings cannot resolve the overloaded constructor if the xRSL string argument is a Python unicode object; if you override this method, force the result to be a str!
A namespace for all constants and default values used in the GC3Libs package.
Only update the status of ARC resources every this many seconds.
Consider a submitted job lost if it does not show up in the information system within this duration.
Proxy validity threshold in seconds. If the proxy expires before this threshold, it is marked as needing renewal.
A specialized dict-like object that keeps information about the execution state of an Application instance.
A Run object is guaranteed to have the following attributes:
- log: a gc3libs.utils.History instance, recording human-readable text messages on events in this job’s history.
- info: a simplified interface for reading/writing messages to Run.log. Reading the info attribute returns the last message appended to log; writing to info appends a message to log.
- timestamp: dictionary recording the most recent timestamp when a certain state was reached. Timestamps are given as UNIX epochs.
For properties state, signal and returncode, see the respective documentation.
Run objects support attribute lookup by both the [...] and the . syntax; see gc3libs.utils.Struct for examples.
Processor architectures, for use as values in the requested_architecture field of the Application class constructor.
The following values are currently defined:
The “exit code” part of a Run.returncode, see os.WEXITSTATUS. This is an 8-bit integer, whose meaning is entirely application-specific. (However, the value 255 is often used to mean that an error has occurred and the application could not end its execution normally.)
Return True if the Run state matches any of the given names.
In addition to the states from Run.State, the two additional names ok and failed are also accepted, with the following meaning:
A simplified interface for reading/writing entries into history.
Setting the info attribute appends a message to the log:
>>> j1 = Run()
>>> j1.info = 'a message'
>>> j1.info = 'a second message'
Getting the value of the info attribute returns the last message entered in the log:
>>> j1.info
u'a second message ...'
The returncode attribute of this job object encodes the Run termination status in a manner compatible with the POSIX termination status as implemented by os.WIFSIGNALED and os.WIFEXITED.
However, in contrast with POSIX usage, the exit code and the signal parts can both be significant: this happens when a Grid middleware error occurs after the application has successfully completed its execution. In other words, os.WEXITSTATUS(returncode) is meaningful iff os.WTERMSIG(returncode) is 0 or one of the pseudo-signals listed in Run.Signals.
Run.exitcode and Run.signal are combined to form the 16-bit return code integer as follows (the convention appears to be obeyed on every known system):

Bit | Encodes... |
---|---|
0..6 | signal number |
7 | 1 if program dumped core |
8..15 | exit code |

Note: the “core dump bit” is always 0 here.
Setting the returncode property sets exitcode and signal; you can either assign a (signal, exitcode) pair to returncode, or set returncode to an integer from which the correct exitcode and signal attribute values are extracted:
>>> j = Run()
>>> j.returncode = (42, 56)
>>> j.signal
42
>>> j.exitcode
56
>>> j.returncode = 137
>>> j.signal
9
>>> j.exitcode
0
See also Run.exitcode and Run.signal.
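The bit layout above can be written out directly; the helpers below are a sketch of that arithmetic (they mirror the os.WTERMSIG / os.WEXITSTATUS semantics, not the actual Run property code):

```python
# Pack and unpack a Run-style return code: signal in the low 7 bits,
# core-dump flag at bit 7 (always 0 here), exit code in bits 8..15.
def make_returncode(signal, exitcode):
    return ((exitcode & 0xFF) << 8) | (signal & 0x7F)


def split_returncode(returncode):
    # returns (signal, exitcode)
    return (returncode & 0x7F, (returncode >> 8) & 0xFF)
```

This reproduces the doctest above: splitting 137 yields signal 9 and exit code 0.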
Convert a shell exit code to a POSIX process return code.
A POSIX shell represents the return code of the last-run program within its exit code as follows:
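A POSIX shell reports "killed by signal N" as exit code 128+N, and normal termination as the exit code itself (0..127). Assuming that convention, the conversion can be sketched as below; this is an illustration of the idea, not the gc3libs function itself.

```python
# Convert a shell exit code to a returncode in the layout documented
# above (exit code in bits 8..15, signal number in the low 7 bits).
def shellexit_to_returncode_sketch(rc):
    if rc > 128:
        # program was terminated by signal (rc - 128):
        # signal number goes in the low bits, exit code part is 0
        return rc - 128
    else:
        # normal termination: exit code in the high byte, signal 0
        return rc << 8
```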
The “signal number” part of a Run.returncode, see os.WTERMSIG for details.
The “signal number” is a 7-bit integer value in the range 0..127; value 0 is used to mean that no signal has been received during the application runtime (i.e., the application terminated by calling exit()).
The value represents either a real UNIX system signal, or a “fake” one that GC3Libs uses to represent Grid middleware errors (see Run.Signals).
The state a Run is in.
The value of Run.state must always be a value from the Run.State enumeration, i.e., one of the following values.
Run.State value | purpose | can change to |
---|---|---|
NEW | Job has not yet been submitted/started (i.e., gsub not called) | SUBMITTED (by gsub) |
SUBMITTED | Job has been sent to execution resource | RUNNING, STOPPED |
STOPPED | Trap state: job needs manual intervention (either user- or sysadmin-level) to resume normal execution | TERMINATING (by gkill), SUBMITTED (by miracle) |
RUNNING | Job is executing on remote resource | TERMINATING |
TERMINATING | Job has finished execution on remote resource; output not yet retrieved | TERMINATED |
TERMINATED | Job execution is finished (correctly or not) and will not be resumed; output has been retrieved | None: final state |
When a Run object is first created, it is assigned the state NEW. After a successful invocation of Core.submit(), it is transitioned to state SUBMITTED. Further transitions to RUNNING or STOPPED or TERMINATED state happen completely independently of the creator program; the Core.update_job_state() call provides updates on the status of a job.
The STOPPED state is a kind of generic “run time error” state: a job can get into the STOPPED state if its execution is stopped (e.g., a SIGSTOP is sent to the remote process) or delayed indefinitely (e.g., the remote batch system puts the job “on hold”). There is no way a job can get out of the STOPPED state automatically: all transitions from the STOPPED state require manual intervention, either by the submitting user (e.g., cancel the job), or by the remote systems administrator (e.g., by releasing the hold).
The TERMINATED state is the final state of a job: once a job reaches it, it cannot get back to any other state. Jobs reach TERMINATED state regardless of their exit code, or even if a system failure occurred during remote execution; actually, jobs can reach the TERMINATED status even if they didn’t run at all, for example, in case of a fatal failure during the submission step.
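The transition table above can be written out as data that a program could use to validate state changes. This is a sketch using plain strings; the real code uses the Run.State enumeration.

```python
# Allowed Run.State transitions, as documented in the table above.
TRANSITIONS = {
    'NEW':         {'SUBMITTED'},
    'SUBMITTED':   {'RUNNING', 'STOPPED'},
    'STOPPED':     {'TERMINATING', 'SUBMITTED'},
    'RUNNING':     {'TERMINATING'},
    'TERMINATING': {'TERMINATED'},
    'TERMINATED':  set(),            # final state: no way out
}


def is_valid_transition(old, new):
    return new in TRANSITIONS.get(old, set())
```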
Mix-in class implementing a facade for job control.
A Task can be described as an “active” job, in the sense that all job control is done through methods on the Task instance itself; contrast this with operating on Application objects through a Core or Engine instance.
The following pseudo-code is an example of the usage of the Task interface for controlling a job. Assume that GamessApplication is inheriting from Task (it actually is):
t = GamessApplication(input_file)
t.submit()
# ... do other stuff
t.update_state()
# ... take decisions based on t.execution.state
t.wait() # blocks until task is terminated
Each Task object has an execution attribute: it is an instance of class Run, initialized with a new instance of Run, and at any given time it reflects the current status of the associated remote job. In particular, execution.state can be checked for the current task status.
After successful initialization, a Task instance will have the following attributes:
Use the given Grid interface for operations on the job associated with this task.
Remove any reference to the current grid interface. After this, calling any method other than attach() results in a TaskDetachedFromGridError exception being raised.
Retrieve the outputs of the computational job associated with this task into directory output_dir, or, if that is None, into the directory whose path is stored in instance attribute .output_dir.
If the execution state is TERMINATING, transition the state to TERMINATED (which runs the appropriate hook).
See gc3libs.Core.fetch_output() for a full explanation.
Returns: path to the directory where the job output has been collected.
Release any remote resources associated with this task.
See gc3libs.Core.free() for a full explanation.
Terminate the computational job associated with this task.
See gc3libs.Core.kill() for a full explanation.
Called when the job state is (re)set to NEW.
Note this will not be called when the application object is created, rather if the state is reset to NEW after it has already been submitted.
The default implementation does nothing, override in derived classes to implement additional behavior.
Download size bytes (at offset offset from the start) from the associated job standard output or error stream, and write them into a local file. Return a file-like object from which the downloaded contents can be read.
See gc3libs.Core.peek() for a full explanation.
Advance the associated job through all states of a regular lifecycle. In detail:
- If execution.state is NEW, the associated job is started.
- The state is updated until it reaches TERMINATED
- Output is collected and the final returncode is returned.
An exception TaskError is raised if the job hits state STOPPED or UNKNOWN during an update in phase 2.
When the job reaches TERMINATING state, the output is retrieved; if this operation is successful, the state is advanced to TERMINATED.
Once the job reaches TERMINATED state, the return code (stored also in .returncode) is returned; if the job is not yet in TERMINATED state, calling progress returns None.
Raises: UnexpectedStateError if the associated job goes into state STOPPED or UNKNOWN.
Returns: final returncode, or None if the execution state is not TERMINATED.
Called when the job state transitions to RUNNING, i.e., the job has been successfully started on a (possibly) remote resource.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to STOPPED, i.e., the job has been remotely suspended for an unknown reason and cannot automatically resume execution.
The default implementation does nothing, override in derived classes to implement additional behavior.
Start the computational job associated with this Task instance.
Called when the job state transitions to SUBMITTED, i.e., the job has been successfully sent to a (possibly) remote execution resource and is now waiting to be scheduled.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to TERMINATED, i.e., the job has finished execution (with whatever exit status, see returncode) and the final output has been retrieved.
The location where the final output has been stored is available in attribute self.output_dir.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to TERMINATING, i.e., the remote job has finished execution (with whatever exit status, see returncode) but output has not yet been retrieved.
The default implementation does nothing, override in derived classes to implement additional behavior.
Called when the job state transitions to UNKNOWN, i.e., the job has not been updated for a certain period of time thus it is placed in UNKNOWN state.
There are two ways out of this state: 1) on the next update cycle, the job status is refreshed from the remote server; 2) override this method to implement application-specific logic for this case.
The default implementation does nothing, override in derived classes to implement additional behavior.
In-place update of the execution state of the computational job associated with this Task. After successful completion, .execution.state will contain the new state.
After the job has reached the TERMINATING state, the following attributes are also set:
The execution backend may set additional attributes; the exact name and format of these additional attributes is backend-specific. However, you can easily identify the backend-specific attributes because their name is prefixed with the (lowercased) backend name; for instance, the PbsLrms backend sets attributes pbs_queue, pbs_end_time, etc.
Block until the associated job has reached TERMINATED state, then return the job’s return code. Note that this does not automatically fetch the output.
Parameters: interval (integer) – poll job state every this number of seconds.
Configure the gc3.gc3libs logger.
Arguments level, format and datefmt set the corresponding arguments in the logging.basicConfig() call.
If a user configuration file exists in file NAME.log.conf in the Default.RCDIR directory (usually ~/.gc3), it is read and used for more advanced configuration; if it does not exist, then a sample one is created.
Implementation of task collections.
Tasks can be grouped into collections, which are tasks themselves, therefore can be controlled (started/stopped/cancelled) like a single whole. Collection classes provided in this module implement the basic patterns of job group execution; they can be combined to form more complex workflows. Hook methods are provided so that derived classes can implement problem-specific job control policies.
A ParallelTaskCollection runs all of its tasks concurrently.
The collection state is set to TERMINATED once all tasks have reached the same terminal status.
Terminate all tasks in the collection, and set collection state to TERMINATED.
Try to advance all jobs in the collection to the next state in a normal lifecycle. Return list of task execution states.
Start all tasks in the collection.
Update state of all tasks in the collection.
Wrap a Task instance and re-submit it until a specified termination condition is met.
By default, the re-submission upon failure happens iff execution terminated with nonzero return code; the failed task is retried up to self.max_retries times (indefinitely if self.max_retries is 0).
Override the retry method to implement a different retry policy.
Note: the resubmission code is implemented in the terminated() method, so be sure to call it if you override it in derived classes.
Return True or False, depending on whether the failed task should be re-submitted or not.
The default behavior is to retry a task iff its execution terminated with nonzero returncode and the maximum retry limit has not been reached. If self.max_retries is 0, then the dependent task is retried indefinitely.
Override this method in subclasses to implement a different policy.
Update the state of the dependent task, then resubmit it if it’s TERMINATED and self.retry() is True.
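The documented default policy can be factored out as a stand-alone function; this is a sketch of the logic described above, not the actual RetryableTask code:

```python
# Retry iff the task failed (nonzero returncode) and the retry budget
# is not exhausted; max_retries == 0 means retry indefinitely.
def should_retry(returncode, retried, max_retries):
    if returncode == 0:
        return False              # task succeeded: nothing to retry
    if max_retries == 0:
        return True               # 0 means unlimited retries
    return retried < max_retries
```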
A SequentialTaskCollection runs its tasks one at a time.
After a task has completed, the next method is called with the index of the finished task in the self.tasks list; the return value of the next method is then made the collection execution.state. If the returned state is RUNNING, then the subsequent task is started, otherwise no action is performed.
The default next implementation just runs the tasks in the order they were given to the constructor, and sets the state to TERMINATED when all tasks have been run.
Stop execution of this sequence. Kill currently-running task (if any), then set collection state to TERMINATED.
Return the state or task to run when step number done is completed.
This method is called when a task is finished; the done argument contains the index number of the just-finished task into the self.tasks list. In other words, the task that just completed is available as self.tasks[done].
The return value from next can be either a task state (i.e., an instance of Run.State), or a valid index number for self.tasks. In the first case:
If instead the return value is a (nonnegative) number, then tasks in the sequence will be re-run starting from that index.
The default implementation runs tasks in the order they were given to the constructor, and sets the state to TERMINATED when all tasks have been run. This method can (and should) be overridden in derived classes to implement policies for serial job execution.
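As an example of such a policy, a next override might abort the whole sequence as soon as any task fails. The sketch below uses plain strings for states and a free function instead of a method; the real method returns Run.State values or an index into self.tasks.

```python
# Hypothetical `next` policy for a SequentialTaskCollection subclass:
# stop on first failure, otherwise run tasks in order.
def next_stop_on_failure(done, tasks):
    if tasks[done].returncode != 0:
        return 'TERMINATED'       # abort the sequence on first failure
    if done == len(tasks) - 1:
        return 'TERMINATED'       # all tasks done
    return 'RUNNING'              # proceed to the next task
```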
Sequentially advance tasks in the collection through all steps of a regular lifecycle. When the last task transitions to TERMINATED state, the collection’s state is set to TERMINATED as well and this method becomes a no-op. If during execution, any of the managed jobs gets into state STOPPED or UNKNOWN, then an exception Task.UnexpectedExecutionState is raised.
Start the current task in the collection.
Update state of the collection, based on the jobs’ statuses.
Simplified interface for creating a sequence of Tasks. This can be used when the number of Tasks to run is fixed and known at program writing time.
A StagedTaskCollection subclass should define methods stage0, stage1, ... up to stageN (for some arbitrary positive integer N). Each stageN method must return a Task instance; the task returned by stage0 will be executed first, followed by the task returned by stage1, and so on. The sequence stops at the first N such that stageN is not defined.
The exit status of the whole sequence is the exit status of the last Task instance run. However, if any of the stageX methods returns an integer value instead of a Task instance, then the sequence stops and that number is used as the sequence exit code.
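The stage-discovery contract described above can be sketched as follows; this mirrors the documented behavior (probe stageN methods by name until one is missing), not the actual gc3libs implementation:

```python
# Collect the stage0, stage1, ... methods of an object, stopping at the
# first stageN that is not defined.
def collect_stages(obj):
    stages = []
    n = 0
    while True:
        stage = getattr(obj, 'stage%d' % n, None)
        if stage is None:
            break
        stages.append(stage)
        n += 1
    return stages
```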
Base class for all task collections. A “task collection” is a group of tasks, that can be managed collectively as a single one.
A task collection implements the same interface as the Task class, so you can use a TaskCollection everywhere a Task is required. A task collection has a state attribute, which is an instance of gc3libs.Run.State; each concrete collection class decides how to deduce a collective state based on the individual task states.
Add a task to the collection.
Use the given Controller interface for operations on the job associated with this task.
Raise a gc3libs.exceptions.InvalidOperation error, as there is no meaningful semantics that can be defined for peek into a generic collection of tasks.
Remove a task from the collection.
Return a dictionary mapping each state name into the count of tasks in that state. In addition, the following keys are defined:
If the optional argument only is not None, tasks whose class is not contained in only are ignored.
Parameters: only (tuple) – restrict counting to tasks of these classes.
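A per-state counting helper like the one described above can be sketched with a Counter; this is an illustration of the documented behavior (including the only filter), not the actual stats() implementation:

```python
# Count tasks by state, optionally restricted to given task classes.
from collections import Counter


def stats_sketch(tasks, only=None):
    counts = Counter()
    for task in tasks:
        if only is not None and not isinstance(task, only):
            continue
        counts[task.state] += 1
    return counts
```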
Update the running state of all managed tasks.
Block until execution state reaches TERMINATED, then return a list of return codes. Note that this does not automatically fetch the output.
Parameters: interval (integer) – poll job state every this number of seconds.
Exceptions specific to the gc3libs package.
In addition to the exceptions listed here, gc3libs functions try to use Python builtin exceptions with the same meaning they have in core Python, namely:
Raised when the dumped description of a given Application produces something that the LRMS backend cannot process. For example, for ARC backends, this error is raised when parsing of the Application’s XRSL fails.
Base class for Auth-related errors.
Should never be instantiated: create a specific error class describing the actual error condition.
Raised when the configuration file (or parts of it) could not be read/parsed. Also used to signal that a required parameter is missing or has an unknown/invalid value.
Base class for data staging and movement errors.
Should never be instantiated: create a specific error class describing the actual error condition.
Raised when a method (other than attach()) is called on a detached Task instance.
Raised by Application.__init__ if not all (local or remote) entries in the input or output files are distinct.
Base class for all error-level exceptions in GC3Pie.
Generally, this indicates a non-fatal error: depending on the nature of the task, steps could be taken to continue, but users must be aware that an error condition occurred, so the message is sent to the logs at the ERROR level.
Exceptions indicating an error condition after which the program cannot continue and should immediately stop, should use the FatalError base class.
A fatal error: execution cannot continue and program should report to user and then stop.
The message is sent to the logs at CRITICAL level when the exception is first constructed.
This is the base class for all fatal exceptions.
Raised when an input file is specified, which does not exist or cannot be read.
Raised when some function cannot fulfill its duties, for reasons that do not depend on the library client code. For instance, when a response string gotten from an external command cannot be parsed as expected.
Raised when the arguments passed to a function do not honor some required contract. For instance, either one of two optional arguments must be provided, but none of them was.
Raised when an operation is attempted, that is not considered valid according to the system state. For instance, trying to retrieve the output of a job that has not yet been submitted.
Raised to signal that no computational resource with the given name is defined in the configuration file.
A specialization of `InvalidArgument` for cases when the type of the passed argument does not match expectations.
Raised when a command is not provided all required arguments on the command line, or the arguments do not match the expected syntax.
Since the exception message is the last thing a user will see, try to be specific about what is wrong on the command line.
A specialization of `InvalidArgument` for cases when the type of the passed argument does not match expectations.
Raised upon errors loading a job from the persistent storage.
Raised when the configuration file cannot be read (e.g., does not exist or has wrong permissions), or cannot be parsed (e.g., is malformed).
Raised to signal that no resources are defined, or that none are compatible with the request.
Raised upon attempts to retrieve the output for jobs that are still in NEW or SUBMITTED state.
Raised when transient problems with copying data to or from the remote execution site occurred.
This error is considered to be transient (e.g., network connectivity interruption), so trying again at a later time could solve the problem.
Used to mark transient errors: retrying the same action at a later time could succeed.
This exception should never be instantiated: it is only to be used in except clauses to catch “try again” situations.
Generic error condition in a Task object.
Raised by Task.progress() when a job lands in STOPPED or TERMINATED state.
Raised when an operation is attempted on a task, which is unknown to the remote server or backend.
Raised when a job state is gotten from the Grid middleware, that is not handled by the GC3Libs code. Might actually mean that there is a version mismatch between GC3Libs and the Grid middleware used.
Raised when problems with copying data to or from the remote execution site occurred.
Used to mark permanent errors: there’s no point in retrying the same action at a later time, because it will yield the same error again.
This exception should never be instantiated: it is only to be used in except clauses to exclude “try again” situations.
Facade to store and retrieve Job information from permanent storage.
This module saves Python objects using the pickle framework: thus, the Application subclass corresponding to a job must be already loaded (or at least import-able) in the Python interpreter for pickle to be able to ‘undump’ the object from its on-disk representation.
In other words, if you create a custom Application subclass in some client code, GC3Utils won’t be able to read job files created by this code, because the class definition is not available in GC3Utils.
The recommended workaround is for a stand-alone script to import itself and then run its main class via the fully qualified name. In other words, start your script with this boilerplate code:
if __name__ == '__main__':
    import myscriptname
    myscriptname.MyScript().run()
The rest of the script now runs as the myscriptname module, which does the trick!
Note
Of course, the myscriptname.py file must be in the search path of the Python interpreter, or GC3Utils will still complain!
Manipulation of quantities with attached units, with automated conversion among compatible units.
For details and the discussion leading up to this, see: <http://code.google.com/p/gc3pie/issues/detail?id=47>
Represent the duration of a time lapse.
Construction of a duration can be done by parsing a string specification; several formats are accepted:
* A duration as an aggregate of days, hours, minutes and seconds::
>>> l3 = Duration('1day 4hours 9minutes 16seconds')
>>> l3.amount(Duration.s) # convert to seconds
101356
Any of the terms can be omitted (in which case it defaults to zero):
>>> l4 = Duration('1day 4hours 16seconds')
>>> l4 == l3 - Duration('9 minutes')
True
The unit names can be singular or plural, and any amount of space can be added between the time unit name and the associated amount:
>>> l5 = Duration('3 hour 42 minute')
>>> l6 = Duration('3 hours 42 minutes')
>>> l7 = Duration('3hours 42minutes')
>>> l5 == l6 == l7
True
Unit names can also be abbreviated using just the leading letter:
>>> l8 = Duration('3h 42m')
>>> l9 = Duration('3h42m')
>>> l8 == l9
True
The abbreviated formats HH:MM:SS and DD:HH:MM:SS are also accepted:
>>> # 1 hour + 1 minute + 1 second
>>> l1 = Duration('01:01:01')
>>> l1 == Duration('3661 s')
True
>>> # 1 day, 2 hours, 3 minutes, 4 seconds
>>> l2 = Duration('01:02:03:04')
>>> l2.amount(Duration.s)
93784
However, the formats HH:MM and MM:SS are rejected as ambiguous:
>>> # is this hours:minutes or minutes:seconds ?
>>> l0 = Duration('01:02')
Traceback (most recent call last):
...
ValueError: Duration '01:02' is ambiguous: use '1m 2s' for 1 minutes and 2 seconds, or '1h 2m' for 1 hours and 2 minutes.
Finally, you can specify a duration like any other quantity, as an integral amount of a given time unit:
>>> l1 = Duration('1 day')
>>> l2 = Duration('86400 s')
>>> l1 == l2
True
A new quantity can also be defined as a multiple of an existing one:
>>> an_hour = Duration('1 hour')
>>> a_day = 24 * an_hour
>>> a_day.amount(Duration.h)
24
The quantities Duration.hours, Duration.minutes and Duration.seconds (and their single-letter abbreviations h, m, s) are pre-defined with their obvious meaning.
Also module-level aliases hours, minutes and seconds (and the one-letter forms) are available:
>>> a_day1 = 24*hours
>>> a_day2 = 1440*minutes
>>> a_day3 = 86400*seconds
This allows for yet another way of constructing duration objects, i.e., by passing the amount and the unit separately to the constructor:
>>> a_day4 = Duration(24, hours)
Two durations are equal if they indicate the exact same amount in seconds:
>>> a_day1 == a_day2
True
>>> a_day1.amount(s)
86400
>>> a_day2.amount(s)
86400
>>> a_day == an_hour
False
>>> a_day.amount(minutes)
1440
>>> an_hour.amount(minutes)
60
Basic arithmetic is possible with durations:
>>> two_hours = an_hour + an_hour
>>> two_hours == 2*an_hour
True
>>> one_hour = two_hours - an_hour
>>> one_hour.amount(seconds)
3600
The to_str() method allows representing a duration as a string, and provides choice of the output format and unit. The format string should contain exactly two %-specifiers: the first one is used to format the numerical amount, and the second one to format the measurement unit name.
By default, the unit used originally for defining the quantity is used:
>>> an_hour.to_str('%d [%s]')
'1 [hour]'
This can be overridden by specifying an optional second argument unit:
>>> an_hour.to_str('%d [%s]', unit=Duration.m)
'60 [m]'
A third optional argument conv can set the numerical type to be used for conversion computations:
>>> an_hour.to_str('%.1f [%s]', unit=Duration.m, conv=float)
'60.0 [m]'
The default numerical type is int, which in particular implies that you get a null amount if the requested unit is larger than the quantity:
>>> an_hour.to_str('%d [%s]', unit=Duration.days)
'0 [days]'
Conversion to string uses the unit originally used for defining the quantity and the %g%s format:
>>> str(an_hour)
'1hour'
Convert a duration into a Python datetime.timedelta object.
This is useful to operate on Python’s datetime.time and datetime.date objects, which can be added or subtracted to datetime.timedelta.
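A short illustration of why the conversion is useful. The assumption here is that to_timedelta() preserves the total number of seconds of the duration; the timedelta value below stands in for what `Duration('1 day').to_timedelta()` would return:

```python
from datetime import datetime, timedelta

# Assumed result of Duration('1 day').to_timedelta(): a timedelta
# carrying the same total number of seconds (86400).
one_day = timedelta(seconds=86400)

# timedelta objects can be added to datetime objects directly:
new_year = datetime(2024, 1, 1)
next_day = new_year + one_day  # datetime(2024, 1, 2, 0, 0)
```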
Represent an amount of RAM.
Construction of a memory quantity can be done by parsing a string specification (amount followed by unit):
>>> byte = Memory('1 B')
>>> kilobyte = Memory('1 kB')
A new quantity can also be defined as a multiple of an existing one:
>>> a_thousand_kB = 1000*kilobyte
The base-10 units (up to TB, terabytes) and base-2 units (up to TiB, tebibytes) are available as attributes of the Memory class. This allows for a third way of constructing quantity objects, i.e., by passing the amount and the unit separately to the constructor:
>>> a_megabyte = Memory(1, Memory.MB)
>>> a_mibibyte = Memory(1, Memory.MiB)
>>> a_gigabyte = 1*Memory.GB
>>> a_gibibyte = 1*Memory.GiB
>>> two_terabytes = 2*Memory.TB
>>> two_tibibytes = 2*Memory.TiB
Two memory quantities are equal if they indicate the exact same amount in bytes:
>>> kilobyte == 1000*byte
True
>>> a_megabyte == a_mibibyte
False
>>> a_megabyte < a_mibibyte
True
>>> a_megabyte > a_gigabyte
False
Basic arithmetic is possible with memory quantities:
>>> two_bytes = byte + byte
>>> two_bytes == 2*byte
True
The to_str() method allows representing a quantity as a string, and provides choice of the output format and unit. The format string should contain exactly two %-specifiers: the first one is used to format the numerical amount, and the second one to format the measurement unit name.
By default, the unit used originally for defining the quantity is used:
>>> a_megabyte.to_str('%d [%s]')
'1 [MB]'
This can be overridden by specifying an optional second argument unit:
>>> a_megabyte.to_str('%d [%s]', unit=Memory.kB)
'1000 [kB]'
A third optional argument conv can set the numerical type to be used for conversion computations:
>>> a_megabyte.to_str('%g%s', unit=Memory.GB, conv=float)
'0.001GB'
The default numerical type is int, which in particular implies that you get a null amount if the requested unit is larger than the quantity:
>>> a_megabyte.to_str('%g%s', unit=Memory.GB, conv=int)
'0GB'
Conversion to string uses the unit originally used for defining the quantity and the %g%s format:
>>> str(a_megabyte)
'1MB'
Metaclass for creating quantity classes.
This factory creates subclasses of _Quantity and bootstraps the base unit.
The name of the base unit is given as argument to the metaclass instance:
>>> class Memory1(object):
... __metaclass__ = Quantity('B')
...
>>> B = Memory1('1 B')
>>> print (2*B)
2B
Optional keyword arguments create additional units; the argument key gives the unit name, and its value gives the ratio of the new unit to the base unit. For example:
>>> class Memory2(object):
... __metaclass__ = Quantity('B', kB=1000, MB=1000*1000)
...
>>> a_thousand_kB = Memory2('1000kB')
>>> MB = Memory2('1 MB')
>>> a_thousand_kB == MB
True
Note that the units (base and additional) are also available as class attributes for easier referencing in Python code:
>>> a_thousand_kB == Memory2.MB
True
Support and expansion of programmatic templates.
The module gc3libs.template allows creation of textual templates with a simple object-oriented programming interface: given a string with a list of substitutions (using the syntax of Python’s standard string.Template class), a set of replacements can be specified, and the gc3libs.template.expansions function will generate all possible texts coming from the same template. Templates can be nested, and expansions generated recursively.
A template object is a pair (obj, keywords). Methods are provided to substitute the keyword values into obj, and to iterate over expansions of the given keywords (optionally filtering the allowed combination of keyword values).
Second optional argument validator must be a function that accepts a set of keyword arguments, and returns True if the keyword combination is valid (can be expanded/substituted back into the template) or False if it should be discarded. The default validator passes any combination of keywords/values.
Iterate over all valid expansions of the templated object and the template keywords. Returned items are Template instances constructed with the expanded template object and a valid combination of keyword values.
Return result of interpolating the value of keywords into the template. Keyword arguments extra_args can be used to override keyword values passed to the constructor.
If the templated object provides a substitute method, then return the result of invoking it with the template keywords as keyword arguments. Otherwise, return the result of applying Python standard library’s string.Template.safe_substitute() on the string representation of the templated object.
Raise ValueError if the set of keywords/values is not valid according to the validator specified in the constructor.
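The fallback behavior on plain strings is that of the standard library’s string.Template.safe_substitute, which replaces known keywords and leaves unknown ones untouched:

```python
from string import Template as StrTemplate

# safe_substitute fills in the keywords it knows and leaves the rest
# as-is; this is the fallback used for plain-string templated objects.
filled = StrTemplate('a=${n} b=${m}').safe_substitute(n=1)
# filled == 'a=1 b=${m}'
```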
Iterate over all expansions of a given object, recursively expanding all templates found. How the expansions are actually computed, depends on the type of object being passed in the first argument obj:
If obj is a list, iterate over expansions of items in obj. (In particular, this flattens out nested lists.)
Example:
>>> L = [0, [2, 3]]
>>> list(expansions(L))
[0, 2, 3]
If obj is a dictionary, return dictionary formed by all combinations of a key k in obj with an expansion of the corresponding value obj[k]. Expansions are computed by recursively calling expansions(obj[k], **extra_args).
Example:
>>> D = {'a':1, 'b':[2,3]}
>>> list(expansions(D))
[{'a': 1, 'b': 2}, {'a': 1, 'b': 3}]
If obj is a tuple, iterate over all tuples formed by the expansion of every item in obj. (Each item t[i] is expanded by calling expansions(t[i], **extra_args).)
Example:
>>> T = (1, [2, 3])
>>> list(expansions(T))
[(1, 2), (1, 3)]
If obj is a Template class instance, then the returned values are the result of applying the template to the expansion of each of its keywords.
Example:
>>> T1 = Template("a=${n}", n=[0,1])
>>> list(expansions(T1))
[Template('a=${n}', n=0), Template('a=${n}', n=1)]
Note that keywords passed to the expand invocation override the ones used in template construction:
>>> T2 = Template("a=${n}")
>>> list(expansions(T2, n=[1,3]))
[Template('a=${n}', n=1), Template('a=${n}', n=3)]
>>> T3 = Template("a=${n}", n=[0,1])
>>> list(expansions(T3, n=[2,3]))
[Template('a=${n}', n=2), Template('a=${n}', n=3)]
Any other value is returned unchanged.
Example:
>>> V = 42
>>> list(expansions(V))
[42]
Utility classes and methods for dealing with URLs.
Represent a URL as a named-tuple object. This is an immutable object that cannot be changed after creation.
The following read-only attributes are defined on objects of class Url.
Attribute | Index | Value | Value if not present |
---|---|---|---|
scheme | 0 | URL scheme specifier | empty string |
netloc | 1 | Network location part | empty string |
path | 2 | Hierarchical path | empty string |
query | 3 | Query component | empty string |
hostname | 4 | Host name (lower case) | None |
port | 5 | Port number as integer (if present) | None |
username | 6 | User name | None |
password | 7 | Password | None |
There are two ways of constructing Url objects:
By passing a string urlstring:
>>> u = Url('http://www.example.org/data')
>>> u.scheme
'http'
>>> u.netloc
'www.example.org'
>>> u.path
'/data'
The default URL scheme is file:
>>> u = Url('/tmp/foo')
>>> u.scheme
'file'
>>> u.path
'/tmp/foo'
Please note that extra leading slashes ‘/’ are interpreted as the beginning of a network location:
>>> u = Url('//foo/bar')
>>> u.path
'/bar'
>>> u.netloc
'foo'
>>> Url('///foo/bar').path
'/foo/bar'
See RFC 3986 <http://tools.ietf.org/html/rfc3986> for details.
If force_abs is True (default), then the path attribute is made absolute, by calling os.path.abspath if necessary:
>>> u = Url('foo/bar', force_abs=True)
>>> os.path.isabs(u.path)
True
Otherwise, if force_abs is False, then the path attribute stores the passed string unchanged:
>>> u = Url('foo', force_abs=False)
>>> os.path.isabs(u.path)
False
>>> u.path
'foo'
Other keyword arguments can specify defaults for missing parts of the URL:
>>> u = Url('/tmp/foo', scheme='file', netloc='localhost')
>>> u.scheme
'file'
>>> u.netloc
'localhost'
>>> u.path
'/tmp/foo'
By passing keyword arguments only, to construct an Url object with exactly those values for the named fields:
>>> u = Url(scheme='http', netloc='www.example.org', path='/data')
In this form, the force_abs parameter is ignored.
See also: http://docs.python.org/library/urlparse.html#urlparse-result-object
Return a new Url, constructed by appending relpath to the path section of this URL.
Example:
>>> u0 = Url('http://www.example.org')
>>> u1 = u0.adjoin('data')
>>> str(u1)
'http://www.example.org/data'
>>> u2 = u1.adjoin('moredata')
>>> str(u2)
'http://www.example.org/data/moredata'
Even if relpath starts with /, it is still appended to the path in the base URL:
>>> u3 = u2.adjoin('/evenmore')
>>> str(u3)
'http://www.example.org/data/moredata/evenmore'
A dictionary class enforcing that all keys are URLs.
Strings and/or objects returned by urlparse can be used as keys. Setting a string key automatically translates it to a URL:
>>> d = UrlKeyDict()
>>> d['/tmp/foo'] = 1
>>> for k in d.keys(): print (type(k), k.path)
(<class '....Url'>, '/tmp/foo')
Retrieving the value associated with a key works with both the string or the url value of the key:
>>> d['/tmp/foo']
1
>>> d[Url('/tmp/foo')]
1
Key lookup can use both the string or the Url value as well:
>>> '/tmp/foo' in d
True
>>> Url('/tmp/foo') in d
True
>>> 'file:///tmp/foo' in d
True
>>> 'http://example.org' in d
False
Class UrlKeyDict supports initialization by copying items from another dict instance or from an iterable of (key, value) pairs:
>>> d1 = UrlKeyDict({ '/tmp/foo':'foo', '/tmp/bar':'bar' })
>>> d2 = UrlKeyDict([ ('/tmp/foo', 'foo'), ('/tmp/bar', 'bar') ])
>>> d1 == d2
True
Differently from dict, initialization from keyword arguments alone is not supported:
>>> d3 = UrlKeyDict(foo='foo', bar='bar')
Traceback (most recent call last):
...
TypeError: __init__() got an unexpected keyword argument 'foo'
An empty UrlKeyDict instance is returned by the constructor when called with no parameters:
>>> d0 = UrlKeyDict()
>>> len(d0)
0
If force_abs is True, then all paths are converted to absolute ones in the dictionary keys.
>>> d = UrlKeyDict(force_abs=True)
>>> d['foo'] = 1
>>> for k in d.keys(): print os.path.isabs(k.path)
True
>>> d = UrlKeyDict(force_abs=False)
>>> d['foo'] = 2
>>> for k in d.keys(): print os.path.isabs(k.path)
False
A dictionary class enforcing that all values are URLs.
Strings and/or objects returned by urlparse can be used as values. Setting a string value automatically translates it to a URL:
>>> d = UrlValueDict()
>>> d[1] = '/tmp/foo'
>>> d[2] = Url('file:///tmp/bar')
>>> for v in d.values(): print (type(v), v.path)
(<class '....Url'>, '/tmp/foo')
(<class '....Url'>, '/tmp/bar')
Retrieving the value associated with a key always returns the URL-type value, regardless of how it was set:
>>> repr(d[1])
"Url(scheme='file', netloc='', path='/tmp/foo', hostname=None, port=None, username=None, password=None)"
Class UrlValueDict supports initialization by any of the methods that work with a plain dict instance:
>>> d1 = UrlValueDict({ 'foo':'/tmp/foo', 'bar':'/tmp/bar' })
>>> d2 = UrlValueDict([ ('foo', '/tmp/foo'), ('bar', '/tmp/bar') ])
>>> d3 = UrlValueDict(foo='/tmp/foo', bar='/tmp/bar')
>>> d1 == d2
True
>>> d2 == d3
True
In particular, an empty UrlValueDict instance is returned by the constructor when called with no parameters:
>>> d0 = UrlValueDict()
>>> len(d0)
0
If force_abs is True, then all paths are converted to absolute ones in the dictionary values.
>>> d = UrlValueDict(force_abs=True)
>>> d[1] = 'foo'
>>> for v in d.values(): print os.path.isabs(v.path)
True
>>> d = UrlValueDict(force_abs=False)
>>> d[2] = 'foo'
>>> for v in d.values(): print os.path.isabs(v.path)
False
Generic Python programming utility functions.
This module collects general utility functions, not specifically related to GC3Libs. A good rule of thumb for determining if a function or class belongs in here is the following: place a function or class in this module if you could copy its code into the sources of a different project and it would not stop working.
A generic enumeration class. Inspired by: http://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python/2182437#2182437 with some more syntactic sugar added.
An Enum class must be instantiated with a list of strings, which make up the enumeration labels:
>>> Animal = Enum('CAT', 'DOG')
Each label is available as an instance attribute, evaluating to itself:
>>> Animal.DOG
'DOG'
>>> Animal.CAT == 'CAT'
True
As a consequence, you can test for presence of an enumeration label by string value:
>>> 'DOG' in Animal
True
Finally, enumeration labels can also be iterated upon:
>>> for a in Animal: print a
DOG
CAT
A list of messages with timestamps and (optional) tags.
The append method should be used to add a message to the History:
>>> L = History()
>>> L.append('first message')
>>> L.append('second one')
The last method returns the text of the last message appended, with its timestamp:
>>> L.last().startswith('second one at')
True
Iterating over a History instance returns message texts in the temporal order they were added to the list, with their timestamp:
>>> for msg in L: print(msg)
first message ...
Append a message to this History.
The message is timestamped with the time at the moment of the call.
The optional tags argument is a sequence of strings. Tags are recorded together with the message and may be used to filter log messages given a set of labels. (This feature is not yet implemented.)
Return a formatted message, appending to the message its timestamp in human readable format.
Return text of last message appended. If log is empty, return empty string.
An object that is greater-than any other object.
>>> x = PlusInfinity()
>>> x > 1
True
>>> 1 < x
True
>>> 1245632479102509834570124871023487235987634518745 < x
True
>>> x > sys.maxint
True
>>> x < sys.maxint
False
>>> sys.maxint < x
True
PlusInfinity objects are actually larger than any given Python object:
>>> x > 'azz'
True
>>> x > object()
True
Note that PlusInfinity is a singleton, therefore you always get the same instance when calling the class constructor:
>>> x = PlusInfinity()
>>> y = PlusInfinity()
>>> x is y
True
Relational operators try to return the correct value when comparing PlusInfinity to itself:
>>> x < y
False
>>> x <= y
True
>>> x == y
True
>>> x >= y
True
>>> x > y
False
Derived classes of Singleton can have only one instance in the running Python interpreter.
>>> x = Singleton()
>>> y = Singleton()
>>> x is y
True
A dict-like object, whose keys can be accessed with the usual ‘[...]’ lookup syntax, or with the ‘.’ get attribute syntax.
Examples:
>>> a = Struct()
>>> a['x'] = 1
>>> a.x
1
>>> a.y = 2
>>> a['y']
2
Values can also be initially set by specifying them as keyword arguments to the constructor:
>>> a = Struct(z=3)
>>> a['z']
3
>>> a.z
3
Like dict instances, Struct instances have a copy method to get a shallow copy of the instance:
>>> b = a.copy()
>>> b.z
3
Return a (shallow) copy of this Struct instance.
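A minimal sketch of how such a dict-with-attribute-access class can be written; this is an illustration of the idea, not the actual GC3Libs implementation:

```python
class Struct(dict):
    """Dict whose keys are also reachable as attributes (illustrative sketch)."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)
    def __setattr__(self, name, value):
        self[name] = value

s = Struct(z=3)
s.x = 1
# s['x'] == 1 and s.z == 3; note that in this sketch the copy() method
# inherited from dict returns a plain dict, not a Struct.
```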
Rename the filesystem entry at path by appending a unique numerical suffix; return new name.
For example,
>>> import tempfile
>>> path = tempfile.mkstemp()[1]
>>> path1 = backup(path)
>>> os.path.exists(path + '.~1~')
True
Re-create the file and make a second backup: this time the file will be renamed with a .~2~ extension:
>>> open(path, 'w').close()
>>> path2 = backup(path)
>>> os.path.exists(path + '.~2~')
True
(clean up after the test)
>>> os.remove(path+'.~1~')
>>> os.remove(path+'.~2~')
Return base name without the extension.
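A likely equivalent using the standard os.path functions (the exact semantics, e.g. how multiple extensions are handled, are an assumption here):

```python
import os.path

def basename_sans(path):
    # drop the directory part, then the (last) extension
    return os.path.splitext(os.path.basename(path))[0]

basename_sans('/tmp/data/report.txt')  # 'report'
basename_sans('archive.tar.gz')        # 'archive.tar' (only the last extension goes)
```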
Cache the result of a (nullary) method invocation for a given amount of time. Use as a decorator on object methods whose results are to be cached.
Store the result of the first invocation of the decorated method; if another invocation happens before lapse seconds have passed, return the cached value instead of calling the real function again. If a new call happens after the grace period has expired, call the real function and store the result in the cache.
Note: Do not use with methods that take keyword arguments, as they will be discarded! In addition, arguments are compared to elements in the cache by identity, so invoking the same method with equal but distinct objects will result in two separate copies of the result being computed and stored in the cache.
Cache results and timestamps are stored into the objects’ _cache_value and _cache_last_updated attributes, so the caches are destroyed with the object when it goes out of scope.
The working of the cached method can be demonstrated by the following simple code:
>>> class X(object):
... def __init__(self):
... self.times = 0
... @cache_for(2)
... def foo(self):
... self.times += 1
... return ("times effectively run: %d" % self.times)
>>> x = X()
>>> x.foo()
'times effectively run: 1'
>>> x.foo()
'times effectively run: 1'
>>> time.sleep(3)
>>> x.foo()
'times effectively run: 2'
Concatenate the contents of all args into output. Both output and each of the args can be a file-like object or a string (indicating the path of a file to open).
If append is True, then output is opened in append-only mode; otherwise it is overwritten.
Copy src to dst, descending it recursively if necessary.
Copy a file from src to dst; return True if the copy was actually made. If overwrite is False (default), an existing destination entry is left unchanged and False is returned.
If link is True, an attempt at hard-linking is done first; failing that, we copy the source file onto the destination one. Permission bits and modification times are copied as well.
If dst is a directory, a file with the same basename as src is created (or overwritten) in the directory specified.
Recursively copy an entire directory tree rooted at src. If overwrite is False (default), entries that already exist in the destination tree are left unchanged and not overwritten.
See also: shutil.copytree.
Return number of items in seq that match predicate. Argument predicate should be a callable that accepts one argument and returns a boolean.
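This kind of helper is a one-liner with a generator expression; a sketch of the described behavior:

```python
def count(seq, predicate):
    # tally the items for which `predicate` returns a true value
    return sum(1 for item in seq if predicate(item))

count([1, 2, 3, 4], lambda x: x % 2 == 0)  # 2
```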
Decorator to define properties with a simplified syntax in Python 2.4. See http://code.activestate.com/recipes/410698-property-decorator-for-python-24/#c6 for details and examples.
Ensure that configuration file filename exists; possibly copying it from the specified template_filename.
Return True if a file with the specified name exists in the configuration directory. If not, try to copy the template file over and then return False; in case the copy operations fails, a NoConfigurationFile exception is raised.
The template_filename is always resolved relative to GC3Libs’ ‘package resource’ directory (i.e., the etc/ directory in the sources). If template_filename is None, then it is assumed to be the base name of filename.
Same as os.path.dirname but return . in case of path names with no directory component.
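A sketch of the described behavior, building on os.path.dirname (which returns the empty string for bare file names):

```python
import os.path

def dirname(path):
    # like os.path.dirname, but never return the empty string
    return os.path.dirname(path) or '.'

dirname('foo.txt')   # '.'
dirname('/tmp/foo')  # '/tmp'
```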
Return the first element of sequence or iterator seq. Raise TypeError if the argument does not implement either of the two interfaces.
Examples:
>>> s = [0, 1, 2]
>>> first(s)
0
>>> s = {'a':1, 'b':2, 'c':3}
>>> first(sorted(s.keys()))
'a'
Return the contents of template, substituting all occurrences of Python formatting directives ‘%(key)s’ with the corresponding values taken from dictionary extra_args.
If template is an object providing a read() method, that is used to gather the template contents; else, if a file named template exists, the template contents are read from it; otherwise, template is treated like a string providing the template contents itself.
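The lookup order just described can be sketched as follows; the function name `fill_template` and the exact precedence details are illustrative assumptions:

```python
import os

def fill_template(template, **extra_args):
    """Sketch of the described lookup order (illustrative, not the real code)."""
    if hasattr(template, 'read'):
        text = template.read()            # file-like object: use its contents
    elif os.path.exists(str(template)):
        with open(template) as stream:    # name of an existing file: read it
            text = stream.read()
    else:
        text = str(template)              # otherwise: the template text itself
    # interpolate '%(key)s' directives from the keyword arguments
    return text % extra_args

fill_template('Hello %(name)s!', name='world')  # 'Hello world!'
```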
Like Python’s getattr, but perform a recursive lookup if name contains any dots.
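The recursive lookup simply follows each dotted component in turn; a self-contained sketch (the helper name is illustrative):

```python
def getattr_nested(obj, name):
    # follow each dotted component, e.g. 'a.b.c' -> getattr(getattr(obj,'a'),'b')...
    for part in name.split('.'):
        obj = getattr(obj, part)
    return obj

getattr_nested('hello', '__class__.__name__')  # 'str'
```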
Return if_true if argument test evaluates to True, return if_false otherwise.
This is just a workaround for Python 2.4 lack of the conditional assignment operator:
>>> a = 1
>>> b = ifelse(a, "yes", "no"); print b
yes
>>> b = ifelse(not a, 'yay', 'nope'); print b
nope
Iterate over all values greater or equal than start and less than stop. (Or the reverse, if step < 0.)
Example:
>>> list(irange(1, 5))
[1, 2, 3, 4]
>>> list(irange(0, 8, 3))
[0, 3, 6]
>>> list(irange(8, 0, -2))
[8, 6, 4, 2]
Unlike the built-in range function, irange also accepts floating-point values:
>>> list(irange(0.0, 1.0, 0.5))
[0.0, 0.5]
Also unlike the built-in range, both start and stop have to be specified:
>>> irange(42)
Traceback (most recent call last):
...
TypeError: irange() takes at least 2 arguments (1 given)
Of course, a null step is not allowed:
>>> list(irange(1, 2, 0))
Traceback (most recent call last):
...
AssertionError: Null step in irange.
Lock the file at path. Raise a LockTimeout error if the lock cannot be acquired within timeout seconds.
Return a lock object that should be passed unchanged to the gc3libs.utils.unlock function.
If path points to a non-existent location, an empty file is created before attempting to lock (unless create is False). An attempt is made to remove the file in case an error happens.
See also: gc3libs.utils.unlock()
Like os.makedirs, but does not throw an exception if path already exists.
Like os.makedirs, but if path already exists and is not empty, rename the existing one to a backup name (see the backup function).
Unlike os.makedirs, no exception is thrown if the directory already exists and is empty, but the target directory permissions are not altered to reflect mode.
Print dictionary instance D in a YAML-like format. Each output line consists of:
- indent spaces,
- the key name,
- a colon character :,
- the associated value.
If the total line length exceeds width, the value is printed on the next line, indented by further step spaces; a value of 0 for width disables this line wrapping.
Optional argument only_keys can be a callable that must return True when called with keys that should be printed, or a list of key names to print.
Dictionary instances appearing as values are processed recursively (up to maxdepth nesting). Each nested instance is printed indented step spaces from the enclosing dictionary.
Return a positive integer, whose value is guaranteed to be monotonically increasing across different invocations of this function, and also across separate instances of the calling program.
Example:
(create a temporary file to avoid bug #)
>>> import tempfile, os
>>> (fd, tmp) = tempfile.mkstemp()
>>> n = progressive_number(id_filename=tmp)
>>> m = progressive_number(id_filename=tmp)
>>> m > n
True
If you specify a positive integer as argument, then a list of monotonically increasing numbers is returned. For example:
>>> ls = progressive_number(5, id_filename=tmp)
>>> len(ls)
5
>>> os.remove(tmp)
In other words, progressive_number(N) is equivalent to:
nums = [ progressive_number() for n in range(N) ]
only more efficient, because it has to obtain and release the lock only once.
After every invocation of this function, the last returned number is stored into the file passed as argument id_filename. If the file does not exist, an attempt to create it is made before allocating an id; the method can raise an IOError or OSError if id_filename cannot be opened for writing.
Note: as file-level locking is used to serialize access to the counter file, this function may block (default timeout: 30 seconds) while trying to acquire the lock, or raise a LockTimeout exception if this fails.
Raises: LockTimeout, IOError, OSError.
Returns: a positive integer number, monotonically increasing with every call; a list of such numbers if argument qty is a positive integer.
Return the whole contents of the file at path as a single string.
Example:
>>> read_contents('/dev/null')
''
>>> import tempfile
>>> (fd, tmpfile) = tempfile.mkstemp()
>>> w = open(tmpfile, 'w')
>>> w.write('hey')
>>> w.close()
>>> read_contents(tmpfile)
'hey'
(If you run this test, remember to do cleanup afterwards)
>>> os.remove(tmpfile)
Return a string describing Python object obj.
Avoids calling any Python magic methods, so should be safe to use as a ‘last resort’ in implementation of __str__ and __repr__.
Function decorator: sets the docstring of the following function to the one of referenced_fn.
Intended usage is for setting docstrings on methods redefined in derived classes, so that they inherit the docstring from the corresponding abstract method in the base class.
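A decorator of this kind is short to write; a sketch of the described behavior:

```python
def same_docstring_as(referenced_fn):
    # decorator factory: copy the docstring of `referenced_fn` onto the target
    def decorate(fn):
        fn.__doc__ = referenced_fn.__doc__
        return fn
    return decorate

def base():
    """Original docstring."""

@same_docstring_as(base)
def derived():
    pass
# derived.__doc__ == 'Original docstring.'
```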
Like os.path.samefile but return False if either one of the paths does not exist.
Convert word to a Python boolean value and return it. The strings true, yes, on, 1 (with any capitalization and any amount of leading and trailing spaces) are recognized as meaning Python True:
>>> string_to_boolean('yes')
True
>>> string_to_boolean('Yes')
True
>>> string_to_boolean('YES')
True
>>> string_to_boolean(' 1 ')
True
>>> string_to_boolean('True')
True
>>> string_to_boolean('on')
True
Any other word is considered as boolean False:
>>> string_to_boolean('no')
False
>>> string_to_boolean('No')
False
>>> string_to_boolean('Nay!')
False
>>> string_to_boolean('woo-hoo')
False
This includes also the empty string and whitespace-only:
>>> string_to_boolean('')
False
>>> string_to_boolean(' ')
False
Iterate over lines in iterable and return each of them stripped of leading and trailing blanks.
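As a generator, this is a two-line sketch:

```python
def stripped(iterable):
    # yield each line without leading/trailing whitespace
    for line in iterable:
        yield line.strip()

list(stripped(['  a \n', '\tb\n']))  # ['a', 'b']
```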
Test for access to a path; if access is not granted, raise an instance of exception with an appropriate error message. This is a front-end to os.access(); see that function for the exact semantics and the meaning of path and mode.
If the test succeeds, True is returned:
>>> test_file('/bin/cat', os.F_OK)
True
>>> test_file('/bin/cat', os.R_OK)
True
>>> test_file('/bin/cat', os.X_OK)
True
>>> test_file('/tmp', os.X_OK)
True
However, if the test fails, then an exception is raised:
>>> test_file('/bin/cat', os.W_OK)
Traceback (most recent call last):
...
RuntimeError: Cannot write to file '/bin/cat'.
If the optional argument isdir is True, then additionally test that path points to a directory inode:
>>> test_file('/tmp', os.F_OK, isdir=True)
True
>>> test_file('/bin/cat', os.F_OK, isdir=True)
Traceback (most recent call last):
...
RuntimeError: Expected '/bin/cat' to be a directory, but it's not.
Convert string s to an integer number of bytes. Suffixes like ‘KB’, ‘MB’, ‘GB’ (up to ‘YB’), with or without the trailing ‘B’, are allowed and properly accounted for. Case is ignored in suffixes.
Examples:
>>> to_bytes('12')
12
>>> to_bytes('12B')
12
>>> to_bytes('12KB')
12000
>>> to_bytes('1G')
1000000000
Binary units ‘KiB’, ‘MiB’ etc. are also accepted:
>>> to_bytes('1KiB')
1024
>>> to_bytes('1MiB')
1048576
Iterate over all unique elements in sequence seq.
Distinct values are returned in a sorted fashion.
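Taking the description literally (collapse duplicates, then yield distinct values in sorted order), a sketch could look like this; whether the real implementation materializes the whole sequence is an assumption:

```python
def unique_sorted(seq):
    # collapse duplicates with a set, then yield the values in sorted order
    return iter(sorted(set(seq)))

list(unique_sorted([3, 1, 2, 1, 3]))  # [1, 2, 3]
```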
Release a previously-acquired lock.
Argument lock should be the return value of a previous gc3libs.utils.lock call.
See also: gc3libs.utils.lock()
Overwrite the contents of the file at path with the given data. If the file does not exist, it is created.
Example:
>>> import tempfile
>>> (fd, tmpfile) = tempfile.mkstemp()
>>> write_contents(tmpfile, 'big data here')
>>> read_contents(tmpfile)
'big data here'
(If you run this test, remember to clean up afterwards)
>>> os.remove(tmpfile)
Warning
This module is deprecated and will be removed in a future release. Do not depend on it.
Support for running a generic application with the GC3Libs.
Support for AppPot-hosted applications.
For more details about AppPot, visit: <http://apppot.googlecode.com>
Base class for AppPot-hosted applications. Provides the same interface as the base Application and runs the specified command in an AppPot instance.
In addition to the standard Application keyword arguments, the following ones can be given to steer the AppPot execution:
Specialized support for computational jobs running GAMESS-US.
Specialized AppPotApplication object to submit computational jobs running GAMESS-US.
This class makes no check or guarantee that a GAMESS-US executable will be available in the executing AppPot instance: the apppot_img and apppot_tag keyword arguments can be used to select the AppPot system image to run this application; see the AppPotApplication for information.
The __init__ construction interface is compatible with the one used in GamessApplication. The only required parameter for construction is the input file name; any other positional argument names an additional input file, which is added to the Application.inputs list but not otherwise treated specially.
Any other keyword parameter that is valid in the Application class can be used here as well, with the exception of input and output. Note that a GAMESS-US job is always submitted with join = True, therefore any stderr setting is ignored.
Specialized Application object to submit computational jobs running GAMESS-US.
The only required parameter for construction is the input file name; subsequent positional arguments are additional input files, which are added to the Application.inputs list but not otherwise treated specially.
The verno parameter is used to request a specific version of GAMESS-US; if the default value None is used, the default version of GAMESS-US at the executing site is run.
Any other keyword parameter that is valid in the Application class can be used here as well, with the exception of input and output. Note that a GAMESS-US job is always submitted with join = True, therefore any stderr setting is ignored.
Append to log the termination status line as extracted from the GAMESS ‘.out’ file.
The job exit code .execution.exitcode is (re)set according to the following table:
Exit code | Meaning |
---|---|
0 | the output file contains the string EXECUTION OF GAMESS TERMINATED normally |
1 | the output file contains the string EXECUTION OF GAMESS TERMINATED -ABNORMALLY- |
2 | the output file contains the string ddikick exited unexpectedly |
70 (os.EX_SOFTWARE) | the output file cannot be read or does not match any of the above patterns |
Specialized support for computational jobs running programs in the Rosetta suite.
Specialized Application object to submit one run of a single application in the Rosetta suite.
Required parameters for construction:
- application: name of the Rosetta application to call (e.g., “docking_protocol” or “relax”)
- inputs: a dict instance, keys are Rosetta -in:file:* options, values are the (local) path names of the corresponding files. (Example: inputs={"-in:file:s":"1brs.pdb"})
- outputs: list of output file names to fetch after Rosetta has finished running.
Optional parameters:
- flags_file: path to a local file containing additional flags for controlling Rosetta invocation; if None, a local configuration file will be used.
- database: (local) path to the Rosetta DB; if this is not specified, then it is assumed that the correct location will be available at the remote execution site as environment variable ROSETTA_DB_LOCATION
- arguments: If present, they will be appended to the Rosetta application command line.
Extract output files from the tar archive created by the ‘rosetta.sh’ script.
Specialized Application class for executing a single run of the Rosetta “docking_protocol” application.
Currently used in the gdocking app.
Authentication support for the GC3Libs.
A mish-mash of authorization functions.
This class actually serves the purposes of:
FIXME
There are several problems with this approach:
Add the specified keyword arguments as initialization parameters to all the configured auth classes.
Parameters that have already been specified are silently overwritten.
Return an instance of the Auth class corresponding to the given auth_name, or raise an exception if instantiating that class raised an unrecoverable exception in past calls.
Additional keyword arguments are passed unchanged to the class constructor and can override values specified at configuration time.
Instances are remembered for the lifetime of the program; if an instance of the given class is already present in the cache, that one is returned; otherwise, an instance is constructed with the given parameters.
Caution
The params keyword arguments are only used if a new instance is constructed and are silently ignored if the cached instance is returned.
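The caching behavior described above can be sketched as follows. All names here (the configuration mapping, the cache attribute) are assumptions for illustration; the real gc3libs.authentication.Auth class is more elaborate:

```python
class Auth:
    """Sketch of per-name caching of auth instances (hypothetical)."""

    def __init__(self, config):
        self._config = config  # maps auth_name -> (class, default params)
        self._cache = {}

    def get(self, auth_name, **params):
        if auth_name not in self._cache:
            cls, defaults = self._config[auth_name]
            kwargs = dict(defaults)
            kwargs.update(params)  # call-time params override config values
            self._cache[auth_name] = cls(**kwargs)
        # NOTE: `params` are silently ignored when a cached instance exists.
        return self._cache[auth_name]
```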
Auth proxy to use when no auth is needed.
Authentication support for accessing resources through the SSH protocol.
Authentication support with Grid proxy certificates.
Interface to different resource management systems for the GC3Libs.
Base class for interfacing with a computing resource.
The following construction parameters are also set as instance attributes. All of them are mandatory, except auth.
Attribute name | Expected Type | Meaning |
---|---|---|
name | string | A unique identifier for this resource, used for generating error messages. |
architecture | set of Run.Arch values | Should contain one entry per each architecture supported. Valid architecture values are constants in the gc3libs.Run.Arch class. |
auth | string | A gc3libs.authentication.Auth instance that will be used to access the computational resource associated with this backend. The default value None means that no authentication credentials are needed (e.g., access to the resource has been pre-authenticated, or is managed outside of GC3Pie). |
max_cores | int | Maximum number of CPU cores that GC3Pie can allocate on this resource. |
max_cores_per_job | int | Maximum number of CPU cores that GC3Pie can allocate on this resource for a single job. |
max_memory_per_core | Memory | Maximum memory that GC3Pie can allocate to jobs on this resource. The value is per core, so the actual amount allocated to a single job is the value of this entry multiplied by the number of cores requested by the job. |
max_walltime | Duration | Maximum wall-clock time that can be allotted to a single job running on this resource. |
The above should be considered immutable attributes: they are specified at construction time and never changed afterwards.
The following attributes are instead dynamically provided (i.e., defined by the get_resource_status() method or similar), thus can change over the lifetime of the object:
Attribute name | Type |
---|---|
free_slots | int |
user_run | int |
user_queued | int |
queued | int |
Decorator: mark a function as requiring authentication.
Each invocation of the decorated function causes a call to the get method of the authentication object (configured with the auth parameter to the class constructor).
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Gracefully close any LRMS-dependent resources, e.g., the transport layer.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Update the status of the resource associated with this LRMS instance in-place. Return the updated Resource object.
Retrieve job output files into the local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from the remote file remote_filename and write them into local_file. If size is None (default), then the contents of the remote file are fetched from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the remote job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports; return False otherwise.
Job control on ARC0 resources.
Manage jobs through the ARC middleware.
In addition to the attributes of the base LRMS class, the following construction parameters are defined:

Attribute name | Type | Required? |
---|---|---|
arc_ldap | string | |
frontend | string | yes |
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports; return False otherwise.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Get dynamic information from the ARC infosystem and set attributes on the current object accordingly.
The following attributes are set:
Retrieve job output files into the local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from the remote file remote_filename and write them into local_file. If size is None (default), then the contents of the remote file are fetched from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the ARC0 job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
The mapping of ARC0 job statuses to Run.State is as follows:
ARC job status | Run.State |
---|---|
ACCEPTED | SUBMITTED |
ACCEPTING | SUBMITTED |
SUBMITTING | SUBMITTED |
PREPARING | SUBMITTED |
PREPARED | SUBMITTED |
INLRMS:Q | SUBMITTED |
INLRMS:R | RUNNING |
INLRMS:O | STOPPED |
INLRMS:E | STOPPED |
INLRMS:X | STOPPED |
INLRMS:S | STOPPED |
INLRMS:H | STOPPED |
FINISHING | RUNNING |
EXECUTED | RUNNING |
FINISHED | TERMINATING |
CANCELING | TERMINATING |
FAILED | TERMINATING |
KILLED | TERMINATED |
DELETED | TERMINATED |
Any other ARC job status is mapped to Run.State.UNKNOWN. In particular, querying a job ID that is not found in the ARC information system will result in UNKNOWN state, as will querying a job that has just been submitted and has not yet found its way to the infosys.
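The mapping above is essentially a lookup table with a default; it can be sketched as a plain dict. For self-containment the Run.State values are shown here as strings; in GC3Pie they are constants on gc3libs.Run.State:

```python
# Lookup table implementing the ARC0 status -> Run.State mapping above.
ARC0_STATE_MAP = {
    'ACCEPTED':   'SUBMITTED',
    'ACCEPTING':  'SUBMITTED',
    'SUBMITTING': 'SUBMITTED',
    'PREPARING':  'SUBMITTED',
    'PREPARED':   'SUBMITTED',
    'INLRMS:Q':   'SUBMITTED',
    'INLRMS:R':   'RUNNING',
    'INLRMS:O':   'STOPPED',
    'INLRMS:E':   'STOPPED',
    'INLRMS:X':   'STOPPED',
    'INLRMS:S':   'STOPPED',
    'INLRMS:H':   'STOPPED',
    'FINISHING':  'RUNNING',
    'EXECUTED':   'RUNNING',
    'FINISHED':   'TERMINATING',
    'CANCELING':  'TERMINATING',
    'FAILED':     'TERMINATING',
    'KILLED':     'TERMINATED',
    'DELETED':    'TERMINATED',
}

def arc0_to_run_state(arc_status):
    """Map an ARC0 job status to a Run.State, defaulting to UNKNOWN."""
    return ARC0_STATE_MAP.get(arc_status, 'UNKNOWN')
```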
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports; return False otherwise.
Job control using libarcclient, which can submit to all EMI-supported resources.
Manage jobs through ARC’s libarcclient.
Cancel a running job. If app is associated to a queued or running remote job, tell the execution middleware to cancel it.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports; return False otherwise.
Free up any remote resources used for the execution of app. In particular, this should delete any remote directories and files.
Calling this method when app.execution.state is anything other than TERMINATED results in undefined behavior and will likely be the cause of errors later on. Be cautious.
Get dynamic information from the ARC infosystem and set attributes on the current object accordingly.
The following attributes are set:
Retrieve job output files into the local directory download_dir (which must already exist). Will not overwrite existing files, unless the optional argument overwrite is True.
Download size bytes (at offset offset from the start) from the remote file remote_filename and write them into local_file. If size is None (default), then the contents of the remote file are fetched from offset to the end.
Argument local_file is either a local path name (string), or a file-like object supporting a .write() method. If local_file is a path name, it is created if not existent, otherwise overwritten.
Argument remote_filename is the name of a file in the remote job “sandbox”.
Any exception raised by operations will be passed through.
Submit an Application instance to the configured computational resource; return a gc3libs.Job instance for controlling the submitted job.
This method only returns if the job is successfully submitted; upon any failure, an exception is raised.
Note:
- job.state is not altered; it is the caller’s responsibility to update it.
- the job object may be updated with any information that is necessary for this LRMS to perform further operations on it.
Query the state of the ARC job associated with app and update app.execution.state accordingly. Return the corresponding Run.State; see Run.State for more details.
The mapping of ARC job statuses to Run.State is as follows:
ARC job status | Run.State |
---|---|
ACCEPTED | SUBMITTED |
SUBMITTING | SUBMITTED |
PREPARING | SUBMITTED |
QUEUING | SUBMITTED |
RUNNING | RUNNING |
FINISHING | RUNNING |
FINISHED | TERMINATING |
FAILED | TERMINATING |
KILLED | TERMINATED |
DELETED | TERMINATED |
HOLD | STOPPED |
OTHER | UNKNOWN |
Any other ARC job status is mapped to Run.State.UNKNOWN.
Return True if the list of files is expressed in one of the file transfer protocols the LRMS supports; return False otherwise.
gc3utils - A simple command-line frontend to distributed resources
This is a generic front-end code; actual implementation of commands can be found in gc3utils/commands.py
Generic front-end function to invoke the commands in gc3utils/commands.py