This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page on GitHub.com.
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
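Put as code, this interaction is a simple sense/act/reward loop. The sketch below is purely illustrative; the environment and classifier objects and their methods are hypothetical stand-ins, not the xcs package's actual API:
def run(environment, classifier):
    # Purely illustrative: 'environment' and 'classifier' are hypothetical
    # stand-ins, not objects from the xcs package.
    while environment.more():                  # reward cycles remain?
        situation = environment.sense()        # fixed-width string of bits
        action = classifier.choose(situation)  # pick from the predetermined choices
        reward = environment.execute(action)   # signal the quality of the decision
        classifier.update(action, reward)      # adjust the rules that were used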
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but also a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Because XCS is both a reinforcement learning algorithm and an evolutionary algorithm, discussing it requires familiarity with terms pertaining to both.
A situation is just another term for an input received by the classifier.
An action is an output produced by the classifier.
A rule is a pairing between a condition, describing which situations can be matched, and a suggested action. Each rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule as well, but these are the most important ones.)
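Conditions are typically written as ternary strings in which # is a wildcard matching either bit value. Here is a minimal, purely illustrative matcher (the xcs package provides its own condition matching):
def matches(condition, situation):
    # '#' in the condition matches either bit; other positions must agree.
    return all(c == '#' or c == s for c, s in zip(condition, situation))

# '01#1#' matches '01010' and '01111', but not '11010':
assert matches('01#1#', '01010')
assert matches('01#1#', '01111')
assert not matches('01#1#', '11010')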
The population is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
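As a rough illustration of this virtual representation (a sketch only, not the package's internal data structure), a population can be stored as one entry per distinct rule with a numerosity count:
population = {}  # maps (condition, action) pairs to numerosity counts

def add_rule(condition, action):
    # Adding a duplicate rule bumps its numerosity instead of storing a copy.
    key = (condition, action)
    population[key] = population.get(key, 0) + 1

add_rule('01#1#', True)
add_rule('01#1#', True)
print(population)  # {('01#1#', True): 2} -- one entry with numerosity 2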
The match set is the set of rules which match against the current situation.
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
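A minimal sketch of that relationship, assuming rules are represented as (condition, action) pairs (illustrative only; this helper is not part of the xcs package):
from collections import defaultdict

def action_sets(match_set):
    # Group the match set's rules by the action they recommend; each group
    # is a potential action set, and selecting an action selects its group.
    groups = defaultdict(list)
    for condition, action in match_set:
        groups[action].append(condition)
    return groups

match_set = [('01#1#', True), ('0##1#', True), ('01###', False)]
print(action_sets(match_set)[True])   # the action set if True is selected
print(action_sets(match_set)[False])  # the competing action set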
The reward is the signal the algorithm attempts to maximize. It takes the form of a floating point value passed to the action set via the accept_payoff() method.
A prediction is an estimate by a rule or an action set as to the reward expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
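A rough sketch of that calculation, with made-up values (not the package's implementation):
def action_set_prediction(rules):
    # rules: (prediction, fitness) pairs for the rules in one action set.
    total_fitness = sum(fitness for _, fitness in rules)
    if not total_fitness:
        return 0.0
    return sum(p * fitness for p, fitness in rules) / total_fitness

# Two fit, confident rules outweigh one unfit outlier:
print(action_set_prediction([(1.0, .9), (1.0, .8), (0.2, .1)]))  # ~0.956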
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selecting which rules will act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, fitness is based on the accuracy of each rule's reward prediction rather than its magnitude. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with a very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction magnitude helps XCS find rules that describe the problem in a stable, predictable way.
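The accuracy measure underlying this fitness calculation, as described in the Butz and Wilson paper, treats a rule as fully accurate while its prediction error stays below a tolerance and penalizes it sharply beyond that. A rough sketch (parameter names and values here are illustrative, not the package's defaults):
def accuracy(error, tolerance=0.01, alpha=0.1, power=5):
    # Fully accurate below the error tolerance; sharp falloff above it.
    if error < tolerance:
        return 1.0
    return alpha * (error / tolerance) ** -power

print(accuracy(0.001))  # 1.0 -- a stable, predictable rule
print(accuracy(0.05))   # 3.2e-05 -- an erratic rule, heavily penalized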
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, CPython comes with the package installation tool, pip, as part of the standard distribution. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward:
pip install xcs
If all goes as planned, you should see a message like this:
Successfully installed xcs-1.0.0
If for some reason you are unable to use pip, you can still install xcs manually. The latest release can be downloaded from the project home page on GitHub. Download the zip file, unpack it, and cd into the directory. Then run:
python setup.py install
You should see a message indicating that the package was successfully installed.
It is recommended that you also install numpy if you are using a Python distribution that does not come with it already installed. The xcs package will still work without numpy, but you should expect slower execution speeds. To install numpy with pip:
pip install numpy
For instructions on how to manually install numpy, visit the numpy installation instructions page at SciPy.org.
Let's start things off with a quick test, to verify that everything has been installed properly. First, fire up the Python interpreter. We'll set up Python's built-in logging system so we can see the test's progress.
import logging
logging.root.setLevel(logging.INFO)
Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
import xcs
xcs.test()
INFO:xcs.problems:Possible actions:
INFO:xcs.problems: False
INFO:xcs.problems: True
INFO:xcs.problems:Steps completed: 0
INFO:xcs.problems:Average reward per step: 0.00000
INFO:xcs.problems:Steps completed: 100
INFO:xcs.problems:Average reward per step: 0.46000
INFO:xcs.problems:Steps completed: 200
INFO:xcs.problems:Average reward per step: 0.52000
.
.
.
00#11###### => True
Time Stamp: 9992
Average Reward: 1.0
Error: 0.0
Fitness: 0.78492797199
Experience: 535
Action Set Size: 84.31258405257977
Numerosity: 33
0#1#1#1#### => True
Time Stamp: 9978
Average Reward: 1.0
Error: 0.0
Fitness: 0.813621637225
Experience: 325
Action Set Size: 75.4028428842268
Numerosity: 37
INFO:xcs:Total time: 24.01673 seconds
Note that your results may vary somewhat from what is shown here. XCS relies on randomization to discover new rules, so unless you set the random seed with random.seed(), each run will be different.
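For example, to make a run repeatable (any fixed seed will do):
import random
import xcs

random.seed(42)  # repeated runs now produce identical results
xcs.test()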
Now we'll run through a quick demo of how to use existing algorithms and problems. This is essentially the same code that appears in the test() function we called above.
First, we're going to need to import a few things:
from xcs import XCSAlgorithm, LCS
from xcs.problems import MUXProblem, OnLineObserver
The XCSAlgorithm class contains the actual XCS algorithm implementation. The LCS class combines the selected algorithm with its state (a Population instance) to form a learning classifier system. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). OnLineObserver is a wrapper for problems which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 50,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
problem = OnLineObserver(MUXProblem(50000))
Next, we'll create the algorithm which will be used to manage the classifier population and learn the mapping defined by the problem we have selected:
algorithm = XCSAlgorithm(problem.get_possible_actions())
We ask the problem for the possible actions that can be taken, and pass them to the XCS algorithm so they can be used in covering operations. (Covering is the generation of a random classifier rule when too few match the current situation.) The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
algorithm.exploration_probability = .1
algorithm.discount_factor = 0
algorithm.do_GA_subsumption = True
algorithm.do_action_set_subsumption = True
Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a higher value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. We have also elected to turn on GA and action set subsumption, which help the system to converge to the minimal effective rule set more quickly in some types of problems.
Next, we create the classifier itself:
classifier = LCS(algorithm)
The LCS instance will automatically create an empty population for us if we do not provide one.
And finally, this is where all the magic happens:
classifier.learn(problem)
We pass the problem to the classifier and ask it to learn the appropriate input/output mapping. It executes training cycles until the problem dictates that training should stop. Note that if you wish to see the progress as the algorithm learns the problem, you will need to set the logging level to INFO, as described in the previous section, before calling the learn() method.
Now we can observe the fruits of our labors.
print(classifier.population)
00##00####1 => True
Time Stamp: 49977
Average Reward: 0.87821679153
Error: 0.202410212397
Fitness: 0.00194208652311
Experience: 0
Action Set Size: 1
Numerosity: 1
#001######1 => True
Time Stamp: 49979
Average Reward: 0.858333333333
Error: 0.179444444444
Fitness: 0.00747704267322
Experience: 0
Action Set Size: 1
Numerosity: 1
#00##1##### => True
.
.
.
001#0###### => False
Time Stamp: 49990
Average Reward: 1.0
Error: 0.0
Fitness: 0.932438846396
Experience: 752
Action Set Size: 13.743613415253174
Numerosity: 9
101#####0## => False
Time Stamp: 49971
Average Reward: 1.0
Error: 0.0
Fitness: 0.960029964669
Experience: 741
Action Set Size: 16.74403701264255
Numerosity: 10
This gives us a printout of each rule, in the form condition => action, followed by various stats about the rule pertaining to the algorithm we selected. The population can also be accessed as an iterable container:
print(len(classifier.population))

for condition, action in classifier.population:
    metadata = classifier.population.get_metadata(condition, action)
    if metadata.fitness > .5:
        print(condition, '=>', action, ' [%.5f]' % metadata.fitness)
To define a new problem type, inherit from the OnLineProblem abstract class defined in the xcs.problems submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
from xcs.problems import OnLineProblem

class HaystackProblem(OnLineProblem):
    pass
We defined a new class, HaystackProblem, to represent this test case, which inherits from OnLineProblem to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
from xcs.problems import OnLineProblem

class HaystackProblem(OnLineProblem):

    def __init__(self, training_cycles=1000, input_size=500):
        self.input_size = input_size
        self.possible_actions = (True, False)
        self.initial_training_cycles = training_cycles
        self.remaining_cycles = training_cycles
The input_size is saved as a member for later use. Likewise, the value of training_cycles is saved in two places: the remaining_cycles member, which tells the instance how many training cycles remain for the current run, and the initial_training_cycles member, which the instance will use to reset remaining_cycles to the original value for a new run.
We also defined the possible_actions member, which we set to (True, False). This is the value we will return when the algorithm asks for the possible actions. We will expect the algorithm to return True when the needle bit is set, and False when the needle bit is clear, in order to indicate that it has correctly identified the needle's location.
Now let's define some methods for the class. The OnLineProblem class defines several abstract methods:

get_possible_actions() should return the actions the algorithm can take.
reset() should restart the problem for a new run.
sense() should return a new input (the "situation").
execute(action) should accept an action from among those returned by get_possible_actions(), in response to the last situation that was returned by sense(). It should then return a reward value indicating how well the algorithm is doing at solving the problem.
more() should return a Boolean value to indicate whether the algorithm has remaining reward cycles in which to solve the problem.

Each of these abstract methods must be defined, or we will get a TypeError when we attempt to instantiate the class:
problem = HaystackProblem()
The implementations for the methods other than sense() and execute() will be trivial, so let's start with those:
from xcs.problems import OnLineProblem

class HaystackProblem(OnLineProblem):

    def __init__(self, training_cycles=1000, input_size=500):
        self.input_size = input_size
        self.possible_actions = (True, False)
        self.initial_training_cycles = training_cycles
        self.remaining_cycles = training_cycles

    def get_possible_actions(self):
        return self.possible_actions

    def reset(self):
        self.remaining_cycles = self.initial_training_cycles

    def more(self):
        return self.remaining_cycles > 0
Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among the inputs on each run.
import random

from xcs.problems import OnLineProblem

class HaystackProblem(OnLineProblem):

    def __init__(self, training_cycles=1000, input_size=500):
        self.input_size = input_size
        self.possible_actions = (True, False)
        self.initial_training_cycles = training_cycles
        self.remaining_cycles = training_cycles
        self.needle_index = random.randrange(input_size)

    def get_possible_actions(self):
        return self.possible_actions

    def reset(self):
        self.remaining_cycles = self.initial_training_cycles
        self.needle_index = random.randrange(self.input_size)

    def more(self):
        return self.remaining_cycles > 0
The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
import random

from xcs.problems import OnLineProblem
from xcs.bitstrings import BitString

class HaystackProblem(OnLineProblem):

    def __init__(self, training_cycles=1000, input_size=500):
        self.input_size = input_size
        self.possible_actions = (True, False)
        self.initial_training_cycles = training_cycles
        self.remaining_cycles = training_cycles
        self.needle_index = random.randrange(input_size)
        self.needle_value = None

    def get_possible_actions(self):
        return self.possible_actions

    def reset(self):
        self.remaining_cycles = self.initial_training_cycles
        self.needle_index = random.randrange(self.input_size)

    def more(self):
        return self.remaining_cycles > 0

    def sense(self):
        haystack = BitString.random(self.input_size)
        self.needle_value = haystack[self.needle_index]
        return haystack
Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback to make the problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low reward otherwise. Thus we will return a 1 when the action is the value of the needle bit, and a 0 otherwise. (In Python, True and False behave as 1 and 0 in arithmetic, so returning the comparison directly works.) We must also make sure to decrement the remaining cycles to prevent the problem from running indefinitely.
import random

from xcs.problems import OnLineProblem
from xcs.bitstrings import BitString

class HaystackProblem(OnLineProblem):

    def __init__(self, training_cycles=1000, input_size=500):
        self.input_size = input_size
        self.possible_actions = (True, False)
        self.initial_training_cycles = training_cycles
        self.remaining_cycles = training_cycles
        self.needle_index = random.randrange(input_size)
        self.needle_value = None

    def get_possible_actions(self):
        return self.possible_actions

    def reset(self):
        self.remaining_cycles = self.initial_training_cycles
        self.needle_index = random.randrange(self.input_size)

    def more(self):
        return self.remaining_cycles > 0

    def sense(self):
        haystack = BitString.random(self.input_size)
        self.needle_value = haystack[self.needle_index]
        return haystack

    def execute(self, action):
        self.remaining_cycles -= 1
        return action == self.needle_value
We have now defined all of the methods that OnLineProblem requires. Let's give it a test run.
import logging
import xcs
from xcs.problems import OnLineObserver
# Setup logging so we can see the test run as it progresses.
logging.root.setLevel(logging.INFO)
# Create the problem instance
problem = HaystackProblem()
# Wrap the problem instance in an observer so progress gets logged,
# and pass it on to the test() function.
xcs.test(problem=OnLineObserver(problem))
INFO:xcs.problems:Possible actions:
INFO:xcs.problems: False
INFO:xcs.problems: True
INFO:xcs.problems:Steps completed: 0
INFO:xcs.problems:Average reward per step: 0.00000
INFO:xcs.problems:Steps completed: 100
INFO:xcs.problems:Average reward per step: 0.60000
.
.
.
INFO:xcs.problems:Steps completed: 900
INFO:xcs.problems:Average reward per step: 0.51556
INFO:xcs.problems:Steps completed: 1000
INFO:xcs.problems:Average reward per step: 0.51400
INFO:xcs.problems:Run completed.
INFO:xcs.problems:Total steps: 1000
INFO:xcs.problems:Total reward received: 514.00000
INFO:xcs.problems:Average reward per step: 0.51400
INFO:xcs:Population:
######11#0##10##1000#0#1#1##010##1011000###11101#100##00#0011111#0001000###10#100#0#0#01100000#011###1#01##000##011#1##000###001#1#1#000001#0#10#0#00101##111#1#1#1#110##0#111000101010#101001#1#100#1##0##1#01#0#0111##1###1000#111111#01##0#0001###00##1#01#00#0100111#1010#00#11#111#11#0#101#11#10#10###1110#1000##0#0#0#1#10##011#101#00100##111##0##10010001010#0#00###1####10#00#1010010#0110111#10###011#1011110101010101#001###100011##0#010#001#01#1100#1#1#001##0#001##11#000110010###101111100110##11#0# => True
Time Stamp: 853
Average Reward: 1e-05
Error: 1e-05
Fitness: 1.0000000000000002e-06
Experience: 0
Action Set Size: 1
Numerosity: 1
1#01101010101#010#1#1010#0###00##1#1#0##010#01##1#0100100#00001000#000#0000#0#0#0#001#0110111##011011001101##00###111#00#11010#10#01#1#1#00#1111000001#0##101###1#11#1#0#00011010#1010##0##1000101#110##0#0#0#1#00#0100##1#0101###0010#0#0#1#0#0###101#001010###00#1##011001100#0##1111#111##0000000####000#11####0##10101#100100#0#1#00000#0###1011#1#1##0#0##00###11##110#1#10#01#111###01011#011001##1011#1000#10#11011#0010#1##00#11#0###0###1##00010#011110#1#0#000##101#100##10#0101#01#0#1#1110#1###111111### => True
Time Stamp: 604
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
.
.
.
#0010#11011#0#1###1#100100#001001##00000010#0#1010#1#11001#0#1##10##100110###0#110#001101###1#00#011###01111011100#1110#000#11001100010###010001#01#001110111##1001001#01#1101#1#11###101#00000#0#1011110#0#1#100#0##0#1#001#10#1###00100100###1#100#0#00101#1110#11#01#10#0111###10#00###10#00#110####1#1#000#1###1101#0100###111#1#1#11100#10##1##0##1#010#1#010#1#0#1#1101111#0#00010##0#0#00#0#0##10011##1##011001#110#0#0000#11111#1011##0#01#000##10###110010#01001#101001#11100111##101#10###0###0#111#110#00 => True
Time Stamp: 997
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
11#10#00#11##101#1#111100###001#1#1#00#01##0100##00#11###0##100111#10##0#100#1010#0#100011##1####1#01#10000##11101#0#00101101####1#100#01##10010#001#111##001##0#10100#01##11##0#1#000####1#01###0#0110#111#####110#100#0#00#1#1##1010011##1#1#0#1##0#01#0#101001#0#00110110#1110#11#0#0#10####101#111#100#0###0001111##10101100#10111111#11001#1#00#01000#101#1#11#1#1#0##1#0##010#0##010010100010##001110#0#10010##0#00##00#111#01#111#01110###11#110###01#1#1###001100000#10#11100##0#00#000001#1##010000###11111 => False
Time Stamp: 998
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 14.70052 seconds
Hmm, the algorithm didn't do so hot. Maybe we've found a weakness in the algorithm, or maybe some different parameter settings will improve its performance. Let's reduce the size of the haystack and give it more reward cycles so we can see whether it's learning at all.
problem = HaystackProblem(training_cycles=10000, input_size=100)
xcs.test(problem=OnLineObserver(problem))
INFO:xcs.problems:Possible actions:
INFO:xcs.problems: False
INFO:xcs.problems: True
INFO:xcs.problems:Steps completed: 0
INFO:xcs.problems:Average reward per step: 0.00000
INFO:xcs.problems:Steps completed: 100
INFO:xcs.problems:Average reward per step: 0.49000
.
.
.
INFO:xcs.problems:Steps completed: 9900
INFO:xcs.problems:Average reward per step: 0.50051
INFO:xcs.problems:Steps completed: 10000
INFO:xcs.problems:Average reward per step: 0.50070
INFO:xcs.problems:Run completed.
INFO:xcs.problems:Total steps: 10000
INFO:xcs.problems:Total reward received: 5007.00000
INFO:xcs.problems:Average reward per step: 0.50070
INFO:xcs:Population:
##1#1101011#01#1#0#1100000001##111000#110#0100#1###0#010101#111###0#0100#0#0#0#0#11###00#101##1#0000 => True
Time Stamp: 9953
Average Reward: 1e-05
Error: 1e-05
Fitness: 1.0000000000000002e-06
Experience: 0
Action Set Size: 1
Numerosity: 1
##1#1101011#01#1#0#110000000111111000##10#0100#1###0#010101##11###0#01#0#0#0#0#0#11#1#00#101##1#0000 => True
Time Stamp: 9953
Average Reward: 1e-05
Error: 1e-05
Fitness: 1.0000000000000002e-06
Experience: 0
Action Set Size: 1
Numerosity: 1
.
.
.
#0##00##00#0#101#11#1##10011010#0#111#11110100110#01#0#1101000#1#010#000#0#10#1#10##0#1#1#100###00## => True
Time Stamp: 9995
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
1###1#0##101#1#010#11001#1#101010#001#110#011100#00##110##0#0#001##1000#0#01#1#0#00111#001110##11010 => True
Time Stamp: 9996
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 142.01913 seconds
It appears the algorithm isn't learning at all, at least not at a visible rate. But after a few rounds of playing with the parameter values, it becomes apparent that with the correct settings, the algorithm can handle the new problem, provided it is given sufficient training cycles.
problem = HaystackProblem(training_cycles=10000, input_size=500)
algorithm = xcs.XCSAlgorithm(problem.get_possible_actions())
# Default parameter settings in test()
algorithm.exploration_probability = .1
algorithm.discount_factor = 0
# Modified parameter settings
algorithm.do_action_set_subsumption = False
algorithm.do_GA_subsumption = False
algorithm.wildcard_probability = .99
algorithm.deletion_threshold = 10
algorithm.mutation_probability = .0001
xcs.test(algorithm, problem=OnLineObserver(problem))
INFO:xcs.problems:Steps completed: 0
INFO:xcs.problems:Average reward per step: 0.00000
INFO:xcs.problems:Steps completed: 100
INFO:xcs.problems:Average reward per step: 0.35000
.
.
.
INFO:xcs.problems:Steps completed: 9900
INFO:xcs.problems:Average reward per step: 0.77162
INFO:xcs.problems:Steps completed: 10000
INFO:xcs.problems:Average reward per step: 0.77310
INFO:xcs.problems:Run completed.
INFO:xcs.problems:Total steps: 10000
INFO:xcs.problems:Total reward received: 7731.00000
INFO:xcs.problems:Average reward per step: 0.77310
INFO:xcs:Population:
.
.
.
#################################################################################################################################################################################################################################################################################################################################################################################################################################1##############################################1################################### => True
Time Stamp: 9987
Average Reward: 1.0
Error: 0.0
Fitness: 0.0306583767106
Experience: 2198
Action Set Size: 82.26092203736687
Numerosity: 2
########################################################################################################################################################################################################################################################################################0########################################################################################################################################1################################################################################## => True
Time Stamp: 9953
Average Reward: 1.0
Error: 0.0
Fitness: 0.0310299306533
Experience: 327
Action Set Size: 81.08109639159954
Numerosity: 2
.
.
.
#########################################################################################################1###########################1#####################################################################################################################################################################0#################################0#################################################################################0#0################################################################################## => False
Time Stamp: 9899
Average Reward: 1.0
Error: 0.0
Fitness: 0.813623338102
Experience: 18
Action Set Size: 47.56505816951893
Numerosity: 6
.
.
.
#################################################################################################################################################################################################################################################################################################################################################################################################################################1################################################################################## => True
Time Stamp: 9987
Average Reward: 1.0
Error: 0.0
Fitness: 0.943011424064
Experience: 3094
Action Set Size: 80.59117604558953
Numerosity: 60
INFO:xcs:Total time: 12.67547 seconds