faust
Python Stream Processing.
-
class faust.Service(*, beacon: mode.utils.types.trees.NodeT = None, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ An asyncio service that can be started/stopped/restarted.
- Keyword Arguments
beacon (NodeT) – Beacon used to track services in a graph.
loop (asyncio.AbstractEventLoop) – Event loop object.
-
abstract
= False¶
-
class Diag(service: mode.types.services.ServiceT) → None¶ Service diagnostics.
This can be used to track what your service is doing. For example if your service is a Kafka consumer with a background thread that commits the offset every 30 seconds, you may want to see when this happens:
DIAG_COMMITTING = 'committing'

class Consumer(Service):

    @Service.task
    async def _background_commit(self) -> None:
        while not self.should_stop:
            await self.sleep(30.0)
            self.diag.set_flag(DIAG_COMMITTING)
            try:
                await self._consumer.commit()
            finally:
                self.diag.unset_flag(DIAG_COMMITTING)
The above code is setting the flag manually, but you can also use a decorator to accomplish the same thing:
@Service.timer(30.0)
async def _background_commit(self) -> None:
    await self.commit()

@Service.transitions_to(DIAG_COMMITTING)
async def commit(self) -> None:
    await self._consumer.commit()
-
set_flag
(flag: str) → None¶ - Return type
None
-
unset_flag
(flag: str) → None¶ - Return type
None
-
-
wait_for_shutdown
= False¶ Set to True if .stop must wait for the shutdown flag to be set.
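For example, a minimal sketch of a service that defers shutdown until its background task has finished flushing (the class and the cleanup step are illustrative, not part of the API):
class Flusher(Service):
    wait_for_shutdown = True

    @Service.task
    async def _flush(self) -> None:
        while not self.should_stop:
            await self.sleep(1.0)
        await self._flush_pending()  # hypothetical cleanup step
        self.set_shutdown()          # .stop may now complete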
-
shutdown_timeout
= 60.0¶ Time to wait for shutdown flag set before we give up.
-
restart_count
= 0¶ Current number of times this service instance has been restarted.
-
mundane_level
= 'info'¶ The log level for mundane info such as starting, stopping, etc. Set this to "debug" for less information.
-
classmethod
from_awaitable
(coro: Awaitable, *, name: str = None, **kwargs: Any) → mode.types.services.ServiceT[source]¶ - Return type
ServiceT
[]
-
classmethod
task
(fun: Callable[Any, Awaitable[None]]) → mode.services.ServiceTask[source]¶ Decorate function to be used as background task.
Example
>>> class S(Service):
...
...     @Service.task
...     async def background_task(self):
...         while not self.should_stop:
...             await self.sleep(1.0)
...             print('Waking up')
- Return type
ServiceTask
-
classmethod
timer
(interval: Union[datetime.timedelta, float, str]) → Callable[Callable, mode.services.ServiceTask][source]¶ Background timer executing every n seconds.
Example
>>> class S(Service):
...
...     @Service.timer(1.0)
...     async def background_timer(self):
...         print('Waking up')
-
classmethod
transitions_to
(flag: str) → Callable[source]¶ Decorate function to set and reset diagnostic flag.
- Return type
-
async
transition_with
(flag: str, fut: Awaitable, *args: Any, **kwargs: Any) → Any[source]¶ - Return type
-
add_dependency
(service: mode.types.services.ServiceT) → mode.types.services.ServiceT[source]¶ Add dependency to other service.
The service will be started/stopped with this service.
- Return type
ServiceT
[]
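A small sketch of composing services this way (Cache is an illustrative child service, not part of the API):
class Cache(Service):
    ...

class App(Service):

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        # the child is started/stopped together with this service
        self.cache = self.add_dependency(Cache())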
-
async
add_runtime_dependency
(service: mode.types.services.ServiceT) → mode.types.services.ServiceT[source]¶ - Return type
ServiceT
[]
-
async
remove_dependency
(service: mode.types.services.ServiceT) → mode.types.services.ServiceT[source]¶ Stop and remove dependency of this service.
- Return type
ServiceT
[]
-
add_future
(coro: Awaitable) → _asyncio.Future[source]¶ Add relationship to asyncio.Future.
The future will be joined when this service is stopped.
- Return type
Future
-
on_init_dependencies
() → Iterable[mode.types.services.ServiceT][source]¶ Return list of service dependencies for this service.
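For example, a minimal sketch of overriding this hook (ServiceA and ServiceB are illustrative names):
class MyService(Service):

    def on_init_dependencies(self):
        # these children are started/stopped together with this service
        return [ServiceA(), ServiceB()]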
-
async
join_services
(services: Sequence[mode.types.services.ServiceT]) → None[source]¶ - Return type
None
-
async
sleep
(n: Union[datetime.timedelta, float, str], *, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ Sleep for n seconds, or until service stopped.
- Return type
None
-
async
wait_for_stopped
(*coros: Union[Generator[[Any, None], Any], Awaitable, asyncio.locks.Event, mode.utils.locks.Event], timeout: Union[datetime.timedelta, float, str] = None) → bool[source]¶ - Return type
-
async
wait
(*coros: Union[Generator[[Any, None], Any], Awaitable, asyncio.locks.Event, mode.utils.locks.Event], timeout: Union[datetime.timedelta, float, str] = None) → mode.services.WaitResult[source]¶ Wait for coroutines to complete, or until the service stops.
- Return type
WaitResult
-
async
wait_many
(coros: Iterable[Union[Generator[[Any, None], Any], Awaitable, asyncio.locks.Event, mode.utils.locks.Event]], *, timeout: Union[datetime.timedelta, float, str] = None) → mode.services.WaitResult[source]¶ - Return type
WaitResult
-
async
wait_first
(*coros: Union[Generator[[Any, None], Any], Awaitable, asyncio.locks.Event, mode.utils.locks.Event], timeout: Union[datetime.timedelta, float, str] = None) → mode.services.WaitResults[source]¶ - Return type
WaitResults
-
async
maybe_start
() → bool[source]¶ Start the service, if it has not already been started.
- Return type
-
async
crash
(reason: BaseException) → None[source]¶ Crash the service and all child services.
- Return type
None
-
async
wait_until_stopped
() → None[source]¶ Wait until the service is signalled to stop.
- Return type
None
-
set_shutdown
() → None[source]¶ Set the shutdown signal.
Notes
If wait_for_shutdown is set, stopping the service will wait for this flag to be set.
- Return type
None
-
itertimer
(interval: Union[datetime.timedelta, float, str], *, max_drift_correction: float = 0.1, loop: asyncio.events.AbstractEventLoop = None, sleep: Callable[..., Awaitable] = None, clock: Callable[float] = <built-in function perf_counter>, name: str = '') → AsyncIterator[float][source]¶ Sleep interval seconds for every iteration.
This is an async iterator that takes advantage of Timer() to monitor drift and timer overlap.
Uses Service.sleep so it exits fast when the service is stopped.
Note
Will sleep the full interval seconds before returning from the first iteration.
Examples
>>> async for sleep_time in self.itertimer(1.0):
...     print('another second passed, just woke up...')
...     await perform_some_http_request()
- Return type
-
logger
= <Logger mode.services (WARNING)>¶
-
property
crash_reason
¶ - Return type
-
class faust.ServiceT(*, beacon: mode.utils.types.trees.NodeT = None, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ Abstract type for an asynchronous service that can be started/stopped.
-
wait_for_shutdown
= False¶
-
restart_count
= 0¶
-
supervisor
= None¶
-
abstract
add_dependency
(service: mode.types.services.ServiceT) → mode.types.services.ServiceT[source]¶ - Return type
ServiceT
[]
-
abstract async
add_runtime_dependency
(service: mode.types.services.ServiceT) → mode.types.services.ServiceT[source]¶ - Return type
ServiceT
[]
-
abstract property
loop
¶ - Return type
AbstractEventLoop
-
abstract property
crash_reason
¶ - Return type
-
-
class faust.GSSAPICredentials(*, kerberos_service_name: str = 'kafka', kerberos_domain_name: str = None, ssl_context: ssl.SSLContext = None, mechanism: Union[str, faust.types.auth.SASLMechanism] = None) → None[source]¶ Describe GSSAPI credentials over SASL.
-
protocol
= 'SASL_PLAINTEXT'¶
-
mechanism
= 'GSSAPI'¶
-
-
class faust.SASLCredentials(*, username: str = None, password: str = None, ssl_context: ssl.SSLContext = None, mechanism: Union[str, faust.types.auth.SASLMechanism] = None) → None[source]¶ Describe SASL credentials.
-
protocol
= 'SASL_PLAINTEXT'¶
-
mechanism
= 'PLAIN'¶
-
-
class faust.SSLCredentials(context: ssl.SSLContext = None, *, purpose: Any = None, cafile: Optional[str] = None, capath: Optional[str] = None, cadata: Optional[str] = None) → None[source]¶ Describe SSL credentials/settings.
-
protocol
= 'SSL'¶
-
-
class faust.Channel(app: faust.types.app.AppT, *, schema: faust.types.serializers.SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, is_iterator: bool = False, queue: mode.utils.queues.ThrowableQueue = None, maxsize: int = None, root: faust.types.channels.ChannelT = None, active_partitions: Set[faust.types.tuples.TP] = None, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ Create new channel.
- Parameters
app (AppT) – The app that created this channel (app.channel()).
schema (Optional[SchemaT]) – Schema used for serialization/deserialization.
key_type (Union[Type[ModelT], Type[bytes], Type[str], None]) – The Model used for keys in this channel. (Overrides schema if one is defined.)
value_type (Union[Type[ModelT], Type[bytes], Type[str], None]) – The Model used for values in this channel. (Overrides schema if one is defined.)
maxsize (Optional[int]) – The maximum number of messages this channel can hold. If exceeded, any new put call will block until a message is removed from the channel.
is_iterator (bool) – When streams iterate over a channel they will call stream.clone(is_iterator=True), so this attribute denotes that this channel instance is currently being iterated over.
active_partitions (Optional[Set[TP]]) – Set of active topic partitions this channel instance is assigned to.
loop (Optional[AbstractEventLoop]) – The asyncio event loop to use.
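For example, a minimal sketch of creating and consuming from an in-memory channel (the app name and model are illustrative):
import faust

app = faust.App('example')

class Point(faust.Record):
    x: int
    y: int

channel = app.channel(value_type=Point)

@app.agent(channel)
async def process(stream):
    async for point in stream:
        print(point.x, point.y)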
-
property
queue
¶ Return the underlying queue/buffer backing this channel.
- Return type
ThrowableQueue
-
clone
(*, is_iterator: bool = None, **kwargs: Any) → faust.types.channels.ChannelT[T][source]¶ Create clone of this channel.
- Parameters
is_iterator (Optional[bool]) – Set to True if this is now a channel that is being iterated over.
- Keyword Arguments
**kwargs – Any keyword arguments passed will override any of the arguments supported by Channel.__init__.
- Return type
ChannelT
[~T]
-
clone_using_queue
(queue: asyncio.queues.Queue) → faust.types.channels.ChannelT[T][source]¶ Create clone of this channel using specific queue instance.
- Return type
ChannelT
[~T]
-
stream
(**kwargs: Any) → faust.types.streams.StreamT[T][source]¶ Create stream reading from this channel.
- Return type
StreamT
[~T]
-
get_topic_name
() → str[source]¶ Get the topic name, or raise if this is not a named channel.
- Return type
-
async
send
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ Send message to channel.
- Return type
-
send_soon
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False, eager_partitioning: bool = False) → faust.types.tuples.FutureMessage[source]¶ Produce message by adding to buffer.
This method is only supported by Topic.
- Raises
NotImplementedError – always for in-memory channel.
- Return type
-
as_future_message
(key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, eager_partitioning: bool = False) → faust.types.tuples.FutureMessage[source]¶ Create promise that message will be transmitted.
- Return type
-
prepare_headers
(headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None]) → Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None][source]¶ Prepare headers passed before publishing.
-
async
publish_message
(fut: faust.types.tuples.FutureMessage, wait: bool = True) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ Publish message to channel.
This is the interface used by topic.send(), etc., to actually publish the message on the channel after it has been buffered up or similar.
It takes a FutureMessage object, which contains all the information required to send the message, and acts as a promise that is resolved once the message has been fully transmitted.
- Return type
-
async
declare
() → None[source]¶ Declare/create this channel.
This is used to create this channel on a server, if that is required to operate it.
- Return type
None
-
prepare_key
(key: Union[bytes, faust.types.core._ModelT, Any, None], key_serializer: Union[faust.types.codecs.CodecT, str, None], schema: faust.types.serializers.SchemaT = None, headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None] = None) → Tuple[Any, Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]][source]¶ Prepare key before it is sent to this channel.
Topic uses this to implement serialization of keys sent to the channel.
-
prepare_value
(value: Union[bytes, faust.types.core._ModelT, Any], value_serializer: Union[faust.types.codecs.CodecT, str, None], schema: faust.types.serializers.SchemaT = None, headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None] = None) → Tuple[Any, Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]][source]¶ Prepare value before it is sent to this channel.
Topic uses this to implement serialization of values sent to the channel.
-
async
decode
(message: faust.types.tuples.Message, *, propagate: bool = False) → faust.types.events.EventT[T][source]¶ Decode Message into Event.
- Return type
EventT
[~T]
-
async
deliver
(message: faust.types.tuples.Message) → None[source]¶ Deliver message to queue from consumer.
This is called by the consumer to deliver the message to the channel.
- Return type
None
-
async
put
(value: faust.types.events.EventT[T_contra]) → None[source]¶ Put event onto this channel.
- Return type
None
-
async
get
(*, timeout: Union[datetime.timedelta, float, str] = None) → faust.types.events.EventT[T][source]¶ Get the next Event received on this channel.
- Return type
EventT
[~T]
-
async
on_key_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ Unable to decode the key of an item in the queue.
- Return type
None
-
async
on_value_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ Unable to decode the value of an item in the queue.
- Return type
None
-
async
on_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ Signal that there was an error reading an event in the queue.
When a message in the channel needs deserialization to be reconstructed back to its original form, we will sometimes see decoding/deserialization errors being raised, from missing fields or malformed payloads, and so on.
We will log the exception, but you can also override this to perform additional actions.
- Admonition: Kafka
In the event a deserialization error occurs, we HAVE to commit the offset of the source message to continue processing the stream.
For this reason it is important that you keep a close eye on error logs. For ease of use, we suggest using log aggregation software, such as Sentry, to surface these errors to your operations team.
- Return type
None
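For example, a hedged sketch of overriding this handler on a custom topic subclass to count decode failures (the subclass and the metric object are illustrative):
class MyTopic(faust.Topic):

    async def on_decode_error(self, exc: Exception, message) -> None:
        # keep the default behavior, which logs the exception
        await super().on_decode_error(exc, message)
        decode_errors_counter.inc()  # hypothetical metric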
-
on_stop_iteration
() → None[source]¶ Signal that iteration over this channel was stopped.
Tip
Remember to call super() when overriding this method.
- Return type
None
-
derive
(**kwargs: Any) → faust.types.channels.ChannelT[T][source]¶ Derive new channel from this channel, using new configuration.
See faust.Topic.derive.
For local channels this will simply return the same channel.
- Return type
ChannelT
[~T]
-
class faust.ChannelT(app: faust.types.channels._AppT, *, schema: faust.types.channels._SchemaT = None, key_type: faust.types.channels._ModelArg = None, value_type: faust.types.channels._ModelArg = None, is_iterator: bool = False, queue: mode.utils.queues.ThrowableQueue = None, maxsize: int = None, root: Optional[faust.types.channels.ChannelT] = None, active_partitions: Set[faust.types.tuples.TP] = None, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶
-
abstract
clone
(*, is_iterator: bool = None, **kwargs: Any) → faust.types.channels.ChannelT[_T][source]¶ - Return type
ChannelT
[~_T]
-
abstract
clone_using_queue
(queue: asyncio.queues.Queue) → faust.types.channels.ChannelT[_T][source]¶ - Return type
ChannelT
[~_T]
-
abstract async
send
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.channels._SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ - Return type
-
abstract
send_soon
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.channels._SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False, eager_partitioning: bool = False) → faust.types.tuples.FutureMessage[source]¶ - Return type
-
abstract
as_future_message
(key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.channels._SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, eager_partitioning: bool = False) → faust.types.tuples.FutureMessage[source]¶ - Return type
-
abstract async
publish_message
(fut: faust.types.tuples.FutureMessage, wait: bool = True) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ - Return type
-
abstract
prepare_key
(key: Union[bytes, faust.types.core._ModelT, Any, None], key_serializer: Union[faust.types.codecs.CodecT, str, None], schema: faust.types.channels._SchemaT = None) → Any[source]¶ - Return type
-
abstract
prepare_value
(value: Union[bytes, faust.types.core._ModelT, Any], value_serializer: Union[faust.types.codecs.CodecT, str, None], schema: faust.types.channels._SchemaT = None) → Any[source]¶ - Return type
-
abstract async
decode
(message: faust.types.tuples.Message, *, propagate: bool = False) → faust.types.channels._EventT[_T][source]¶ - Return type
_EventT
[~_T]
-
abstract async
get
(*, timeout: Union[datetime.timedelta, float, str] = None) → faust.types.channels._EventT[_T][source]¶ - Return type
_EventT
[~_T]
-
abstract async
on_key_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ - Return type
None
-
abstract async
on_value_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ - Return type
None
-
abstract async
on_decode_error
(exc: Exception, message: faust.types.tuples.Message) → None[source]¶ - Return type
None
-
abstract property
queue
¶ - Return type
-
-
class faust.Event(app: faust.types.app.AppT, key: Union[bytes, faust.types.core._ModelT, Any, None], value: Union[bytes, faust.types.core._ModelT, Any], headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None], message: faust.types.tuples.Message) → None[source]¶ An event received on a channel.
Notes
Events have a key and a value:
event.key, event.value
They also have a reference to the original message (if available), such as a Kafka record:
event.message.offset
Iterating over channels/topics yields Event:
async for event in channel:
    ...
Iterating over a stream (that in turn iterates over a channel) yields Event.value:
async for value in channel.stream():
    ...  # value is event.value
If you only have a Stream object, you can also access underlying events by using Stream.events.
For example:
async for event in channel.stream.events():
    ...
Also commonly used for finding the “current event” related to a value in the stream:
stream = channel.stream()

async for event in stream.events():
    event = stream.current_event
    message = event.message
    topic = event.message.topic
You can retrieve the current event in a stream to:
Get access to the serialized key+value.
Get access to message properties, like what topic+partition the value was received on, or its offset.
If you want access to both key and value, you should use stream.items() instead:
async for key, value in stream.items():
    ...
stream.current_event can also be accessed, but you must take extreme care that you are using the correct stream object. Methods such as .group_by(key) and .through(topic) return cloned stream objects, so the current event may belong to a different stream than the one you started with.
The best way to access the current event in an agent is to use the ContextVar:
from faust import current_event

@app.agent(topic)
async def process(stream):
    async for value in stream:
        event = current_event()
-
app
¶
-
key
¶
-
value
¶
-
message
¶
-
headers
¶
-
acked
¶
-
async
send
(channel: Union[str, faust.types.channels.ChannelT], key: Union[bytes, faust.types.core._ModelT, Any, None] = <object object>, value: Union[bytes, faust.types.core._ModelT, Any] = <object object>, partition: int = None, timestamp: float = None, headers: Any = <object object>, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ Send object to channel.
- Return type
-
async
forward
(channel: Union[str, faust.types.channels.ChannelT], key: Union[bytes, faust.types.core._ModelT, Any, None] = <object object>, value: Union[bytes, faust.types.core._ModelT, Any] = <object object>, partition: int = None, timestamp: float = None, headers: Any = <object object>, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ Forward original message (will not be reserialized).
- Return type
-
class faust.EventT(app: faust.types.events._AppT, key: Union[bytes, faust.types.core._ModelT, Any, None], value: Union[bytes, faust.types.core._ModelT, Any], headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None], message: faust.types.tuples.Message) → None[source]¶
-
app
¶
-
key
¶
-
value
¶
-
headers
¶
-
message
¶
-
acked
¶
-
abstract async
send
(channel: Union[str, faust.types.events._ChannelT], key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.events._SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ - Return type
-
abstract async
forward
(channel: Union[str, faust.types.events._ChannelT], key: Any = None, value: Any = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.events._SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ - Return type
-
-
class faust.ModelOptions(*args, **kwargs)[source]¶
-
serializer
= None¶
-
include_metadata
= True¶
-
polymorphic_fields
= False¶
-
allow_blessed_key
= False¶
-
isodates
= False¶
-
decimals
= False¶
-
validation
= False¶
-
coerce
= False¶
-
coercions
= None¶
-
date_parser
= None¶
-
fields
= None¶ Flattened view of __annotations__ in MRO order.
- Type
Index
-
fieldset
= None¶ Set of required field names, for fast argument checking.
- Type
Index
-
descriptors
= None¶ Mapping of field name to field descriptor.
- Type
Index
-
fieldpos
= None¶ Positional argument index to field name. Used by Record.__init__ to map positional arguments to fields.
- Type
Index
-
optionalset
= None¶ Set of optional field names, for fast argument checking.
- Type
Index
-
defaults
= None¶ Mapping of field names to default value.
-
tagged_fields
= None¶
-
personal_fields
= None¶
-
sensitive_fields
= None¶
-
secret_fields
= None¶
-
has_tagged_fields
= False¶
-
has_personal_fields
= False¶
-
has_sensitive_fields
= False¶
-
has_secret_fields
= False¶
-
-
class faust.Record → None[source]¶ Describes a model type that is a record (Mapping).
Examples
>>> class LogEvent(Record, serializer='json'):
...     severity: str
...     message: str
...     timestamp: float
...     optional_field: str = 'default value'

>>> event = LogEvent(
...     severity='error',
...     message='Broken pact',
...     timestamp=666.0,
... )

>>> event.severity
'error'

>>> serialized = event.dumps()
'{"severity": "error", "message": "Broken pact", "timestamp": 666.0}'

>>> restored = LogEvent.loads(serialized)
<LogEvent: severity='error', message='Broken pact', timestamp=666.0>

>>> # You can also subclass a Record to create a new record
>>> # with additional fields
>>> class RemoteLogEvent(LogEvent):
...     url: str

>>> # You can also refer to record fields and pass them around:
>>> LogEvent.severity
<FieldDescriptor: LogEvent.severity (str)>
-
classmethod
from_data
(data: Mapping, *, preferred_type: Type[faust.types.models.ModelT] = None) → faust.models.record.Record[source]¶ Create model object from Python dictionary.
- Return type
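For example, reusing the LogEvent model defined above:
>>> event = LogEvent.from_data({
...     'severity': 'error',
...     'message': 'Broken pact',
...     'timestamp': 666.0,
... })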
-
class faust.Monitor(*, max_avg_history: int = None, max_commit_latency_history: int = None, max_send_latency_history: int = None, max_assignment_latency_history: int = None, messages_sent: int = 0, tables: MutableMapping[str, faust.sensors.monitor.TableState] = None, messages_active: int = 0, events_active: int = 0, messages_received_total: int = 0, messages_received_by_topic: Counter[str] = None, events_total: int = 0, events_by_stream: Counter[faust.types.streams.StreamT] = None, events_by_task: Counter[_asyncio.Task] = None, events_runtime: Deque[float] = None, commit_latency: Deque[float] = None, send_latency: Deque[float] = None, assignment_latency: Deque[float] = None, events_s: int = 0, messages_s: int = 0, events_runtime_avg: float = 0.0, topic_buffer_full: Counter[faust.types.topics.TopicT] = None, rebalances: int = None, rebalance_return_latency: Deque[float] = None, rebalance_end_latency: Deque[float] = None, rebalance_return_avg: float = 0.0, rebalance_end_avg: float = 0.0, time: Callable[float] = <built-in function monotonic>, http_response_codes: Counter[http.HTTPStatus] = None, http_response_latency: Deque[float] = None, http_response_latency_avg: float = 0.0, **kwargs: Any) → None[source]¶ Default Faust Sensor.
This is the default sensor, recording statistics about events, etc.
-
send_errors
= 0¶ Number of produce operations that ended in error.
-
assignments_completed
= 0¶ Number of partition assignments completed.
-
assignments_failed
= 0¶ Number of partitions assignments that failed.
-
max_avg_history
= 100¶ Max number of total run time values to keep to build average.
-
max_commit_latency_history
= 30¶ Max number of commit latency numbers to keep.
-
max_send_latency_history
= 30¶ Max number of send latency numbers to keep.
-
max_assignment_latency_history
= 30¶ Max number of assignment latency numbers to keep.
-
rebalances
= 0¶ Number of rebalances seen by this worker.
-
tables
= None¶ Mapping of tables
-
commit_latency
= None¶ Deque of commit latency values
-
send_latency
= None¶ Deque of send latency values
-
assignment_latency
= None¶ Deque of assignment latency values.
-
rebalance_return_latency
= None¶ Deque of previous n rebalance return latencies.
-
rebalance_end_latency
= None¶ Deque of previous n rebalance end latencies.
-
rebalance_return_avg
= 0.0¶ Average rebalance return latency.
-
rebalance_end_avg
= 0.0¶ Average rebalance end latency.
-
messages_active
= 0¶ Number of messages currently being processed.
-
messages_received_total
= 0¶ Number of messages processed in total.
-
messages_received_by_topic
= None¶ Count of messages received by topic
-
messages_sent
= 0¶ Number of messages sent in total.
-
messages_sent_by_topic
= None¶ Number of messages sent by topic.
-
messages_s
= 0¶ Number of messages being processed this second.
-
events_active
= 0¶ Number of events currently being processed.
-
events_total
= 0¶ Number of events processed in total.
-
events_by_task
= None¶ Count of events processed by task
-
events_by_stream
= None¶ Count of events processed by stream
-
events_s
= 0¶ Number of events being processed this second.
-
events_runtime_avg
= 0.0¶ Average event runtime over the last second.
-
events_runtime
= None¶ Deque of run times used for averages
-
topic_buffer_full
= None¶ Counter of times a topic's buffer was full.
-
http_response_codes
= None¶ Counter of returned HTTP status codes.
-
http_response_latency
= None¶ Deque of previous n HTTP request->response latencies.
-
http_response_latency_avg
= 0.0¶ Average request->response latency.
-
metric_counts
= None¶ Arbitrary counts added by apps
-
tp_committed_offsets
= None¶ Last committed offsets by TopicPartition
-
tp_read_offsets
= None¶ Last read offsets by TopicPartition
-
tp_end_offsets
= None¶ Log end offsets by TopicPartition
-
stream_inbound_time
= None¶
-
secs_since
(start_time: float) → float[source]¶ Given timestamp start, return number of seconds since that time.
- Return type
-
logger
= <Logger faust.sensors.monitor (WARNING)>¶
-
ms_since
(start_time: float) → float[source]¶ Given timestamp start, return number of ms since that time.
- Return type
-
on_message_in
(tp: faust.types.tuples.TP, offset: int, message: faust.types.tuples.Message) → None[source]¶ Call before message is delegated to streams.
- Return type
None
-
on_stream_event_in
(tp: faust.types.tuples.TP, offset: int, stream: faust.types.streams.StreamT, event: faust.types.events.EventT) → Optional[Dict][source]¶ Call when stream starts processing an event.
-
on_stream_event_out
(tp: faust.types.tuples.TP, offset: int, stream: faust.types.streams.StreamT, event: faust.types.events.EventT, state: Dict = None) → None[source]¶ Call when stream is done processing an event.
- Return type
None
-
on_topic_buffer_full
(topic: faust.types.topics.TopicT) → None[source]¶ Call when conductor topic buffer is full and has to wait.
- Return type
None
-
on_message_out
(tp: faust.types.tuples.TP, offset: int, message: faust.types.tuples.Message) → None[source]¶ Call when message is fully acknowledged and can be committed.
- Return type
None
-
on_table_get
(table: faust.types.tables.CollectionT, key: Any) → None[source]¶ Call when value in table is retrieved.
- Return type
None
-
on_table_set
(table: faust.types.tables.CollectionT, key: Any, value: Any) → None[source]¶ Call when new value for key in table is set.
- Return type
None
-
on_table_del
(table: faust.types.tables.CollectionT, key: Any) → None[source]¶ Call when key in a table is deleted.
- Return type
None
-
on_commit_initiated
(consumer: faust.types.transports.ConsumerT) → Any[source]¶ Consumer is about to commit topic offset.
- Return type
-
on_commit_completed
(consumer: faust.types.transports.ConsumerT, state: Any) → None[source]¶ Call when consumer commit offset operation completed.
- Return type
None
-
on_send_initiated
(producer: faust.types.transports.ProducerT, topic: str, message: faust.types.tuples.PendingMessage, keysize: int, valsize: int) → Any[source]¶ Call when message added to producer buffer.
- Return type
-
on_send_completed
(producer: faust.types.transports.ProducerT, state: Any, metadata: faust.types.tuples.RecordMetadata) → None[source]¶ Call when producer finished sending message.
- Return type
None
-
on_send_error
(producer: faust.types.transports.ProducerT, exc: BaseException, state: Any) → None[source]¶ Call when producer was unable to publish message.
- Return type
None
-
on_tp_commit
(tp_offsets: MutableMapping[faust.types.tuples.TP, int]) → None[source]¶ Call when offset in topic partition is committed.
- Return type
None
-
track_tp_end_offset
(tp: faust.types.tuples.TP, offset: int) → None[source]¶ Track new topic partition end offset for monitoring lags.
- Return type
None
-
on_assignment_start
(assignor: faust.types.assignor.PartitionAssignorT) → Dict[source]¶ Partition assignor is starting to assign partitions.
- Return type
Dict
[~KT, ~VT]
-
on_assignment_error
(assignor: faust.types.assignor.PartitionAssignorT, state: Dict, exc: BaseException) → None[source]¶ Partition assignor did not complete assignment due to error.
- Return type
None
-
on_assignment_completed
(assignor: faust.types.assignor.PartitionAssignorT, state: Dict) → None[source]¶ Partition assignor completed assignment.
- Return type
None
-
on_rebalance_start
(app: faust.types.app.AppT) → Dict[source]¶ Cluster rebalance in progress.
- Return type
Dict
[~KT, ~VT]
-
on_rebalance_return
(app: faust.types.app.AppT, state: Dict) → None[source]¶ Consumer replied to broker that assignment is done.
- Return type
None
-
on_rebalance_end
(app: faust.types.app.AppT, state: Dict) → None[source]¶ Cluster rebalance fully completed (including recovery).
- Return type
None
-
-
class faust.Sensor(*, beacon: mode.utils.types.trees.NodeT = None, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ Base class for sensors.
This sensor does not do anything at all, but can be subclassed to create new monitors.
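As an illustration, a minimal sketch of a custom sensor; the registration step assumes the app exposes a sensors registry with an add() method, as described in the Faust sensor guide:
import faust

class MessageCounter(faust.Sensor):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.count = 0

    def on_message_in(self, tp, offset, message):
        # called for every message received by the consumer
        self.count += 1

app = faust.App('example')
app.sensors.add(MessageCounter())  # assumed registration API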
-
on_message_in
(tp: faust.types.tuples.TP, offset: int, message: faust.types.tuples.Message) → None[source]¶ Message received by a consumer.
- Return type
None
-
on_stream_event_in
(tp: faust.types.tuples.TP, offset: int, stream: faust.types.streams.StreamT, event: faust.types.events.EventT) → Optional[Dict][source]¶ Message sent to a stream as an event.
-
on_stream_event_out
(tp: faust.types.tuples.TP, offset: int, stream: faust.types.streams.StreamT, event: faust.types.events.EventT, state: Dict = None) → None[source]¶ Event was acknowledged by stream.
Notes
Acknowledged means a stream finished processing the event, but given that multiple streams may be handling the same event, the message cannot be committed before all streams have processed it. When all streams have acknowledged the event, it will go through
on_message_out()
just before offsets are committed.- Return type
None
-
on_message_out
(tp: faust.types.tuples.TP, offset: int, message: faust.types.tuples.Message) → None[source]¶ All streams finished processing message.
- Return type
None
-
on_topic_buffer_full
(topic: faust.types.topics.TopicT) → None[source]¶ Topic buffer full so conductor had to wait.
- Return type
None
-
on_table_get
(table: faust.types.tables.CollectionT, key: Any) → None[source]¶ Key retrieved from table.
- Return type
None
-
on_table_set
(table: faust.types.tables.CollectionT, key: Any, value: Any) → None[source]¶ Value set for key in table.
- Return type
None
-
on_table_del
(table: faust.types.tables.CollectionT, key: Any) → None[source]¶ Key deleted from table.
- Return type
None
-
on_commit_initiated
(consumer: faust.types.transports.ConsumerT) → Any[source]¶ Consumer is about to commit topic offset.
- Return type
-
on_commit_completed
(consumer: faust.types.transports.ConsumerT, state: Any) → None[source]¶ Consumer finished committing topic offset.
- Return type
None
-
on_send_initiated
(producer: faust.types.transports.ProducerT, topic: str, message: faust.types.tuples.PendingMessage, keysize: int, valsize: int) → Any[source]¶ About to send a message.
- Return type
-
on_send_completed
(producer: faust.types.transports.ProducerT, state: Any, metadata: faust.types.tuples.RecordMetadata) → None[source]¶ Message successfully sent.
- Return type
None
-
on_send_error
(producer: faust.types.transports.ProducerT, exc: BaseException, state: Any) → None[source]¶ Error while sending message.
- Return type
None
-
on_assignment_start
(assignor: faust.types.assignor.PartitionAssignorT) → Dict[source]¶ Partition assignor is starting to assign partitions.
- Return type
Dict
[~KT, ~VT]
-
on_assignment_error
(assignor: faust.types.assignor.PartitionAssignorT, state: Dict, exc: BaseException) → None[source]¶ Partition assignor did not complete assignment due to error.
- Return type
None
-
on_assignment_completed
(assignor: faust.types.assignor.PartitionAssignorT, state: Dict) → None[source]¶ Partition assignor completed assignment.
- Return type
None
-
on_rebalance_start
(app: faust.types.app.AppT) → Dict[source]¶ Cluster rebalance in progress.
- Return type
Dict
[~KT, ~VT]
-
on_rebalance_return
(app: faust.types.app.AppT, state: Dict) → None[source]¶ Consumer replied to broker that assignment is done.
- Return type
None
-
on_rebalance_end
(app: faust.types.app.AppT, state: Dict) → None[source]¶ Cluster rebalance fully completed (including recovery).
- Return type
None
-
on_web_request_start
(app: faust.types.app.AppT, request: faust.web.base.Request, *, view: faust.web.views.View = None) → Dict[source]¶ Web server started working on request.
- Return type
Dict
[~KT, ~VT]
-
on_web_request_end
(app: faust.types.app.AppT, request: faust.web.base.Request, response: Optional[faust.web.base.Response], state: Dict, *, view: faust.web.views.View = None) → None[source]¶ Web server finished working on request.
- Return type
None
-
logger
= <Logger faust.sensors.base (WARNING)>¶
-
-
class faust.Codec(children: Tuple[faust.types.codecs.CodecT, ...] = None, **kwargs: Any) → None[source]¶ Base class for codecs.
-
children
= None¶ Next steps in the recursive codec chain. x = pickle | binary returns a codec with children set to (pickle, binary).
-
nodes
= None¶ Cached version of children, including this codec as the first node. Could use a chain here, but that seems premature, so we just copy the list.
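For example, a brief sketch of chaining codecs with the | operator, assuming the json and binary (Base64) codecs that ship with Faust:
from faust.serializers import codecs

# serialize as JSON first, then Base64-encode the result
json_b64 = codecs.get_codec('json') | codecs.get_codec('binary')

payload = json_b64.dumps({'x': 1})
assert json_b64.loads(payload) == {'x': 1}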
-
-
class faust.Schema(*, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, allow_empty: bool = None) → None[source]¶
-
update
(*, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, allow_empty: bool = None) → None[source]¶ - Return type
None
-
loads_key
(app: faust.types.app.AppT, message: faust.types.tuples.Message, *, loads: Callable = None, serializer: Union[faust.types.codecs.CodecT, str, None] = None) → KT[source]¶ - Return type
~KT
-
loads_value
(app: faust.types.app.AppT, message: faust.types.tuples.Message, *, loads: Callable = None, serializer: Union[faust.types.codecs.CodecT, str, None] = None) → VT[source]¶ - Return type
~VT
-
dumps_key
(app: faust.types.app.AppT, key: Union[bytes, faust.types.core._ModelT, Any, None], *, serializer: Union[faust.types.codecs.CodecT, str, None] = None, headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]) → Tuple[Any, Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]][source]¶
-
dumps_value
(app: faust.types.app.AppT, value: Union[bytes, faust.types.core._ModelT, Any], *, serializer: Union[faust.types.codecs.CodecT, str, None] = None, headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]) → Tuple[Any, Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]][source]¶
-
on_dumps_key_prepare_headers
(key: Union[bytes, faust.types.core._ModelT, Any], headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]) → Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None][source]¶
-
on_dumps_value_prepare_headers
(value: Union[bytes, faust.types.core._ModelT, Any], headers: Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None]) → Union[List[Tuple[str, bytes]], MutableMapping[str, bytes], None][source]¶
-
async
decode
(app: faust.types.app.AppT, message: faust.types.tuples.Message, *, propagate: bool = False) → faust.types.events.EventT[source]¶ Decode message from topic (compiled function not cached).
- Return type
EventT
[~T]
-
compile
(app: faust.types.app.AppT, *, on_key_decode_error: Callable[[Exception, faust.types.tuples.Message], Awaitable[None]] = <function _noop_decode_error>, on_value_decode_error: Callable[[Exception, faust.types.tuples.Message], Awaitable[None]] = <function _noop_decode_error>, default_propagate: bool = False) → Callable[..., Awaitable[faust.types.events.EventT]][source]¶ Compile function used to decode event.
-
-
class faust.Stream(channel: AsyncIterator[T_co], *, app: faust.types.app.AppT, processors: Iterable[Callable[T]] = None, combined: List[faust.types.streams.JoinableT] = None, on_start: Callable = None, join_strategy: faust.types.joins.JoinT = None, beacon: mode.utils.types.trees.NodeT = None, concurrency_index: int = None, prev: faust.types.streams.StreamT = None, active_partitions: Set[faust.types.tuples.TP] = None, enable_acks: bool = True, prefix: str = '', loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ A stream: async iterator processing events in channels/topics.
-
logger
= <Logger faust.streams (WARNING)>¶
-
mundane_level
= 'debug'¶
-
events_total
= 0¶ Number of events processed by this instance so far.
-
get_active_stream
() → faust.types.streams.StreamT[source]¶ Return the currently active stream.
A stream can be derived using Stream.group_by etc., so if this stream was used to create another derived stream, this function will return the stream being actively consumed from. E.g. in the example:
>>> @app.agent()
... async def agent(a):
...     a = a
...     b = a.group_by(Withdrawal.account_id)
...     c = b.through('backup_topic')
...     async for value in c:
...         ...
The return value of a.get_active_stream() would be c.
Notes
The chain of streams that leads to the active stream is decided by the _next attribute. To get to the active stream we just traverse this linked list:
>>> def get_active_stream(self):
...     node = self
...     while node._next:
...         node = node._next
- Return type
StreamT
[+T_co]
-
get_root_stream
() → faust.types.streams.StreamT[source]¶ Get the root stream that this stream was derived from.
- Return type
StreamT
[+T_co]
-
add_processor
(processor: Callable[T]) → None[source]¶ Add processor callback executed whenever a new event is received.
Processor functions can be async or non-async, must accept a single argument, and should return the value, mutated or not.
For example a processor handling a stream of numbers may modify the value:
def double(value: int) -> int:
    return value * 2

stream.add_processor(double)
- Return type
None
-
clone
(**kwargs: Any) → faust.types.streams.StreamT[source]¶ Create a clone of this stream.
Notes
If the cloned stream is supposed to supersede this stream, like in
group_by
/through
/etc., you should use_chain()
instead so stream._next = cloned_stream is set andget_active_stream()
returns the cloned stream.- Return type
StreamT
[+T_co]
-
noack
() → faust.types.streams.StreamT[source]¶ Create new stream where acks are manual.
- Return type
StreamT
[+T_co]
-
items
() → AsyncIterator[Tuple[Union[bytes, faust.types.core._ModelT, Any, None], T_co]][source]¶ Iterate over the stream as
key, value
pairs.
Examples
@app.agent(topic)
async def mytask(stream):
    async for key, value in stream.items():
        print(key, value)
- Return type
AsyncIterator
[Tuple
[Union
[bytes
,_ModelT
,Any
,None
], +T_co]]
-
events
() → AsyncIterable[faust.types.events.EventT][source]¶ Iterate over the stream as events exclusively.
This means the stream must be iterating over a channel, or at least an iterable of event objects.
- Return type
AsyncIterable
[EventT
[~T]]
-
take
(max_: int, within: Union[datetime.timedelta, float, str]) → AsyncIterable[Sequence[T_co]][source]¶ Buffer n values at a time and yield a list of buffered values.
- Parameters
max_ – Max number of messages to receive. When more than this number of messages are received within the specified number of seconds then we flush the buffer immediately.
within (
Union
[timedelta
,float
,str
]) – Timeout for when we give up waiting for another value, and process the values we have. Warning: If there’s no timeout (i.e. timeout=None), the agent is likely to stall and block buffered events for an unreasonable length of time(!).
- Return type
AsyncIterable
[Sequence
[+T_co]]
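Example usage (the topic name is illustrative): buffer up to 100 values, waiting at most 10 seconds before flushing the batch:
@app.agent(topic)
async def mytask(stream):
    async for values in stream.take(100, within=10.0):
        print(f'Got a batch of {len(values)} values')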
-
enumerate
(start: int = 0) → AsyncIterable[Tuple[int, T_co]][source]¶ Enumerate values received on this stream.
Unlike Python’s built-in
enumerate
, this works with async generators.
- Return type
AsyncIterable
[Tuple
[int
, +T_co]]
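Example (the topic name is illustrative):
@app.agent(topic)
async def mytask(stream):
    async for index, value in stream.enumerate():
        print(index, value)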
-
through
(channel: Union[str, faust.types.channels.ChannelT]) → faust.types.streams.StreamT[source]¶ Forward values in this stream to a channel.
Send messages received on this stream to another channel, and return a new stream that consumes from that channel.
Notes
The messages are forwarded after any processors have been applied.
Example
topic = app.topic('foo')

@app.agent(topic)
async def mytask(stream):
    async for value in stream.through(app.topic('bar')):
        # value was first received in topic 'foo',
        # then forwarded and consumed from topic 'bar'
        print(value)
- Return type
StreamT
[+T_co]
-
echo
(*channels: Union[str, faust.types.channels.ChannelT]) → faust.types.streams.StreamT[source]¶ Forward values to one or more channels.
Unlike
through()
, we don’t consume from these channels.- Return type
StreamT
[+T_co]
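Example (topic names and the handler are illustrative): forward every value to an audit topic while continuing to process it here:
@app.agent(source_topic)
async def mytask(stream):
    async for value in stream.echo('audit_topic'):
        process(value)  # hypothetical handler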
-
group_by
(key: Union[faust.types.models.FieldDescriptorT, Callable[T, Union[bytes, faust.types.core._ModelT, Any, None]]], *, name: str = None, topic: faust.types.topics.TopicT = None, partitions: int = None) → faust.types.streams.StreamT[source]¶ Create new stream that repartitions the stream using a new key.
- Parameters
key (Union[FieldDescriptorT[~T], Callable[[~T], Union[bytes, _ModelT, Any, None]]]) – The key argument decides how the new key is generated; it can be a field descriptor, a callable, or an async callable. Note: the name argument must be provided if the key argument is a callable.
name (Optional[str]) – Suffix to use for repartitioned topics. This argument is required if key is a callable.
Examples
Using a field descriptor to use a field in the event as the new key:
s = withdrawals_topic.stream()
# values in this stream are of type Withdrawal
async for event in s.group_by(Withdrawal.account_id):
    ...
Using an async callable to extract a new key:
s = withdrawals_topic.stream()

async def get_key(withdrawal):
    return await aiohttp.get(
        f'http://e.com/resolve_account/{withdrawal.account_id}')

async for event in s.group_by(get_key):
    ...
Using a regular callable to extract a new key:
s = withdrawals_topic.stream()

def get_key(withdrawal):
    return withdrawal.account_id.upper()

async for event in s.group_by(get_key):
    ...
- Return type
StreamT
[+T_co]
-
filter
(fun: Callable[T]) → faust.types.streams.StreamT[source]¶ Filter values from stream using callback.
The callback may be a traditional function, lambda function, or an async def function.
This method is useful for filtering events before repartitioning a stream.
Examples
>>> async for v in stream.filter(lambda v: v > 1000).group_by(...):
...     # do something
- Return type
StreamT
[+T_co]
-
derive_topic
(name: str, *, schema: faust.types.serializers.SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, prefix: str = '', suffix: str = '') → faust.types.topics.TopicT[source]¶ Create Topic description derived from the K/V type of this stream.
- Parameters
name (str) – Topic name.
key_type (Union[Type[ModelT], Type[bytes], Type[str], None]) – Specific key type to use for this topic. If not set, the key type of this stream will be used.
value_type (Union[Type[ModelT], Type[bytes], Type[str], None]) – Specific value type to use for this topic. If not set, the value type of this stream will be used.
- Raises
ValueError – if the stream channel is not a topic.
- Return type
TopicT
[]
-
async
throw
(exc: BaseException) → None[source]¶ Send exception to stream iteration.
- Return type
None
-
combine
(*nodes: faust.types.streams.JoinableT, **kwargs: Any) → faust.types.streams.StreamT[source]¶ Combine streams and tables into joined stream.
- Return type
StreamT
[+T_co]
-
contribute_to_stream
(active: faust.types.streams.StreamT) → None[source]¶ Add stream as node in joined stream.
- Return type
None
-
async
remove_from_stream
(stream: faust.types.streams.StreamT) → None[source]¶ Remove as node in a joined stream.
- Return type
None
-
join
(*fields: faust.types.models.FieldDescriptorT) → faust.types.streams.StreamT[source]¶ Create stream where events are joined.
- Return type
StreamT
[+T_co]
-
left_join
(*fields: faust.types.models.FieldDescriptorT) → faust.types.streams.StreamT[source]¶ Create stream where events are joined by LEFT JOIN.
- Return type
StreamT
[+T_co]
-
inner_join
(*fields: faust.types.models.FieldDescriptorT) → faust.types.streams.StreamT[source]¶ Create stream where events are joined by INNER JOIN.
- Return type
StreamT
[+T_co]
-
outer_join
(*fields: faust.types.models.FieldDescriptorT) → faust.types.streams.StreamT[source]¶ Create stream where events are joined by OUTER JOIN.
- Return type
StreamT
[+T_co]
-
async
on_merge
(value: T = None) → Optional[T][source]¶ Signal called when an event is to be joined.
- Return type
Optional
[~T]
-
-
class faust.StreamT(channel: AsyncIterator[T_co] = None, *, app: faust.types.streams._AppT = None, processors: Iterable[Callable[T]] = None, combined: List[faust.types.streams.JoinableT] = None, on_start: Callable = None, join_strategy: faust.types.streams._JoinT = None, beacon: mode.utils.types.trees.NodeT = None, concurrency_index: int = None, prev: Optional[faust.types.streams.StreamT] = None, active_partitions: Set[faust.types.tuples.TP] = None, enable_acks: bool = True, prefix: str = '', loop: asyncio.events.AbstractEventLoop = None) → None[source]¶
-
outbox
= None¶
-
join_strategy
= None¶
-
task_owner
= None¶
-
current_event
= None¶
-
active_partitions
= None¶
-
concurrency_index
= None¶
-
enable_acks
= True¶
-
prefix
= ''¶
-
abstract async
items
() → AsyncIterator[Tuple[Union[bytes, faust.types.core._ModelT, Any, None], T_co]][source]¶
-
abstract async
take
(max_: int, within: Union[datetime.timedelta, float, str]) → AsyncIterable[Sequence[T_co]][source]¶
-
abstract
enumerate
(start: int = 0) → AsyncIterable[Tuple[int, T_co]][source]¶ - Return type
AsyncIterable
[Tuple
[int
, +T_co]]
-
abstract
through
(channel: Union[str, faust.types.channels.ChannelT]) → faust.types.streams.StreamT[source]¶ - Return type
StreamT
[+T_co]
-
abstract
echo
(*channels: Union[str, faust.types.channels.ChannelT]) → faust.types.streams.StreamT[source]¶ - Return type
StreamT
[+T_co]
-
abstract
group_by
(key: Union[faust.types.models.FieldDescriptorT, Callable[T, Union[bytes, faust.types.core._ModelT, Any, None]]], *, name: str = None, topic: faust.types.topics.TopicT = None) → faust.types.streams.StreamT[source]¶ - Return type
StreamT
[+T_co]
-
abstract
derive_topic
(name: str, *, schema: faust.types.streams._SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, prefix: str = '', suffix: str = '') → faust.types.topics.TopicT[source]¶ - Return type
TopicT
[]
-
-
faust.current_event() → Optional[faust.types.events.EventT][source]¶ Return the event currently being processed, or None.
-
class faust.Table(app: faust.types.app.AppT, *, name: str = None, default: Callable[Any] = None, store: Union[str, yarl.URL] = None, schema: faust.types.serializers.SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, partitions: int = None, window: faust.types.windows.WindowT = None, changelog_topic: faust.types.topics.TopicT = None, help: str = None, on_recover: Callable[Awaitable[None]] = None, on_changelog_event: Callable[faust.types.events.EventT, Awaitable[None]] = None, recovery_buffer_size: int = 1000, standby_buffer_size: int = None, extra_topic_configs: Mapping[str, Any] = None, recover_callbacks: Set[Callable[Awaitable[None]]] = None, options: Mapping[str, Any] = None, use_partitioner: bool = False, on_window_close: Callable[[Any, Any], Union[None, Awaitable[None]]] = None, is_global: bool = False, **kwargs: Any) → None[source]¶ Table (non-windowed).
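For example, a minimal sketch of defining a table and updating it from an agent (the topic and field names are illustrative):
transfer_counts = app.Table('transfer_counts', default=int)

@app.agent(transfers_topic)
async def count_transfers(stream):
    async for transfer in stream:
        transfer_counts[transfer.account] += 1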
-
class WindowWrapper(table: faust.types.tables.TableT, *, relative_to: Union[faust.types.tables._FieldDescriptorT, Callable[Optional[faust.types.events.EventT], Union[float, datetime.datetime]], datetime.datetime, float, None] = None, key_index: bool = False, key_index_table: faust.types.tables.TableT = None) → None¶ Windowed table wrapper.
A windowed table does not return concrete values when keys are accessed; instead a WindowSet is returned so that the values can be further reduced to the wanted time period.
-
ValueType
¶ alias of
WindowSet
-
as_ansitable
(title: str = '{table.name}', **kwargs: Any) → str¶ Draw table as a terminal ANSI table.
- Return type
-
clone
(relative_to: Union[faust.types.tables._FieldDescriptorT, Callable[Optional[faust.types.events.EventT], Union[float, datetime.datetime]], datetime.datetime, float, None]) → faust.types.tables.WindowWrapperT¶ Clone this table using a new time-relativity configuration.
- Return type
-
property
get_relative_timestamp
¶ Return the current handler for extracting event timestamp.
- Return type
Optional[Callable[[Optional[EventT]], Union[float, datetime]]]
-
get_timestamp
(event: faust.types.events.EventT = None) → float¶ Get timestamp from event.
- Return type
-
items
(event: faust.types.events.EventT = None) → ItemsView¶ Return table items view: iterate over (key, value) pairs.
- Return type
ItemsView
[~KT, +VT_co]
-
key_index
= False¶
-
key_index_table
= None¶
-
keys
() → KeysView¶ Return table keys view: iterate over keys found in this table.
- Return type
KeysView
[~KT]
-
on_del_key
(key: Any) → None¶ Call when a key is deleted from this table.
- Return type
None
-
on_recover
(fun: Callable[Awaitable[None]]) → Callable[Awaitable[None]]¶ Call after table recovery.
-
on_set_key
(key: Any, value: Any) → None¶ Call when the value for a key in this table is set.
- Return type
None
-
relative_to
(ts: Union[faust.types.tables._FieldDescriptorT, Callable[Optional[faust.types.events.EventT], Union[float, datetime.datetime]], datetime.datetime, float, None]) → faust.types.tables.WindowWrapperT¶ Configure the time-relativity of this windowed table.
- Return type
-
relative_to_field
(field: faust.types.models.FieldDescriptorT) → faust.types.tables.WindowWrapperT¶ Configure table to be time-relative to a field in the stream.
This means the window will use the timestamp from the event currently being processed in the stream.
Further it will not use the timestamp of the Kafka message, but a field in the value of the event.
For example a model field:
class Account(faust.Record):
    created: float

table = app.Table('foo').hopping(
    ...,
).relative_to_field(Account.created)
- Return type
-
relative_to_now
() → faust.types.tables.WindowWrapperT¶ Configure table to be time-relative to the system clock.
- Return type
-
relative_to_stream
() → faust.types.tables.WindowWrapperT¶ Configure table to be time-relative to the stream.
This means the window will use the timestamp from the event currently being processed in the stream.
- Return type
-
values
(event: faust.types.events.EventT = None) → ValuesView¶ Return table values view: iterate over values in this table.
- Return type
ValuesView
[+VT_co]
-
-
using_window
(window: faust.types.windows.WindowT, *, key_index: bool = False) → faust.types.tables.WindowWrapperT[source]¶ Wrap table using a specific window type.
- Return type
-
hopping
(size: Union[datetime.timedelta, float, str], step: Union[datetime.timedelta, float, str], expires: Union[datetime.timedelta, float, str] = None, key_index: bool = False) → faust.types.tables.WindowWrapperT[source]¶ Wrap table in a hopping window.
- Return type
-
tumbling
(size: Union[datetime.timedelta, float, str], expires: Union[datetime.timedelta, float, str] = None, key_index: bool = False) → faust.types.tables.WindowWrapperT[source]¶ Wrap table in a tumbling window.
- Return type
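Example (a sketch; the app and table names are illustrative assumptions) combining these wrappers:
import faust

app = faust.App('example', broker='kafka://localhost:9092')

# Tumbling: fixed, non-overlapping 60-second windows;
# entries expire one hour after their window closes.
hourly = app.Table('hourly-counts', default=int).tumbling(
    60.0, expires=3600.0, key_index=True,
)

# Equivalent, via using_window() with an explicit window type.
other = app.Table('other-counts', default=int).using_window(
    faust.TumblingWindow(60.0, expires=3600.0), key_index=True,
)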
-
on_key_get
(key: KT) → None[source]¶ Call when the value for a key in this table is retrieved.
- Return type
None
-
on_key_set
(key: KT, value: VT) → None[source]¶ Call when the value for a key in this table is set.
- Return type
None
-
as_ansitable
(title: str = '{table.name}', **kwargs: Any) → str[source]¶ Draw table as a terminal ANSI table.
- Return type
-
logger
= <Logger faust.tables.table (WARNING)>¶
-
class
faust.
Topic
(app: faust.types.app.AppT, *, topics: Sequence[str] = None, pattern: Union[str, Pattern[~AnyStr]] = None, schema: faust.types.serializers.SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, is_iterator: bool = False, partitions: int = None, retention: Union[datetime.timedelta, float, str] = None, compacting: bool = None, deleting: bool = None, replicas: int = None, acks: bool = True, internal: bool = False, config: Mapping[str, Any] = None, queue: mode.utils.queues.ThrowableQueue = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, maxsize: int = None, root: faust.types.channels.ChannelT = None, active_partitions: Set[faust.types.tuples.TP] = None, allow_empty: bool = None, has_prefix: bool = False, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ Define new topic description.
- Parameters
app (
AppT
[]) – App instance used to create this topic description.partitions (
Optional
[int
]) – Number of partitions for these topics. On declaration, topics are created using this. Note: If a message is produced before the topic is declared, andautoCreateTopics
is enabled on the Kafka Server, the number of partitions used will be specified by the server configuration.retention (
Union
[timedelta
,float
,str
,None
]) – Number of seconds (as float/timedelta
) to keep messages in the topic before they can be expired by the server.pattern (
Union
[str
,Pattern
[AnyStr
],None
]) – Regular expression evaluated to decide what topics to subscribe to. You cannot specify both topics and a pattern.schema (
Optional
[SchemaT
[~KT, ~VT]]) – Schema used for serialization/deserialization.key_type (
Union
[Type
[ModelT
],Type
[bytes
],Type
[str
],None
]) – How to deserialize keys for messages in this topic. Can be afaust.Model
type,str
,bytes
, orNone
for “autodetect” (Overrides schema if one is defined).value_type (
Union
[Type
[ModelT
],Type
[bytes
],Type
[str
],None
]) – How to deserialize values for messages in this topic. Can be afaust.Model
type,str
,bytes
, orNone
for “autodetect” (Overrides schema if one is defined).active_partitions (
Optional
[Set
[TP
]]) – Set offaust.types.tuples.TP
that this topic should be restricted to.
- Raises
TypeError – if both topics and pattern are provided.
-
async
send
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False) → Awaitable[faust.types.tuples.RecordMetadata][source]¶ Send message to topic.
- Return type
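Example (a minimal producing sketch; the app, topic and model names are illustrative assumptions):
import faust

class Greeting(faust.Record):
    name: str

app = faust.App('example', broker='kafka://localhost:9092')
greetings = app.topic('greetings', value_type=Greeting)

@app.timer(interval=5.0)
async def produce():
    # send() serializes the model and returns once the message is buffered.
    await greetings.send(key=b'hello', value=Greeting(name='world'))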
-
send_soon
(*, key: Union[bytes, faust.types.core._ModelT, Any, None] = None, value: Union[bytes, faust.types.core._ModelT, Any] = None, partition: int = None, timestamp: float = None, headers: Union[List[Tuple[str, bytes]], Mapping[str, bytes], None] = None, schema: faust.types.serializers.SchemaT = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, callback: Callable[faust.types.tuples.FutureMessage, Union[None, Awaitable[None]]] = None, force: bool = False, eager_partitioning: bool = False) → faust.types.tuples.FutureMessage[source]¶ Produce message by adding to buffer.
Notes
This method can be used by non-async def functions to produce messages.
- Return type
-
async
put
(event: faust.types.events.EventT) → None[source]¶ Put event directly onto the underlying queue of this topic.
This will only affect subscribers to a particular instance, in a particular process.
- Return type
None
-
property
partitions
¶ Return the number of configured partitions for this topic.
Notes
This is only active for internal topics, fully owned and managed by Faust itself.
We never touch the configuration of a topic that exists in Kafka, and Kafka will sometimes automatically create topics when they don’t exist. In this case the number of partitions for the automatically created topic will depend on the Kafka server configuration (
num.partitions
).Always make sure your topics have the correct number of partitions.
- Return type
Optional[int]
-
derive
(**kwargs: Any) → faust.types.channels.ChannelT[source]¶ Create topic derived from the configuration of this topic.
Configuration will be copied from this topic, but any parameter overridden as a keyword argument.
See also
derive_topic()
: for a list of supported keyword arguments.- Return type
ChannelT
[~_T]
-
derive_topic
(*, topics: Sequence[str] = None, schema: faust.types.serializers.SchemaT = None, key_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, value_type: Union[Type[faust.types.models.ModelT], Type[bytes], Type[str]] = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, partitions: int = None, retention: Union[datetime.timedelta, float, str] = None, compacting: bool = None, deleting: bool = None, internal: bool = None, config: Mapping[str, Any] = None, prefix: str = '', suffix: str = '', **kwargs: Any) → faust.types.topics.TopicT[source]¶ Create new topic with configuration derived from this topic.
- Return type
TopicT
[]
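For example (a sketch; the app and topic names are illustrative assumptions), deriving a compacted variant of an existing topic:
import faust

app = faust.App('example', broker='kafka://localhost:9092')
withdrawals = app.topic('withdrawals', value_type=bytes)

# Same configuration as `withdrawals`, but renamed and compacted.
withdrawals_compact = withdrawals.derive_topic(
    suffix='-compact',
    compacting=True,
)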
-
get_topic_name
() → str[source]¶ Return the main topic name of this topic description.
As topic descriptions can have multiple topic names, this will only return a value when the description has a single topic name.
- Raises
TypeError – if configured with a regular expression pattern.
ValueError – if configured with multiple topic names.
TypeError – if not configured with any names or patterns.
- Return type
-
class
faust.
TopicT
(app: faust.types.topics._AppT, *, topics: Sequence[str] = None, pattern: Union[str, Pattern[~AnyStr]] = None, schema: faust.types.topics._SchemaT = None, key_type: faust.types.topics._ModelArg = None, value_type: faust.types.topics._ModelArg = None, is_iterator: bool = False, partitions: int = None, retention: Union[datetime.timedelta, float, str] = None, compacting: bool = None, deleting: bool = None, replicas: int = None, acks: bool = True, internal: bool = False, config: Mapping[str, Any] = None, queue: mode.utils.queues.ThrowableQueue = None, key_serializer: Union[faust.types.codecs.CodecT, str, None] = None, value_serializer: Union[faust.types.codecs.CodecT, str, None] = None, maxsize: int = None, root: faust.types.channels.ChannelT = None, active_partitions: Set[faust.types.tuples.TP] = None, allow_empty: bool = False, has_prefix: bool = False, loop: asyncio.events.AbstractEventLoop = None) → None[source]¶ -
topics
= None¶ Iterable/Sequence of topic names to subscribe to.
-
retention
= None¶ Topic retention setting: expiry time in seconds for messages in the topic.
-
compacting
= None¶ Flag that when enabled means the topic can be “compacted”: if the topic is a log of key/value pairs, the broker can delete old values for the same key.
-
replicas
= None¶ Number of replicas for topic.
-
config
= None¶ Additional configuration as a mapping.
-
acks
= None¶ Enable acks for this topic.
-
internal
= None¶ Mark topic as internal: it’s owned by us and we are allowed to create or delete the topic as necessary.
-
has_prefix
= False¶
-
abstract
derive_topic
(*, topics: Sequence[str] = None, schema: faust.types.topics._SchemaT = None, key_type: faust.types.topics._ModelArg = None, value_type: faust.types.topics._ModelArg = None, partitions: int = None, retention: Union[datetime.timedelta, float, str] = None, compacting: bool = None, deleting: bool = None, internal: bool = False, config: Mapping[str, Any] = None, prefix: str = '', suffix: str = '', **kwargs: Any) → faust.types.topics.TopicT[source]¶ - Return type
TopicT
[]
-
-
class
faust.
Settings
(*args: Any, **kwargs: Any) → None[source]¶ -
NODE_HOSTNAME
= 'Marcoss-MacBook-Pro.local'¶
-
DEFAULT_BROKER_URL
= 'kafka://localhost:9092'¶
-
env
= None¶ Environment. Defaults to
os.environ
.
-
relative_to_appdir
(path: pathlib.Path) → pathlib.Path[source]¶ Prepare app directory path.
If path is absolute the path is returned as-is, but if path is relative it will be assumed to belong under the app directory.
- Return type
-
data_directory_for_version
(version: int) → pathlib.Path[source]¶ Return the directory path for data belonging to specific version.
- Return type
-
property
MY_SETTING
¶ My custom setting.
To contribute new settings you only have to define a new setting decorated attribute here.
Look at the other settings for examples.
Remember that once you’ve added the setting you must also render the configuration reference:
$ make configref
-
property
autodiscover
¶ Automatic discovery of agents, tasks, timers, views and commands.
Faust has an API to add different
asyncio
services and other user extensions, such as “Agents”, HTTP web views, command-line commands, and timers to your Faust workers. These can be defined in any module, so to discover them at startup, the worker needs to traverse packages looking for them.Warning
The autodiscovery functionality uses the Venusian library to scan wanted packages for
@app.agent
,@app.page
,@app.command
,@app.task
and@app.timer
decorators, but to do so, it’s required to traverse the package path and import every module in it.Importing random modules like this can be dangerous so make sure you follow Python programming best practices. Do not start threads; perform network I/O; do test monkey-patching for mocks or similar, as a side effect of importing a module. If you encounter a case such as this then please find a way to perform your action in a lazy manner.
Warning
If the above warning is something you cannot fix, or if it’s out of your control, then please set
autodiscover=False
and make sure the worker imports all modules where your decorators are defined.The value for this argument can be:
bool
If
App(autodiscover=True)
is set, the autodiscovery will scan the package name described in theorigin
attribute.The
origin
attribute is automatically set when you start a worker using the faust command line program, for example:faust -A example.simple worker
The
-A
option specifies the app, but you can also create a shortcut entry point by calling
app.main()
:
if __name__ == '__main__':
    app.main()
Then you can start the faust program by executing for example
python myscript.py worker --loglevel=INFO
, and it will use the correct application.Sequence[str]
The argument can also be a list of packages to scan:
app = App(..., autodiscover=['proj_orders', 'proj_accounts'])
Callable[[], Sequence[str]]
The argument can also be a function returning a list of packages to scan:
def get_all_packages_to_scan():
    return ['proj_orders', 'proj_accounts']

app = App(..., autodiscover=get_all_packages_to_scan)
False
If everything you need is in a self-contained module, or you import the stuff you need manually, just set
autodiscover
to False and don’t worry about it :-)
Django
When using Django and the
DJANGO_SETTINGS_MODULE
environment variable is set, the Faust app will scan all packages found in theINSTALLED_APPS
setting.If you’re using Django you can use this to scan for agents/pages/commands in all packages defined in
INSTALLED_APPS
.Faust will automatically detect that you’re using Django and do the right thing if you do:
app = App(..., autodiscover=True)
It will find agents and other decorators in all of the reusable Django applications. If you want to manually control what packages are traversed, then provide a list:
app = App(..., autodiscover=['package1', 'package2'])
or if you want no packages to be traversed, then provide False:
app = App(..., autodiscover=False)
which is the default, so you can simply omit the argument.
Tip
For manual control over autodiscovery, you can also call the
app.discover()
method manually.
-
property
datadir
¶ Application data directory.
The directory in which this instance stores the data used by local tables, etc.
See also
The data directory can also be set using the
faust --datadir
option, from the command-line, so there is usually no reason to provide a default value when creating the app.
-
property
tabledir
¶ Application table data directory.
The directory in which this instance stores local table data. Usually you will want to configure the
datadir
setting, but if you want to store tables separately you can configure this one.If the path provided is relative (it has no leading slash), then the path will be considered to be relative to the
datadir
setting.
-
property
debug
¶ Use in development to expose sensor information endpoint.
Tip
If you want to enable the sensor statistics endpoint in production, without enabling the
debug
setting, you can do so by adding the following code:app.web.blueprints.add( '/stats/', 'faust.web.apps.stats:blueprint')
-
property
env_prefix
¶ Environment variable prefix.
When configuring Faust via environment variables, this adds a common prefix to all Faust environment value names.
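A sketch of typical usage (the prefix and the exact environment variable names shown are assumptions based on the standard Faust setting names):
import faust

# With env_prefix='FOO_', Faust reads e.g. FOO_BROKER_URL
# instead of BROKER_URL when configuring the broker setting.
app = faust.App('example', env_prefix='FOO_')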
-
property
id_format
¶ Application ID format template.
The format string used to generate the final
id
value by combining it with theversion
parameter.
-
property
origin
¶ The reverse path used to find the app.
For example if the app is located in:
from myproj.app import app
Then the
origin
should be"myproj.app"
.The faust worker program will try to automatically set the origin, but if you are having problems with auto generated names then you can set origin manually.
-
property
timezone
¶ Project timezone.
The timezone used for date-related functionality such as cronjobs.
-
property
version
¶ App version.
Version of the app that, when changed, will create a new isolated instance of the application. The first version is 1, the second version is 2, and so on.
Source topics will not be affected by a version change.
Faust applications will use two kinds of topics: source topics, and internally managed topics. The source topics are declared by the producer, and we do not have the opportunity to modify any configuration settings, like number of partitions for a source topic; we may only consume from them. To mark a topic as internal, use:
app.topic(..., internal=True)
.
-
property
agent_supervisor
¶ Default agent supervisor type.
An agent may start multiple instances (actors) when the concurrency setting is higher than one (e.g.
@app.agent(concurrency=2)
).Multiple instances of the same agent are considered to be in the same supervisor group.
The default supervisor is the
mode.OneForOneSupervisor
: if an instance in the group crashes, we restart that instance only.These are the supervisors supported:
-
mode.OneForOneSupervisor
If an instance in the group crashes, we restart only that instance.
-
mode.OneForAllSupervisor
If an instance in the group crashes, we restart the whole group.
-
mode.CrashingSupervisor
If an instance in the group crashes, we stop the whole application and exit so that the Operating System supervisor can restart us.
-
mode.ForfeitOneForOneSupervisor
If an instance in the group crashes, we give up on that instance and never restart it again (until the program is restarted).
-
mode.ForfeitOneForAllSupervisor
If an instance in the group crashes, we stop all instances in the group and never restart them again (until the program is restarted).
-
-
property
blocking_timeout
¶ Blocking timeout (in seconds).
When specified the worker will start a periodic signal based timer that only triggers when the loop has been blocked for a time exceeding this timeout.
This is the safest way to detect blocking, but it could have adverse effects on libraries that do not automatically retry interrupted system calls.
Python itself does retry all interrupted system calls since version 3.5 (see PEP 475), but this might not be the case with C extensions added to the worker by the user.
The blocking detector is a background thread that periodically wakes up to either arm a timer, or cancel an already armed timer. In pseudocode:
while True:
    # cancel previous alarm and arm new alarm
    signal.signal(signal.SIGALRM, on_alarm)
    signal.setitimer(signal.ITIMER_REAL, blocking_timeout)
    # sleep to wake up just before the timeout
    await asyncio.sleep(blocking_timeout * 0.96)

def on_alarm(signum, frame):
    logger.warning('Blocking detected: ...')
If the sleep does not wake up in time the alarm signal will be sent to the process and a traceback will be logged.
-
property
broker
¶ Broker URL, or a list of alternative broker URLs.
Faust needs the URL of a “transport” to send and receive messages.
Currently, the only supported production transport is
kafka://
. This uses the aiokafka client under the hood, for consuming and producing messages.You can specify multiple hosts at the same time by separating them using the semi-comma:
kafka://kafka1.example.com:9092;kafka2.example.com:9092
Which in actual code looks like this:
BROKERS = 'kafka://kafka1.example.com:9092;kafka2.example.com:9092'

app = faust.App(
    'id',
    broker=BROKERS,
)
You can also pass a list of URLs:
app = faust.App(
    'id',
    broker=[
        'kafka://kafka1.example.com:9092',
        'kafka://kafka2.example.com:9092',
    ],
)
See also
You can configure the transport used for consuming and producing separately, by setting the
broker_consumer
andbroker_producer
settings.This setting is used as the default.
Available Transports
kafka://
Alias to
aiokafka://
aiokafka://
The recommended transport using the aiokafka client.
Limitations: None
confluent://
Experimental transport using the confluent-kafka client.
- Limitations: Does not do sticky partition assignment (not suitable for tables), and does not create any necessary internal topics (you have to create them manually).
-
property
broker_consumer
¶ Consumer broker URL.
You can use this setting to configure the transport used for producing and consuming separately.
If not set the value found in
broker
will be used.
-
property
broker_producer
¶ Producer broker URL.
You can use this setting to configure the transport used for producing and consuming separately.
If not set the value found in
broker
will be used.
-
property
broker_api_version
¶ Broker API version.
This setting is also the default for
consumer_api_version
, andproducer_api_version
.Negotiate producer protocol version.
The default value “auto” means to use the latest version supported by both client and server.
Any other version set means you are requesting a specific version of the protocol.
Example Kafka uses:
Disable sending headers for all messages produced
Kafka headers support was added in Kafka 0.11, so you can specify
broker_api_version="0.10"
to remove the headers from messages.
-
property
broker_check_crcs
¶ Broker CRC check.
Automatically check the CRC32 of the records consumed.
-
property
broker_client_id
¶ Broker client ID.
There is rarely any reason to configure this setting.
The client id is used to identify the software used, and is not usually configured by the user.
-
property
broker_commit_every
¶ Broker commit message frequency.
Commit offset every n messages.
See also
broker_commit_interval
, which is how frequently we commit on a timer when there are few messages being received.
-
property
broker_commit_interval
¶ Broker commit time frequency.
How often we commit messages that have been fully processed (acked).
-
property
broker_commit_livelock_soft_timeout
¶ Commit livelock timeout.
How long it takes before we warn that the Kafka commit offset has not advanced (only when processing messages).
-
property
broker_credentials
¶ Broker authentication mechanism.
Specify the authentication mechanism to use when connecting to the broker.
The default is to not use any authentication.
- SASL Authentication
You can enable SASL authentication via plain text:
app = faust.App(
    broker_credentials=faust.SASLCredentials(
        username='x',
        password='y',
    ))
Warning
Do not use literal strings when specifying passwords in production, as they can remain visible in stack traces.
Instead the best practice is to get the password from a configuration file, or from the environment:
BROKER_USERNAME = os.environ.get('BROKER_USERNAME')
BROKER_PASSWORD = os.environ.get('BROKER_PASSWORD')

app = faust.App(
    broker_credentials=faust.SASLCredentials(
        username=BROKER_USERNAME,
        password=BROKER_PASSWORD,
    ))
- GSSAPI Authentication
GSSAPI authentication over plain text:
app = faust.App(
    broker_credentials=faust.GSSAPICredentials(
        kerberos_service_name='faust',
        kerberos_domain_name='example.com',
    ),
)
GSSAPI authentication over SSL:
import ssl

ssl_context = ssl.create_default_context(
    purpose=ssl.Purpose.SERVER_AUTH, cafile='ca.pem')
ssl_context.load_cert_chain(
    'client.cert', keyfile='client.key')

app = faust.App(
    broker_credentials=faust.GSSAPICredentials(
        kerberos_service_name='faust',
        kerberos_domain_name='example.com',
        ssl_context=ssl_context,
    ),
)
- SSL Authentication
Provide an SSL context for the Kafka broker connections.
This allows Faust to use a secure SSL/TLS connection for the Kafka connections and enables certificate-based authentication.
import ssl

ssl_context = ssl.create_default_context(
    purpose=ssl.Purpose.SERVER_AUTH, cafile='ca.pem')
ssl_context.load_cert_chain(
    'client.cert', keyfile='client.key')

app = faust.App(..., broker_credentials=ssl_context)
-
property
broker_heartbeat_interval
¶ Broker heartbeat interval.
How often we send heartbeats to the broker, and also how often we expect to receive heartbeats from the broker.
If any of these time out, you should increase this setting.
-
property
broker_max_poll_interval
¶ Broker max poll interval.
The maximum allowed time (in seconds) between calls to consume messages. If this interval is exceeded the consumer is considered failed and the group will rebalance in order to reassign the partitions to another consumer group member. If API methods block waiting for messages, that time does not count against this timeout.
See KIP-62 for technical details.
-
property
broker_max_poll_records
¶ Broker max poll records.
The maximum number of records returned in a single call to
poll()
. If you find that your application needs more time to process messages you may want to adjustbroker_max_poll_records
to tune the number of records that must be handled on every loop iteration.
-
property
broker_rebalance_timeout
¶ Broker rebalance timeout.
How long to wait for a node to finish rebalancing before the broker will consider it dysfunctional and remove it from the cluster.
Increase this if you experience the cluster being in a state of constantly rebalancing, but make sure you also increase the
broker_heartbeat_interval
at the same time.Note
The session timeout must not be greater than the
broker_request_timeout
.
-
property
broker_request_timeout
¶ Kafka client request timeout.
Note
The request timeout must not be less than the
broker_session_timeout
.
-
property
broker_session_timeout
¶ Broker session timeout.
How long to wait for a node to finish rebalancing before the broker will consider it dysfunctional and remove it from the cluster.
Increase this if you experience the cluster being in a state of constantly rebalancing, but make sure you also increase the
broker_heartbeat_interval
at the same time.Note
The session timeout must not be greater than the
broker_request_timeout
.
-
property
ssl_context
¶ SSL configuration.
See
credentials
.
-
property
consumer_api_version
¶ Consumer API version.
Configures the broker API version to use for consumers. See
broker_api_version
for more information.
-
property
consumer_max_fetch_size
¶ Consumer max fetch size.
The maximum amount of data per-partition the server will return. This size must be at least as large as the maximum message size.
Note: This is PER PARTITION, so a limit of 1 MB when your workers consume from 10 topics having 100 partitions each means a fetch request can be up to a gigabyte (10 * 100 * 1 MB). A limit this generous may cause rebalancing issues: for example, if the amount of time required to flush pending data stuck in socket buffers exceeds the rebalancing timeout.
You must keep this limit low enough to account for many partitions being assigned to a single node.
-
property
consumer_auto_offset_reset
¶ Consumer auto offset reset.
Where the consumer should start reading messages from when there is no initial offset, or the stored offset no longer exists, e.g. when starting a new consumer for the first time.
Options include ‘earliest’, ‘latest’, ‘none’.
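For example (a sketch), starting new consumers from the beginning of the topic:
app = faust.App(
    'example',
    broker='kafka://localhost:9092',
    consumer_auto_offset_reset='earliest',
)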
-
property
key_serializer
¶ Default key serializer.
Serializer used for keys by default when no serializer is specified, or a model is not being used.
This can be the name of a serializer/codec, or an actual
faust.serializers.codecs.Codec
instance.See also
The Codecs section in the model guide – for more information about codecs.
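For example (a sketch), using raw bytes for keys and JSON for values by default:
app = faust.App(
    'example',
    key_serializer='raw',
    value_serializer='json',
)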
-
property
value_serializer
¶ Default value serializer.
Serializer used for values by default when no serializer is specified, or a model is not being used.
This can be a string (the name of a serializer/codec), or an actual
faust.serializers.codecs.Codec
instance.See also
The Codecs section in the model guide – for more information about codecs.
-
property
logging_config
¶ Logging dictionary configuration.
Optional dictionary for logging configuration, as supported by
logging.config.dictConfig()
.
-
property
loghandlers
¶ List of custom logging handlers.
Specify a list of custom log handlers to use in worker instances.
-
property
producer_acks
¶ Producer Acks.
The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common:
0
: Producer will not wait for any acknowledgment from the server at all. The message will immediately be considered sent (not recommended).
1
: The broker leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case, should the leader fail immediately after acknowledging the record but before the followers have replicated it, then the record will be lost.
-1
: The broker leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee.
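For example (a sketch), requesting the strongest durability guarantee:
# Wait for all in-sync replicas to acknowledge each record.
app = faust.App('example', producer_acks=-1)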
-
property
producer_api_version
¶ Producer API version.
Configures the broker API version to use for producers. See
broker_api_version
for more information.
-
property
producer_compression_type
¶ Producer compression type.
The compression type for all data generated by the producer. Valid values are gzip, snappy, lz4, or
None
.
-
property
producer_linger
¶ Producer batch linger configuration.
Minimum time to batch before sending out messages from the producer.
Should rarely have to change this.
-
property
producer_max_batch_size
¶ Producer max batch size.
Max size of each producer batch, in bytes.
-
property
producer_max_request_size
¶ Producer maximum request size.
Maximum size of a request in bytes in the producer.
Should rarely have to change this.
-
property
producer_partitioner
¶ Producer partitioning strategy.
The Kafka producer can be configured with a custom partitioner to change how keys are partitioned when producing to topics.
The default partitioner for Kafka is implemented as follows, and can be used as a template for your own partitioner:
import random
from typing import List
from kafka.partitioner.hashed import murmur2

def partition(key: bytes,
              all_partitions: List[int],
              available: List[int]) -> int:
    '''Default partitioner.

    Hashes key to partition using murmur2 hashing (from the Java client).
    If key is None, selects a partition randomly from available,
    or from all partitions if none are currently available.

    Arguments:
        key: partitioning key
        all_partitions: list of all partitions sorted by partition ID.
        available: list of available partitions in no particular order

    Returns:
        int: one of the values from ``all_partitions`` or ``available``.
    '''
    if key is None:
        source = available if available else all_partitions
        return random.choice(source)
    index: int = murmur2(key)
    index &= 0x7fffffff
    index %= len(all_partitions)
    return all_partitions[index]
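Assuming a partition function like the one above, it is then passed via this setting (a sketch):
app = faust.App('example', producer_partitioner=partition)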
-
property
producer_request_timeout
¶ Producer request timeout.
Timeout for producer operations. This is set high by default, as this is also the time when producer batches expire and will no longer be retried.
-
property
reply_create_topic
¶ Automatically create reply topics.
Set this to
True
if you plan on using the RPC with agents.This will create the internal topic used for RPC replies on that instance at startup.
-
property
reply_expires
¶ RPC reply expiry time in seconds.
The expiry time (in seconds
float
, ortimedelta
), for how long replies will stay in the instances local reply topic before being removed.
-
property
reply_to
¶ Reply to address.
The name of the reply topic used by this instance. If not set one will be automatically generated when the app is created.
-
property
reply_to_prefix
¶ Reply address topic name prefix.
The prefix used when generating reply topic names.
-
property
processing_guarantee
¶ The processing guarantee that should be used.
Possible values are “at_least_once” (default) and “exactly_once”.
Note that if exactly-once processing is enabled consumers are configured with
isolation.level="read_committed"
and producers are configured withretries=Integer.MAX_VALUE
andenable.idempotence=true
per default.Note that by default exactly-once processing requires a cluster of at least three brokers what is the recommended setting for production. For development you can change this, by adjusting broker setting
transaction.state.log.replication.factor
to the number of brokers you want to use.
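For example (a sketch), enabling exactly-once processing:
app = faust.App(
    'example',
    broker='kafka://localhost:9092',
    processing_guarantee='exactly_once',
)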
-
property
stream_buffer_maxsize
¶ Stream buffer maximum size.
This setting controls back pressure to streams and agents reading from streams.
If set to 4096 (default) this means that an agent can only keep at most 4096 unprocessed items in the stream buffer.
Essentially this will limit the number of messages a stream can “prefetch”.
Higher numbers give better throughput, but note the added risk when your agent sends messages or updates tables (which sends changelog messages).
This means that if the buffer size is large, the
broker_commit_interval
orbroker_commit_every
settings must be set to commit frequently, avoiding back pressure from building up.A buffer size of 131_072 may let you process over 30,000 events a second as a baseline, but be careful with a buffer size that large when you also send messages or update tables.
-
property
stream_processing_timeout
¶ Stream processing timeout.
Timeout (in seconds) for processing events in the stream. If processing of a single event exceeds this time we log an error, but do not stop processing.
If you are seeing a warning like this you should either
- increase this timeout to allow agents to spend more time on a single event, or
- add a timeout to the operation in the agent, so stream processing always completes before the timeout.
The latter is preferred for network operations such as web requests. If a network service you depend on is temporarily offline you should consider doing retries (send to separate topic):
main_topic = app.topic('main')
deadletter_topic = app.topic('main_deadletter')

async def send_request(value, timeout: float = None) -> None:
    await app.http_client.get('http://foo.com', timeout=timeout)

@app.agent(main_topic)
async def main(stream):
    async for value in stream:
        try:
            await send_request(value, timeout=5)
        except asyncio.TimeoutError:
            await deadletter_topic.send(value=value)

@app.agent(deadletter_topic)
async def main_deadletter(stream):
    async for value in stream:
        # wait for 30 seconds before retrying.
        await stream.sleep(30)
        await send_request(value)
-
property
stream_publish_on_commit
¶ Stream delay producing until commit time.
If enabled we buffer up sending messages until the source topic offset related to that processing is committed. This means when we do commit, we may have buffered up a LOT of messages so commit needs to happen frequently (make sure to decrease
broker_commit_every
).
-
property
stream_recovery_delay
¶ Stream recovery delay.
Number of seconds to sleep before continuing after a rebalance. We wait for a bit to allow for more nodes to join/leave before starting to recover tables and then processing streams. This is to minimize the chance of errors from rebalancing loops.
-
property
stream_wait_empty
¶ Stream wait empty.
This setting controls whether the worker should wait for the currently processing task in an agent to complete before rebalancing or shutting down.
On rebalance/shut down we clear the stream buffers. Those events will be reprocessed after the rebalance anyway, but we may have already started processing one event in every agent, and if we rebalance we will process that event again.
By default we will wait for the currently active tasks, but if your streams are idempotent you can disable it using this setting.
-
property
store
¶ Table storage backend URL.
The backend used for table storage.
Tables are stored in-memory by default, but you should not use the
memory://
store in production.In production, a persistent table store, such as
rocksdb://
is preferred.
-
property
table_cleanup_interval
¶ Table cleanup interval.
How often we cleanup tables to remove expired entries.
-
property
table_key_index_size
¶ Table key index size.
Tables keep a cache of key to partition number to speed up table lookups.
This setting configures the maximum size of that cache.
-
property
table_standby_replicas
¶ Table standby replicas.
The number of standby replicas for each table.
-
property
topic_allow_declare
¶ Allow creating new topics.
This setting disables the creation of internal topics.
Faust will only create topics that it considers to be fully owned and managed, such as intermediate repartition topics, table changelog topics etc.
Some Kafka managers do not allow services to create topics; in that case you should set this to
False
.
-
property
topic_disable_leader
¶ Disable leader election topic.
This setting disables the creation of the leader election topic.
If you’re not using the
on_leader=True
argument to the task/timer/etc. decorators, then you can use this setting to disable creation of the topic.
-
property
topic_partitions
¶ Topic partitions.
Default number of partitions for new topics.
Note
This defines the maximum number of workers we could distribute the workload of the application to (also sometimes referred to as the sharding factor of the application).
-
property
topic_replication_factor
¶ Topic replication factor.
The default replication factor for topics created by the application.
Note
Generally this should be the same as the configured replication factor for your Kafka cluster.
-
property
cache
¶ Cache backend URL.
Optional backend used for Memcached-style caching. URL can be:
redis://host
rediscluster://host
, ormemory://
.
-
property
web
¶ Web server driver to use.
-
property
web_bind
¶ Web network interface binding mask.
The IP network address mask that decides what interfaces the web server will bind to.
By default this will bind to all interfaces.
This option is usually set by
faust worker --web-bind
, not by passing it as a keyword argument toapp
.
-
property
web_cors_options
¶ Cross Origin Resource Sharing options.
Enable Cross-Origin Resource Sharing options for all web views in the internal web server.
This should be specified as a dictionary of URLs to
ResourceOptions
:
app = App(..., web_cors_options={
    'http://foo.example.com': ResourceOptions(
        allow_credentials=True,
        allow_methods='*',
    )
})
Individual views may override the CORS options used as arguments to
@app.page
andblueprint.route
.
-
property
web_enabled
¶ Enable/disable internal web server.
Enable web server and other web components.
This option can also be set using
faust worker --without-web
.
-
property
web_host
¶ Web server host name.
Hostname used to access this web server, used for generating the
canonical_url
setting.This option is usually set by
faust worker --web-host
, not by passing it as a keyword argument toapp
.
-
property
web_in_thread
¶ Run the web server in a separate thread.
Use this if you have a large value for
stream_buffer_maxsize
and want the web server to be responsive when the worker is otherwise busy processing streams.Note
Running the web server in a separate thread means web views and agents will not share the same event loop.
-
property
web_port
¶ Web server port.
A port number between 1024 and 65535 to use for the web server.
This option is usually set by
faust worker --web-port
, not by passing it as a keyword argument toapp
.
-
property
web_transport
¶ Network transport used for the web server.
Default is to use TCP, but this setting also enables you to use Unix domain sockets. To use domain sockets, specify a URL including the path to the file you want to create, like this:
unix:///tmp/server.sock
This will create a new domain socket available in
/tmp/server.sock
.
-
property
canonical_url
¶ Node specific canonical URL.
You shouldn’t have to set this manually.
The canonical URL defines how to reach the web server on a running worker node, and is usually set by combining the
web_host
andweb_port
settings.
-
property
worker_redirect_stdouts
¶ Redirecting standard outputs.
Enable to have the worker redirect output sent to
sys.stdout
and
sys.stderr
to the Python logging system.
Enabled by default.
-
property
worker_redirect_stdouts_level
¶ Level used when redirecting standard outputs.
The logging level to use when redirecting STDOUT/STDERR to logging.
-
property
Agent
¶ Agent class type.
The
Agent
class to use for agents, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyAgent(faust.Agent):
    ...

app = App(..., Agent=MyAgent)
Example using the string path to a class:
app = App(..., Agent='myproj.agents.Agent')
-
property
ConsumerScheduler
¶ Consumer scheduler class.
A strategy which dictates the priority of topics and partitions for incoming records. The default strategy does first round-robin over topics and then round-robin over partitions.
Example using a class:
class MySchedulingStrategy(DefaultSchedulingStrategy):
    ...

app = App(..., ConsumerScheduler=MySchedulingStrategy)
Example using the string path to a class:
app = App(..., ConsumerScheduler='myproj.MySchedulingStrategy')
-
property
Event
¶ Event class type.
The
Event
class to use for creating new event objects, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseEvent(faust.Event):
    ...

app = App(..., Event=MyBaseEvent)
Example using the string path to a class:
app = App(..., Event='myproj.events.Event')
-
property
Schema
¶ Schema class type.
The
Schema
class to use as the default schema type when no schema is specified, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseSchema(faust.Schema):
    ...

app = App(..., Schema=MyBaseSchema)
Example using the string path to a class:
app = App(..., Schema='myproj.schemas.Schema')
-
property
Stream
¶ Stream class type.
The
Stream
class to use for streams, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseStream(faust.Stream):
    ...

app = App(..., Stream=MyBaseStream)
Example using the string path to a class:
app = App(..., Stream='myproj.streams.Stream')
-
property
Table
¶ Table class type.
The
Table
class to use for tables, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseTable(faust.Table):
    ...

app = App(..., Table=MyBaseTable)
Example using the string path to a class:
app = App(..., Table='myproj.tables.Table')
-
property
SetTable
¶ SetTable extension table.
The
SetTable
class to use for table-of-set tables, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MySetTable(faust.SetTable):
    ...

app = App(..., SetTable=MySetTable)
Example using the string path to a class:
app = App(..., SetTable='myproj.tables.MySetTable')
-
property
GlobalTable
¶ GlobalTable class type.
The
GlobalTable
class to use for tables, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseGlobalTable(faust.GlobalTable):
    ...

app = App(..., GlobalTable=MyBaseGlobalTable)
Example using the string path to a class:
app = App(..., GlobalTable='myproj.tables.GlobalTable')
-
property
SetGlobalTable
¶ SetGlobalTable class type.
The
SetGlobalTable
class to use for tables, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
class MyBaseSetGlobalTable(faust.SetGlobalTable):
    ...

app = App(..., SetGlobalTable=MyBaseSetGlobalTable)
Example using the string path to a class:
app = App(..., SetGlobalTable='myproj.tables.SetGlobalTable')
-
property
TableManager
¶ Table manager class type.
The
TableManager
used for managing tables, or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
from faust.tables import TableManager

class MyTableManager(TableManager):
    ...

app = App(..., TableManager=MyTableManager)
Example using the string path to a class:
app = App(..., TableManager='myproj.tables.TableManager')
-
property
Serializers
¶ Serializer registry class type.
The
Registry
class used for serializing/deserializing messages; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
from faust.serializers import Registry

class MyRegistry(Registry):
    ...

app = App(..., Serializers=MyRegistry)
Example using the string path to a class:
app = App(..., Serializers='myproj.serializers.Registry')
-
property
Worker
¶ Worker class type.
The
Worker
class used for starting a worker for this app; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
import faust

class MyWorker(faust.Worker):
    ...

app = faust.App(..., Worker=MyWorker)
Example using the string path to a class:
app = faust.App(..., Worker='myproj.workers.Worker')
-
property
PartitionAssignor
¶ Partition assignor class type.
The
PartitionAssignor
class used for assigning topic partitions to worker instances; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
from faust.assignor import PartitionAssignor

class MyPartitionAssignor(PartitionAssignor):
    ...

app = App(..., PartitionAssignor=MyPartitionAssignor)
Example using the string path to a class:
app = App(..., PartitionAssignor='myproj.assignor.PartitionAssignor')
-
property
LeaderAssignor
¶ Leader assignor class type.
The
LeaderAssignor
class used for assigning a master Faust instance for the app; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
from faust.assignor import LeaderAssignor

class MyLeaderAssignor(LeaderAssignor):
    ...

app = App(..., LeaderAssignor=MyLeaderAssignor)
Example using the string path to a class:
app = App(..., LeaderAssignor='myproj.assignor.LeaderAssignor')
-
property
Router
¶ Router class type.
The
Router
class used for routing requests to a worker instance having the partition for a specific key (e.g. table key); or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
from faust.router import Router

class MyRouter(Router):
    ...

app = App(..., Router=MyRouter)
Example using the string path to a class:
app = App(..., Router='myproj.routers.Router')
-
property
Topic
¶ Topic class type.
The
Topic
class used for defining new topics; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
import faust

class MyTopic(faust.Topic):
    ...

app = faust.App(..., Topic=MyTopic)
Example using the string path to a class:
app = faust.App(..., Topic='myproj.topics.Topic')
-
property
HttpClient
¶ HTTP client class type.
The
aiohttp.client.ClientSession
class used as an HTTP client; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
import faust
from aiohttp.client import ClientSession

class HttpClient(ClientSession):
    ...

app = faust.App(..., HttpClient=HttpClient)
Example using the string path to a class:
app = faust.App(..., HttpClient='myproj.http.HttpClient')
-
property
Monitor
¶ Monitor sensor class type.
The
Monitor
class as the main sensor gathering statistics for the application; or the fully-qualified path to one (supported bysymbol_by_name()
).Example using a class:
import faust
from faust.sensors import Monitor

class MyMonitor(Monitor):
    ...

app = faust.App(..., Monitor=MyMonitor)
Example using the string path to a class:
app = faust.App(..., Monitor='myproj.monitors.Monitor')
-
property
stream_ack_cancelled_tasks
¶ Deprecated setting; it has no effect.
-
property
stream_ack_exceptions
¶ Deprecated setting; it has no effect.
-
SETTINGS
= {'Agent': <faust.types.settings.params._Symbol object>, 'ConsumerScheduler': <faust.types.settings.params._Symbol object>, 'Event': <faust.types.settings.params._Symbol object>, 'GlobalTable': <faust.types.settings.params._Symbol object>, 'HttpClient': <faust.types.settings.params._Symbol object>, 'LeaderAssignor': <faust.types.settings.params._Symbol object>, 'Monitor': <faust.types.settings.params._Symbol object>, 'PartitionAssignor': <faust.types.settings.params._Symbol object>, 'Router': <faust.types.settings.params._Symbol object>, 'Schema': <faust.types.settings.params._Symbol object>, 'Serializers': <faust.types.settings.params._Symbol object>, 'SetGlobalTable': <faust.types.settings.params._Symbol object>, 'SetTable': <faust.types.settings.params._Symbol object>, 'Stream': <faust.types.settings.params._Symbol object>, 'Table': <faust.types.settings.params._Symbol object>, 'TableManager': <faust.types.settings.params._Symbol object>, 'Topic': <faust.types.settings.params._Symbol object>, 'Worker': <faust.types.settings.params._Symbol object>, 'agent_supervisor': <faust.types.settings.params._Symbol object>, 'autodiscover': <faust.types.settings.params.Param object>, 'blocking_timeout': <faust.types.settings.params.Seconds object>, 'broker': <faust.types.settings.params.BrokerList object>, 'broker_api_version': <faust.types.settings.params.Str object>, 'broker_check_crcs': <faust.types.settings.params.Bool object>, 'broker_client_id': <faust.types.settings.params.Str object>, 'broker_commit_every': <faust.types.settings.params.UnsignedInt object>, 'broker_commit_interval': <faust.types.settings.params.Seconds object>, 'broker_commit_livelock_soft_timeout': <faust.types.settings.params.Seconds object>, 'broker_consumer': <faust.types.settings.params.BrokerList object>, 'broker_credentials': <faust.types.settings.params.Credentials object>, 'broker_heartbeat_interval': <faust.types.settings.params.Seconds object>, 'broker_max_poll_interval': <faust.types.settings.params.Seconds object>, 'broker_max_poll_records': <faust.types.settings.params.UnsignedInt object>, 'broker_producer': <faust.types.settings.params.BrokerList object>, 'broker_rebalance_timeout': <faust.types.settings.params.Seconds object>, 'broker_request_timeout': <faust.types.settings.params.Seconds object>, 'broker_session_timeout': <faust.types.settings.params.Seconds object>, 'cache': <faust.types.settings.params.URL object>, 'canonical_url': <faust.types.settings.params.URL object>, 'consumer_api_version': <faust.types.settings.params.Str object>, 'consumer_auto_offset_reset': <faust.types.settings.params.Str object>, 'consumer_max_fetch_size': <faust.types.settings.params.UnsignedInt object>, 'datadir': <faust.types.settings.params.Path object>, 'debug': <faust.types.settings.params.Bool object>, 'env_prefix': <faust.types.settings.params.Str object>, 'id_format': <faust.types.settings.params.Str object>, 'key_serializer': <faust.types.settings.params.Codec object>, 'logging_config': <faust.types.settings.params.Dict object>, 'loghandlers': <faust.types.settings.params.LogHandlers object>, 'origin': <faust.types.settings.params.Str object>, 'processing_guarantee': <faust.types.settings.params.Enum.<locals>.EnumParam object>, 'producer_acks': <faust.types.settings.params.Int object>, 'producer_api_version': <faust.types.settings.params.Str object>, 'producer_compression_type': <faust.types.settings.params.Str object>, 'producer_linger': <faust.types.settings.params.Seconds object>, 'producer_linger_ms': 
<faust.types.settings.params.UnsignedInt object>, 'producer_max_batch_size': <faust.types.settings.params.UnsignedInt object>, 'producer_max_request_size': <faust.types.settings.params.UnsignedInt object>, 'producer_partitioner': <faust.types.settings.params._Symbol object>, 'producer_request_timeout': <faust.types.settings.params.Seconds object>, 'reply_create_topic': <faust.types.settings.params.Bool object>, 'reply_expires': <faust.types.settings.params.Seconds object>, 'reply_to': <faust.types.settings.params.Str object>, 'reply_to_prefix': <faust.types.settings.params.Str object>, 'ssl_context': <faust.types.settings.params.SSLContext object>, 'store': <faust.types.settings.params.URL object>, 'stream_ack_cancelled_tasks': <faust.types.settings.params.Bool object>, 'stream_ack_exceptions': <faust.types.settings.params.Bool object>, 'stream_buffer_maxsize': <faust.types.settings.params.UnsignedInt object>, 'stream_processing_timeout': <faust.types.settings.params.Seconds object>, 'stream_publish_on_commit': <faust.types.settings.params.Bool object>, 'stream_recovery_delay': <faust.types.settings.params.Seconds object>, 'stream_wait_empty': <faust.types.settings.params.Bool object>, 'table_cleanup_interval': <faust.types.settings.params.Seconds object>, 'table_key_index_size': <faust.types.settings.params.UnsignedInt object>, 'table_standby_replicas': <faust.types.settings.params.UnsignedInt object>, 'tabledir': <faust.types.settings.params.Path object>, 'timezone': <faust.types.settings.params.Timezone object>, 'topic_allow_declare': <faust.types.settings.params.Bool object>, 'topic_disable_leader': <faust.types.settings.params.Bool object>, 'topic_partitions': <faust.types.settings.params.UnsignedInt object>, 'topic_replication_factor': <faust.types.settings.params.UnsignedInt object>, 'url': <faust.types.settings.params.URL object>, 'value_serializer': <faust.types.settings.params.Codec object>, 'version': <faust.types.settings.params.Int object>, 'web': <faust.types.settings.params.URL object>, 'web_bind': <faust.types.settings.params.Str object>, 'web_cors_options': <faust.types.settings.params.Dict object>, 'web_enabled': <faust.types.settings.params.Bool object>, 'web_host': <faust.types.settings.params.Str object>, 'web_in_thread': <faust.types.settings.params.Bool object>, 'web_port': <faust.types.settings.params.Port object>, 'web_transport': <faust.types.settings.params.URL object>, 'worker_redirect_stdouts': <faust.types.settings.params.Bool object>, 'worker_redirect_stdouts_level': <faust.types.settings.params.Severity object>}¶
-
SETTINGS_BY_SECTION
= defaultdict(<class 'list'>, {<Section: SectionType.COMMON>: [<faust.types.settings.params.Param object>, <faust.types.settings.params.Path object>, <faust.types.settings.params.Path object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Timezone object>, <faust.types.settings.params.Int object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.BrokerList object>, <faust.types.settings.params.Credentials object>, <faust.types.settings.params.SSLContext object>, <faust.types.settings.params.Dict object>, <faust.types.settings.params.LogHandlers object>, <faust.types.settings.params.Enum.<locals>.EnumParam object>, <faust.types.settings.params.URL object>, <faust.types.settings.params.URL object>, <faust.types.settings.params.URL object>], <Section: SectionType.AGENT>: [<faust.types.settings.params._Symbol object>], <Section: SectionType.BROKER>: [<faust.types.settings.params.BrokerList object>, <faust.types.settings.params.BrokerList object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Seconds object>], <Section: SectionType.CONSUMER>: [<faust.types.settings.params.Str object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.Str object>, <faust.types.settings.params._Symbol object>], <Section: SectionType.SERIALIZATION>: [<faust.types.settings.params.Codec object>, <faust.types.settings.params.Codec object>], <Section: SectionType.PRODUCER>: [<faust.types.settings.params.Int object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.UnsignedInt object>], <Section: SectionType.RPC>: [<faust.types.settings.params.Bool object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Str object>], <Section: SectionType.STREAM>: [<faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Seconds object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Bool object>], <Section: SectionType.TABLE>: [<faust.types.settings.params.Seconds object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.UnsignedInt object>], <Section: SectionType.TOPIC>: [<faust.types.settings.params.Bool object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.UnsignedInt object>, <faust.types.settings.params.UnsignedInt object>], <Section: SectionType.WEB_SERVER>: [<faust.types.settings.params.URL object>, <faust.types.settings.params.Str object>, 
<faust.types.settings.params.Dict object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Str object>, <faust.types.settings.params.Bool object>, <faust.types.settings.params.Port object>, <faust.types.settings.params.URL object>, <faust.types.settings.params.URL object>], <Section: SectionType.WORKER>: [<faust.types.settings.params.Bool object>, <faust.types.settings.params.Severity object>], <Section: SectionType.EXTENSION>: [<faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>, <faust.types.settings.params._Symbol object>]})¶
-
property
producer_linger_ms
¶ Deprecated setting, please use
producer_linger
instead. This used to be provided as milliseconds; the new setting uses seconds.
-
-
faust.
HoppingWindow
¶ alias of
faust.windows._PyHoppingWindow
-
class
faust.
TumblingWindow
(size: Union[datetime.timedelta, float, str], expires: Union[datetime.timedelta, float, str] = None) → None[source]¶ Tumbling window type.
Fixed-size, non-overlapping, gap-less windows.
-
faust.
SlidingWindow
¶ alias of
faust.windows._PySlidingWindow