API#
- class celery_batches.Batches#
- Strategy(task: Batches, app: Celery, consumer: Consumer) → Callable #
Return the task execution strategy used by the worker’s consumer: incoming messages for this task are buffered and flushed as batches according to flush_every and flush_interval.
- abstract = True#
Deprecated attribute abstract, kept here for compatibility.
- apply(args: Tuple[Any, ...] | None = None, kwargs: dict | None = None, *_args: Any, **options: Any) → Any #
Execute the task synchronously as a batch of size 1.
- Arguments:
args: positional arguments passed on to the task.
kwargs: keyword arguments passed on to the task.
- Returns:
celery.result.EagerResult: pre-evaluated result.
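For illustration, a minimal sketch of running a Batches task eagerly, e.g. in a test (the task name and body here are hypothetical):

```python
from celery import Celery
from celery_batches import Batches

app = Celery("example")

@app.task(base=Batches, flush_every=10, flush_interval=5)
def echo(requests):
    # `requests` is a list of SimpleRequest instances; return their args.
    return [request.args for request in requests]

# apply() runs the task synchronously as a batch of size 1.
result = echo.apply(args=("hello",))
print(result.get())  # e.g. [("hello",)]
```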
- flush(requests: Collection[Request]) → Any #
Process the buffered requests: the task is invoked with a list of SimpleRequest instances built from them.
- flush_every = 10#
Maximum number of messages in the buffer.
- flush_interval = 30#
Timeout in seconds after which the buffer is flushed even if it is not full.
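As a sketch of how these two attributes are typically set when declaring a batch task (the task name and body are illustrative):

```python
from collections import Counter

from celery import Celery
from celery_batches import Batches

app = Celery("example")

# Flush after 100 buffered messages, or after 10 seconds, whichever comes first.
@app.task(base=Batches, flush_every=100, flush_interval=10)
def count_click(requests):
    count = Counter(request.kwargs["url"] for request in requests)
    for url, n in count.items():
        print(f"Clicked {url}: {n} times")
```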
- ignore_result = False#
If enabled, the worker won’t store task state and return values for this task. Defaults to the task_ignore_result setting.
- priority = None#
Default task priority.
- rate_limit = None#
Rate limit for this task type. Examples: None (no rate limit), ‘100/s’ (hundred tasks a second), ‘100/m’ (hundred tasks a minute), ‘100/h’ (hundred tasks an hour).
- reject_on_worker_lost = None#
Even if acks_late is enabled, the worker will acknowledge tasks when the worker process executing them abruptly exits or is signaled (e.g., KILL/INT, etc.).
Setting this to true allows the message to be re-queued instead, so that the task will execute again by the same worker, or another worker.
Warning: Enabling this can cause message loops; make sure you know what you’re doing.
- request_stack = <celery.utils.threads._LocalStack object>#
Task request stack; the current request will be the topmost.
- serializer = 'json'#
The name of a serializer that is registered with kombu.serialization.registry. Default is ‘json’.
- store_errors_even_if_ignored = False#
When enabled, errors will be stored even if the task is otherwise configured to ignore results.
- track_started = False#
If enabled, the task will report its status as ‘started’ when the task is executed by a worker. Disabled by default, as the normal behavior is to not report that level of granularity. Tasks are either pending, finished, or waiting to be retried.
Having a ‘started’ status can be useful when there are long-running tasks and there’s a need to report which task is currently running.
The application default can be overridden using the task_track_started setting.
- typing = False#
Enable argument checking. You can set this to false if you don’t want the signature to be checked when calling the task. Defaults to app.strict_typing.
- class celery_batches.SimpleRequest(id: str, name: str, args: Tuple[Any, ...], kwargs: Dict[Any, Any], delivery_info: dict, hostname: str, ignore_result: bool, reply_to: str | None, correlation_id: str | None, request_dict: Dict[str, Any] | None)#
A request to execute a task.
A list of SimpleRequest instances is provided to the batch task during execution. (A usage sketch follows the attribute list below.)
This must be pickleable (if using the prefork pool), but generally should have the same properties as Request.
- chord = None#
TODO
- correlation_id = None#
used similarly to reply_to
- delivery_info = None#
message delivery information.
- classmethod from_request(request: Request) → SimpleRequest #
Create a SimpleRequest from a worker Request.
- hostname = None#
worker node name
- id = None#
task id
- ignore_result = None#
if the results of this request should be ignored
- name = None#
task name
- reply_to = None#
used by the rpc backend when failures are reported by the parent process
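Finally, a sketch of consuming SimpleRequest instances inside a batch task and storing a per-request result via the result backend (the task body is illustrative; mark_as_done is the standard Celery backend API):

```python
from celery import Celery
from celery_batches import Batches

app = Celery("example", backend="rpc://")

@app.task(base=Batches, flush_every=10, flush_interval=5)
def add(requests):
    for request in requests:
        # Each `request` is a SimpleRequest carrying the original call's
        # args/kwargs plus metadata such as id and hostname.
        result = sum(request.args)
        # Store a result for each request so callers' AsyncResults resolve.
        app.backend.mark_as_done(request.id, result, request=request)
```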