tfwatcher.callbacks package
Submodules
tfwatcher.callbacks.epoch module
- class tfwatcher.callbacks.epoch.EpochEnd(schedule: Union[int, list] = 1, round_time: int = 2, print_logs: bool = False)[source]
Bases: keras.callbacks.Callback

This class is a subclass of the tf.keras.callbacks.Callback abstract base class and overrides the on_epoch_begin() and on_epoch_end() methods, allowing logging after epochs in training. This class also uses the firebase_helpers module to send data to the Firebase Realtime Database and creates a 7-character unique string under which the data is pushed on Firebase. Logging to Firebase is also controllable by the schedule argument, which even provides granular control for each epoch.

Example:

    import tfwatcher

    # here we specify schedule = 1 to log after every epoch
    monitor_callback = tfwatcher.callbacks.EpochEnd(schedule=1)

    model.compile(
        optimizer=...,
        loss=...,
        # metrics which will be logged
        metrics=[...],
    )

    model.fit(..., callbacks=[monitor_callback])
- Parameters
schedule (Union[int, list[int]], optional) – Use an integer value n to log data every n epochs, the first epoch being logged by default. Use a list of integers for more granular control: logging happens on every epoch number specified in the list, counting the first epoch as epoch 1. Using a list overrides the default logging on the first epoch (see the sketch after the Raises list below), defaults to 1
round_time (int, optional) – Controls how the times shown on the web app are rounded; in most cases you will not need to change this, defaults to 2
print_logs (bool, optional) – Should only be used for debugging when your logs do not appear in the web app; if set to True, the dictionary being pushed to Firebase is printed out, defaults to False
- Raises
ValueError – If schedule is neither an integer nor a list.
Exception – If the values in the schedule list are not all convertible to integers.
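For the list form of schedule described above, a minimal sketch; the epoch numbers and the model/training setup are placeholders chosen purely for illustration:

    import tfwatcher

    # log only after epochs 1, 5 and 10 (epochs are counted starting at 1)
    monitor_callback = tfwatcher.callbacks.EpochEnd(schedule=[1, 5, 10])

    model.fit(..., callbacks=[monitor_callback])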
- on_epoch_begin(epoch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_epoch_begin method which is called at the start of an epoch. This function should only be called during TRAIN mode.
- Parameters
epoch (int) – Index of epoch
logs (dict, optional) – Currently no data is passed to this argument since there are no logs during the start of an epoch, defaults to None
- on_epoch_end(epoch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_epoch_end method which is called at the end of an epoch. This function should only be called during TRAIN mode. This method collects the epoch number and the average time taken, and pushes them to Firebase using the firebase_helpers module.
- Parameters
epoch (int) – Index of epoch
logs (dict, optional) – Metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}, defaults to None
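As an illustration of the val_ prefix described above, a logs dictionary passed to on_epoch_end for an epoch with validation might look like the following sketch; the metric names and values are hypothetical:

    # hypothetical logs dict received by on_epoch_end when validation is performed
    logs = {
        "loss": 0.2,
        "accuracy": 0.7,
        "val_loss": 0.25,      # validation results carry the val_ prefix
        "val_accuracy": 0.68,
    }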
tfwatcher.callbacks.predict module
- class tfwatcher.callbacks.predict.PredictEnd(round_time: int = 2, print_logs: bool = False)[source]
Bases: keras.callbacks.Callback

This class is a subclass of the tf.keras.callbacks.Callback abstract base class and overrides the on_predict_begin() and on_predict_end() methods, allowing logging after the predict method is run. This class also uses the firebase_helpers module to send data to the Firebase Realtime Database and creates a 7-character unique string under which the data is pushed on Firebase.

Note

This class does not have the schedule parameter like the other classes in the tfwatcher.callbacks subpackage, since it notifies you once the prediction is over and there are no batches or epochs to build a schedule for.

Example:
    import tfwatcher

    monitor_callback = tfwatcher.callbacks.PredictEnd()

    model.compile(
        optimizer=...,
        loss=...,
        # metrics which will be logged
        metrics=[...],
    )

    model.predict(..., callbacks=[monitor_callback])
- Parameters
round_time (int, optional) – Controls how the times shown on the web app are rounded; in most cases you will not need to change this, defaults to 2
print_logs (bool, optional) – Should only be used for debugging when your logs do not appear in the web app; if set to True, the dictionary being pushed to Firebase is printed out, defaults to False
- on_predict_begin(logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_predict_begin method which is called at the start of prediction.
- Parameters
logs (dict, optional) – Currently no data is passed to this argument since there are no logs at the start of prediction, defaults to None
- on_predict_end(logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_predict_end method which is called at the end of prediction.
- Parameters
logs (dict, optional) – Currently no data is passed to this argument since there are no logs at the end of prediction, defaults to None
tfwatcher.callbacks.predict_batch module
- class tfwatcher.callbacks.predict_batch.PredictBatchEnd(schedule: Union[int, list] = 1, round_time: int = 2, print_logs: bool = False)[source]
Bases: keras.callbacks.Callback

This class is a subclass of the tf.keras.callbacks.Callback abstract base class and overrides the on_predict_batch_begin() and on_predict_batch_end() methods, allowing logging after batches in the predict method. This class also uses the firebase_helpers module to send data to the Firebase Realtime Database and creates a 7-character unique string under which the data is pushed on Firebase. Logging to Firebase is also controllable by the schedule argument, which even provides granular control for each batch in predict methods.

Example:
    import tfwatcher

    # here we specify schedule = 1 to log after every batch
    monitor_callback = tfwatcher.callbacks.PredictBatchEnd(schedule=1)

    model.compile(
        optimizer=...,
        loss=...,
        # metrics which will be logged
        metrics=[...],
    )

    model.predict(..., callbacks=[monitor_callback])
Warning

If the steps_per_execution argument to compile in tf.keras.Model is set to N, the logging code will only be called every N batches (see the sketch after the Raises list below).

- Parameters
schedule (Union[int, list[int]], optional) – Use an integer value n to log data every n batches, the first batch being logged by default. Use a list of integers for more granular control: logging happens on every batch number specified in the list, counting the first batch as batch 1. Using a list overrides the default logging on the first batch, defaults to 1
round_time (int, optional) – Controls how the times shown on the web app are rounded; in most cases you will not need to change this, defaults to 2
print_logs (bool, optional) – Should only be used for debugging when your logs do not appear in the web app; if set to True, the dictionary being pushed to Firebase is printed out, defaults to False
- Raises
ValueError – If schedule is neither an integer nor a list.
Exception – If the values in the schedule list are not all convertible to integers.
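A minimal sketch of the steps_per_execution interaction mentioned in the warning above; the model, data, and the value 4 are placeholders chosen for illustration:

    import tfwatcher

    monitor_callback = tfwatcher.callbacks.PredictBatchEnd(schedule=1)

    # with steps_per_execution=4, Keras executes 4 batches per inner call,
    # so batch-level callbacks (and hence this logging) fire only every 4 batches
    model.compile(
        optimizer=...,
        loss=...,
        metrics=[...],
        steps_per_execution=4,
    )

    model.predict(..., callbacks=[monitor_callback])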
- on_predict_batch_begin(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_predict_batch_begin method which is called at the beginning of a batch in predict methods.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Contains the return value of model.predict_step, which is typically a dict with a key 'outputs' containing the model's outputs, defaults to None
- on_predict_batch_end(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_predict_batch_end method which is called at the end of a batch in predict methods. This method collects the batch number and the average time taken, and pushes them to Firebase using the firebase_helpers module.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Aggregated metric results up until this batch, defaults to None
tfwatcher.callbacks.test_batch module
- class tfwatcher.callbacks.test_batch.TestBatchEnd(schedule: Union[int, list] = 1, round_time: int = 2, print_logs: bool = False)[source]
Bases: keras.callbacks.Callback

This class is a subclass of the tf.keras.callbacks.Callback abstract base class and overrides the on_test_batch_begin() and on_test_batch_end() methods, allowing logging after batches in evaluate methods and on validation batches in fit methods, if validation data is provided (see the sketch after the Raises list below). This class also uses the firebase_helpers module to send data to the Firebase Realtime Database and creates a 7-character unique string under which the data is pushed on Firebase. Logging to Firebase is also controllable by the schedule argument, which even provides granular control for each batch in evaluate methods.

Example:
    import tfwatcher

    # here we specify schedule = 1 to log after every batch
    monitor_callback = tfwatcher.callbacks.TestBatchEnd(schedule=1)

    model.compile(
        optimizer=...,
        loss=...,
        # metrics which will be logged
        metrics=[...],
    )

    model.fit(..., callbacks=[monitor_callback])
Warning

If the steps_per_execution argument to compile in tf.keras.Model is set to N, the logging code will only be called every N batches.

- Parameters
schedule (Union[int, list[int]], optional) – Use an integer value n to log data every n batches, the first batch being logged by default. Use a list of integers for more granular control: logging happens on every batch number specified in the list, counting the first batch as batch 1. Using a list overrides the default logging on the first batch, defaults to 1
round_time (int, optional) – Controls how the times shown on the web app are rounded; in most cases you will not need to change this, defaults to 2
print_logs (bool, optional) – Should only be used for debugging when your logs do not appear in the web app; if set to True, the dictionary being pushed to Firebase is printed out, defaults to False
- Raises
ValueError – If schedule is neither an integer nor a list.
Exception – If the values in the schedule list are not all convertible to integers.
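A minimal sketch of the two situations described above in which these callbacks fire; x_test, y_test, x_val, and y_val are hypothetical placeholders for your own data:

    import tfwatcher

    monitor_callback = tfwatcher.callbacks.TestBatchEnd(schedule=1)

    # fires on every batch of an explicit evaluation run
    model.evaluate(x_test, y_test, callbacks=[monitor_callback])

    # also fires on validation batches during training, provided
    # validation data is passed to fit
    model.fit(..., validation_data=(x_val, y_val), callbacks=[monitor_callback])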
- on_test_batch_begin(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_test_batch_begin method which is called at the beginning of a batch in evaluate methods and at the beginning of a validation batch in fit methods, if validation data is provided.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Contains the return value of model.test_step. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}, defaults to None
- on_test_batch_end(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_test_batch_end method which is called at the end of a batch in evaluate methods and at the end of a validation batch in fit methods, if validation data is provided. This method collects the batch number and the average time taken, and pushes them to Firebase using the firebase_helpers module.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Aggregated metric results up until this batch, defaults to None
tfwatcher.callbacks.train_batch module
- class tfwatcher.callbacks.train_batch.TrainBatchEnd(schedule: Union[int, list] = 1, round_time: int = 2, print_logs: bool = False)[source]
Bases: keras.callbacks.Callback

This class is a subclass of the tf.keras.callbacks.Callback abstract base class and overrides the on_train_batch_begin() and on_train_batch_end() methods, allowing logging after a training batch in fit methods. This class also uses the firebase_helpers module to send data to the Firebase Realtime Database and creates a 7-character unique string under which the data is pushed on Firebase. Logging to Firebase is also controllable by the schedule argument, which even provides granular control for each batch in fit methods.

Example:
    import tfwatcher

    # here we specify schedule = 1 to log after every batch
    monitor_callback = tfwatcher.callbacks.TrainBatchEnd(schedule=1)

    model.compile(
        optimizer=...,
        loss=...,
        # metrics which will be logged
        metrics=[...],
    )

    model.fit(..., callbacks=[monitor_callback])
Warning

If the steps_per_execution argument to compile in tf.keras.Model is set to N, the logging code will only be called every N batches.

- Parameters
schedule (Union[int, list[int]], optional) – Use an integer value n to log data every n batches, the first batch being logged by default. Use a list of integers for more granular control: logging happens on every batch number specified in the list, counting the first batch as batch 1. Using a list overrides the default logging on the first batch (see the sketch after the Raises list below), defaults to 1
round_time (int, optional) – Controls how the times shown on the web app are rounded; in most cases you will not need to change this, defaults to 2
print_logs (bool, optional) – Should only be used for debugging when your logs do not appear in the web app; if set to True, the dictionary being pushed to Firebase is printed out, defaults to False
- Raises
ValueError – If schedule is neither an integer nor a list.
Exception – If the values in the schedule list are not all convertible to integers.
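A minimal sketch of the integer form of schedule for batch-level logging referenced above; the value 100 and the training setup are placeholders chosen for illustration:

    import tfwatcher

    # log only every 100th training batch to keep Firebase traffic low
    monitor_callback = tfwatcher.callbacks.TrainBatchEnd(schedule=100)

    model.fit(..., callbacks=[monitor_callback])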
- on_train_batch_begin(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_train_batch_begin method which is called at the beginning of a training batch in fit methods.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Contains the return value of model.train_step. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}, defaults to None
- on_train_batch_end(batch: int, logs: Optional[dict] = None)[source]
Overrides the tf.keras.callbacks.Callback.on_train_batch_end method which is called at the end of a training batch in fit methods. This method collects the batch number and the average time taken, and pushes them to Firebase using the firebase_helpers module.
- Parameters
batch (int) – Index of batch within the current epoch
logs (dict, optional) – Aggregated metric results up until this batch, defaults to None