logai.algorithms.nn_model package
Subpackages
- logai.algorithms.nn_model.forecast_nn package
- Submodules
- logai.algorithms.nn_model.forecast_nn.base_nn module
Embedder
ForecastBasedNN
ForecastBasedNNParams
ForecastBasedNNParams.batch_size
ForecastBasedNNParams.embedding_dim
ForecastBasedNNParams.eval_type
ForecastBasedNNParams.feature_type
ForecastBasedNNParams.freeze
ForecastBasedNNParams.gpu
ForecastBasedNNParams.hidden_size
ForecastBasedNNParams.label_type
ForecastBasedNNParams.learning_rate
ForecastBasedNNParams.metadata_filepath
ForecastBasedNNParams.model_name
ForecastBasedNNParams.num_train_epochs
ForecastBasedNNParams.output_dir
ForecastBasedNNParams.patience
ForecastBasedNNParams.topk
- logai.algorithms.nn_model.forecast_nn.cnn module
- logai.algorithms.nn_model.forecast_nn.lstm module
- logai.algorithms.nn_model.forecast_nn.transformer module
- logai.algorithms.nn_model.forecast_nn.utils module
- Module contents
- logai.algorithms.nn_model.logbert package
- Submodules
- logai.algorithms.nn_model.logbert.configs module
LogBERTConfig
LogBERTConfig.eval_accumulation_steps
LogBERTConfig.eval_steps
LogBERTConfig.evaluation_strategy
LogBERTConfig.learning_rate
LogBERTConfig.logging_steps
LogBERTConfig.mask_ngram
LogBERTConfig.max_token_len
LogBERTConfig.mlm_probability
LogBERTConfig.model_dirname
LogBERTConfig.model_name
LogBERTConfig.num_eval_shards
LogBERTConfig.num_train_epochs
LogBERTConfig.output_dir
LogBERTConfig.per_device_eval_batch_size
LogBERTConfig.per_device_train_batch_size
LogBERTConfig.pretrain_from_scratch
LogBERTConfig.resume_from_checkpoint
LogBERTConfig.save_steps
LogBERTConfig.tokenizer_dirpath
LogBERTConfig.weight_decay
- logai.algorithms.nn_model.logbert.eval_metric_utils module
- logai.algorithms.nn_model.logbert.predict module
- logai.algorithms.nn_model.logbert.predict_utils module
- logai.algorithms.nn_model.logbert.tokenizer_utils module
- logai.algorithms.nn_model.logbert.train module
- Module contents
Submodules
logai.algorithms.nn_model.transformers module
- class logai.algorithms.nn_model.transformers.LogDataset(encodings, labels)
Bases: Dataset
Wrapper class for log data, wrapping the torch Dataset class.
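The constructor takes tokenized encodings and their labels. Below is a minimal sketch of what such a wrapper over the torch Dataset class typically looks like; the field handling and names are illustrative assumptions, not the exact LogAI implementation.

```python
# Illustrative sketch of a torch Dataset wrapper over tokenizer encodings and labels.
# Assumption: `encodings` is a dict of per-example sequences (e.g. input_ids,
# attention_mask) as produced by a HuggingFace tokenizer; not the exact LogAI code.
import torch
from torch.utils.data import Dataset

class LogDatasetSketch(Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Return one example as a dict of tensors, plus its label.
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
```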
- class logai.algorithms.nn_model.transformers.TransformerAlgo(config: TransformerAlgoConfig)
Bases: object
HuggingFace Transformer-based pretrained language model (e.g. “bert-base-cased”) with a sequence classification head for supervised log classification tasks. For example, log anomaly detection is one such task, where the labels are Normal (label 0) or Anomalous (label 1). Currently only binary classification is supported; to change this, the num_labels of AutoModelForSequenceClassification has to be changed accordingly, along with the prediction logic in the predict method. A usage sketch covering train, save, and predict appears at the end of this page.
- predict(test_logs: Series, test_labels: Series) Tuple[Series, ndarray, Dict[str, float]]
Predict method for running evaluation on test log data.
- Parameters:
test_logs – The test log feature data (output of LogVectorizer).
test_labels – The labels of test log data.
- Returns:
res (pd.Series): The predicted test labels, as a pandas Series.
label_ids (np.ndarray, optional): The true test labels (if the dataset contained any).
metrics (Dict[str, float], optional): A dictionary of evaluation metrics, if computed.
- save(output_dir: str)
Saves the model to the given directory.
- Parameters:
output_dir – The path to the output directory where the model should be saved.
- train(train_logs: Series, train_labels: Series)
Trains the Transformer-based pretrained language model with a sequence classification head for a supervised log classification task. Internally, this method also splits the available training logs into train and dev sets.
- Parameters:
train_logs – The training log feature data (output of LogVectorizer).
train_labels – The labels of the training logs.
- train_with_native_torch(train_logs: Series, train_labels: Series)
Trains the model using a native PyTorch training loop.
- Parameters:
train_logs – The training log feature data (output of LogVectorizer).
train_labels – The labels of the training logs.
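For orientation, here is a hedged end-to-end sketch of how the documented train, save, and predict methods fit together. The TransformerAlgoConfig construction, its import path, and the placeholder Series are assumptions for illustration; in practice the feature Series come from LogVectorizer and labeled log data.

```python
# Illustrative usage sketch only; config fields and data preparation are assumptions.
import pandas as pd
from logai.algorithms.nn_model.transformers import TransformerAlgo, TransformerAlgoConfig

# Placeholders: in practice these pandas Series come from LogVectorizer and labeled logs.
train_features: pd.Series = ...
train_labels: pd.Series = ...
test_features: pd.Series = ...
test_labels: pd.Series = ...

config = TransformerAlgoConfig()  # assumed default construction; real fields may differ
algo = TransformerAlgo(config)

# Supervised fine-tuning; internally splits the logs into train and dev sets.
algo.train(train_features, train_labels)
# Alternatively: algo.train_with_native_torch(train_features, train_labels)

# Persist the fine-tuned model.
algo.save("./transformer_log_model")

# Evaluate on held-out test logs.
preds, label_ids, metrics = algo.predict(test_features, test_labels)
print(preds.head(), metrics)
```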