logml.feature_importance.extractors.model_interpret
Classes
- ModelInterpretImportanceExtractor: Feature importance method based on model explainability.
- class logml.feature_importance.extractors.model_interpret.ModelInterpretImportanceExtractor(config=None, **kwargs)
Bases: logml.feature_importance.base.BaseImportanceExtractor
Feature importance method based on model explainability.
- LABEL = 'model_interpret'
- CONFIG_CLASS = None
- EXPLANATIONS_DUMP_FILENAME = 'explanation_result.pickle'
- extract_model_feature_importance(model_name: Optional[str] = None, model_cls: Optional[Type[logml.models.base.BaseModel]] = None, params: Optional[dict] = None, dataset: Optional[logml.data.datasets.cv_dataset.ModelingDataset] = None, model=None)
Extracts feature importance for a single model.
- raw_fis: Dict[str, List]
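To illustrate the pattern this extractor follows, here is a minimal, self-contained sketch. It is not logml code: `SimpleLinearModel` and `CoefficientImportanceExtractor` are hypothetical stand-ins, and the coefficient-magnitude scoring is an assumed placeholder for whatever explainability backend `ModelInterpretImportanceExtractor` actually uses. It only mirrors the documented surface: a `LABEL`, a `raw_fis: Dict[str, List]` attribute, and an `extract_model_feature_importance` method taking a model name and a model instance.

```python
# Hypothetical sketch of the extractor pattern documented above.
# The real ModelInterpretImportanceExtractor delegates to model
# explainability methods; names and scoring here are illustrative only.
from typing import Dict, List, Optional


class SimpleLinearModel:
    """Stand-in for a fitted logml.models.base.BaseModel subclass."""

    def __init__(self, feature_names: List[str], coefficients: List[float]):
        self.feature_names = feature_names
        self.coefficients = coefficients


class CoefficientImportanceExtractor:
    """Toy analogue of ModelInterpretImportanceExtractor.

    Accumulates per-model raw feature importances in ``raw_fis``,
    mirroring the ``Dict[str, List]`` attribute documented above.
    """

    LABEL = "model_interpret"

    def __init__(self) -> None:
        self.raw_fis: Dict[str, List] = {}

    def extract_model_feature_importance(
        self,
        model_name: Optional[str] = None,
        model: Optional[SimpleLinearModel] = None,
    ) -> Dict[str, float]:
        # Placeholder scoring: absolute coefficient magnitude stands in
        # for a proper explainability method (e.g. permutation or SHAP).
        importances = {
            name: abs(coef)
            for name, coef in zip(model.feature_names, model.coefficients)
        }
        self.raw_fis[model_name or "model"] = list(importances.values())
        return importances


model = SimpleLinearModel(["age", "dose"], [0.5, -2.0])
extractor = CoefficientImportanceExtractor()
fis = extractor.extract_model_feature_importance("linear", model)
print(fis)  # {'age': 0.5, 'dose': 2.0}
```

The real extractor additionally accepts `model_cls`, `params`, and a `ModelingDataset`, so it can instantiate and fit the model itself before explaining it; the sketch omits that by taking an already-fitted model.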