
clfs.gnb

Module: clfs.gnb

Inheritance diagram for mvpa.clfs.gnb:

Gaussian Naive Bayes Classifier

EXPERIMENTAL ;) Basic implementation of Gaussian Naive Bayes classifier.

GNB

class mvpa.clfs.gnb.GNB(**kwargs)

Bases: mvpa.clfs.base.Classifier

Gaussian Naive Bayes Classifier.

GNB is a probabilistic classifier that relies on Bayes' rule to estimate the posterior probabilities of labels given the data. Its naive assumption is the independence of the features, which allows the per-feature likelihoods to be combined by a simple product across the likelihoods of the "independent" features. See http://en.wikipedia.org/wiki/Naive_bayes for more information.
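
As an illustration only (not part of the PyMVPA API), the following NumPy sketch shows the core computation: per-class means and variances are estimated from the training samples, and a prediction is obtained by summing per-feature Gaussian log-likelihoods together with the log prior, which corresponds to the product of the "independent" per-feature likelihoods. The helper names gnb_fit and gnb_predict are hypothetical:

    import numpy as np

    def gnb_fit(X, y):
        # estimate per-class means, variances and priors (hypothetical helper)
        y = np.asarray(y)
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        variances = np.array([X[y == c].var(axis=0) for c in classes])
        priors = np.array([np.mean(y == c) for c in classes])
        return classes, means, variances, priors

    def gnb_predict(X, classes, means, variances, priors):
        # broadcasting: (nsamples, 1, nfeatures) against (1, nclasses, nfeatures)
        d = X[:, None, :] - means[None]
        loglik = -0.5 * (np.log(2 * np.pi * variances)[None] + d ** 2 / variances[None])
        # the sum of per-feature log-likelihoods is the product of per-feature likelihoods
        logpost = loglik.sum(axis=2) + np.log(priors)[None]
        return classes[logpost.argmax(axis=1)]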

The implementation provided here is "naive" in its own right (various aspects could be improved), but it has its own advantages:

  • implementation is simple and straightforward
  • no data copying while considering samples of a specific class
  • provides alternative ways to assess the prior distribution of the classes in the case of unbalanced sets of samples (see the prior parameter)
  • makes use of the NumPy broadcasting mechanism, so it should be relatively efficient
  • should work for any dimensionality of samples

GNB is listed as both a linear and a non-linear classifier, since the specifics of the separating boundary depend on the data and/or parameters: linear separation is achieved whenever the samples are balanced (or prior=’uniform’) and the features have the same variance across the different classes (set common_variance=True to enforce this).

Whenever decisions are made based on log-probabilities (parameter logprob=True, which is the default), the values state variable, if enabled, will also contain log-probabilities. Note also that normalization by the evidence (P(data)) is disabled by default, since it has no impact on the classification decision itself. Set the normalize parameter to True if you want properly scaled probabilities to be available in the values state variable.
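
A minimal usage sketch, assuming the PyMVPA 0.x interfaces (Dataset(samples=..., labels=...), the Classifier train()/predict() methods, and attribute-style access to enabled state variables); dataset construction and state access details may differ across releases:

    import numpy as np
    from mvpa.datasets import Dataset
    from mvpa.clfs.gnb import GNB

    # toy two-class dataset: 20 samples, 5 features
    ds = Dataset(samples=np.random.randn(20, 5),
                 labels=[0] * 10 + [1] * 10)

    # request properly scaled probabilities in the 'values' state variable
    clf = GNB(common_variance=True, normalize=True, enable_states=['values'])
    clf.train(ds)

    predictions = clf.predict(ds.samples)
    probabilities = clf.values   # per-class probabilities from the last predict() call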

Note

Available state variables:

  • feature_ids: Feature IDs which were used for the actual training.
  • predicting_time+: Time (in seconds) it took the classifier to predict
  • predictions+: Most recent set of predictions
  • trained_dataset: The dataset it has been trained on
  • trained_labels+: Set of unique labels it has been trained on
  • trained_nsamples+: Number of samples it has been trained on
  • training_confusion: Confusion matrix of learning performance
  • training_time+: Time (in seconds) it took the classifier to train
  • values+: Internal classifier values the most recent predictions are based on

(States enabled by default are listed with +)

See also

Please refer to the documentation of the base class for more information:

Classifier

Initialize a GNB classifier.

Parameters:
  • common_variance – Use the same variance across all classes. (Default: False)
  • prior – How to compute prior distribution. (Default: laplacian_smoothing)
  • logprob – Operate on log probabilities. Preferable in order to avoid unneeded exponentiation and loss of precision. If set, log-probabilities are stored in values. (Default: True)
  • normalize – Normalize the (log-)probabilities by P(data). This requires actual probabilities, so in the logprob case it would require exponentiating the log-probabilities; it is therefore disabled by default, since it does not affect the classification output. (Default: False)
  • enable_states (None or list of basestring) – Names of the state variables which should be enabled additionally to default ones
  • disable_states (None or list of basestring) – Names of the state variables which should be disabled
means = None

Means of features per class

priors = None

Class probabilities

ulabels = None

Labels classifier was trained on

untrain()

Untrain the classifier and reset all learned parameters

variances = None

Variances per class, but “vars” is taken ;)
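
As a further sketch (same API assumptions as above), the learned parameters listed here can be inspected after training and discarded again with untrain():

    clf = GNB(prior='uniform')
    clf.train(ds)                # 'ds' as in the earlier sketch

    print(clf.ulabels)           # unique labels seen during training
    print(clf.means)             # per-class feature means
    print(clf.variances)         # per-class feature variances
    print(clf.priors)            # class prior probabilities

    clf.untrain()                # reset all learned parameters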