Data decomposition using Independent Component Analysis (ICA).
This object estimates independent components from mne.io.Raw,
mne.Epochs, or mne.Evoked objects. Components can
optionally be removed (for artifact repair) prior to signal reconstruction.
Warning
ICA is sensitive to low-frequency drifts and therefore
requires the data to be high-pass filtered prior to fitting.
Typically, a cutoff frequency of 1 Hz is recommended.
Will select the smallest number of components required to explain
the cumulative variance of the data greater than n_components.
Consider this hypothetical example: we have 3 components, the first
explaining 70%, the second 20%, and the third the remaining 10%
of the variance. Passing 0.8 here (corresponding to 80% of
explained variance) would yield the first two components,
explaining 90% of the variance: only by using both components can the
requested threshold of 80% explained variance be exceeded. The
third component, on the other hand, would be excluded.
If None, 0.999999 will be used. This is done to avoid numerical
stability problems when whitening, particularly when working with
rank-deficient data.
Defaults to None. The actual number used when executing the
ICA.fit() method will be stored in the attribute
n_components_ (note the trailing underscore).
Changed in version 0.22: For a float, the number of components will account
for greater than the given variance level instead of less than or
equal to it. The default (None) will also take into account the
rank deficiency of the data.
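For illustration, a minimal sketch of the three ways to specify n_components
(assuming a high-pass-filtered raw object already exists; the numbers are placeholders):

    from mne.preprocessing import ICA

    ica = ICA(n_components=0.95)  # keep components explaining > 95% of the variance
    ica = ICA(n_components=15)    # keep exactly 15 components
    ica = ICA(n_components=None)  # keep components explaining > 0.999999 of the variance
    ica.fit(raw)
    print(ica.n_components_)      # actual number used, set during fitting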
Noise covariance used for pre-whitening. If None (default), channels
are scaled to unit variance (“z-standardized”) as a group by channel
type prior to the whitening by PCA.
A seed for the NumPy random number generator (RNG). If None (default),
the seed will be obtained from the operating system
(see RandomState for details), meaning it will most
likely produce different output every time this function or method is run.
To achieve reproducible results, pass a value here to explicitly initialize
the RNG with a defined state.
method : 'fastica' | 'infomax' | 'picard'
The ICA method to use in the fit method. Use the fit_params argument
to set additional parameters. Specifically, if you want Extended
Infomax, set method='infomax' and fit_params=dict(extended=True)
(this also works for method='picard'). Defaults to 'fastica'.
For reference, see [1][2][3][4].
Additional parameters passed to the ICA estimator as specified by
method. Allowed entries are determined by the various algorithm
implementations: see FastICA,
picard(), infomax().
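As a hedged example of selecting the algorithm and passing extra parameters
(Extended Infomax here; the component count and seed are placeholders):

    from mne.preprocessing import ICA

    ica = ICA(n_components=20, method='infomax',
              fit_params=dict(extended=True), random_state=97)
    ica.fit(raw)  # raw assumed to be high-pass filtered already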
Maximum number of iterations during fit. If 'auto', it
will set maximum iterations to 1000 for 'fastica'
and to 500 for 'infomax' or 'picard'. The actual number of
iterations it took ICA.fit() to complete will be stored in the
n_iter_ attribute.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
If fit, the whitened mixing matrix to go back from ICA space to PCA
space.
It is, in combination with the pca_components_, used by
ICA.apply() and ICA.get_components() to re-mix/project
a subset of the ICA components into the observed channel space.
The former method also removes the pre-whitening (z-scaling) and the
de-meaning.
If fit, the whitened matrix to go from PCA space to ICA space.
Used, in combination with the pca_components_, by the methods
ICA.get_sources() and ICA.apply() to unmix the observed
data.
List or np.array of source indices to exclude when re-mixing the data
in the ICA.apply() method, i.e. artifactual ICA components.
The components identified manually and by the various automatic
artifact detection methods should be (manually) appended
(e.g. ica.exclude.extend(eog_inds)).
(There is also an exclude parameter in the ICA.apply()
method.) To scrap all marked components, set this attribute to an empty
list.
A dictionary of independent component indices, grouped by types of
independent components. This attribute is set by some of the artifact
detection functions.
Assign score to components based on statistic or metric.
Notes
Changed in version 0.23: Version 0.23 introduced the max_iter='auto' setting for maximum
iterations. With version 0.24 'auto' will be the new
default, replacing the current max_iter=200.
Changed in version 0.23: Warn if Epochs were baseline-corrected.
Note
If you intend to fit ICA on Epochs, it is recommended to
high-pass filter, but not baseline correct the data for good
ICA performance. A warning will be emitted otherwise.
A trailing _ in an attribute name signifies that the attribute was
added to the object during fitting, consistent with standard scikit-learn
practice.
Whitening the data by means of a pre-whitening step
(using noise_cov if provided, or the standard deviation of each
channel type) and then principal component analysis (PCA).
Passing the n_components largest-variance components to the ICA
algorithm to obtain the unmixing matrix (and by pseudoinversion, the
mixing matrix).
Includes ICA components based on ica.include and ica.exclude.
Re-mixes the data with mixing_matrix_.
Restores any data not passed to the ICA algorithm, i.e., the PCA
components between n_components and n_pca_components.
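Putting the two stages together, a minimal, hedged sketch of a typical workflow
(the file name and the component indices in exclude are purely illustrative):

    import mne
    from mne.preprocessing import ICA

    raw = mne.io.read_raw_fif('sample_raw.fif', preload=True)  # hypothetical file
    filt_raw = raw.copy().filter(l_freq=1.0, h_freq=None)      # high-pass before fitting

    ica = ICA(n_components=20, max_iter='auto', random_state=97)
    ica.fit(filt_raw)

    ica.exclude = [0, 3]       # illustrative artifactual components
    reconst_raw = raw.copy()
    ica.apply(reconst_raw)     # re-mixes the data without the excluded components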
n_pca_components determines how many PCA components will be kept when
reconstructing the data when calling apply(). This parameter can be
used for dimensionality reduction of the data, or dealing with low-rank
data (such as those with projections, or MEG data processed by SSS). It is
important to remove any numerically-zero-variance components in the data,
otherwise numerical instability causes problems when computing the mixing
matrix. Alternatively, using n_components as a float will also avoid
numerical stability problems.
The n_components parameter determines how many components out of
the n_channels PCA components the ICA algorithm will actually fit.
This is not typically used for EEG data, but for MEG data, it’s common to
use n_components<n_channels. For example, full-rank
306-channel MEG data might use n_components=40 to find (and
later exclude) only large, dominating artifacts in the data, but still
reconstruct the data using all 306 PCA components. Setting
n_pca_components=40, on the other hand, would actually reduce the
rank of the reconstructed data to 40, which is typically undesirable.
If you are migrating from EEGLAB and intend to reduce dimensionality via
PCA, similarly to EEGLAB’s runica(...,'pca',n) functionality,
pass n_components=n during initialization and then
n_pca_components=n during apply(). The resulting reconstructed
data after apply() will have rank n.
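A sketch of this EEGLAB-style reduction for a hypothetical n=30 (the raw object
is assumed to exist and to be appropriately filtered):

    n = 30
    ica = ICA(n_components=n)
    ica.fit(raw)
    ica.apply(raw, n_pca_components=n)  # reconstructed data now has rank n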
Note
Commonly used for reasons of i) computational efficiency and
ii) additional noise reduction, it is a matter of current debate
whether pre-ICA dimensionality reduction could decrease the
reliability and stability of the ICA, at least for EEG data and
especially during preprocessing [5].
(But see also [6] for a
possibly confounding effect of the different whitening/sphering
methods used in this paper (ZCA vs. PCA).)
On the other hand, for rank-deficient data such as EEG data after
average reference or interpolation, it is recommended to reduce
the dimensionality (by 1 for average reference and 1 for each
interpolated channel) for optimal ICA performance (see the
EEGLAB wiki).
Caveat! If supplying a noise covariance, keep track of the projections
available in the cov or in the raw object. For example, if you are
interested in EOG or ECG artifacts, EOG and ECG projections should be
temporarily removed before fitting ICA.
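For example, a minimal sketch (assuming ica and raw already exist):

    projs, raw.info['projs'] = raw.info['projs'], []  # temporarily stash the projectors
    ica.fit(raw)
    raw.info['projs'] = projs                         # restore them afterwards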
Methods currently implemented are FastICA (default), Infomax, and Picard.
Standard Infomax can be quite sensitive to differences in floating point
arithmetic. Extended Infomax seems to be more stable in this respect,
enhancing reproducibility and stability of results; use Extended Infomax
via method='infomax', fit_params=dict(extended=True). Allowed entries
in fit_params are determined by the various algorithm implementations:
see FastICA, picard(),
infomax().
Note
Picard can be used to solve the same problems as FastICA,
Infomax, and extended Infomax, but typically converges faster
than either of those methods. To make use of Picard’s speed while
still obtaining the same solution as with other algorithms, you
need to specify method='picard' and fit_params as a
dictionary with the following combination of keys:
dict(ortho=False,extended=False) for Infomax
dict(ortho=False,extended=True) for extended Infomax
dict(ortho=True,extended=True) for FastICA
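For instance, a hedged sketch of using Picard to approximate the FastICA
solution (the component count is a placeholder):

    ica = ICA(n_components=20, method='picard',
              fit_params=dict(ortho=True, extended=True))
    ica.fit(raw)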
Reducing the tolerance (set in fit_params) speeds up estimation at the
cost of consistency of the obtained results. It is difficult to directly
compare tolerance levels between Infomax and Picard, but for Picard and
FastICA a good rule of thumb is tol_fastica==tol_picard**2.
Given the unmixing matrix, transform the data,
zero out all excluded components, and inverse-transform the data.
This procedure will reconstruct M/EEG signals from which
the dynamics described by the excluded components are subtracted.
The indices referring to columns in the unmixing matrix. The
components to be kept. If None (default), all components
will be included (minus those defined in ica.exclude
and the exclude parameter, see below).
The indices referring to columns in the unmixing matrix. The
components to be zeroed out. If None (default) or an
empty list, only components from ica.exclude will be
excluded. Else, the union of exclude and ica.exclude
will be excluded.
The number of PCA components to be kept, either absolute (int)
or fraction of the explained variance (float). If None (default),
the ica.n_pca_components from initialization will be used in 0.22;
in 0.23 all components will be used.
How to handle baseline-corrected epochs or evoked data.
Can be 'raise' to raise an error, 'warn' (default) to emit a
warning, 'ignore' to ignore, or 'reapply' to reapply the baseline
after applying ICA.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Applying ICA may introduce a DC shift. If you pass
baseline-corrected Epochs or Evoked data,
the baseline period of the cleaned data may not be of
zero mean anymore. If you require baseline-corrected
data, apply baseline correction again after cleaning
via ICA. A warning will be emitted to remind you of this
fact if you pass baseline-corrected data.
Changed in version 0.23: Warn if instance was baseline-corrected.
Cross-trial phase statistics [7] or Pearson
correlation can be used for detection.
Note
If no ECG channel is available, an artificial ECG channel will be
created based on cross-channel averaging of "mag" or "grad"
channels. If neither of these channel types are available in
inst, artificial ECG channel creation is impossible.
The name of the channel to use for ECG peak detection.
If None (default), ECG channel is used if present. If None and
no ECG channel is present, a synthetic ECG channel is created from
the cross-channel average. This synthetic channel can only be created from
MEG channels.
First sample to include. If float, data will be interpreted as
time in seconds. If None, data will be used from the first sample.
When working with Epochs or Evoked objects, must be float or None.
Last sample to not include. If float, data will be interpreted as
time in seconds. If None, data will be used to the last sample.
When working with Epochs or Evoked objects, must be float or None.
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
New in v0.14.0.
measure : 'zscore' | 'correlation'
Which method to use for finding outliers among the components:
'zscore' (default) is the iterative z-scoring method. This method
computes the z-score of the component’s scores and masks the components
with a z-score above threshold. This process is repeated until no
supra-threshold component remains.
'correlation' is an absolute raw correlation threshold ranging from 0
to 1.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
The threshold, method, and measure parameters interact in
the following ways:
If method='ctps', threshold refers to the significance value
of a Kuiper statistic, and threshold='auto' will compute the
threshold automatically based on the sampling frequency.
If method='correlation' and measure='correlation',
threshold refers to the Pearson correlation value, and
threshold='auto' sets the threshold to 0.9.
If method='correlation' and measure='zscore', threshold
refers to the z-score value (i.e., standard deviations) used in the
iterative z-scoring method, and threshold='auto' sets the
threshold to 3.0.
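As a hedged usage sketch (the ECG channel name is an assumption; if omitted, an
ECG channel is auto-detected or synthesized as described above):

    ecg_indices, ecg_scores = ica.find_bads_ecg(raw, ch_name='ECG 063',
                                                method='ctps', threshold='auto')
    ica.exclude.extend(ecg_indices)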
Detection is based on Pearson correlation between the
filtered data and the filtered EOG channel.
Thresholding is based on adaptive z-scoring. The above threshold
components will be masked and the z-score will be recomputed
until no supra-threshold component remains.
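A corresponding hedged sketch for EOG detection (the channel name is illustrative):

    eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name='EOG 061')
    ica.exclude.extend(eog_indices)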
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
New in v0.14.0.
measure : 'zscore' | 'correlation'
Which method to use for finding outliers among the components:
'zscore' (default) is the iterative z-scoring method. This method
computes the z-score of the component’s scores and masks the components
with a z-score above threshold. This process is repeated until no
supra-threshold component remains.
'correlation' is an absolute raw correlation threshold ranging from 0
to 1.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Detection is based on [8] which uses
data from a subject who has been temporarily paralyzed
[9]. The criteria are threefold:
Positive log-log spectral slope from 7 to 45 Hz
Peripheral component power (farthest away from the vertex)
A single focal point measured by low spatial smoothness
The threshold is relative to the slope, focal point and smoothness
of a typical muscle-related ICA component. Note that the upper frequency
of the power spectral density slope was 75 Hz in the reference, but the
default has been lowered to 45 Hz because the criteria proved more
accurate in practice.
If inst is supplied without sensor positions, only the first criterion
(slope) is applied.
The sphere parameters to use for the head outline. Can be array-like of
shape (4,) to give the X/Y/Z origin and radius in meters, or a single float
to give just the radius (origin assumed 0, 0, 0). Can also be an instance
of a spherical ConductorModel to use the origin and
radius from that object. If 'auto' the sphere is fit to digitization
points. If 'eeglab' the head circle is defined by EEG electrodes
'Fpz', 'Oz', 'T7', and 'T8' (if 'Fpz' is not present,
it will be approximated from the coordinates of 'Oz'). None (the
default) is equivalent to 'auto' when enough extra digitization points
are available, and (0, 0, 0, 0.095) otherwise.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
method : 'together' | 'separate'
Method to use to identify reference channel related components.
Defaults to 'together'. See notes.
New in v0.21.
measure : 'zscore' | 'correlation'
Which method to use for finding outliers among the components:
'zscore' (default) is the iterative z-scoring method. This method
computes the z-score of the component’s scores and masks the components
with a z-score above threshold. This process is repeated until no
supra-threshold component remains.
'correlation' is an absolute raw correlation threshold ranging from 0
to 1.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
ICA decomposition on MEG reference channels is used to assess external
magnetic noise and remove it from the MEG. Two methods are supported:
With the 'together' method, only one ICA fit is used, which
encompasses both MEG and reference channels together. Components which
have particularly strong weights on the reference channels may be
thresholded and marked for removal.
With 'separate', selected components from a separate ICA
decomposition on the reference channels are used as a ground truth for
identifying bad components in an ICA fit done on MEG channels only. The
logic here is similar to EOG/ECG detection, with reference components
replacing the EOG/ECG channels. The recommended procedure is to perform
ICA separately on the reference channels, extract them using
get_sources(), and then append them to the inst using add_channels(),
preferably with the prefix REF_ICA so that they can be automatically
detected (see the sketch below).
With 'together', thresholding is based on adaptive z-scoring.
With 'separate':
If measure is 'zscore', thresholding is based on adaptive
z-scoring.
If measure is 'correlation', threshold defines the absolute
threshold on the correlation between 0 and 1.
Validation and further documentation for this technique can be found
in [10].
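A hedged sketch of the 'separate' workflow described above (the picking of
reference channels, the component counts, and the renaming scheme are
assumptions about your data):

    import mne
    from mne.preprocessing import ICA

    # ICA on the MEG reference channels only
    ref_picks = mne.pick_types(raw.info, meg=False, ref_meg=True)
    ica_ref = ICA(n_components=2, allow_ref_meg=True)
    ica_ref.fit(raw, picks=ref_picks)

    # extract the reference sources and append them with a REF_ prefix
    ref_sources = ica_ref.get_sources(raw)
    ref_sources.rename_channels({name: 'REF_' + name
                                 for name in ref_sources.ch_names})
    raw.add_channels([ref_sources], force_update_info=True)

    # ICA on the MEG channels, then flag components matching the references
    ica = ICA(n_components=20)
    ica.fit(raw, picks='meg')
    ref_indices, ref_scores = ica.find_bads_ref(raw, method='separate')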
Caveat! If supplying a noise covariance, keep track of the projections
available in the cov, the raw, or the epochs object. For example,
if you are interested in EOG or ECG artifacts, EOG and ECG projections
should be temporarily removed before fitting the ICA.
Channels to include. Slices and lists of integers will be interpreted as
channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick good data channels
(excluding reference MEG channels). Note that channels in info['bads']
will be included if their names or indices are explicitly provided.
This selection remains throughout the initialized ICA solution.
First and last sample to include. If float, data will be
interpreted as time in seconds. If None, data will be used from
the first sample and to the last sample, respectively.
Note
These parameters only have an effect if inst is
Raw data.
Rejection parameters based on peak-to-peak amplitude (PTP)
in the continuous data. Signal periods exceeding the thresholds
in reject or less than the thresholds in flat will be
removed before fitting the ICA.
Note
These parameters only have an effect if inst is
Raw data. For Epochs, perform PTP
rejection via drop_bad().
Valid keys are all channel types present in the data. Values must
be integers or floats.
If None, no PTP-based rejection will be performed. Example:
    reject = dict(grad=4000e-13,  # T / m (gradiometers)
                  mag=4e-12,      # T (magnetometers)
                  eeg=40e-6,      # V (EEG channels)
                  eog=250e-6)     # V (EOG channels)
    flat = None  # no rejection based on flatness
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Channels to include. Slices and lists of integers will be interpreted as
channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick all channels. Note that
channels in info['bads'] will be included if their names or indices
are explicitly provided.
The component(s) for which to do the calculation. If more than one
component is specified, explained variance will be calculated
jointly across all supplied components. If None (default), uses
all available components.
The fraction of variance in inst that can be explained by the
ICA components, calculated separately for each channel type.
Dictionary keys are the channel types, and corresponding explained
variance ratios are the values.
Notes
A value similar to EEGLAB’s pvaf (percent variance accounted for)
will be calculated for the specified component(s).
Since ICA components cannot be assumed to be aligned orthogonally, the
sum of the proportion of variance explained by all components may not
be equal to 1. In certain situations, the proportion of variance
explained by a component may even be negative.
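A hedged usage sketch (the component index and channel type are illustrative):

    explained_var = ica.get_explained_variance_ratio(raw)
    explained_var_c0 = ica.get_explained_variance_ratio(raw, components=[0],
                                                        ch_type='eeg')
    print(explained_var_c0)  # a dict keyed by channel type, e.g. {'eeg': ...}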
Indices of the independent components (ICs) to visualize.
If an integer, represents the index of the IC to pick.
Multiple ICs can be selected using a list of int or a slice.
The indices are 0-indexed, so picks=1 will pick the second
IC: ICA001. None will pick all independent components in the order fitted.
The channel type to plot. For 'grad', the gradiometers are
collected in pairs and the RMS for each pair is plotted. If
None, the first available channel type from the order shown above is used. Defaults to None.
To be able to see component properties after clicking on component
topomap you need to pass relevant data - instances of Raw or Epochs
(for example the data that ICA was trained on). This takes effect
only when running matplotlib in interactive mode.
Whether to plot standard deviation in ERP/ERF and spectrum plots.
Defaults to True, which plots one standard deviation above/below.
If set to a float, controls how many standard deviations are
plotted. For example, 2.5 will plot 2.5 standard deviations above/below.
Specifies rejection parameters used to drop epochs
(or segments if a continuous signal is passed as inst).
If None, no rejection is applied. The default is ‘auto’,
which applies the rejection parameters used when fitting
the ICA object.
Whether to add markers for sensor locations. If str, should be a
valid matplotlib format string (e.g., 'r+' for red plusses, see the
Notes section of plot()). If True (the
default), black circles will be used.
If True, show channel names next to each sensor marker. If callable,
channel names will be formatted using the callable; e.g., to
delete the prefix 'MEG ' from all channel names, pass the function
lambda x: x.replace('MEG ', ''). If mask is not None, only
non-masked sensor names will be shown.
The number of contour lines to draw. If 0, no contours will be drawn.
If a positive integer, that number of contour levels are chosen using the
matplotlib tick locator (may sometimes be inaccurate, use array for
accuracy). If array-like, the array values are used as the contour levels.
The values should be in µV for EEG, fT for magnetometers and fT/m for
gradiometers. If colorbar=True, the colorbar will have ticks
corresponding to the contour levels. Default is 6.
The outlines to be drawn. If ‘head’, the default head scheme will be
drawn. If dict, each key refers to a tuple of x and y positions, the values
in ‘mask_pos’ will serve as image mask.
Alternatively, a matplotlib patch object can be passed for advanced
masking options, either directly or as a function that returns patches
(required for multi-axis plots). If None, nothing will be drawn.
Defaults to ‘head’.
The sphere parameters to use for the head outline. Can be array-like of
shape (4,) to give the X/Y/Z origin and radius in meters, or a single float
to give just the radius (origin assumed 0, 0, 0). Can also be an instance
of a spherical ConductorModel to use the origin and
radius from that object. If 'auto' the sphere is fit to digitization
points. If 'eeglab' the head circle is defined by EEG electrodes
'Fpz', 'Oz', 'T7', and 'T8' (if 'Fpz' is not present,
it will be approximated from the coordinates of 'Oz'). None (the
default) is equivalent to 'auto' when enough extra digitization points
are available, and (0, 0, 0, 0.095) otherwise.
'box'
Extrapolate to four points placed to form a square encompassing all
data points, where each side of the square is three times the range
of the data in the respective dimension.
'local' (default for MEG sensors)
Extrapolate only to nearby points (approximately to points closer than
median inter-electrode distance). This will also set the
mask to be polygonal based on the convex hull of the sensors.
'head' (default for non-MEG sensors)
Extrapolate out to the edges of the clipping circle. This will be on
the head circle when the sensors are contained within the head circle,
but it can extend beyond the head when sensors are plotted outside
the head circle.
Colormap to use. If tuple, the first value indicates the colormap
to use and the second value is a boolean defining interactivity. In
interactive mode the colors are adjustable by clicking and dragging the
colorbar with left and right mouse button. Left mouse button moves the
scale up and down and right mouse button adjusts the range. Hitting
space bar resets the range. Up and down arrows can be used to change
the colormap. If None, 'Reds' is used for data that is either
all-positive or all-negative, and 'RdBu_r' is used otherwise.
'interactive' is equivalent to (None,True). Defaults to None.
Warning
Interactive mode works smoothly only for a small amount
of topomaps. Interactive mode is disabled by default for more than
2 topomaps.
Lower and upper bounds of the colormap, typically a numeric value in the same
units as the data.
If both entries are None, the bounds are set at (min(data),max(data)).
Providing None for just one entry will set the corresponding boundary at the
min/max of the data. Defaults to (None,None).
How to normalize the colormap. If None, standard linear normalization
is performed. If not None, vmin and vmax will be ignored.
See Matplotlib docs
for more details on colormap normalization, and
the ERDs example for an example of its use.
The subplot(s) to plot to. Either a single Axes or an iterable of Axes
if more than one subplot is needed. The number of subplots must match
the number of selected components. If None, new figures will be created
with the number of subplots per figure controlled by nrows and
ncols.
The number of rows and columns of topographies to plot. If both nrows
and ncols are 'auto', will plot up to 20 components in a 5×4 grid,
and return multiple figures if more than 20 components are requested.
If one is 'auto' and the other a scalar, a single figure is generated.
If scalars are provided for both arguments, will plot up to nrows*ncols
components in a grid and return multiple figures as needed. Default is
nrows='auto',ncols='auto'.
Dictionary of arguments to pass to plot_epochs_image()
in interactive mode. Ignored if inst is not supplied. If None,
nothing is passed. Defaults to None.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
When run in interactive mode, plot_ica_components allows rejecting
components by clicking on their title label. The state of each component
is indicated by its label color (gray: rejected; black: retained). It is
also possible to open component properties by clicking on the component
topomap (this option is only available when the inst argument is
supplied).
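A hedged sketch of invoking the interactive topomap view (matplotlib must be in
interactive mode for the click behaviour described above to take effect):

    ica.plot_components(inst=raw)  # click a title to toggle rejection,
                                   # click a topomap to open component properties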
The signal to plot. If Raw, the raw data per channel type is displayed
before and after cleaning. A second panel with the RMS for MEG sensors and the
GFP for EEG sensors is displayed. If Evoked, butterfly traces for
signals before and after cleaning will be superimposed.
Channels to include. Slices and lists of integers will be interpreted as
channel indices. In lists, channel type strings (e.g., ['meg', 'eeg'])
will pick channels of those types, channel name strings (e.g.,
['MEG0111', 'MEG2623']) will pick the given channels. Can also be the
string values 'all' to pick all channels, or 'data' to pick
data channels. None (default) will pick all channels that were included
during fitting.
The first and last time point (in seconds) of the data to plot. If
inst is a Raw object, start=None and stop=None
will be translated into start=0. and stop=3., respectively. For
Evoked, None refers to the beginning and end of the evoked
signal.
The number of PCA components to be kept, either absolute (int)
or fraction of the explained variance (float). If None (default),
the ica.n_pca_components from initialization will be used in 0.22;
in 0.23 all components will be used.
How to handle baseline-corrected epochs or evoked data.
Can be 'raise' to raise an error, 'warn' (default) to emit a
warning, 'ignore' to ignore, or 'reapply' to reapply the baseline
after applying ICA.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Indices of the independent components (ICs) to visualize.
If an integer, represents the index of the IC to pick.
Multiple ICs can be selected using a list of int or a slice.
The indices are 0-indexed, so picks=1 will pick the second
IC: ICA001. None will pick the first 5 components.
List of five matplotlib axes to use in plotting: [topomap_axis,
image_axis, erp_axis, spectrum_axis, variance_axis]. If None a new
figure with relevant axes is created. Defaults to None.
Whether to plot standard deviation/confidence intervals in ERP/ERF and
spectrum plots.
Defaults to True, which plots one standard deviation above/below for
the spectrum. If set to a float, controls how many standard
deviations are plotted for the spectrum. For example, 2.5 will plot 2.5
standard deviations above/below.
For the ERP/ERF, by default, the 95 percent parametric confidence
interval is calculated and plotted. To change this, use ci in ts_args in
image_args (see below).
Specifies rejection parameters used to drop epochs
(or segments if a continuous signal is passed as inst).
If None, no rejection is applied. The default is ‘auto’,
which applies the rejection parameters used when fitting
the ICA object.
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
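A hedged sketch for inspecting a couple of components (the indices and the
psd_args values are placeholders):

    ica.plot_properties(raw, picks=[0, 1], psd_args=dict(fmax=45.0))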
The labels to consider for the axes tests. Defaults to None.
If list, should match the outer shape of scores.
If ‘ecg’ or ‘eog’, the labels_ attributes will be looked up.
Note that ‘/’ is used internally for sublabels specifying ECG and
EOG channels.
Scores are plotted in a grid. This parameter controls how
many to plot side by side before starting a new row. By
default, a number will be chosen to make the grid as square as
possible.
Indices of the independent components (ICs) to visualize.
If an integer, represents the index of the IC to pick.
Multiple ICs can be selected using a list of int or a slice.
The indices are 0-indexed, so picks=1 will pick the second
IC: ICA001. None will pick all independent components in the order fitted.
If inst is a Raw or an Evoked object, the first and
last time point (in seconds) of the data to plot. If inst is a
Raw object, start=None and stop=None will be
translated into start=0. and stop=3., respectively. For
Evoked, None refers to the beginning and end of the evoked
signal. If inst is an Epochs object, specifies the index of
the first and last epoch to show.
Whether to halt program execution until the figure is closed.
Useful for interactive selection of components in raw and epoch
plotter. For evoked, this parameter has no effect. Defaults to False.
Whether to show scrollbars when the plot is initialized. Can be toggled
after initialization by pressing z (“zen mode”) while the plot
window is focused. Default is True.
New in v0.19.0.
time_format : 'float' | 'clock'
Style of time labels on the horizontal axis. If 'float', labels will be
number of seconds from the start of the recording. If 'clock',
labels will show “clock time” (hours/minutes/seconds) inferred from
raw.info['meas_date']. Default is 'float'.
Whether to load all data (not just the visible portion) into RAM and
apply preprocessing (e.g., projectors) to the full data array in a separate
processor thread, instead of window-by-window during scrolling. The default
None uses the MNE_BROWSER_PRECOMPUTE variable, which defaults to
'auto'. 'auto' compares available RAM space to the expected size of
the precomputed data, and precomputes only if enough RAM is available.
This is only used with the Qt backend.
New in v0.24.
Changed in version 1.0: Support for the MNE_BROWSER_PRECOMPUTE config variable.
Whether to use OpenGL when rendering the plot (requires pyopengl).
May increase performance, but effect is dependent on system CPU and
graphics hardware. Only works if using the Qt backend. Default is
None, which will use False unless the user configuration variable
MNE_BROWSER_USE_OPENGL is set to 'true',
see mne.set_config().
Can be “auto”, “light”, or “dark” or a path-like to a
custom stylesheet. For Dark-Mode and automatic Dark-Mode-Detection,
qdarkstyle and
darkdetect,
respectively, are required. If None (default), the config option MNE_BROWSER_THEME will be used,
defaulting to “auto” if it’s not found.
Only supported by the 'qt' backend.
Can be “channels”, “empty”, or “hidden” to set the overview bar mode
for the 'qt' backend. If None (default), the config option
MNE_BROWSER_OVERVIEW_MODE will be used, defaulting to “channels”
if it’s not found.
For raw and epoch instances, it is possible to select components for
exclusion by clicking on the line. The selected components are added to
ica.exclude on close.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
Signal to which the sources shall be compared. It has to be of
the same shape as the sources. If str, a routine will try to find
a matching channel name. If None, a score
function expecting only one input-array argument must be used,
for instance, scipy.stats.skew (default).
Callable taking as arguments either two input arrays
(e.g., Pearson correlation) or one input
array (e.g., skewness) and returning a float. For convenience, the
most common score_funcs are available via string labels:
currently, all distance metrics from scipy.spatial and all
functions from scipy.stats taking compatible input arguments are
supported. These functions have been modified to support iteration
over the rows of a 2D array.
Whether to omit bad segments from the data before fitting. If True
(default), annotated segments whose description begins with 'bad' are
omitted. If False, no rejection based on annotations is performed.
Control verbosity of the logging output. If None, use the default
verbosity level. See the logging documentation and
mne.verbose() for details. Should only be passed as a keyword
argument.
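To close, a hedged sketch of scoring sources against a target channel (the
channel name is an assumption; 'pearsonr' is one of the available string labels):

    scores = ica.score_sources(raw, target='EOG 061', score_func='pearsonr')
    ica.plot_scores(scores)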