Artificial Intelligence Ontology
2024-03-29
definition
definition
textual definition
The official OBI definition, explaining the meaning of a class or property. Shall be Aristotelian, formalized and normalized. Can be augmented with colloquial definitions.
The official definition, explaining the meaning of a class or property. Shall be Aristotelian, formalized and normalized. Can be augmented with colloquial definitions.
2012-04-05:
Barry Smith
The official OBI definition, explaining the meaning of a class or property: 'Shall be Aristotelian, formalized and normalized. Can be augmented with colloquial definitions' is terrible.
Can you fix to something like:
A statement of necessary and sufficient conditions explaining the meaning of an expression referring to a class or property.
Alan Ruttenberg
Your proposed definition is a reasonable candidate, except that it is very common that necessary and sufficient conditions are not given. Mostly they are necessary, occasionally they are necessary and sufficient or just sufficient. Often they use terms that are not themselves defined and so they effectively can't be evaluated by those criteria.
On the specifics of the proposed definition:
We don't have definitions of 'meaning' or 'expression' or 'property'. For 'reference' in the intended sense I think we use the term 'denotation'. For 'expression', I think you mean symbol, or identifier. For 'meaning' it differs for class and property. For class we want documentation that lets the intended reader determine whether an entity is an instance of the class, or not. For property we want documentation that lets the intended reader determine, given a pair of potential relata, whether the assertion that the relation holds is true. The 'intended reader' part suggests that we also specify who, we expect, would be able to understand the definition, and also generalizes over human and computer readers to include textual and logical definitions.
Personally, I am more comfortable weakening definition to documentation, with instructions as to what is desirable.
We also have the outstanding issue of how to aim different definitions at different audiences. A clinical audience reading chebi wants a different sort of documentation/definition from a chemistry-trained audience, and similarly there is a need for a definition that is adequate for an ontologist to work with.
PERSON:Daniel Schober
GROUP:OBI:<http://purl.obolibrary.org/obo/obi>
definition
definition
textual definition
If R <- P o Q is a defining property chain axiom, then it also holds that R -> P o Q. Note that this cannot be expressed directly in OWL
is a defining property chain axiom
If R <- P o Q is a defining property chain axiom, then (1) R -> P o Q holds and (2) Q is either reflexive or locally reflexive. A corollary of this is that P SubPropertyOf R.
is a defining property chain axiom where second argument is reflexive
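The semantics of a defining property chain axiom can be illustrated on finite relations. The sketch below is not part of the ontology; it models relations as Python sets of pairs to show that an ordinary chain axiom requires only P o Q ⊆ R, while a defining chain axiom asserts equality, and that a reflexive Q yields the corollary P SubPropertyOf R.

```python
# Illustrative sketch (not part of the ontology): a defining property
# chain axiom R <- P o Q, modeled on finite relations as sets of pairs.

def compose(P, Q):
    """Relation composition: (a, c) in P o Q iff there is some b
    with (a, b) in P and (b, c) in Q."""
    return {(a, c) for (a, b1) in P for (b2, c) in Q if b1 == b2}

def satisfies_defining_chain(R, P, Q):
    # An ordinary chain axiom only requires compose(P, Q) <= R.
    # A *defining* chain axiom additionally asserts R <= compose(P, Q),
    # i.e. R = P o Q exactly (this converse is not expressible in OWL).
    return R == compose(P, Q)

# Hypothetical example relations:
P = {("finger", "hand")}
Q = {("hand", "body")}
R = {("finger", "body")}
assert satisfies_defining_chain(R, P, Q)

# If Q is (locally) reflexive, each (a, b) in P composes with (b, b)
# in Q, so P is a subproperty of R, as the stated corollary says.
Q_reflexive = Q | {("hand", "hand")}
assert P <= compose(P, Q_reflexive)
```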
description
Mark Miller
2018-05-11T13:47:29Z
license
title
label
label
'is about' relates an information entity to another entity when the information entity holds some information that describes some facet of the other entity, such as the arrow direction on a sign.
IAO
James Malone
Alan Ruttenberg
is about
is part of
my brain is part of my body (continuant parthood, two material entities)
my stomach cavity is part of my stomach (continuant parthood, immaterial entity is part of material entity)
this day is part of this year (occurrent parthood)
a core relation that holds between a part and its whole
Everything is part of itself. Any part of any part of a thing is itself part of that thing. Two distinct things cannot be part of each other.
Occurrents are not subject to change and so parthood between occurrents holds for all the times that the part exists. Many continuants are subject to change, so parthood between continuants will only hold at certain times, but this is difficult to specify in OWL. See http://purl.obolibrary.org/obo/ro/docs/temporal-semantics/
Occurrents are not subject to change and so parthood between occurrents holds for all the times that the part exists. Many continuants are subject to change, so parthood between continuants will only hold at certain times, but this is difficult to specify in OWL. See https://code.google.com/p/obo-relations/wiki/ROAndTime
Parthood requires the part and the whole to have compatible classes: only an occurrent can be part of an occurrent; only a process can be part of a process; only a continuant can be part of a continuant; only an independent continuant can be part of an independent continuant; only an immaterial entity can be part of an immaterial entity; only a specifically dependent continuant can be part of a specifically dependent continuant; only a generically dependent continuant can be part of a generically dependent continuant. (This list is not exhaustive.)
A continuant cannot be part of an occurrent: use 'participates in'. An occurrent cannot be part of a continuant: use 'has participant'. A material entity cannot be part of an immaterial entity: use 'has location'. A specifically dependent continuant cannot be part of an independent continuant: use 'inheres in'. An independent continuant cannot be part of a specifically dependent continuant: use 'bearer of'.
part_of
part of
http://www.obofoundry.org/ro/#OBO_REL:part_of
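The three axioms stated for part of (reflexivity, transitivity, antisymmetry) can be checked mechanically on a finite relation. The sketch below is illustrative only, using a hypothetical three-element anatomy domain:

```python
# Illustrative sketch: checking the stated part_of axioms on a finite
# domain. "Everything is part of itself" = reflexivity; "any part of any
# part of a thing is itself part of that thing" = transitivity; "two
# distinct things cannot be part of each other" = antisymmetry.

def is_reflexive(rel, domain):
    return all((x, x) in rel for x in domain)

def is_transitive(rel):
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)

def is_antisymmetric(rel):
    return all(a == b for (a, b) in rel if (b, a) in rel)

# Hypothetical finite model: brain part_of head part_of body.
domain = {"brain", "head", "body"}
part_of = ({(x, x) for x in domain}
           | {("brain", "head"), ("head", "body"), ("brain", "body")})

assert is_reflexive(part_of, domain)
assert is_transitive(part_of)
assert is_antisymmetric(part_of)
```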
has part
my body has part my brain (continuant parthood, two material entities)
my stomach has part my stomach cavity (continuant parthood, material entity has part immaterial entity)
this year has part this day (occurrent parthood)
a core relation that holds between a whole and its part
Everything has itself as a part. Any part of any part of a thing is itself part of that thing. Two distinct things cannot have each other as a part.
Occurrents are not subject to change and so parthood between occurrents holds for all the times that the part exists. Many continuants are subject to change, so parthood between continuants will only hold at certain times, but this is difficult to specify in OWL. See http://purl.obolibrary.org/obo/ro/docs/temporal-semantics/
Occurrents are not subject to change and so parthood between occurrents holds for all the times that the part exists. Many continuants are subject to change, so parthood between continuants will only hold at certain times, but this is difficult to specify in OWL. See https://code.google.com/p/obo-relations/wiki/ROAndTime
Parthood requires the part and the whole to have compatible classes: only an occurrent can have an occurrent as part; only a process can have a process as part; only a continuant can have a continuant as part; only an independent continuant can have an independent continuant as part; only a specifically dependent continuant can have a specifically dependent continuant as part; only a generically dependent continuant can have a generically dependent continuant as part. (This list is not exhaustive.)
A continuant cannot have an occurrent as part: use 'participates in'. An occurrent cannot have a continuant as part: use 'has participant'. An immaterial entity cannot have a material entity as part: use 'location of'. An independent continuant cannot have a specifically dependent continuant as part: use 'bearer of'. A specifically dependent continuant cannot have an independent continuant as part: use 'inheres in'.
has_part
has part
realized in
this disease is realized in this disease course
this fragility is realized in this shattering
this investigator role is realized in this investigation
is realized by
realized_in
[copied from inverse property 'realizes'] to say that b realizes c at t is to assert that there is some material entity d & b is a process which has participant d at t & c is a disposition or role of which d is bearer_of at t & the type instantiated by b is correlated with the type instantiated by c. (axiom label in BFO2 Reference: [059-003])
Paraphrase of elucidation: a relation between a realizable entity and a process, where there is some material entity that is bearer of the realizable entity and participates in the process, and the realizable entity comes to be realized in the course of the process
realized in
realizes
this disease course realizes this disease
this investigation realizes this investigator role
this shattering realizes this fragility
to say that b realizes c at t is to assert that there is some material entity d & b is a process which has participant d at t & c is a disposition or role of which d is bearer_of at t & the type instantiated by b is correlated with the type instantiated by c. (axiom label in BFO2 Reference: [059-003])
Paraphrase of elucidation: a relation between a process and a realizable entity, where there is some material entity that is bearer of the realizable entity and participates in the process, and the realizable entity comes to be realized in the course of the process
realizes
preceded by
x is preceded by y if and only if the time point at which y ends is before or equivalent to the time point at which x starts. Formally: x preceded by y iff ω(y) <= α(x), where α is a function that maps a process to a start point, and ω is a function that maps a process to an end point.
An example is: translation preceded_by transcription; aging preceded_by development (not however death preceded_by aging). Where derives_from links classes of continuants, preceded_by links classes of processes. Clearly, however, these two relations are not independent of each other. Thus if cells of type C1 derive_from cells of type C, then any cell division involving an instance of C1 in a given lineage is preceded_by cellular processes involving an instance of C. The assertion P preceded_by P1 tells us something about Ps in general: that is, it tells us something about what happened earlier, given what we know about what happened later. Thus it does not provide information pointing in the opposite direction, concerning instances of P1 in general; that is, that each is such as to be succeeded by some instance of P. Note that an assertion to the effect that P preceded_by P1 is rather weak; it tells us little about the relations between the underlying instances in virtue of which the preceded_by relation obtains. Typically we will be interested in stronger relations, for example in the relation immediately_preceded_by, or in relations which combine preceded_by with a condition to the effect that the corresponding instances of P and P1 share participants, or that their participants are connected by relations of derivation, or (as a first step along the road to a treatment of causality) that the one process in some way affects (for example, initiates or regulates) the other.
is preceded by
preceded_by
http://www.obofoundry.org/ro/#OBO_REL:preceded_by
preceded by
precedes
x precedes y if and only if the time point at which x ends is before or equivalent to the time point at which y starts. Formally: x precedes y iff ω(x) <= α(y), where α is a function that maps a process to a start point, and ω is a function that maps a process to an end point.
precedes
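The formal definitions above, using α (start point) and ω (end point), can be sketched by modeling a process as a hypothetical (start, end) pair. This is an illustration of the stated formulas, not part of the ontology:

```python
# Illustrative sketch of the alpha/omega formalization: a process is
# modeled as a (start, end) tuple of time points.

def alpha(p):
    """Start point of a process."""
    return p[0]

def omega(p):
    """End point of a process."""
    return p[1]

def precedes(x, y):
    # x precedes y iff omega(x) <= alpha(y)
    return omega(x) <= alpha(y)

def preceded_by(x, y):
    # the inverse: x preceded_by y iff omega(y) <= alpha(x)
    return precedes(y, x)

# Hypothetical time points for the transcription/translation example.
transcription = (0, 5)
translation = (5, 9)
assert precedes(transcription, translation)
assert preceded_by(translation, transcription)
```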
This document is about information artifacts and their representations
A (currently) primitive relation that relates an information artifact to an entity.
7/6/2009 Alan Ruttenberg. Following discussion with Jonathan Rees, and introduction of "mentions" relation. Weaken the is_about relationship to be primitive.
We will try to build it back up by elaborating the various subproperties that are more precisely defined.
Some currently missing phenomena that should be considered "about" are predications - "The only person who knows the answer is sitting beside me" , Allegory, Satire, and other literary forms that can be topical without explicitly mentioning the topic.
person:Alan Ruttenberg
Smith, Ceusters, Ruttenberg, 2000 years of philosophy
is about
has_specified_input
has_specified_input
see is_input_of example_of_usage
The inverse property of is_specified_input_of
8/17/09: specified inputs of one process are not necessarily specified inputs of a larger process that it is part of. This is in contrast to how 'has participant' works.
PERSON: Alan Ruttenberg
PERSON: Bjoern Peters
PERSON: Larry Hunter
PERSON: Melanie Coutot
has_specified_input
is_specified_input_of
some Autologous EBV (Epstein-Barr virus)-transformed B-LCL (B lymphocyte cell line) is_input_for instance of Chromium Release Assay described at https://wiki.cbil.upenn.edu/obiwiki/index.php/Chromium_Release_assay
A relation between a planned process and a continuant participating in that process that is not created during the process. The presence of the continuant during the process is explicitly specified in the plan specification which the process realizes the concretization of.
Alan Ruttenberg
PERSON:Bjoern Peters
is_specified_input_of
has_specified_output
has_specified_output
The inverse property of is_specified_output_of
PERSON: Alan Ruttenberg
PERSON: Bjoern Peters
PERSON: Larry Hunter
PERSON: Melanie Courtot
has_specified_output
is_specified_output_of
is_specified_output_of
A relation between a planned process and a continuant participating in that process. The presence of the continuant at the end of the process is explicitly specified in the objective specification which the process realizes the concretization of.
Alan Ruttenberg
PERSON:Bjoern Peters
is_specified_output_of
achieves_planned_objective
A cell sorting process achieves the objective specification 'material separation objective'
This relation obtains between a planned process and an objective specification when the criteria specified in the objective specification are met at the end of the planned process.
BP, AR, PPPB branch
PPPB branch derived
modified according to email thread from 1/23/09 in accordance with DT and PPPB branch
achieves_planned_objective
objective_achieved_by
This relation obtains between an objective specification and a planned process when the criteria specified in the objective specification are met at the end of the planned process.
OBI
OBI
objective_achieved_by
inheres in
this fragility inheres in this vase
this fragility is a characteristic of this vase
this red color inheres in this apple
this red color is a characteristic of this apple
a relation between a specifically dependent continuant (the characteristic) and any other entity (the bearer), in which the characteristic depends on the bearer for its existence.
a relation between a specifically dependent continuant (the dependent) and an independent continuant (the bearer), in which the dependent specifically depends on the bearer for its existence
A dependent inheres in its bearer at all times for which the dependent exists.
inheres_in
Note that this relation was previously called "inheres in", but was changed to be called "characteristic of" because BFO2 uses "inheres in" in a more restricted fashion. This relation differs from BFO2:inheres_in in two respects: (1) it does not impose a range constraint, and thus it allows qualities of processes, as well as of information entities, whereas BFO2 restricts inheres_in to only apply to independent continuants (2) it is declared functional, i.e. something can only be a characteristic of one thing.
characteristic of
inheres in
bearer of
this apple is bearer of this red color
this vase is bearer of this fragility
Inverse of characteristic_of
a relation between an independent continuant (the bearer) and a specifically dependent continuant (the dependent), in which the dependent specifically depends on the bearer for its existence
A bearer can have many dependents, and its dependents can exist for different periods of time, but none of its dependents can exist when the bearer does not exist.
bearer_of
is bearer of
bearer of
has characteristic
participates in
this blood clot participates in this blood coagulation
this input material (or this output material) participates in this process
this investigator participates in this investigation
a relation between a continuant and a process, in which the continuant is somehow involved in the process
participates_in
Andy Brown
Please see the official RO definition for the inverse of this property, 'has participant.'
participates in
participates in
has participant
The relation obtains, for example, when this particular process of oxygen exchange across this particular alveolar membrane has_participant this particular sample of hemoglobin at this particular time.
this blood coagulation has participant this blood clot
this investigation has participant this investigator
this process has participant this input material (or this output material)
Has_participant is a primitive instance-level relation between a process, a continuant, and a time at which the continuant participates in some way in the process.
a relation between a process and a continuant, in which the continuant is somehow involved in the process
Has_participant is a primitive instance-level relation between a process, a continuant, and a time at which the continuant participates in some way in the process. The relation obtains, for example, when this particular process of oxygen exchange across this particular alveolar membrane has_participant this particular sample of hemoglobin at this particular time.
has_participant
http://obo-relations.googlecode.com/svn/trunk/src/ontology/core.owl
http://www.obofoundry.org/ro/#OBO_REL:has_participant
Andy Brown
has participant
has participant
A journal article is an information artifact that inheres in some number of printed journals. For each copy of the printed journal there is some quality that carries the journal article, such as a pattern of ink. The journal article (a generically dependent continuant) is concretized as the quality (a specifically dependent continuant), and both depend on that copy of the printed journal (an independent continuant).
An investigator reads a protocol and forms a plan to carry out an assay. The plan is a realizable entity (a specifically dependent continuant) that concretizes the protocol (a generically dependent continuant), and both depend on the investigator (an independent continuant). The plan is then realized by the assay (a process).
A relationship between a generically dependent continuant and a specifically dependent continuant, in which the generically dependent continuant depends on some independent continuant in virtue of the fact that the specifically dependent continuant also depends on that same independent continuant. A generically dependent continuant may be concretized as multiple specifically dependent continuants.
is concretized as
A journal article is an information artifact that inheres in some number of printed journals. For each copy of the printed journal there is some quality that carries the journal article, such as a pattern of ink. The quality (a specifically dependent continuant) concretizes the journal article (a generically dependent continuant), and both depend on that copy of the printed journal (an independent continuant).
An investigator reads a protocol and forms a plan to carry out an assay. The plan is a realizable entity (a specifically dependent continuant) that concretizes the protocol (a generically dependent continuant), and both depend on the investigator (an independent continuant). The plan is then realized by the assay (a process).
A relationship between a specifically dependent continuant and a generically dependent continuant, in which the generically dependent continuant depends on some independent continuant in virtue of the fact that the specifically dependent continuant also depends on that same independent continuant. Multiple specifically dependent continuants can concretize the same generically dependent continuant.
concretizes
this catalysis function is a function of this enzyme
a relation between a function and an independent continuant (the bearer), in which the function specifically depends on the bearer for its existence
A function inheres in its bearer at all times for which the function exists, however the function need not be realized at all the times that the function exists.
function_of
is function of
This relation is modeled after the BFO relation of the same name which was in BFO2, but is used in a more restricted sense - specifically, we model this relation as functional (inherited from characteristic-of). Note that this relation is now removed from BFO2020.
function of
this red color is a quality of this apple
a relation between a quality and an independent continuant (the bearer), in which the quality specifically depends on the bearer for its existence
A quality inheres in its bearer at all times for which the quality exists.
is quality of
quality_of
This relation is modeled after the BFO relation of the same name which was in BFO2, but is used in a more restricted sense - specifically, we model this relation as functional (inherited from characteristic-of). Note that this relation is now removed from BFO2020.
quality of
this investigator role is a role of this person
a relation between a role and an independent continuant (the bearer), in which the role specifically depends on the bearer for its existence
A role inheres in its bearer at all times for which the role exists, however the role need not be realized at all the times that the role exists.
is role of
role_of
This relation is modeled after the BFO relation of the same name which was in BFO2, but is used in a more restricted sense - specifically, we model this relation as functional (inherited from characteristic-of). Note that this relation is now removed from BFO2020.
role of
this enzyme has function this catalysis function (more colloquially: this enzyme has this catalysis function)
a relation between an independent continuant (the bearer) and a function, in which the function specifically depends on the bearer for its existence
A bearer can have many functions, and its functions can exist for different periods of time, but none of its functions can exist when the bearer does not exist. A function need not be realized at all the times that the function exists.
has_function
has function
this apple has quality this red color
a relation between an independent continuant (the bearer) and a quality, in which the quality specifically depends on the bearer for its existence
A bearer can have many qualities, and its qualities can exist for different periods of time, but none of its qualities can exist when the bearer does not exist.
has_quality
has quality
this person has role this investigator role (more colloquially: this person has this role of investigator)
a relation between an independent continuant (the bearer) and a role, in which the role specifically depends on the bearer for its existence
A bearer can have many roles, and its roles can exist for different periods of time, but none of its roles can exist when the bearer does not exist. A role need not be realized at all the times that the role exists.
has_role
has role
has_role
a relation between an independent continuant (the bearer) and a disposition, in which the disposition specifically depends on the bearer for its existence
has disposition
inverse of has disposition
This relation is modeled after the BFO relation of the same name which was in BFO2, but is used in a more restricted sense - specifically, we model this relation as functional (inherited from characteristic-of). Note that this relation is now removed from BFO2020.
disposition of
A 'has regulatory component activity' B if A and B are GO molecular functions (GO_0003674), A has_component B and A is regulated by B.
dos
2017-05-24T09:30:46Z
has regulatory component activity
A relationship that holds between a GO molecular function and a component of that molecular function that negatively regulates the activity of the whole. More formally, A 'has negative regulatory component activity' B iff: A and B are GO molecular functions (GO_0003674), A has_component B and A is negatively regulated by B.
dos
2017-05-24T09:31:01Z
By convention GO molecular functions are classified by their effector function. Internal regulatory functions are treated as components. For example, NMDA glutamate receptor activity is a cation channel activity with positive regulatory component 'glutamate binding' and negative regulatory components including 'zinc binding' and 'magnesium binding'.
has negative regulatory component activity
A relationship that holds between a GO molecular function and a component of that molecular function that positively regulates the activity of the whole. More formally, A 'has positive regulatory component activity' B iff: A and B are GO molecular functions (GO_0003674), A has_component B and A is positively regulated by B.
dos
2017-05-24T09:31:17Z
By convention GO molecular functions are classified by their effector function and internal regulatory functions are treated as components. So, for example calmodulin has a protein binding activity that has positive regulatory component activity calcium binding activity. Receptor tyrosine kinase activity is a tyrosine kinase activity that has positive regulatory component 'ligand binding'.
has positive regulatory component activity
dos
2017-05-24T09:44:33Z
A 'has component activity' B if A and B are molecular functions (GO_0003674) and A has_component B.
has component activity
w 'has process component' p if p and w are processes, w 'has part' p and w is such that it can be directly disassembled into n parts p, p2, p3, ..., pn, where these parts are of similar type.
dos
2017-05-24T09:49:21Z
has component process
dos
2017-09-17T13:52:24Z
Process(P2) is directly regulated by process(P1) iff: P1 regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding regulates the kinase activity (P2) of protein B then P1 directly regulates P2.
directly regulated by
Process(P2) is directly regulated by process(P1) iff: P1 regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding regulates the kinase activity (P2) of protein B then P1 directly regulates P2.
GOC:dos
Process(P2) is directly negatively regulated by process(P1) iff: P1 negatively regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding negatively regulates the kinase activity (P2) of protein B then P2 is directly negatively regulated by P1.
dos
2017-09-17T13:52:38Z
directly negatively regulated by
Process(P2) is directly negatively regulated by process(P1) iff: P1 negatively regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding negatively regulates the kinase activity (P2) of protein B then P2 is directly negatively regulated by P1.
GOC:dos
Process(P2) is directly positively regulated by process(P1) iff: P1 positively regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding positively regulates the kinase activity (P2) of protein B then P2 is directly positively regulated by P1.
dos
2017-09-17T13:52:47Z
directly positively regulated by
Process(P2) is directly positively regulated by process(P1) iff: P1 positively regulates P2 via direct physical interaction between an agent executing P1 (or some part of P1) and an agent executing P2 (or some part of P2). For example, if protein A has protein binding activity(P1) that targets protein B and this binding positively regulates the kinase activity (P2) of protein B then P2 is directly positively regulated by P1.
GOC:dos
A 'has effector activity' B if A and B are GO molecular functions (GO_0003674), A 'has component activity' B and B is the effector (output function) of A. Each compound function has only one effector activity.
dos
2017-09-22T14:14:36Z
This relation is designed for constructing compound molecular functions, typically in combination with one or more regulatory component activity relations.
has effector activity
A 'has effector activity' B if A and B are GO molecular functions (GO_0003674), A 'has component activity' B and B is the effector (output function) of A. Each compound function has only one effector activity.
GOC:dos
David Osumi-Sutherland
X ends_after Y iff: end(Y) before_or_simultaneous_with end(X)
ends after
David Osumi-Sutherland
starts_at_end_of
X immediately_preceded_by Y iff: end(Y) simultaneous_with start(X)
immediately preceded by
David Osumi-Sutherland
ends_at_start_of
meets
X immediately_precedes_Y iff: end(X) simultaneous_with start(Y)
immediately precedes
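The interval definitions above (ends_after, immediately_preceded_by, immediately_precedes) can be sketched the same way, modeling a process as a hypothetical (start, end) pair. This is an illustration, not part of the ontology:

```python
# Illustrative sketch of the Allen-style interval relations defined
# above, with a process modeled as a (start, end) tuple.

def start(p):
    return p[0]

def end(p):
    return p[1]

def ends_after(x, y):
    # X ends_after Y iff: end(Y) before_or_simultaneous_with end(X)
    return end(y) <= end(x)

def immediately_precedes(x, y):
    # X immediately_precedes Y iff: end(X) simultaneous_with start(Y)
    # ("meets" in Allen interval algebra terms)
    return end(x) == start(y)

def immediately_preceded_by(x, y):
    # the inverse of immediately_precedes
    return immediately_precedes(y, x)

# Hypothetical time points for the development/aging example.
development = (0, 10)
aging = (10, 40)
assert immediately_precedes(development, aging)
assert immediately_preceded_by(aging, development)
assert ends_after(aging, development)
```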
x overlaps y if and only if there exists some z such that x has part z and z part of y
http://purl.obolibrary.org/obo/BFO_0000051 some (http://purl.obolibrary.org/obo/BFO_0000050 some ?Y)
overlaps
true
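The existential definition of overlaps can be spelled out directly on a finite part_of relation. The sketch below is illustrative only; the example parts and wholes are hypothetical:

```python
# Illustrative sketch: x overlaps y iff there exists some z such that
# x has part z and z part of y. part_of is a set of (part, whole) pairs.

def overlaps(x, y, part_of):
    # (z, x) in part_of means "x has part z"; (z, y) means "z part of y".
    return any(w == x and (z, y) in part_of for (z, w) in part_of)

# Hypothetical finite model: a ribosome is part of both the cell and
# the translation machinery, so the two wholes overlap.
part_of = {("ribosome", "cell"),
           ("ribosome", "translation machinery")}

assert overlaps("cell", "translation machinery", part_of)
assert not overlaps("cell", "nucleus", part_of)
```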
w 'has component' p if w 'has part' p and w is such that it can be directly disassembled into n parts p, p2, p3, ..., pn, where these parts are of similar type.
The definition of 'has component' is still under discussion. The challenge is in providing a definition that does not imply transitivity.
For use in recording has_part with a cardinality constraint, because OWL does not permit cardinality constraints to be used in combination with transitive object properties. In situations where you would want to say something like 'has part exactly 5 digit', you would instead use has_component exactly 5 digit.
has component
p regulates q iff p is causally upstream of q, the execution of p is not constant and varies according to specific conditions, and p influences the rate or magnitude of execution of q due to an effect either on some enabler of q or some enabler of a part of q.
GO
Regulation precludes parthood; the regulatory process may not be within the regulated process.
regulates (processual)
false
regulates
p negatively regulates q iff p regulates q, and p decreases the rate or magnitude of execution of q.
negatively regulates (process to process)
negatively regulates
p positively regulates q iff p regulates q, and p increases the rate or magnitude of execution of q.
positively regulates (process to process)
positively regulates
mechanosensory neuron capable of detection of mechanical stimulus involved in sensory perception (GO:0050974)
osteoclast SubClassOf 'capable of' some 'bone resorption'
A relation between a material entity (such as a cell) and a process, in which the material entity has the ability to carry out the process.
has function realized in
For compatibility with BFO, this relation has a shortcut definition in which the expression "capable of some P" expands to "bearer_of (some realized_by only P)".
capable of
c stands in this relationship to p if and only if there exists some p' such that c is capable_of p', and p' is part_of p.
has function in
capable of part of
true
Do not use this relation directly. It is intended as a grouping for relations between occurrents involving the relative timing of their starts and ends.
https://docs.google.com/document/d/1kBv1ep_9g3sTR-SD3jqzFqhuwo9TPNF-l-9fUDbO6rM/edit?pli=1
A relation that holds between two occurrents. This is a grouping relation that collects together all the Allen relations.
temporally related to
p has input c iff: p is a process, c is a material entity, c is a participant in p, c is present at the start of p, and the state of c is modified during p.
consumes
has input
A faulty traffic light (material entity) whose malfunctioning (a process) is causally upstream of a traffic collision (a process): the traffic light acts upstream of the collision.
c acts upstream of p if and only if c enables some f that is involved in p' and p' occurs chronologically before p, is not part of p, and affects the execution of p. c is a material entity and f, p, p' are processes.
acts upstream of
A gene product that has some activity, where that activity may be a part of a pathway or upstream of the pathway.
c acts upstream of or within p if c enables f, and f is causally upstream of or within p. c is a material entity and p is a process.
affects
acts upstream of or within
p is causally upstream of, positive effect q iff p is causally upstream of q, and the execution of p is required for the execution of q.
holds between x and y if and only if x is causally upstream of y and the progression of x increases the frequency, rate or extent of y
causally upstream of, positive effect
p is causally upstream of, negative effect q iff p is causally upstream of q, and the execution of p decreases the execution of q.
causally upstream of, negative effect
q characteristic of part of w if and only if there exists some p such that q inheres in p and p part of w.
Because part_of is transitive, inheres in is a sub-relation of characteristic of part of.
inheres in part of
characteristic of part of
true
A mereological relationship or a topological relationship
Do not use this relation directly. It is intended as a grouping for a diverse set of relations, all involving parthood or connectivity relationships.
mereotopologically related to
a particular instance of akt-2 enables some instance of protein kinase activity
c enables p iff c is capable of p and c acts to execute p.
catalyzes
executes
has
is catalyzing
is executing
This relation differs from the parent relation 'capable of' in that the parent is weaker and only expresses a capability that may not be actually realized, whereas this relation is always realized.
enables
A grouping relationship for any relationship directly involving a function, or that holds because of a function of one of the related entities.
This is a grouping relation that collects relations used for the purpose of connecting structure and function
functionally related to
this relation holds between c and p when c is part of some c', and c' is capable of p.
false
part of structure that is capable of
true
c involved_in p if and only if c enables some process p', and p' is part of p
actively involved in
enables part of
involved in
inverse of enables
enabled by
inverse of regulates
regulated by (processual)
regulated by
inverse of negatively regulates
negatively regulated by
inverse of positively regulates
positively regulated by
An organism that is a member of a population of organisms
is member of is a mereological relation between an item and a collection.
is member of
member part of
SIO
member of
has member is a mereological relation between a collection and an item.
SIO
has member
An entity A is the 'input of' another entity B if A was put into the system, entity or software represented by B.
inverse of has input
Allyson Lister
input of
input of
inverse of upstream of
causally downstream of
immediately causally downstream of
p indirectly positively regulates q iff p is indirectly causally upstream of q and p positively regulates q.
indirectly activates
indirectly positively regulates
p indirectly negatively regulates q iff p is indirectly causally upstream of q and p negatively regulates q.
indirectly inhibits
indirectly negatively regulates
relation that links two events, processes, states, or objects such that one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is partly or wholly responsible for the effect, and the effect is partly or wholly dependent on the cause.
This branch of the ontology deals with causal relations between entities. It is divided into two branches: causal relations between occurrents/processes, and causal relations between material entities. We take an 'activity flow-centric approach', with the former as primary, and define causal relations between material entities in terms of causal relations between occurrents.
To define causal relations in an activity-flow type network, we make use of 3 primitives:
* Temporal: how do the intervals of the two occurrents relate?
* Is the causal relation regulatory?
* Is the influence positive or negative?
The first of these can be formalized in terms of the Allen Interval Algebra. Informally, the 3 bins we care about are 'direct', 'indirect' or overlapping. Note that all causal relations should be classified under a RO temporal relation (see the branch under 'temporally related to'). Note that all causal relations are temporal, but not all temporal relations are causal. Two occurrents can be related in time without being causally connected. We take causal influence to be primitive, elucidated as being such that, had the upstream been changed, some qualities of the downstream would necessarily be modified.
For the second, we consider a relationship to be regulatory if the system in which the activities occur is capable of altering the relationship to achieve some objective. This could include changing the rate of production of a molecule.
For the third, we consider the effect of the upstream process on the output(s) of the downstream process. If the level of output is increased, or the rate of production of the output is increased, then the direction is positive. Direction can be positive, negative, neutral, or capable of either direction. Two positives in succession yield a positive, two negatives in succession yield a positive; otherwise the default assumption is that the net effect is canceled and the influence is neutral.
Each of these 3 primitives can be composed to yield a cross-product of different relation types.
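The temporal bins and the direction-composition rule described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names and the '+', '-', '0' encoding of directions are assumptions for exposition, not identifiers from RO.

```python
# Illustrative sketch only: names and the '+'/'-'/'0' direction encoding
# are assumptions, not identifiers from the RO files.

def temporal_bin(p_end, q_start):
    """Classify a causal pair (p upstream of q) into the three informal bins."""
    if p_end == q_start:
        return 'direct'       # end of p coincides with start of q
    if p_end < q_start:
        return 'indirect'     # a temporal gap separates p and q
    return 'overlapping'      # the two intervals overlap in time

def compose_direction(first, second):
    """Net direction of two causal influences in succession, per the text:
    two positives yield a positive, two negatives yield a positive,
    otherwise the net effect is taken to cancel (neutral)."""
    if first == second and first in ('+', '-'):
        return '+'
    return '0'
```

Note that under the stated rule the composition is not simple sign multiplication: a mixed pair defaults to neutral rather than negative.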
Do not use this relation directly. It is intended as a grouping for a diverse set of relations, all involving cause and effect.
causally related to
https://en.wikipedia.org/wiki/Causality
p is causally upstream of q iff p is causally related to q, the end of p precedes the end of q, and p is not an occurrent part of q.
causally upstream of
p is immediately causally upstream of q iff p is causally upstream of q, and the end of p is coincident with the beginning of q.
immediately causally upstream of
p is 'causally upstream or within' q iff p is causally related to q, and the end of p precedes, or is coincident with, the end of q.
We would like to make this disjoint with 'preceded by', but this is prohibited in OWL2.
influences (processual)
affects
causally upstream of or within
inverse of causally upstream of or within
causally downstream of or within
c involved in regulation of p if c is involved in some p' and p' regulates some p
involved in regulation of
c involved in positive regulation of p if c is involved in some p' and p' positively regulates some p
involved in positive regulation of
c involved in negative regulation of p if c is involved in some p' and p' negatively regulates some p
involved in negative regulation of
c involved in or regulates p if and only if either (i) c is involved in p or (ii) c is involved in regulation of p
OWL does not allow defining object properties via a Union
involved in or regulates
involved in or involved in regulation of
A relationship that holds between two entities in which the processes executed by the two entities are causally connected.
This relation and all sub-relations can be applied to any of: (1) pairs of entities that are interacting at any moment of time; (2) populations or species of entity whose members have the disposition to interact; (3) classes whose members have the disposition to interact.
Considering relabeling as 'pairwise interacts with'
Note that this relationship type, and sub-relationship types may be redundant with process terms from other ontologies. For example, the symbiotic relationship hierarchy parallels GO. The relations are provided as a convenient shortcut. Consider using the more expressive processual form to capture your data. In the future, these relations will be linked to their cognate processes through rules.
in pairwise interaction with
interacts with
http://purl.obolibrary.org/obo/ro/docs/interaction-relations/
http://purl.obolibrary.org/obo/MI_0914
An interaction relationship in which the two partners are molecular entities that directly physically interact with each other, for example via a stable binding interaction or a brief interaction during which one modifies the other.
binds
molecularly binds with
molecularly interacts with
http://purl.obolibrary.org/obo/MI_0915
Axiomatization to GO to be added later
An interaction relation between x and y in which x catalyzes a reaction in which a phosphate group is added to y.
phosphorylates
The entity A, immediately upstream of the entity B, has an activity that regulates an activity performed by B. For example, A and B may be gene products and binding of B by A regulates the kinase activity of B.
A and B can be physically interacting, but not necessarily. Immediately upstream means there are no intermediate entities between A and B.
molecularly controls
directly regulates activity of
The entity A, immediately upstream of the entity B, has an activity that negatively regulates an activity performed by B.
For example, A and B may be gene products and binding of B by A negatively regulates the kinase activity of B.
directly inhibits
molecularly decreases activity of
directly negatively regulates activity of
The entity A, immediately upstream of the entity B, has an activity that positively regulates an activity performed by B.
For example, A and B may be gene products and binding of B by A positively regulates the kinase activity of B.
directly activates
molecularly increases activity of
directly positively regulates activity of
This property and its subproperties are not to be used directly. These properties exist as helper properties that are used to support OWL reasoning.
helper property (not for use in curation)
is kinase activity
A relationship between a material entity and a process where the material entity has some causal role that influences the process
causal agent in process
p is causally related to q if and only if p or any part of p and q or any part of q are linked by a chain of events where each event pair is one where the execution of one influences the execution of the other. p may be upstream, downstream, part of, or a container of q.
Do not use this relation directly. It is intended as a grouping for a diverse set of relations, all involving cause and effect.
causal relation between processes
depends on
The intent is that the process branch of the causal property hierarchy is primary (causal relations hold between occurrents/processes), and that the material branch is defined in terms of the process branch
Do not use this relation directly. It is intended as a grouping for a diverse set of relations, all involving cause and effect.
causal relation between entities
causally influenced by (entity-centric)
causally influenced by
interaction relation helper property
http://purl.obolibrary.org/obo/ro/docs/interaction-relations/
molecular interaction relation helper property
The entity or characteristic A is causally upstream of the entity or characteristic B, A having an effect on B. An entity corresponds to any biological type of entity as long as a mass is measurable. A characteristic corresponds to a particular specificity of an entity (e.g., phenotype, shape, size).
causally influences (entity-centric)
causally influences
p directly regulates q iff p is immediately causally upstream of q and p regulates q.
directly regulates (processual)
directly regulates
gland SubClassOf 'has part structure that is capable of' some 'secretion by cell'
s 'has part structure that is capable of' p if and only if there exists some part x such that s 'has part' x and x 'capable of' p
has part structure that is capable of
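The existential expansion above is in effect a property chain (has_part followed by capable_of). A minimal sketch over toy relations, using the gland example from above; the Python names and the set-of-pairs encoding are illustrative assumptions:

```python
# Toy relations as sets of (subject, object) pairs; data and names are
# illustrative only, not drawn from the RO release files.
has_part = {('gland', 'secretory cell')}
capable_of = {('secretory cell', 'secretion by cell')}

def has_part_structure_capable_of(s, p):
    """s 'has part structure that is capable of' p iff there exists some
    part x such that s has_part x and x capable_of p."""
    return any(owner == s and (x, p) in capable_of
               for (owner, x) in has_part)
```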
A relationship that holds between a material entity and a process in which causality is involved, with either the material entity or some part of the material entity exerting some influence over the process, or the process influencing some aspect of the material entity.
Do not use this relation directly. It is intended as a grouping for a diverse set of relations, all involving cause and effect.
causal relation between material entity and a process
pyrethroid -> growth
Holds between c and p if and only if c is capable of some activity a, and a regulates p.
capable of regulating
Holds between c and p if and only if c is capable of some activity a, and a negatively regulates p.
capable of negatively regulating
renin -> arteriolar smooth muscle contraction
Holds between c and p if and only if c is capable of some activity a, and a positively regulates p.
capable of positively regulating
Inverse of 'causal agent in process'
process has causal agent
p directly positively regulates q iff p is immediately causally upstream of q, and p positively regulates q.
directly positively regulates (process to process)
directly positively regulates
p directly negatively regulates q iff p is immediately causally upstream of q, and p negatively regulates q.
directly negatively regulates (process to process)
directly negatively regulates
Holds between an entity and a process P where the entity enables some larger compound process, and that larger process has-part P.
2018-01-25T23:20:13Z
enables subfunction
2018-01-26T23:49:30Z
acts upstream of or within, positive effect
2018-01-26T23:49:51Z
acts upstream of or within, negative effect
c 'acts upstream of, positive effect' p if c enables f, and f is causally upstream of p, and the direction of f is positive
2018-01-26T23:53:14Z
acts upstream of, positive effect
c 'acts upstream of, negative effect' p if c enables f, and f is causally upstream of p, and the direction of f is negative
2018-01-26T23:53:22Z
acts upstream of, negative effect
2018-03-13T23:55:05Z
causally upstream of or within, negative effect
2018-03-13T23:55:19Z
causally upstream of or within, positive effect
The entity A has an activity that regulates an activity of the entity B. For example, A and B are gene products where the catalytic activity of A regulates the kinase activity of B.
regulates activity of
p is indirectly causally upstream of q iff p is causally upstream of q and there exists some process r such that p is causally upstream of r and r is causally upstream of q.
pg
2022-09-26T06:07:17Z
indirectly causally upstream of
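The definition above requires some intermediate process r between p and q. A minimal sketch over a toy upstream relation; the data and Python names are illustrative assumptions:

```python
# Toy 'causally upstream of' relation as a set of pairs (illustrative data).
causally_upstream_of = {('a', 'b'), ('b', 'c'), ('a', 'c')}

def indirectly_causally_upstream_of(p, q):
    """p is indirectly causally upstream of q iff p is causally upstream of q
    and some intermediate r has p upstream of r and r upstream of q."""
    processes = {x for pair in causally_upstream_of for x in pair}
    return (p, q) in causally_upstream_of and any(
        (p, r) in causally_upstream_of and (r, q) in causally_upstream_of
        for r in processes)
```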
p indirectly regulates q iff p is indirectly causally upstream of q and p regulates q.
pg
2022-09-26T06:08:01Z
indirectly regulates
A diagnostic testing device utilizes a specimen.
X device utilizes material Y means X and Y are material entities, and X is capable of some process P that has input Y.
A diagnostic testing device utilizes a specimen means that the diagnostic testing device is capable of an assay, and this assay has a specimen as its input.
See github ticket https://github.com/oborel/obo-relations/issues/497
2021-11-08T12:00:00Z
utilizes
device utilizes material
A relationship that holds between a process and a characteristic in which process (P) regulates characteristic (C) iff: P results in the existence of C OR affects the intensity or magnitude of C.
regulates characteristic
A relationship that holds between a process and a characteristic in which process (P) positively regulates characteristic (C) iff: P results in an increase in the intensity or magnitude of C.
positively regulates characteristic
A relationship that holds between a process and a characteristic in which process (P) negatively regulates characteristic (C) iff: P results in a decrease in the intensity or magnitude of C.
negatively regulates characteristic
Microsoft version 2007 is directly preceded by Microsoft version 2003.
Entity A is 'directly preceded by' entity B if there are no intermediate entities temporally between the two entities. Within SWO this property is mainly used to describe versions of entities such as software.
Allyson Lister
OBO Foundry
directly preceded by
'directly followed by' is an object property which further specializes the parent 'followed by' property. The assertion 'C directly followed by C1' says that Cs generally are immediately followed by C1s.
Allyson Lister
directly followed by
AL 2.9.22: When incorporating all BFO annotations, it became clear that we were using BFO 'precedes' (which was the original parent for 'directly followed by') incorrectly. 'precedes' has a range and domain of occurrent, and the IAO version number class is an ICE, which is a continuant. This led to an inconsistent ontology. To fix this for now, we have created a new class that performs a similar function to BFO 'precedes' but without the domain/range restrictions.
followed by
AL 2.9.22: When incorporating all BFO annotations, it became clear that we were using BFO 'precedes' and 'preceded by' (which was the original parent for 'directly preceded by') incorrectly. 'precedes' has a range and domain of occurrent, and the IAO version number class is an ICE, which is a continuant. This led to an inconsistent ontology. To fix this for now, we have created a new class (and this, its inverse) that performs a similar function to BFO 'precedes' but without the domain/range restrictions.
follows
Linking a type of software to its particular programming language.
Is encoded in is an "is about" relationship which describes the type of encoding used for the referenced class.
Allyson Lister
is encoded in
A planned process that has specified output a software product and that involves the creation of source code.
Mathias Brochhausen
William R. Hogan
http://en.wikipedia.org/wiki/Software_development
A planned process resulting in a software product involving the creation of source code.
software development
creating a data set
A planned process that has a data set as its specified output.
William R. Hogan
data set creation
dataset creation
dataset creating
root node
entity
Entity
Julius Caesar
Verdi’s Requiem
the Second World War
your body mass index
BFO 2 Reference: In all areas of empirical inquiry we encounter general terms of two sorts. First are general terms which refer to universals or types: animal, tuberculosis, surgical procedure, disease. Second are general terms used to refer to groups of entities which instantiate a given universal but do not correspond to the extension of any subuniversal of that universal because there is nothing intrinsic to the entities in question by virtue of which they – and only they – are counted as belonging to the given group. Examples are: animal purchased by the Emperor, tuberculosis diagnosed on a Wednesday, surgical procedure performed on a patient from Stockholm, person identified as candidate for clinical trial #2056-555, person who is signatory of Form 656-PPV, painting by Leonardo da Vinci. Such terms, which represent what are called 'specializations' in [81]
Entity doesn't have a closure axiom because the subclasses don't necessarily exhaust all possibilities. For example, Werner Ceusters' 'portions of reality' include 4 sorts: entities (as BFO construes them), universals, configurations, and relations. It is an open question as to whether entities as construed in BFO will at some point also include these other portions of reality. See, for example, 'How to track absolutely everything' at http://www.referent-tracking.com/_RTU/papers/CeustersICbookRevised.pdf
An entity is anything that exists or has existed or will exist. (axiom label in BFO2 Reference: [001-001])
entity
per discussion with Barry Smith
continuant
Continuant
continuant
An entity that exists in full at any time in which it exists at all, persists through time while maintaining its identity and has no temporal parts.
BFO 2 Reference: Continuant entities are entities which can be sliced to yield parts only along the spatial dimension, yielding for example the parts of your table which we call its legs, its top, its nails. ‘My desk stretches from the window to the door. It has spatial parts, and can be sliced (in space) in two. With respect to time, however, a thing is a continuant.’ [60, p. 240]
Continuant doesn't have a closure axiom because the subclasses don't necessarily exhaust all possibilities. For example, in an expansion involving bringing in some of Ceusters' other portions of reality, questions are raised as to whether universals are continuants.
A continuant is an entity that persists, endures, or continues to exist through time while maintaining its identity. (axiom label in BFO2 Reference: [008-002])
if b is a continuant and if, for some t, c has_continuant_part b at t, then c is a continuant. (axiom label in BFO2 Reference: [126-001])
if b is a continuant and if, for some t, c is continuant_part of b at t, then c is a continuant. (axiom label in BFO2 Reference: [009-002])
if b is a material entity, then there is some temporal interval (referred to below as a one-dimensional temporal region) during which b exists. (axiom label in BFO2 Reference: [011-002])
(forall (x y) (if (and (Continuant x) (exists (t) (continuantPartOfAt y x t))) (Continuant y))) // axiom label in BFO2 CLIF: [009-002]
(forall (x y) (if (and (Continuant x) (exists (t) (hasContinuantPartOfAt y x t))) (Continuant y))) // axiom label in BFO2 CLIF: [126-001]
(forall (x) (if (Continuant x) (Entity x))) // axiom label in BFO2 CLIF: [008-002]
(forall (x) (if (MaterialEntity x) (exists (t) (and (TemporalRegion t) (existsAt x t))))) // axiom label in BFO2 CLIF: [011-002]
continuant
occurrent
Occurrent
An entity that has temporal parts and that happens, unfolds or develops through time.
BFO 2 Reference: every occurrent that is not a temporal or spatiotemporal region is s-dependent on some independent continuant that is not a spatial region
BFO 2 Reference: s-dependence obtains between every process and its participants in the sense that, as a matter of necessity, this process could not have existed unless these or those participants existed also. A process may have a succession of participants at different phases of its unfolding. Thus there may be different players on the field at different times during the course of a football game; but the process which is the entire game s-depends_on all of these players nonetheless. Some temporal parts of this process will s-depend_on on only some of the players.
Occurrent doesn't have a closure axiom because the subclasses don't necessarily exhaust all possibilities. An example would be the sum of a process and the process boundary of another process.
Simons uses different terminology for relations of occurrents to regions: Denote the spatio-temporal location of a given occurrent e by 'spn[e]' and call this region its span. We may say an occurrent is at its span, in any larger region, and covers any smaller region. Now suppose we have fixed a frame of reference so that we can speak not merely of spatio-temporal but also of spatial regions (places) and temporal regions (times). The spread of an occurrent (relative to a frame of reference) is the space it exactly occupies, and its spell is likewise the time it exactly occupies. We write 'spr[e]' and 'spl[e]' respectively for the spread and spell of e, omitting mention of the frame.
An occurrent is an entity that unfolds itself in time or it is the instantaneous boundary of such an entity (for example a beginning or an ending) or it is a temporal or spatiotemporal region which such an entity occupies_temporal_region or occupies_spatiotemporal_region. (axiom label in BFO2 Reference: [077-002])
Every occurrent occupies_spatiotemporal_region some spatiotemporal region. (axiom label in BFO2 Reference: [108-001])
b is an occurrent entity iff b is an entity that has temporal parts. (axiom label in BFO2 Reference: [079-001])
(forall (x) (if (Occurrent x) (exists (r) (and (SpatioTemporalRegion r) (occupiesSpatioTemporalRegion x r))))) // axiom label in BFO2 CLIF: [108-001]
(forall (x) (iff (Occurrent x) (and (Entity x) (exists (y) (temporalPartOf y x))))) // axiom label in BFO2 CLIF: [079-001]
occurrent
per discussion with Barry Smith
ic
IndependentContinuant
a chair
a heart
a leg
a molecule
a spatial region
an atom
an orchestra.
an organism
the bottom right portion of a human torso
the interior of your mouth
A continuant that is a bearer of qualities and realizable entities, in which other entities inhere and which itself cannot inhere in anything.
b is an independent continuant = Def. b is a continuant which is such that there is no c and no t such that b s-depends_on c at t. (axiom label in BFO2 Reference: [017-002])
For any independent continuant b and any time t there is some spatial region r such that b is located_in r at t. (axiom label in BFO2 Reference: [134-001])
For every independent continuant b and time t during the region of time spanned by its life, there are entities which s-depends_on b during t. (axiom label in BFO2 Reference: [018-002])
(forall (x t) (if (IndependentContinuant x) (exists (r) (and (SpatialRegion r) (locatedInAt x r t))))) // axiom label in BFO2 CLIF: [134-001]
(forall (x t) (if (and (IndependentContinuant x) (existsAt x t)) (exists (y) (and (Entity y) (specificallyDependsOnAt y x t))))) // axiom label in BFO2 CLIF: [018-002]
(iff (IndependentContinuant a) (and (Continuant a) (not (exists (b t) (specificallyDependsOnAt a b t))))) // axiom label in BFO2 CLIF: [017-002]
independent continuant
process
Process
a process of cell-division
a beating of the heart
a process of meiosis
a process of sleeping
the course of a disease
the flight of a bird
the life of an organism
your process of aging.
An occurrent that has temporal proper parts and for some time t, p s-depends_on some material entity at t.
p is a process = Def. p is an occurrent that has temporal proper parts and for some time t, p s-depends_on some material entity at t. (axiom label in BFO2 Reference: [083-003])
BFO 2 Reference: The realm of occurrents is less pervasively marked by the presence of natural units than is the case in the realm of independent continuants. Thus there is here no counterpart of ‘object’. In BFO 1.0 ‘process’ served as such a counterpart. In BFO 2.0 ‘process’ is, rather, the occurrent counterpart of ‘material entity’. Those natural – as contrasted with engineered, which here means: deliberately executed – units which do exist in the realm of occurrents are typically either parasitic on the existence of natural units on the continuant side, or they are fiat in nature. Thus we can count lives; we can count football games; we can count chemical reactions performed in experiments or in chemical manufacturing. We cannot count the processes taking place, for instance, in an episode of insect mating behavior. Even where natural units are identifiable, for example cycles in a cyclical process such as the beating of a heart or an organism’s sleep/wake cycle, the processes in question form a sequence with no discontinuities (temporal gaps) of the sort that we find for instance where billiard balls or zebrafish or planets are separated by clear spatial gaps. Lives of organisms are process units, but they too unfold in a continuous series from other, prior processes such as fertilization, and they unfold in turn in continuous series of post-life processes such as post-mortem decay. Clear examples of boundaries of processes are almost always of the fiat sort (midnight, a time of death as declared in an operating theater or on a death certificate, the initiation of a state of war)
(iff (Process a) (and (Occurrent a) (exists (b) (properTemporalPartOf b a)) (exists (c t) (and (MaterialEntity c) (specificallyDependsOnAt a c t))))) // axiom label in BFO2 CLIF: [083-003]
process
p is a process = Def. p is an occurrent that has temporal proper parts and for some time t, p s-depends_on some material entity at t. (axiom label in BFO2 Reference: [083-003])
(iff (Process a) (and (Occurrent a) (exists (b) (properTemporalPartOf b a)) (exists (c t) (and (MaterialEntity c) (specificallyDependsOnAt a c t))))) // axiom label in BFO2 CLIF: [083-003]
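The CLIF axiom [083-003] above can be read as a check over an explicit model. A minimal Python sketch, with toy individuals and relations invented purely for illustration (no BFO tooling assumed):

```python
# Toy model for CLIF axiom [083-003]: a is a Process iff a is an Occurrent
# that has a proper temporal part and, for some time t, s-depends_on some
# MaterialEntity at t. All individual names here are made up.
occurrents = {"life_of_fly", "instant_boundary"}
material_entities = {"fly"}
proper_temporal_parts = {("larval_stage", "life_of_fly")}   # (part, whole)
s_depends_on_at = {("life_of_fly", "fly", "t1")}            # (p, entity, time)

def is_process(a):
    """Evaluate the right-hand side of [083-003] over the toy model."""
    has_part = any(whole == a for _, whole in proper_temporal_parts)
    depends = any(p == a and c in material_entities
                  for p, c, _ in s_depends_on_at)
    return a in occurrents and has_part and depends

print(is_process("life_of_fly"))        # True in this toy model
print(is_process("instant_boundary"))   # False: no temporal proper part
```

This is only a finite-model reading of the axiom; in the actual ontology the quantifiers range over all instances.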
disposition
Disposition
an atom of element X has the disposition to decay to an atom of element Y
certain people have a predisposition to colon cancer
children are innately disposed to categorize objects in certain ways.
the cell wall is disposed to filter chemicals in endocytosis and exocytosis
BFO 2 Reference: Dispositions exist along a strength continuum. Weaker forms of disposition are realized in only a fraction of triggering cases. These forms occur in a significant number of cases of a similar type.
b is a disposition means: b is a realizable entity & b’s bearer is some material entity & b is such that if it ceases to exist, then its bearer is physically changed, & b’s realization occurs when and because this bearer is in some special physical circumstances, & this realization occurs in virtue of the bearer’s physical make-up. (axiom label in BFO2 Reference: [062-002])
If b is a realizable entity then for all t at which b exists, b s-depends_on some material entity at t. (axiom label in BFO2 Reference: [063-002])
(forall (x t) (if (and (RealizableEntity x) (existsAt x t)) (exists (y) (and (MaterialEntity y) (specificallyDependsOnAt x y t))))) // axiom label in BFO2 CLIF: [063-002]
(forall (x t) (if (Disposition x) (and (RealizableEntity x) (exists (y) (and (MaterialEntity y) (bearerOfAt y x t)))))) // axiom label in BFO2 CLIF: [062-002]
disposition
b is a disposition means: b is a realizable entity & b’s bearer is some material entity & b is such that if it ceases to exist, then its bearer is physically changed, & b’s realization occurs when and because this bearer is in some special physical circumstances, & this realization occurs in virtue of the bearer’s physical make-up. (axiom label in BFO2 Reference: [062-002])
If b is a realizable entity then for all t at which b exists, b s-depends_on some material entity at t. (axiom label in BFO2 Reference: [063-002])
(forall (x t) (if (and (RealizableEntity x) (existsAt x t)) (exists (y) (and (MaterialEntity y) (specificallyDependsOnAt x y t))))) // axiom label in BFO2 CLIF: [063-002]
(forall (x t) (if (Disposition x) (and (RealizableEntity x) (exists (y) (and (MaterialEntity y) (bearerOfAt y x t)))))) // axiom label in BFO2 CLIF: [062-002]
realizable
RealizableEntity
the disposition of this piece of metal to conduct electricity.
the disposition of your blood to coagulate
the function of your reproductive organs
the role of being a doctor
the role of this boundary to delineate where Utah and Colorado meet
A specifically dependent continuant that inheres in continuant entities and is not exhibited in full at every time in which it inheres in an entity or group of entities. The exhibition or actualization of a realizable entity is a particular manifestation, functioning or process that occurs under certain circumstances.
To say that b is a realizable entity is to say that b is a specifically dependent continuant that inheres in some independent continuant which is not a spatial region and is of a type instances of which are realized in processes of a correlated type. (axiom label in BFO2 Reference: [058-002])
All realizable dependent continuants have independent continuants that are not spatial regions as their bearers. (axiom label in BFO2 Reference: [060-002])
(forall (x t) (if (RealizableEntity x) (exists (y) (and (IndependentContinuant y) (not (SpatialRegion y)) (bearerOfAt y x t))))) // axiom label in BFO2 CLIF: [060-002]
(forall (x) (if (RealizableEntity x) (and (SpecificallyDependentContinuant x) (exists (y) (and (IndependentContinuant y) (not (SpatialRegion y)) (inheresIn x y)))))) // axiom label in BFO2 CLIF: [058-002]
realizable entity
To say that b is a realizable entity is to say that b is a specifically dependent continuant that inheres in some independent continuant which is not a spatial region and is of a type instances of which are realized in processes of a correlated type. (axiom label in BFO2 Reference: [058-002])
All realizable dependent continuants have independent continuants that are not spatial regions as their bearers. (axiom label in BFO2 Reference: [060-002])
(forall (x t) (if (RealizableEntity x) (exists (y) (and (IndependentContinuant y) (not (SpatialRegion y)) (bearerOfAt y x t))))) // axiom label in BFO2 CLIF: [060-002]
(forall (x) (if (RealizableEntity x) (and (SpecificallyDependentContinuant x) (exists (y) (and (IndependentContinuant y) (not (SpatialRegion y)) (inheresIn x y)))))) // axiom label in BFO2 CLIF: [058-002]
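Axiom [060-002] above requires every realizable entity, at each time it exists, to have a bearer that is an independent continuant and not a spatial region. An illustrative Python sketch over invented individuals (not an implementation of any BFO library):

```python
# Toy check of CLIF axiom [060-002]: at every time t, a realizable entity
# has a bearer that is an independent continuant and not a spatial region.
# All names below are invented for the sketch.
independent_continuants = {"this_key", "this_region"}
spatial_regions = {"this_region"}
bearer_of_at = {("this_key", "opening_disposition", "t1")}  # (bearer, entity, t)

def axiom_060_holds(realizable, times):
    """True iff a suitable bearer exists at every listed time."""
    return all(
        any(b in independent_continuants and b not in spatial_regions
            for b, x, tt in bearer_of_at if x == realizable and tt == t)
        for t in times
    )

print(axiom_060_holds("opening_disposition", ["t1"]))  # True
print(axiom_060_holds("opening_disposition", ["t2"]))  # False: no bearer at t2
```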
quality
Quality
the ambient temperature of this portion of air
the color of a tomato
the length of the circumference of your waist
the mass of this piece of gold.
the shape of your nose
the shape of your nostril
a quality is a specifically dependent continuant that, in contrast to roles and dispositions, does not require any further process in order to be realized. (axiom label in BFO2 Reference: [055-001])
If an entity is a quality at any time that it exists, then it is a quality at every time that it exists. (axiom label in BFO2 Reference: [105-001])
(forall (x) (if (Quality x) (SpecificallyDependentContinuant x))) // axiom label in BFO2 CLIF: [055-001]
(forall (x) (if (exists (t) (and (existsAt x t) (Quality x))) (forall (t_1) (if (existsAt x t_1) (Quality x))))) // axiom label in BFO2 CLIF: [105-001]
quality
a quality is a specifically dependent continuant that, in contrast to roles and dispositions, does not require any further process in order to be realized. (axiom label in BFO2 Reference: [055-001])
If an entity is a quality at any time that it exists, then it is a quality at every time that it exists. (axiom label in BFO2 Reference: [105-001])
(forall (x) (if (Quality x) (SpecificallyDependentContinuant x))) // axiom label in BFO2 CLIF: [055-001]
(forall (x) (if (exists (t) (and (existsAt x t) (Quality x))) (forall (t_1) (if (existsAt x t_1) (Quality x))))) // axiom label in BFO2 CLIF: [105-001]
sdc
SpecificallyDependentContinuant
specifically dependent continuant
Reciprocal specifically dependent continuants: the function of this key to open this lock and the mutually dependent disposition of this lock: to be opened by this key
of one-sided specifically dependent continuants: the mass of this tomato
of relational dependent continuants (multiple bearers): John’s love for Mary, the ownership relation between John and this statue, the relation of authority between John and his subordinates.
the disposition of this fish to decay
the function of this heart: to pump blood
the mutual dependence of proton donors and acceptors in chemical reactions [79]
the mutual dependence of the role predator and the role prey as played by two organisms in a given interaction
the pink color of a medium rare piece of grilled filet mignon at its center
the role of being a doctor
the shape of this hole.
the smell of this portion of mozzarella
A continuant that inheres in or is borne by other entities. Every instance of A requires some specific instance of B which must always be the same.
b is a relational specifically dependent continuant = Def. b is a specifically dependent continuant and there are n > 1 independent continuants c1, …, cn, which are not spatial regions, such that for all 1 ≤ i < j ≤ n, ci and cj share no common parts, and such that for each 1 ≤ i ≤ n, b s-depends_on ci at every time t during the course of b’s existence (axiom label in BFO2 Reference: [131-004])
b is a specifically dependent continuant = Def. b is a continuant & there is some independent continuant c which is not a spatial region and which is such that b s-depends_on c at every time t during the course of b’s existence. (axiom label in BFO2 Reference: [050-003])
Specifically dependent continuant doesn't have a closure axiom because the subclasses don't necessarily exhaust all possibilities. We're not sure what else will develop here, but for example there are questions such as what are promises, obligations, etc.
(iff (RelationalSpecificallyDependentContinuant a) (and (SpecificallyDependentContinuant a) (forall (t) (exists (b c) (and (not (SpatialRegion b)) (not (SpatialRegion c)) (not (= b c)) (not (exists (d) (and (continuantPartOfAt d b t) (continuantPartOfAt d c t)))) (specificallyDependsOnAt a b t) (specificallyDependsOnAt a c t)))))) // axiom label in BFO2 CLIF: [131-004]
(iff (SpecificallyDependentContinuant a) (and (Continuant a) (forall (t) (if (existsAt a t) (exists (b) (and (IndependentContinuant b) (not (SpatialRegion b)) (specificallyDependsOnAt a b t))))))) // axiom label in BFO2 CLIF: [050-003]
specifically dependent continuant
b is a relational specifically dependent continuant = Def. b is a specifically dependent continuant and there are n > 1 independent continuants c1, …, cn, which are not spatial regions, such that for all 1 ≤ i < j ≤ n, ci and cj share no common parts, and such that for each 1 ≤ i ≤ n, b s-depends_on ci at every time t during the course of b’s existence (axiom label in BFO2 Reference: [131-004])
b is a specifically dependent continuant = Def. b is a continuant & there is some independent continuant c which is not a spatial region and which is such that b s-depends_on c at every time t during the course of b’s existence. (axiom label in BFO2 Reference: [050-003])
Specifically dependent continuant doesn't have a closure axiom because the subclasses don't necessarily exhaust all possibilities. We're not sure what else will develop here, but for example there are questions such as what are promises, obligations, etc.
per discussion with Barry Smith
(iff (RelationalSpecificallyDependentContinuant a) (and (SpecificallyDependentContinuant a) (forall (t) (exists (b c) (and (not (SpatialRegion b)) (not (SpatialRegion c)) (not (= b c)) (not (exists (d) (and (continuantPartOfAt d b t) (continuantPartOfAt d c t)))) (specificallyDependsOnAt a b t) (specificallyDependsOnAt a c t)))))) // axiom label in BFO2 CLIF: [131-004]
(iff (SpecificallyDependentContinuant a) (and (Continuant a) (forall (t) (if (existsAt a t) (exists (b) (and (IndependentContinuant b) (not (SpatialRegion b)) (specificallyDependsOnAt a b t))))))) // axiom label in BFO2 CLIF: [050-003]
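The relational case in axiom [131-004] above demands at least two non-region bearers with no shared continuant parts. A minimal Python sketch, with all individuals and part assignments invented for illustration:

```python
# Toy check of [131-004] at a single time: a relational SDC s-depends_on
# two or more bearers that are not spatial regions and share no parts.
# "johns_love" and "mass_of_tomato" are invented example individuals.
from itertools import combinations

spatial_regions = set()
parts_at = {"john": {"john"}, "mary": {"mary"}, "tomato": {"tomato"}}
s_depends_on_at = {
    ("johns_love", "john", "t1"),
    ("johns_love", "mary", "t1"),
    ("mass_of_tomato", "tomato", "t1"),   # one-sided: single bearer
}

def is_relational_sdc(a, t):
    """True iff a has >= 2 part-disjoint, non-region bearers at t."""
    bearers = {b for x, b, tt in s_depends_on_at if x == a and tt == t}
    return any(
        b not in spatial_regions and c not in spatial_regions
        and not (parts_at[b] & parts_at[c])
        for b, c in combinations(bearers, 2)
    )

print(is_relational_sdc("johns_love", "t1"))      # True
print(is_relational_sdc("mass_of_tomato", "t1"))  # False: only one bearer
```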
role
Role
John’s role of husband to Mary is dependent on Mary’s role of wife to John, and both are dependent on the object aggregate comprising John and Mary as member parts joined together through the relational quality of being married.
the priest role
the role of a boundary to demarcate two neighboring administrative territories
the role of a building in serving as a military target
the role of a stone in marking a property boundary
the role of subject in a clinical trial
the student role
A realizable entity the manifestation of which brings about some result or end that is not essential to a continuant in virtue of the kind of thing that it is but that can be served or participated in by that kind of continuant in some kinds of natural, social or institutional contexts.
BFO 2 Reference: One major family of examples of non-rigid universals involves roles, and ontologies developed for corresponding administrative purposes may consist entirely of representatives of entities of this sort. Thus ‘professor’, defined as follows: b instance_of professor at t =Def. there is some c, c instance_of professor role & c inheres_in b at t, denotes a non-rigid universal and so also do ‘nurse’, ‘student’, ‘colonel’, ‘taxpayer’, and so forth. (These terms are all, in the jargon of philosophy, phase sortals.) By using role terms in definitions, we can create a BFO conformant treatment of such entities drawing on the fact that, while an instance of professor may be simultaneously an instance of trade union member, no instance of the type professor role is also (at any time) an instance of the type trade union member role (any more than any instance of the type color is at any time an instance of the type length). If an ontology of employment positions should be defined in terms of roles following the above pattern, this enables the ontology to do justice to the fact that individuals instantiate the corresponding universals – professor, sergeant, nurse – only during certain phases in their lives.
b is a role means: b is a realizable entity & b exists because there is some single bearer that is in some special physical, social, or institutional set of circumstances in which this bearer does not have to be & b is not such that, if it ceases to exist, then the physical make-up of the bearer is thereby changed. (axiom label in BFO2 Reference: [061-001])
(forall (x) (if (Role x) (RealizableEntity x))) // axiom label in BFO2 CLIF: [061-001]
role
b is a role means: b is a realizable entity & b exists because there is some single bearer that is in some special physical, social, or institutional set of circumstances in which this bearer does not have to be & b is not such that, if it ceases to exist, then the physical make-up of the bearer is thereby changed. (axiom label in BFO2 Reference: [061-001])
(forall (x) (if (Role x) (RealizableEntity x))) // axiom label in BFO2 CLIF: [061-001]
gdc
GenericallyDependentContinuant
The entries in your database are patterns instantiated as quality instances in your hard drive. The database itself is an aggregate of such patterns. When you create the database you create a particular instance of the generically dependent continuant type database. Each entry in the database is an instance of the generically dependent continuant type IAO: information content entity.
the pdf file on your laptop, the pdf file that is a copy thereof on my laptop
the sequence of this protein molecule; the sequence that is a copy thereof in that protein molecule.
A continuant that is dependent on one or other independent continuant bearers. Every instance of A requires some instance of (an independent continuant type) B, but which instance of B serves can change from time to time.
b is a generically dependent continuant = Def. b is a continuant that g-depends_on one or more other entities. (axiom label in BFO2 Reference: [074-001])
(iff (GenericallyDependentContinuant a) (and (Continuant a) (exists (b t) (genericallyDependsOnAt a b t)))) // axiom label in BFO2 CLIF: [074-001]
generically dependent continuant
b is a generically dependent continuant = Def. b is a continuant that g-depends_on one or more other entities. (axiom label in BFO2 Reference: [074-001])
(iff (GenericallyDependentContinuant a) (and (Continuant a) (exists (b t) (genericallyDependsOnAt a b t)))) // axiom label in BFO2 CLIF: [074-001]
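The pdf-file example above captures what distinguishes generic from specific dependence: the same pattern needs some bearer at each time, but the bearer may differ from time to time. A Python sketch with invented individuals:

```python
# Sketch of generic dependence: a generically dependent continuant
# (e.g. a pdf file) g-depends_on *some* bearer at each time, and the
# bearer can change. The tuples below are illustrative only.
g_depends_on_at = {
    ("pdf_file", "your_laptop", "t1"),
    ("pdf_file", "my_laptop", "t2"),   # same pattern, different bearer
}

def bearers_over_time(gdc):
    """Map each time to the set of bearers carrying the pattern then."""
    out = {}
    for x, b, t in g_depends_on_at:
        if x == gdc:
            out.setdefault(t, set()).add(b)
    return out

print(sorted(bearers_over_time("pdf_file").items()))
# [('t1', {'your_laptop'}), ('t2', {'my_laptop'})]
```

Contrast with a specifically dependent continuant such as the mass of a particular tomato, whose bearer must remain the very same individual.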
function
Function
the function of a hammer to drive in nails
the function of a heart pacemaker to regulate the beating of a heart through electricity
the function of amylase in saliva to break down starch into sugar
BFO 2 Reference: In the past, we have distinguished two varieties of function, artifactual function and biological function. These are not asserted subtypes of BFO:function however, since the same function – for example: to pump, to transport – can exist both in artifacts and in biological entities. The asserted subtypes of function that would be needed in order to yield a separate monohierarchy are not artifactual function, biological function, etc., but rather transporting function, pumping function, etc.
A function is a disposition that exists in virtue of the bearer’s physical make-up and this physical make-up is something the bearer possesses because it came into being, either through evolution (in the case of natural biological entities) or through intentional design (in the case of artifacts), in order to realize processes of a certain sort. (axiom label in BFO2 Reference: [064-001])
(forall (x) (if (Function x) (Disposition x))) // axiom label in BFO2 CLIF: [064-001]
function
A function is a disposition that exists in virtue of the bearer’s physical make-up and this physical make-up is something the bearer possesses because it came into being, either through evolution (in the case of natural biological entities) or through intentional design (in the case of artifacts), in order to realize processes of a certain sort. (axiom label in BFO2 Reference: [064-001])
(forall (x) (if (Function x) (Disposition x))) // axiom label in BFO2 CLIF: [064-001]
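The axioms quoted in this section chain subsumptions: Function ⊆ Disposition ([064-001]), Disposition ⊆ RealizableEntity ([062-002]), and RealizableEntity ⊆ SpecificallyDependentContinuant ([058-002]). A minimal sketch of walking that asserted chain (the dictionary below is a hand-built excerpt, not loaded from any ontology file):

```python
# Hand-built excerpt of the BFO2 subclass chain implied by axioms
# [064-001], [062-002], and [058-002].
subclass_of = {
    "Function": "Disposition",
    "Disposition": "RealizableEntity",
    "RealizableEntity": "SpecificallyDependentContinuant",
    "SpecificallyDependentContinuant": "Continuant",
}

def ancestors(cls):
    """Walk the asserted subclass chain upward from cls."""
    out = []
    while cls in subclass_of:
        cls = subclass_of[cls]
        out.append(cls)
    return out

print(ancestors("Function"))
# ['Disposition', 'RealizableEntity', 'SpecificallyDependentContinuant', 'Continuant']
```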
material
MaterialEntity
material entity
material entity
Collection of random bacteria, a chair, dorsal surface of the body.
a flame
a forest fire
a human being
a hurricane
a photon
a puff of smoke
a sea wave
a tornado
an aggregate of human beings.
an energy wave
an epidemic
the undetached arm of a human being
An independent continuant [snap:IndependentContinuant] that is spatially extended, whose identity is independent of that of other entities and can be maintained through time. Note: Material entity [snap:MaterialEntity] subsumes object [snap:Object], fiat object part [snap:FiatObjectPart], and object aggregate [snap:ObjectAggregate], which assume a three level theory of granularity, which is inadequate for some domains, such as biology.
An independent continuant that is spatially extended, whose identity is independent of that of other entities and can be maintained through time.
BFO 2 Reference: Material entities (continuants) can preserve their identity even while gaining and losing material parts. Continuants are contrasted with occurrents, which unfold themselves in successive temporal parts or phases [60]
BFO 2 Reference: Object, Fiat Object Part and Object Aggregate are not intended to be exhaustive of Material Entity. Users are invited to propose new subcategories of Material Entity.
BFO 2 Reference: ‘Matter’ is intended to encompass both mass and energy (we will address the ontological treatment of portions of energy in a later version of BFO). A portion of matter is anything that includes elementary particles among its proper or improper parts: quarks and leptons, including electrons, as the smallest particles thus far discovered; baryons (including protons and neutrons) at a higher level of granularity; atoms and molecules at still higher levels, forming the cells, organs, organisms and other material entities studied by biologists, the portions of rock studied by geologists, the fossils studied by paleontologists, and so on. Material entities are three-dimensional entities (entities extended in three spatial dimensions), as contrasted with the processes in which they participate, which are four-dimensional entities (entities extended also along the dimension of time). According to the FMA, material entities may have immaterial entities as parts – including the entities identified below as sites; for example the interior (or ‘lumen’) of your small intestine is a part of your body. BFO 2.0 embodies a decision to follow the FMA here.
BFO
A material entity is an independent continuant that has some portion of matter as proper or improper continuant part. (axiom label in BFO2 Reference: [019-002])
Every entity which has a material entity as continuant part is a material entity. (axiom label in BFO2 Reference: [020-002])
every entity of which a material entity is continuant part is also a material entity. (axiom label in BFO2 Reference: [021-002])
(forall (x) (if (MaterialEntity x) (IndependentContinuant x))) // axiom label in BFO2 CLIF: [019-002]
(forall (x) (if (and (Entity x) (exists (y t) (and (MaterialEntity y) (continuantPartOfAt x y t)))) (MaterialEntity x))) // axiom label in BFO2 CLIF: [021-002]
(forall (x) (if (and (Entity x) (exists (y t) (and (MaterialEntity y) (continuantPartOfAt y x t)))) (MaterialEntity x))) // axiom label in BFO2 CLIF: [020-002]
material entity
material entity
A material entity is an independent continuant that has some portion of matter as proper or improper continuant part. (axiom label in BFO2 Reference: [019-002])
Every entity which has a material entity as continuant part is a material entity. (axiom label in BFO2 Reference: [020-002])
every entity of which a material entity is continuant part is also a material entity. (axiom label in BFO2 Reference: [021-002])
(forall (x) (if (MaterialEntity x) (IndependentContinuant x))) // axiom label in BFO2 CLIF: [019-002]
(forall (x) (if (and (Entity x) (exists (y t) (and (MaterialEntity y) (continuantPartOfAt x y t)))) (MaterialEntity x))) // axiom label in BFO2 CLIF: [021-002]
(forall (x) (if (and (Entity x) (exists (y t) (and (MaterialEntity y) (continuantPartOfAt y x t)))) (MaterialEntity x))) // axiom label in BFO2 CLIF: [020-002]
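Axioms [020-002] and [021-002] above close material-entity status in both directions along the continuant parthood relation: any whole with a material part is material, and any part of a material whole is material. A toy fixed-point computation in Python (the part relation below is invented for the sketch):

```python
# Illustrative propagation of axioms [020-002]/[021-002] over a toy
# parthood relation: material-entity status spreads to wholes of material
# parts and to parts of material wholes, until nothing changes.
continuant_part_of = {("cell", "organ"), ("organ", "organism")}  # (part, whole)
material = {"organ"}

changed = True
while changed:  # iterate to a fixed point
    changed = False
    for part, whole in continuant_part_of:
        if part in material or whole in material:
            for x in (part, whole):
                if x not in material:
                    material.add(x)
                    changed = True

print(sorted(material))  # ['cell', 'organ', 'organism']
```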
A molecular process that can be carried out by the action of a single macromolecular machine, usually via direct physical interactions with other molecular entities. Function in this sense denotes an action, or activity, that a gene product (or a complex) performs. These actions are described from two distinct but related perspectives: (1) biochemical activity, and (2) role as a component in a larger system/process.
molecular function
GO:0003674
Note that, in addition to forming the root of the molecular function ontology, this term is recommended for use for the annotation of gene products whose molecular function is unknown. When this term is used for annotation, it indicates that no information was available about the molecular function of the gene product annotated as of the date the annotation was made; the evidence code 'no data' (ND), is used to indicate this. Despite its name, this is not a type of 'function' in the sense typically defined by upper ontologies such as Basic Formal Ontology (BFO). It is instead a BFO:process carried out by a single gene product or complex.
molecular_function
A molecular process that can be carried out by the action of a single macromolecular machine, usually via direct physical interactions with other molecular entities. Function in this sense denotes an action, or activity, that a gene product (or a complex) performs. These actions are described from two distinct but related perspectives: (1) biochemical activity, and (2) role as a component in a larger system/process.
GOC:pdt
A biological process represents a specific objective that the organism is genetically programmed to achieve. Biological processes are often described by their outcome or ending state, e.g., the biological process of cell division results in the creation of two daughter cells (a divided cell) from a single parent cell. A biological process is accomplished by a particular set of molecular functions carried out by specific gene products (or macromolecular complexes), often in a highly regulated manner and in a particular temporal sequence.
jl
2012-09-19T15:05:24Z
Wikipedia:Biological_process
biological process
physiological process
single organism process
single-organism process
GO:0008150
Note that, in addition to forming the root of the biological process ontology, this term is recommended for use for the annotation of gene products whose biological process is unknown. When this term is used for annotation, it indicates that no information was available about the biological process of the gene product annotated as of the date the annotation was made; the evidence code 'no data' (ND), is used to indicate this.
biological_process
A biological process represents a specific objective that the organism is genetically programmed to achieve. Biological processes are often described by their outcome or ending state, e.g., the biological process of cell division results in the creation of two daughter cells (a divided cell) from a single parent cell. A biological process is accomplished by a particular set of molecular functions carried out by specific gene products (or macromolecular complexes), often in a highly regulated manner and in a particular temporal sequence.
GOC:pdt
true
Catalysis of the transfer of a phosphate group, usually from ATP, to a substrate molecule.
Reactome:R-HSA-6788855
Reactome:R-HSA-6788867
phosphokinase activity
GO:0016301
Note that this term encompasses all activities that transfer a single phosphate group; although ATP is by far the most common phosphate donor, reactions using other phosphate donors are included in this term.
kinase activity
Catalysis of the transfer of a phosphate group, usually from ATP, to a substrate molecule.
ISBN:0198506732
Reactome:R-HSA-6788855
FN3KRP phosphorylates PsiAm, RibAm
Reactome:R-HSA-6788867
FN3K phosphorylates ketosamines
measurement unit label
Examples of measurement unit labels are liters, inches, weight per volume.
A measurement unit label is a label that is part of a scalar measurement datum and denotes a unit of measure.
2009-03-16: provenance: a term measurement unit was proposed for OBI (OBI_0000176), edited by Chris Stoeckert and Cristian Cocos, and subsequently moved to IAO where the objective for which the original term was defined was satisfied with the definition of this, different, term.
2009-03-16: review of this term done during the OBI workshop winter 2009 and the current definition was considered acceptable for use in OBI. If there is a need to modify this definition please notify OBI.
PERSON: Alan Ruttenberg
PERSON: Melanie Courtot
measurement unit label
objective specification
In the protocol of a ChIP assay the objective specification says to identify protein and DNA interaction.
A directive information entity that describes an intended process endpoint. When part of a plan specification the concretization is realized in a planned process in which the bearer tries to effect the world so that the process endpoint is achieved.
2009-03-16: original definition when imported from OBI read: "objective is an non realizable information entity which can serve as that proper part of a plan towards which the realization of the plan is directed."
2014-03-31: In the example of usage ("In the protocol of a ChIP assay the objective specification says to identify protein and DNA interaction") there is a protocol which is the ChIP assay protocol. In addition to being concretized on paper, the protocol can be concretized as a realizable entity, such as a plan that inheres in a person. The objective specification is the part that says that some protein and DNA interactions are identified. This is a specification of a process endpoint: the boundary in the process before which they are not identified and after which they are. During the realization of the plan, the goal is to get to the point of having the interactions, and participants in the realization of the plan try to do that.
Answers the question, why did you do this experiment?
PERSON: Alan Ruttenberg
PERSON: Barry Smith
PERSON: Bjoern Peters
PERSON: Jennifer Fostel
goal specification
OBI Plan and Planned Process/Roles Branch
OBI_0000217
objective specification
Pour the contents of flask 1 into flask 2
A directive information entity that describes an action the bearer will take.
Alan Ruttenberg
OBI Plan and Planned Process branch
action specification
datum label
A label is a symbol that is part of some other datum and is used either to partially define the denotation of that datum or to provide a means for identifying the datum as a member of the set of data with the same label.
http://www.golovchenko.org/cgi-bin/wnsearch?q=label#4n
GROUP: IAO
9/22/11 BP: changed the rdfs:label for this class from 'label' to 'datum label' to convey that this class is not intended to cover all kinds of labels (stickers, radiolabels, etc.), and not even all kinds of textual labels, but rather the kind of labels occurring in a datum.
datum label
software
Software is a plan specification composed of a series of instructions that can be interpreted by or directly executed by a processing unit.
see sourceforge tracker discussion at http://sourceforge.net/tracker/index.php?func=detail&aid=1958818&group_id=177891&atid=886178
PERSON: Alan Ruttenberg
PERSON: Bjoern Peters
PERSON: Chris Stoeckert
PERSON: Melanie Courtot
GROUP: OBI
software
http://www.ebi.ac.uk/swo/SWO_0000001
data item
Data items include counts of things, analyte concentrations, and statistical summaries.
An information content entity that is intended to be a truthful statement about something (modulo, e.g., measurement precision or other systematic errors) and is constructed/acquired by a method which reliably tends to produce (approximately) truthful statements.
2/2/2009 Alan and Bjoern discussing FACS run output data. This is a data item because it is about the cell population. Each element records an event and is typically further composed of a set of measurement data items that record the fluorescent intensity stimulated by one of the lasers.
2009-03-16: data item deliberately ambiguous: we merged data set and datum to be one entity, not knowing how to define singular versus plural. So data item is more general than datum.
2009-03-16: removed datum as alternative term as datum specifically refers to singular form, and is thus not an exact synonym.
2014-03-31: See discussion at http://odontomachus.wordpress.com/2014/03/30/aboutness-objects-propositions/
JAR: datum -- well, this will be very tricky to define, but maybe some information-like stuff that might be put into a computer and that is meant, by someone, to denote and/or to be interpreted by some process... I would include lists, tables, sentences... I think I might defer to Barry, or to Brian Cantwell Smith
JAR: A data item is an approximately justified approximately true approximate belief
PERSON: Alan Ruttenberg
PERSON: Chris Stoeckert
PERSON: Jonathan Rees
data
data item
http://www.ontobee.org/browser/rdf.php?o=IAO&iri=http://purl.obolibrary.org/obo/IAO_0000027
symbol
a serial number such as "12324X"
a stop sign
a written proper name such as "OBI"
An information content entity that is a mark(s) or character(s) used as a conventional representation of another entity.
20091104, MC: this needs work and will most probably change
2014-03-31: We would like to have a deeper analysis of 'mark' and 'sign' in the future (see https://github.com/information-artifact-ontology/IAO/issues/154).
PERSON: James A. Overton
PERSON: Jonathan Rees
based on Oxford English Dictionary
symbol
information content entity
Examples of information content entites include journal articles, data, graphical layouts, and graphs.
Examples of information content entites include journal articles, data, graphical layouts, and graphs.
A generically dependent continuant that is about some thing.
An information content entity is an entity that is generically dependent on some artifact and stands in relation of aboutness to some entity.
2014-03-10: The use of "thing" is intended to be general enough to include universals and configurations (see https://groups.google.com/d/msg/information-ontology/GBxvYZCk1oc/-L6B5fSBBTQJ).
information_content_entity 'is_encoded_in' some digital_entity in obi before split (040907). information_content_entity 'is_encoded_in' some physical_document in obi before split (040907).
Previous. An information content entity is a non-realizable information entity that 'is encoded in' some digital or physical entity.
PERSON: Chris Stoeckert
IAO
OBI_0000142
information content entity
information content entity
An information content entity whose concretizations indicate to their bearer how to realize them in a process.
2009-03-16: provenance: a term realizable information entity was proposed for OBI (OBI_0000337), edited by the PlanAndPlannedProcess branch. Original definition was "is the specification of a process that can be concretized and realized by an actor" with alternative term "instruction". It has been subsequently moved to IAO where the objective for which the original term was defined was satisfied with the definition of this, different, term.
2013-05-30 Alan Ruttenberg: What differentiates a directive information entity from an information concretization is that it can have concretizations that are either qualities or realizable entities. The concretizations that are realizable entities are created when an individual chooses to take up the direction, i.e. has the intention to (try to) realize it.
8/6/2009 Alan Ruttenberg: Changed label from "information entity about a realizable" after discussions at ICBO
Werner pushed back on calling it realizable information entity as it isn't realizable. However this name isn't right either. An example would be a recipe. The realizable entity would be a plan, but the information entity isn't about the plan, it, once concretized, *is* the plan. -Alan
PERSON: Alan Ruttenberg
PERSON: Bjoern Peters
directive information entity
dot plot
Dot plot of SSC-H and FSC-H.
A dot plot is a report graph which is a graphical representation of data where each data point is represented by a single dot placed on coordinates corresponding to data point values in particular dimensions.
person:Allyson Lister
person:Chris Stoeckert
OBI_0000123
group:OBI
dot plot
graph
A diagram that presents one or more tuples of information by mapping those tuples into a two-dimensional space in a non-arbitrary way.
PERSON: Lawrence Hunter
person:Alan Ruttenberg
person:Allyson Lister
OBI_0000240
group:OBI
graph
algorithm
PMID: 18378114.Genomics. 2008 Mar 28. LINKGEN: A new algorithm to process data in genetic linkage studies.
A plan specification which describes the inputs and outputs of mathematical functions as well as the workflow of execution for achieving a predefined objective. Algorithms are usually realized by means of implementation as computer programs for execution by automata.
Philippe Rocca-Serra
PlanAndPlannedProcess Branch
IAO
OBI_0000270
adapted from discussion on OBI list (Matthew Pocock, Christian Cocos, Alan Ruttenberg)
algorithm
algorithm
curation status specification
The curation status of the term. The allowed values come from an enumerated list of predefined terms. See the specification of these instances for more detailed definitions of each enumerated value.
Better to represent curation as a process with parts and then relate labels to that process (in IAO meeting)
PERSON:Bill Bug
GROUP:OBI:<http://purl.obolibrary.org/obo/obi>
OBI_0000266
curation status specification
source code module
The written source code that implements part of an algorithm. Test: if you know that it was written in a specific language, then it can be a source code module. We mean here, roughly, the wording of a document such as a Perl script.
A source code module is a directive information entity that specifies, using a programming language, some algorithm.
person:Alan Ruttenberg
person:Chris Stoeckert
OBI_0000039
group:OBI
source code module
data format specification
A data format specification is the information content borne by the published document defining the specification.
Example: The ISO document specifying what encompasses an XML document; The instructions in a XSD file
2009-03-16: provenance: term imported from OBI_0000187, which had original definition "A data format specification is a plan which organizes information. Example: The ISO document specifying what encompasses an XML document; The instructions in a XSD file"
PERSON: Alan Ruttenberg
PlanAndPlannedProcess Branch
OBI branch derived
OBI_0000187
data format specification
data set
Intensity values in a CEL file or from multiple CEL files comprise a data set (as opposed to the CEL files themselves).
A data item that is an aggregate of other data items of the same type that have something in common. Averages and distributions can be determined for data sets.
2009/10/23 Alan Ruttenberg. The intention is that this term represents collections of like data. So this isn't for, e.g., the whole contents of a CEL file, which includes parameters, metadata, etc. This is more like Java arrays of a certain rather specific type.
2014-05-05: Data sets are aggregates and thus must include two or more data items. We have chosen not to add logical axioms to make this restriction.
person:Allyson Lister
person:Chris Stoeckert
OBI_0000042
group:OBI
data set
image
An image is an affine projection onto a two-dimensional surface of measurements of some quality of an entity or entities, repeated at regular intervals across a spatial range, where the measurements are represented as color and luminosity on the projection surface.
person:Alan Ruttenberg
person:Allyson
person:Chris Stoeckert
OBI_0000030
group:OBI
image
data about an ontology part
Data about an ontology part is a data item about a part of an ontology, for example a term.
Person:Alan Ruttenberg
data about an ontology part
plan specification
PMID: 18323827.Nat Med. 2008 Mar;14(3):226.New plan proposed to help resolve conflicting medical advice.
A directive information entity with action specifications and objective specifications as parts that, when concretized, is realized in a process in which the bearer tries to achieve the objectives by taking the actions specified.
A directive information entity with action specifications and objective specifications as parts, and that may be concretized as a realizable entity that, if realized, is realized in a process in which the bearer tries to achieve the objectives by taking the actions specified.
2009-03-16: provenance: a term 'plan' was proposed for OBI (OBI_0000344), edited by the PlanAndPlannedProcess branch. Original definition was "a plan is a specification of a process that is realized by an actor to achieve the objective specified as part of the plan". It has been subsequently moved to IAO where the objective for which the original term was defined was satisfied with the definition of this, different, term.
2014-03-31: A plan specification can have other parts, such as conditional specifications.
2022-01-16 Updated definition to that proposed by Clint Dowland, IAO Issue 231.
Alternative previous definition: a plan is a set of instructions that specify how an objective should be achieved
Alan Ruttenberg
Clint Dowland
OBI Plan and Planned Process branch
OBI_0000344
2/3/2009 Comment from OBI review.
Action specification not well enough specified.
Conditional specification not well enough specified.
Question whether all plan specifications have objective specifications.
Request that IAO either clarify these or change definitions not to use them
plan specification
https://github.com/information-artifact-ontology/IAO/issues/231#issuecomment-1010455131
version number
A version number is an information content entity which is a sequence of characters borne by part of each of a class of manufactured products or its packaging and indicates its order within a set of other products having the same name.
Note: we feel that at the moment we are happy with a general version number, and that we will subclass as needed in the future. For example, see 7. genome sequence version
GROUP: IAO
version name
version number
material information bearer
A page of a paperback novel with writing on it. The paper itself is a material information bearer, the pattern of ink is the information carrier.
a brain
a hard drive
A material entity in which a concretization of an information content entity inheres.
GROUP: IAO
material information bearer
histogram
A histogram is a report graph which is a statistical description of a distribution in terms of occurrence frequencies of different event classes.
PERSON:Chris Stoeckert
PERSON:James Malone
PERSON:Melanie Courtot
GROUP:OBI
histogram
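The occurrence-frequency idea in the histogram definition can be sketched in a few lines of stdlib-only Python; the sample data and the fixed bin width are illustrative assumptions, not part of the definition.

```python
from collections import Counter

def histogram(values, bin_width):
    """Count occurrence frequencies of values grouped into equal-width bins."""
    counts = Counter(int(v // bin_width) for v in values)
    # Map each bin index back to its lower edge for readability.
    return {b * bin_width: n for b, n in sorted(counts.items())}

data = [0.5, 1.2, 1.7, 2.1, 2.3, 2.8, 3.9]
print(histogram(data, bin_width=1.0))  # {0.0: 1, 1.0: 2, 2.0: 3, 3.0: 1}
```

A report-graph histogram would then render these bin counts as bar heights.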
heatmap
A heatmap is a report graph which is a graphical representation of data where the values taken by a variable(s) are shown as colors in a two-dimensional map.
PERSON:Chris Stoeckert
PERSON:James Malone
PERSON:Melanie Courtot
GROUP:OBI
heatmap
dendrogram
Dendrograms are often used in computational biology to illustrate the clustering of genes.
A dendrogram is a report graph which is a tree diagram frequently used to illustrate the arrangement of the clusters produced by a clustering algorithm.
PERSON:Chris Stoeckert
PERSON:James Malone
PERSON:Melanie Courtot
WEB: http://en.wikipedia.org/wiki/Dendrogram
dendrogram
scatter plot
Comparison of gene expression values in two samples can be displayed in a scatter plot
A scatterplot is a graph which uses Cartesian coordinates to display values for two variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.
PERSON:Chris Stoeckert
PERSON:James Malone
PERSON:Melanie Courtot
scattergraph
WEB: http://en.wikipedia.org/wiki/Scatterplot
scatter plot
obsolescence reason specification
The reason for which a term has been deprecated. The allowed values come from an enumerated list of predefined terms. See the specification of these instances for more detailed definitions of each enumerated value.
The creation of this class has been inspired in part by Werner Ceusters' paper, Applying evolutionary terminology auditing to the Gene Ontology.
PERSON: Alan Ruttenberg
PERSON: Melanie Courtot
obsolescence reason specification
figure
Any picture, diagram or table
An information content entity consisting of a two dimensional arrangement of information content entities such that the arrangement itself is about something.
PERSON: Lawrence Hunter
figure
diagram
A molecular structure ribbon cartoon showing helices, turns and sheets and their relations to each other in space.
A figure that expresses one or more propositions
PERSON: Lawrence Hunter
diagram
document
A journal article, patent application, laboratory notebook, or a book
A collection of information content entities intended to be understood together as a whole
PERSON: Lawrence Hunter
document
denotator type
The Basic Formal Ontology makes a distinction between universals and defined classes, where the former are "natural kinds" and the latter are arbitrary collections of entities.
A denotator type indicates how a term should be interpreted from an ontological perspective.
Alan Ruttenberg
Barry Smith, Werner Ceusters
denotator type
Viruses
Viruses
Euteleostomi
bony vertebrates
Euteleostomi
Bacteria
eubacteria
Bacteria
Archaea
Archaea
Eukaryota
eucaryotes
eukaryotes
Eukaryota
Euarchontoglires
Euarchontoglires
Tetrapoda
tetrapods
Tetrapoda
Amniota
amniotes
Amniota
Opisthokonta
Opisthokonta
Metazoa
metazoans
multicellular animals
Metazoa
Bilateria
Bilateria
Mammalia
mammals
Mammalia
Vertebrata <vertebrates>
Vertebrata
vertebrates
Vertebrata <vertebrates>
Homo sapiens
human
human being
Homo sapiens
planned process
planned process
Injecting mice with a vaccine in order to test its efficacy
A process that realizes a plan which is the concretization of a plan specification.
'Plan' includes a future direction sense. That can be problematic if plans are changed during their execution. There are however implicit contingencies for protocols that an agent has in his mind that can be considered part of the plan, even if the agent didn't have them in mind before. Therefore, a planned process can diverge from what the agent would have said the plan was before executing it, by adjusting to problems encountered during execution (e.g. choosing another reagent with equivalent properties, if the originally planned one has run out.)
We are only considering successfully completed planned processes. A plan may be modified, and details added during execution. For a given planned process, the associated realized plan specification is the one encompassing all changes made during execution. This means that all processes in which an agent acts towards achieving some objectives are planned processes.
Bjoern Peters
branch derived
6/11/9: Edited at workshop. Used to include: is initiated by an agent
This class merges the previously separated objective driven process and planned process, as the separation proved hard to maintain. (1/22/09, branch call)
planned process
regulator role
Fact sheet - Regulating the companies The role of the regulator. Ofwat is the economic regulator of the water and sewerage industry in England and Wales. http://www.ofwat.gov.uk/aptrix/ofwat/publish.nsf/Content/roleofregulator_factsheet170805
a regulatory role involved with making and/or enforcing relevant legislation and governmental orders
Person:Jennifer Fostel
regulator
OBI
regulator role
regulatory role
Regulatory agency, Ethics committee, Approval letter; example: Browse these EPA Regulatory Role subtopics http://www.epa.gov/ebtpages/enviregulatoryrole.html Feb 29, 2008
a role which inheres in material entities and is realized in the processes of making, enforcing or being defined by legislation or orders issued by a governmental body.
GROUP: Role branch
OBI, CDISC
govt agents responsible for creating regulations; proxies for enforcing regulations. CDISC definition: regulatory authorities. Bodies having the power to regulate. NOTE: In the ICH GCP guideline the term includes the authorities that review submitted clinical data and those that conduct inspections. These bodies are sometimes referred to as competent authorities.
regulatory role
material supplier role
Jackson Labs is an organization which provides mice as experimental material
a role realized through the process of supplying materials such as animal subjects, reagents or other materials used in an investigation.
Supplier role is a special kind of service, e.g. biobank
PERSON:Jennifer Fostel
material provider role
supplier
material supplier role
classified data set
A data set that is produced as the output of a class prediction data transformation and consists of a data set with assigned class labels.
PERSON: James Malone
PERSON: Monnie McGee
data set with assigned class labels
classified data set
processed material
Examples include gel matrices, filter paper, parafilm and buffer solutions, mass spectrometer, tissue samples
A material entity that is created or changed during material processing.
PERSON: Alan Ruttenberg
processed material
material processing
A cell lysis, production of a cloning vector, creating a buffer.
A planned process which results in physical changes in a specified input material
PERSON: Bjoern Peters
PERSON: Frank Gibson
PERSON: Jennifer Fostel
PERSON: Melanie Courtot
PERSON: Philippe Rocca Serra
material transformation
OBI branch derived
material processing
specimen role
liver section; a portion of a culture of cells; a nematode or other animal once no longer a subject (generally killed); portion of blood from a patient.
a role borne by a material entity that is gained during a specimen collection process and that can be realized by use of the specimen in an investigation
22Jun09. The definition includes whole organisms, and can include a human. The link between specimen role and study subject role has been removed. A specimen taken as part of a case study is not considered to be a population representative, while a specimen taken as representing a population (e.g., a person taken from a cohort, a blood specimen taken from an animal) would be considered a population representative and would also bear a material sample role.
Note: definition is in specimen creation objective which is defined as an objective to obtain and store a material entity for potential use as an input during an investigation.
blood taken from animal: animal continues in study, whereas blood has role specimen.
something taken from study subject, leaves the study and becomes the specimen.
parasite example
- when parasite in people we study people, people are subjects and parasites are specimen
- when parasite extracted, they become subject in the following study
specimen can later be subject.
GROUP: Role Branch
OBI
specimen role
organization
PMID: 16353909.AAPS J. 2005 Sep 22;7(2):E274-80. Review. The joint food and agriculture organization of the United Nations/World Health Organization Expert Committee on Food Additives and its role in the evaluation of the safety of veterinary drug residues in foods.
An entity that can bear roles, has members, and has a set of organization rules. Members of organizations are either organizations themselves or individual people. Members can bear specific organization member roles that are determined in the organization rules. The organization rules also determine how decisions are made on behalf of the organization by the organization members.
BP: The definition summarizes long email discussions on the OBI developer, roles, biomaterial and denrie branches. It leaves open if an organization is a material entity or a dependent continuant, as no consensus was reached on that. The current placement as material is therefore temporary, in order to move forward with development. Here is the entire email summary, on which the definition is based:
1) there are organization_member_roles (president, treasurer, branch editor), with individual persons as bearers
2) there are organization_roles (employer, owner, vendor, patent holder)
3) an organization has a charter / rules / bylaws, which specify what roles there are, how they should be realized, and how to modify the charter/rules/bylaws themselves.
It is debatable what the organization itself is (some kind of dependent continuant or an aggregate of people). This also determines who/what the bearers of organization_roles are. My personal favorite is still to define organization as a kind of 'legal entity', but thinking it through leads to all kinds of questions that are clearly outside the scope of OBI.
Interestingly enough, it does not seem to matter much where we place organization itself, as long as we can subclass it (University, Corporation, Government Agency, Hospital), instantiate it (Affymetrix, NCBI, NIH, ISO, W3C, University of Oklahoma), and have it play roles.
This leads to my proposal: We define organization through the statements 1 - 3 above, but without an 'is a' statement for now. We can leave it in its current place in the is_a hierarchy (material entity) or move it up to 'continuant'. We leave further clarifications to BFO, and close this issue for now.
PERSON: Alan Ruttenberg
PERSON: Bjoern Peters
PERSON: Philippe Rocca-Serra
PERSON: Susanna Sansone
GROUP: OBI
organization
organization
regulatory agency
The US Environmental Protection Agency
A regulatory agency is an organization that has responsibility over or for the legislation (acts and regulations) for a given sector of the government.
GROUP: OBI Biomaterial Branch
WEB: en.wikipedia.org/wiki/Regulator
regulatory agency
material transformation objective
The objective to create a mouse infected with LCM virus. The objective to create a defined solution of PBS.
an objective specification for creating a specific output object from input materials.
PERSON: Bjoern Peters
PERSON: Frank Gibson
PERSON: Jennifer Fostel
PERSON: Melanie Courtot
PERSON: Philippe Rocca-Serra
artifact creation objective
GROUP: OBI PlanAndPlannedProcess Branch
material transformation objective
manufacturing
Manufacturing is a process with the intent to produce a processed material which will have a function for future use. A person or organization (having manufacturer role) is a participant in this process
Manufacturing implies reproducibility and responsibility AR
This includes a single scientist making a processed material for personal use.
PERSON: Bjoern Peters
PERSON: Frank Gibson
PERSON: Jennifer Fostel
PERSON: Melanie Courtot
PERSON: Philippe Rocca-Serra
GROUP: OBI PlanAndPlannedProcess Branch
manufacturing
manufacturing objective
The objective to manufacture a material having a certain function (e.g., a device).
PERSON: Bjoern Peters
PERSON: Frank Gibson
PERSON: Jennifer Fostel
PERSON: Melanie Courtot
PERSON: Philippe Rocca-Serra
GROUP: OBI PlanAndPlannedProcess Branch
manufacturing objective
manufacturer role
With respect to The Accuri C6 Flow Cytometer System, the organization Accuri bears the role manufacturer role. With respect to a transformed line of tissue culture cells derived by a specific lab, the lab whose personnel isolated the cell line bears the role manufacturer role. With respect to a specific antibody produced by an individual scientist, the scientist who purifies, characterizes and distributes the antibody bears the role manufacturer role.
Manufacturer role is a role which inheres in a person or organization and which is realized by a manufacturing process.
GROUP: Role Branch
OBI
manufacturer role
clustered data set
A clustered data set is the output of a K means clustering data transformation
A data set that is produced as the output of a class discovery data transformation and consists of a data set with assigned discovered class labels.
PERSON: James Malone
PERSON: Monnie McGee
data set with assigned discovered class labels
AR thinks could be a data item instead
clustered data set
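Since the example above names K-means clustering as a transformation whose output is a clustered data set, here is a toy one-dimensional K-means sketch; the point values, initial centroids, and iteration count are illustrative assumptions.

```python
def kmeans(points, centroids, iterations=10):
    """Toy 1-D k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(ps) / len(ps) if ps else centroids[c]
                     for c, ps in sorted(clusters.items())]
    # The clustered data set: each point paired with its discovered class label.
    return [(p, min(range(len(centroids)), key=lambda c: abs(p - centroids[c])))
            for p in points]

print(kmeans([1.0, 1.5, 9.0, 10.0], centroids=[0.0, 5.0]))
# [(1.0, 0), (1.5, 0), (9.0, 1), (10.0, 1)]
```

The returned pairs are exactly "a data set with assigned discovered class labels" in the sense of the definition.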
specimen collection process
drawing blood from a patient for analysis, collecting a piece of a plant for depositing in a herbarium, buying meat from a butcher in order to measure its protein content in an investigation
A planned process with the objective of collecting a specimen.
Note: definition is in specimen creation objective which is defined as an objective to obtain and store a material entity for potential use as an input during an investigation.
Philly2013: A specimen collection can have as part a material entity acquisition, such as ordering from a bank. The distinction is that specimen collection necessarily involves the creation of a specimen role. However ordering cell lines cells from ATCC for use in an investigation is NOT a specimen collection, because the cell lines already have a specimen role.
Philly2013: The specimen_role for the specimen is created during the specimen collection process.
label changed to 'specimen collection process' on 10/27/2014, details see tracker:
http://sourceforge.net/p/obi/obi-terms/716/
Bjoern Peters
specimen collection
5/31/2012: This process is not necessarily an acquisition, as specimens may be collected from materials already in possession
6/9/09: used at workshop
specimen collection process
class prediction data transformation
A class prediction data transformation (sometimes called supervised classification) is a data transformation that has objective class prediction.
James Malone
supervised classification data transformation
PERSON: James Malone
class prediction data transformation
specimen collection objective
The objective to collect bits of excrement in the rainforest. The objective to obtain a blood sample from a patient.
An objective specification to obtain a material entity for potential use as an input during an investigation.
Bjoern Peters
Bjoern Peters
specimen collection objective
support vector machine
A support vector machine is a data transformation with a class prediction objective based on the construction of a separating hyperplane that maximizes the margin between two data sets of vectors in n-dimensional space.
James Malone
Ryan Brinkman
SVM
PERSON: Ryan Brinkman
support vector machine
decision tree induction objective
A decision tree induction objective is a data transformation objective in which a tree-like graph of edges and nodes is created and from which the selection of each branch requires that some type of logical decision is made.
James Malone
decision tree induction objective
decision tree building data transformation
A decision tree building data transformation is a data transformation that has objective decision tree induction.
James Malone
PERSON: James Malone
decision tree building data transformation
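A minimal sketch of decision tree induction on a single numeric feature, assuming a greedy Gini-impurity split criterion and a toy two-class data set (both assumptions are illustrative, not part of the OBI definition):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build_tree(rows):
    """rows: list of (feature_value, label). Greedily pick the threshold
    minimizing weighted Gini impurity; recurse until leaves are pure."""
    labels = [label for _, label in rows]
    if len(set(labels)) == 1:
        return {"leaf": labels[0]}
    best = None
    for threshold, _ in rows:
        left = [l for v, l in rows if v < threshold]
        right = [l for v, l in rows if v >= threshold]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if best is None or score < best[0]:
            best = (score, threshold)
    threshold = best[1]
    return {"threshold": threshold,
            "left": build_tree([(v, l) for v, l in rows if v < threshold]),
            "right": build_tree([(v, l) for v, l in rows if v >= threshold])}

def classify(tree, value):
    """Follow edges from the root, making a logical decision at each node."""
    while "leaf" not in tree:
        tree = tree["left"] if value < tree["threshold"] else tree["right"]
    return tree["leaf"]

tree = build_tree([(1.0, "a"), (2.0, "a"), (8.0, "b"), (9.0, "b")])
print(classify(tree, 1.5), classify(tree, 8.5))  # a b
```

The nested dict is the tree-like graph of edges and nodes named in the objective's definition.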
GenePattern software
A software application that provides access to more than 100 tools for gene expression analysis, proteomics, SNP analysis and common data processing tasks.
James Malone
Person:Helen Parkinson
WEB: http://www.broadinstitute.org/cancer/software/genepattern/
GenePattern software
peak matching
Peak matching is a data transformation performed on a dataset of a graph of ordered data points (e.g. a spectrum) with the objective of pattern matching local maxima above a noise threshold
James Malone
Ryan Brinkman
PERSON: Ryan Brinkman
peak matching
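The definition above (local maxima above a noise threshold in ordered data points) can be sketched directly; the signal values and threshold are illustrative assumptions.

```python
def find_peaks(spectrum, noise_threshold):
    """Return (index, intensity) pairs for local maxima above the threshold,
    where a local maximum is strictly greater than both neighbors."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        y = spectrum[i]
        if y > noise_threshold and y > spectrum[i - 1] and y > spectrum[i + 1]:
            peaks.append((i, y))
    return peaks

signal = [0.1, 0.3, 5.0, 0.4, 0.2, 7.5, 0.3, 0.9, 0.1]
print(find_peaks(signal, noise_threshold=1.0))  # [(2, 5.0), (5, 7.5)]
```

Real spectrum processing would additionally smooth the signal and estimate the noise floor rather than take it as a constant.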
k-nearest neighbors
k-nearest neighbors is a data transformation which achieves a class discovery or partitioning objective, in which an input data object with vector y is assigned the class label most common among the k closest training data set points to y.
James Malone
k-NN
PERSON: James Malone
k-nearest neighbors
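The assignment rule in the definition above can be sketched as majority vote over the k nearest training points; Euclidean distance and the toy training set are illustrative assumptions.

```python
from collections import Counter

def knn_classify(training, y, k):
    """training: list of (vector, label). Assign y the class label most
    common among the k training points closest to y (Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda row: dist(row[0], y))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

training = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((5.0, 5.0), "b"), ((5.1, 4.9), "b")]
print(knn_classify(training, (0.2, 0.1), k=3))  # a
```

With k=3, two of the three nearest neighbors carry label "a", so "a" wins the vote.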
CART
A CART (classification and regression trees) is a data transformation method for producing a classification or regression model with a tree-based structure.
James Malone
classification and regression trees
BOOK: David J. Hand, Heikki Mannila and Padhraic Smyth (2001) Principles of Data Mining.
CART
statistical model validation
Using the expression levels of 20 proteins to predict whether a cancer patient will respond to a drug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. - wikipedia
A data transformation which assesses how the results of a statistical analysis will generalize to an independent data set.
Helen Parkinson
http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29
statistical model validation
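The generalization assessment in the definition above is commonly done by k-fold cross-validation; this sketch uses an illustrative toy "model" (predict the training mean, score by mean absolute error), which is an assumption for demonstration only.

```python
def k_fold_scores(data, k, train_fn, score_fn):
    """Split data into k folds; for each fold, fit on the rest and score on
    the held-out fold. Returns the list of per-fold scores."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        training = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = train_fn(training)
        scores.append(score_fn(model, held_out))
    return scores

# Toy model: predict the mean of the training targets; score by mean absolute error.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train = lambda rows: sum(rows) / len(rows)
score = lambda mean, rows: sum(abs(r - mean) for r in rows) / len(rows)
print(k_fold_scores(data, k=3, train_fn=train, score_fn=score))
```

Averaging the per-fold scores estimates how the analysis would perform on an independent data set.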
manufacturer
A person or organization that has a manufacturer role
manufacturer
service provider role
Jackson Lab provides experimental animals, EBI provides training on databases, a core facility provides access to a DNA sequencer.
is a role which inheres in a person or organization and is realized in a planned process which provides access to training, materials or execution of protocols for an organization or person
PERSON:Helen Parkinson
service provider role
processed specimen
A tissue sample that has been sliced and stained for a histology study.
A blood specimen that has been centrifuged to obtain the white blood cells.
A specimen that has been intentionally physically modified.
Bjoern Peters
Bjoern Peters
processed specimen
categorical label
The labels 'positive' vs. 'negative', or 'left handed', 'right handed', 'ambidextrous', or 'strongly binding', 'weakly binding', 'not binding', or '+++', '++', '+', '-' etc. form scales of categorical labels.
A label that is part of a categorical datum and that indicates the value of the data item on the categorical scale.
Bjoern Peters
Bjoern Peters
categorical label
questionnaire
A document with a set of printed or written questions with a choice of answers, devised for the purposes of a survey or statistical study.
JT: It plays a role in collecting data that could be fleshed out more; but I'm thinking it is, in itself, an edited document.
JZ: based on textual definition of edited document, it can be defined as N&S. I prefer to leave questionnaire as a document now. We can add more restrictions in the future and use that to determine it is an edited document or not.
Need to clarify if this is a document or a directive information entity (or what their connection is)
PERSON: Jessica Turner
Merriam-Webster
questionnaire
categorical value specification
A value specification that specifies one category out of a fixed number of nominal categories
PERSON:Bjoern Peters
categorical value specification
value specification
The value of 'positive' in a classification scheme of "positive or negative"; the value of '20g' on the quantitative scale of mass.
An information content entity that specifies a value within a classification scheme or on a quantitative scale.
This term is currently a descendant of 'information content entity', which requires that it 'is about' something. A value specification of '20g' for a measurement data item of the mass of a particular mouse 'is about' the mass of that mouse. However there are cases where a value specification is not clearly about any particular. In the future we may change 'value specification' to remove the 'is about' requirement.
PERSON:Bjoern Peters
value specification
collection of specimens
Blood cells collected from multiple donors over the course of a study.
A material entity that has two or more specimens as its parts.
Details see tracker: https://sourceforge.net/p/obi/obi-terms/778/
Person: Chris Stoeckert, Jie Zheng
OBIB, OBI
Biobank
collection of specimens
histologic grade according to AJCC 7th edition
G1:Well differentiated
G4: Undifferentiated
A categorical value specification that is a histologic grade assigned to a tumor slide specimen according to the American Joint Committee on Cancer (AJCC) 7th Edition grading system.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
histologic grade according to AJCC 7th edition
histologic grade according to the Fuhrman Nuclear Grading System
A categorical value specification that is a histologic grade assigned to a tumor slide specimen according to the Fuhrman Nuclear Grading System.
Chris Stoeckert, Helena Ellis
Histologic Grade (Fuhrman Nuclear Grading System)
NCI BBRB, OBI
NCI BBRB
histologic grade according to the Fuhrman Nuclear Grading System
histologic grade for ovarian tumor
A categorical value specification that is a histologic grade assigned to an ovarian tumor.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
histologic grade for ovarian tumor
histologic grade for ovarian tumor according to a two-tier grading system
A histologic grade for ovarian tumor that is from a two-tier histological classification of tumors.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
histologic grade for ovarian tumor according to a two-tier grading system
histologic grade for ovarian tumor according to the World Health Organization
A histologic grade for ovarian tumor that is from a histological classification by the World Health Organization (WHO).
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
histologic grade for ovarian tumor according to the World Health Organization
pathologic primary tumor stage for colon and rectum according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of colorectal cancer following the rules of the TNM American Joint Committee on Cancer (AJCC) version 7 classification system as they pertain to staging of the primary tumor. TNM pathologic primary tumor findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
pT: Pathologic spread colorectal primary tumor (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic primary tumor stage for colon and rectum according to AJCC 7th edition
pathologic primary tumor stage for lung according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of lung cancer following the rules of the TNM American Joint Committee on Cancer (AJCC) version 7 classification system as they pertain to staging of the primary tumor. TNM pathologic primary tumor findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
pT: Pathologic spread lung primary tumor (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic primary tumor stage for lung according to AJCC 7th edition
pathologic primary tumor stage for kidney according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of renal cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of the primary tumor. TNM pathologic primary tumor findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
pT: Pathologic spread kidney primary tumor (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic primary tumor stage for kidney according to AJCC 7th edition
pathologic primary tumor stage for ovary according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of ovarian cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of the primary tumor. TNM pathologic primary tumor findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
pT: Pathologic spread ovarian primary tumor (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic primary tumor stage for ovary according to AJCC 7th edition
pathologic lymph node stage for colon and rectum according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of colorectal cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of regional lymph nodes.
Chris Stoeckert, Helena Ellis
pN: Pathologic spread colon lymph nodes (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic lymph node stage for colon and rectum according to AJCC 7th edition
pathologic lymph node stage for lung according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of lung cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of regional lymph nodes.
Chris Stoeckert, Helena Ellis
pN: Pathologic spread lung lymph nodes (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic lymph node stage for lung according to AJCC 7th edition
pathologic lymph node stage for kidney according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of renal cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of regional lymph nodes.
Chris Stoeckert, Helena Ellis
pN: Pathologic spread kidney lymph nodes (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic lymph node stage for kidney according to AJCC 7th edition
pathologic lymph node stage for ovary according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of ovarian cancer following the rules of the TNM AJCC v7 classification system as they pertain to staging of regional lymph nodes.
Chris Stoeckert, Helena Ellis
pN: Pathologic spread ovarian lymph nodes (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic lymph node stage for ovary according to AJCC 7th edition
pathologic distant metastases stage for colon according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of colon cancer following the rules of the TNM AJCC v7 classification system as they pertain to distant metastases. TNM pathologic distant metastasis findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
M: colon distant metastases (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic distant metastases stage for colon according to AJCC 7th edition
pathologic distant metastases stage for lung according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of lung cancer following the rules of the TNM AJCC v7 classification system as they pertain to distant metastases. TNM pathologic distant metastasis findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
M: lung distant metastases (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic distant metastases stage for lung according to AJCC 7th edition
pathologic distant metastases stage for kidney according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of renal cancer following the rules of the TNM AJCC v7 classification system as they pertain to distant metastases. TNM pathologic distant metastasis findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
M: kidney distant metastases (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic distant metastases stage for kidney according to AJCC 7th edition
pathologic distant metastases stage for ovary according to AJCC 7th edition
A categorical value specification that is a pathologic finding about one or more characteristics of ovarian cancer following the rules of the TNM AJCC v7 classification system as they pertain to distant metastases. TNM pathologic distant metastasis findings are based on clinical findings supplemented by histopathologic examination of one or more tissue specimens acquired during surgery.
Chris Stoeckert, Helena Ellis
M: ovarian distant metastases (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
pathologic distant metastases stage for ovary according to AJCC 7th edition
clinical tumor stage group according to AJCC 7th edition
A categorical value specification that is an assessment of the stage of a cancer according to the American Joint Committee on Cancer (AJCC) v7 staging systems.
Chris Stoeckert, Helena Ellis
Clinical tumor stage group (AJCC 7th Edition)
NCI BBRB, OBI
NCI BBRB
clinical tumor stage group according to AJCC 7th edition
International Federation of Gynecology and Obstetrics cervical cancer stage value specification
A categorical value specification that is an assessment of the stage of a gynecologic cancer according to the International Federation of Gynecology and Obstetrics (FIGO) staging systems.
Chris Stoeckert, Helena Ellis
Clinical FIGO stage
NCI BBRB, OBI
NCI BBRB
International Federation of Gynecology and Obstetrics cervical cancer stage value specification
International Federation of Gynecology and Obstetrics ovarian cancer stage value specification
A categorical value specification that is a pathologic finding about one or more characteristics of ovarian cancer following the rules of the FIGO classification system.
Chris Stoeckert, Helena Ellis
Pathologic Tumor Stage Grouping for ovarian cancer (FIGO)
NCI BBRB, OBI
NCI BBRB
International Federation of Gynecology and Obstetrics ovarian cancer stage value specification
performance status value specification
A categorical value specification that is an assessment of a participant's performance status (general well-being and activities of daily life).
Chris Stoeckert, Helena Ellis
Performance Status Scale
https://en.wikipedia.org/wiki/Performance_status
NCI BBRB
performance status value specification
Eastern Cooperative Oncology Group score value specification
A performance status value specification designed by the Eastern Cooperative Oncology Group to assess disease progression and its effect on the daily living abilities of the patient.
Chris Stoeckert, Helena Ellis
ECOG score
NCI BBRB, OBI
NCI BBRB
Eastern Cooperative Oncology Group score value specification
Karnofsky score value specification
A performance status value specification designed for classifying patients 16 years of age or older by their functional impairment.
Chris Stoeckert, Helena Ellis
Karnofsky Score
NCI BBRB, OBI
NCI BBRB
Karnofsky score value specification
material supplier
A person or organization that provides material supplies to other people or organizations.
Rebecca Jackson
https://github.com/obi-ontology/obi/issues/1289
material supplier
organism
animal
fungus
plant
virus
A material entity that is an individual living system, such as an animal, plant, bacterium or virus, that is capable of replication or reproduction, and of growth and maintenance in the right environment. An organism may be unicellular or made up, like humans, of many billions of cells divided into specialized tissues and organs.
10/21/09: This is a placeholder term, that should ideally be imported from the NCBI taxonomy, but the high level hierarchy there does not suit our needs (includes plasmids and 'other organisms')
13-02-2009:
OBI doesn't take position as to when an organism starts or ends being an organism - e.g. sperm, foetus.
This issue is outside the scope of OBI.
GROUP: OBI Biomaterial Branch
WEB: http://en.wikipedia.org/wiki/Organism
organism
specimen
Biobanking of blood taken and stored in a freezer for potential future investigations stores specimens.
A material entity that has the specimen role.
Note: definition is in specimen creation objective which is defined as an objective to obtain and store a material entity for potential use as an input during an investigation.
PERSON: James Malone
PERSON: Philippe Rocca-Serra
GROUP: OBI Biomaterial Branch
specimen
data transformation
The application of a clustering protocol to microarray data or the application of a statistical testing method on a primary data set to determine a p-value.
A planned process that produces output data from input data.
Elisabetta Manduchi
Helen Parkinson
James Malone
Melanie Courtot
Philippe Rocca-Serra
Richard Scheuermann
Ryan Brinkman
Tina Hernandez-Boussard
data analysis
data processing
Branch editors
data transformation
leave one out cross validation method
The authors conducted leave-one-out cross validation to estimate the strength and accuracy of the differentially expressed filtered genes. http://bioinformatics.oxfordjournals.org/cgi/content/abstract/19/3/368
is a data transformation: leave-one-out cross-validation (LOOCV) involves using a single observation from the original sample as the validation data, and the remaining observations as the training data. This is repeated such that each observation in the sample is used once as the validation data.
2009-11-10. Tracker: https://sourceforge.net/tracker/?func=detail&aid=2893049&group_id=177891&atid=886178
Person:Helen Parkinson
leave one out cross validation method
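The LOOCV procedure described above can be sketched in a few lines. This is an illustrative toy, not part of the ontology: the 1-nearest-neighbour "model" and the sample data are hypothetical stand-ins chosen only to keep the example self-contained.

```python
# Illustrative leave-one-out cross-validation (LOOCV) sketch.
# The "model" is a hypothetical 1-nearest-neighbour classifier on 1-D points.

def nearest_neighbour_predict(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def loocv_accuracy(data):
    """Each observation is held out once as the validation set;
    the remaining observations form the training set."""
    hits = 0
    for i, (x, label) in enumerate(data):
        train = data[:i] + data[i + 1:]   # leave observation i out
        if nearest_neighbour_predict(train, x) == label:
            hits += 1
    return hits / len(data)

data = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
print(loocv_accuracy(data))  # 1.0: each point's nearest neighbour shares its label
```

In practice one would use a library routine (e.g. scikit-learn's `LeaveOneOut` splitter) rather than hand-rolling the loop.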
k-means clustering
A k-means clustering is a data transformation which achieves a class discovery or partitioning objective, which takes as input a collection of objects (represented as points in multidimensional space) and which partitions them into a specified number k of clusters. The algorithm attempts to find the centers of natural clusters in the data. The most common form of the algorithm starts by partitioning the input points into k initial sets, either at random or using some heuristic data. It then calculates the mean point, or centroid, of each set. It constructs a new partition by associating each point with the closest centroid. Then the centroids are recalculated for the new clusters, and the algorithm repeated by alternate applications of these two steps until convergence, which is obtained when the points no longer switch clusters (or alternatively centroids are no longer changed).
Elisabetta Manduchi
James Malone
Philippe Rocca-Serra
WEB: http://en.wikipedia.org/wiki/K-means
k-means clustering
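The alternating assignment/centroid-update steps in the k-means definition above can be sketched as follows. This is a minimal illustration on 1-D points with hand-picked initial centroids; real use would rely on a library implementation such as scikit-learn's `KMeans`.

```python
# Minimal k-means sketch on 1-D points (illustrative only).

def kmeans(points, centroids, iters=10):
    """Alternate the two steps described above: assign points to the
    closest centroid, then recompute each centroid as its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assignment step: closest centroid wins the point
            j = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[j].append(p)
        # update step: centroid becomes the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centroids, clusters = kmeans(points, centroids=[0.0, 10.0])
print(sorted(centroids))  # converges near [1.0, 8.0]
```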
hierarchical clustering
A hierarchical clustering is a data transformation which achieves a class discovery objective, which takes as input data item and builds a hierarchy of clusters. The traditional representation of this hierarchy is a tree (visualized by a dendrogram), with the individual input objects at one end (leaves) and a single cluster containing every object at the other (root).
James Malone
WEB: http://en.wikipedia.org/wiki/Data_clustering#Hierarchical_clustering
hierarchical clustering
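The leaves-to-root hierarchy described above can be illustrated with a toy bottom-up (agglomerative) procedure. The single-linkage distance and the 1-D sample data are assumptions made for the sketch; they are not prescribed by the definition.

```python
# Toy agglomerative hierarchical clustering with single-linkage distance.
# Starts from singleton clusters (the leaves) and merges until one
# cluster containing every object remains (the root).

def single_linkage(a, b):
    """Distance between clusters = smallest pairwise point distance."""
    return min(abs(x - y) for x in a for y in b)

def agglomerate(points):
    clusters = [[p] for p in points]
    merges = []                      # records the hierarchy, leaves to root
    while len(clusters) > 1:
        # find the pair of clusters with the smallest linkage distance
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]))
        merged = clusters[i] + clusters[j]
        merges.append(sorted(merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges

merges = agglomerate([0.0, 0.5, 10.0])
print(merges)  # [[0.0, 0.5], [0.0, 0.5, 10.0]]
```

The `merges` list is exactly the information a dendrogram visualizes.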
dimensionality reduction
A dimensionality reduction is data partitioning which transforms each input m-dimensional vector (x_1, x_2, ..., x_m) into an output n-dimensional vector (y_1, y_2, ..., y_n), where n is smaller than m.
Elisabetta Manduchi
James Malone
Melanie Courtot
Philippe Rocca-Serra
data projection
PERSON: Elisabetta Manduchi
PERSON: James Malone
PERSON: Melanie Courtot
dimensionality reduction
principal components analysis dimensionality reduction
A principal components analysis dimensionality reduction is a dimensionality reduction achieved by applying principal components analysis and by keeping low-order principal components and excluding higher-order ones.
Elisabetta Manduchi
James Malone
Melanie Courtot
Philippe Rocca-Serra
pca data reduction
PERSON: Elisabetta Manduchi
PERSON: James Malone
PERSON: Melanie Courtot
principal components analysis dimensionality reduction
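Keeping low-order principal components and discarding higher-order ones, as the definition above describes, can be sketched with NumPy. The SVD-based route and the synthetic nearly-1-D data are choices made for this illustration.

```python
# Hedged sketch of PCA-based dimensionality reduction:
# centre the data, take the SVD, keep the first n principal components.
import numpy as np

def pca_reduce(X, n):
    """Project m-dimensional rows of X onto the top-n principal components."""
    Xc = X - X.mean(axis=0)                       # centre each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T                          # scores on low-order components

rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=100)])  # nearly 1-D data
Y = pca_reduce(X, 1)
print(Y.shape)  # (100, 1): each 2-D input vector mapped to a 1-D output vector
```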
data visualization
Generation of a heatmap from a microarray dataset
A planned process that creates images, diagrams or animations from the input data.
Elisabetta Manduchi
James Malone
Melanie Courtot
Tina Boussard
data encoding as image
visualization
PERSON: Elisabetta Manduchi
PERSON: James Malone
PERSON: Melanie Courtot
PERSON: Tina Boussard
Possible future hierarchy might include this:
information_encoding
>data_encoding
>>image_encoding
data visualization
data transformation objective
normalize objective
An objective specification to transform input data into output data.
Modified definition in 2013 Philly OBI workshop
James Malone
PERSON: James Malone
data transformation objective
partitioning data transformation
A partitioning data transformation is a data transformation that has objective partitioning.
James Malone
PERSON: James Malone
partitioning data transformation
partitioning objective
A k-means clustering which has partitioning objective is a data transformation in which the input data is partitioned into k output sets.
A partitioning objective is a data transformation objective where the aim is to generate a collection of disjoint non-empty subsets whose union equals a non-empty input set.
Elisabetta Manduchi
James Malone
PERSON: Elisabetta Manduchi
partitioning objective
class discovery data transformation
A class discovery data transformation (sometimes called unsupervised classification) is a data transformation that has objective class discovery.
James Malone
clustering data transformation
unsupervised classification data transformation
PERSON: James Malone
class discovery data transformation
class discovery objective
A class discovery objective (sometimes called unsupervised classification) is a data transformation objective where the aim is to organize input data (typically vectors of attributes) into classes, where the number of classes and their specifications are not known a priori. Depending on usage, the class assignment can be definite or probabilistic.
James Malone
clustering objective
discriminant analysis objective
unsupervised classification objective
PERSON: Elisabetta Manduchi
PERSON: James Malone
class discovery objective
class prediction objective
A class prediction objective (sometimes called supervised classification) is a data transformation objective where the aim is to create a predictor from training data through a machine learning technique. The training data consist of pairs of objects (typically vectors of attributes) and class labels for these objects. The resulting predictor can be used to attach class labels to any valid novel input object. Depending on usage, the prediction can be definite or probabilistic. A classification is learned from the training data and can then be tested on test data.
James Malone
classification objective
supervised classification objective
PERSON: Elisabetta Manduchi
PERSON: James Malone
class prediction objective
cross validation objective
A cross validation objective is a data transformation objective in which the aim is to partition a sample of data into subsets such that the analysis is initially performed on a single subset, while the other subset(s) are retained for subsequent use in confirming and validating the initial analysis.
James Malone
rotation estimation objective
WEB: http://en.wikipedia.org/wiki/Cross_validation
cross validation objective
clustered data visualization
A data visualization which has input of a clustered data set and produces an output of a report graph which is capable of rendering data of this type.
James Malone
clustered data visualization
A dependent entity that inheres in a bearer by virtue of how the bearer is related to other entities
PATO:0000001
quality
A dependent entity that inheres in a bearer by virtue of how the bearer is related to other entities
PATOC:GVG
length unit
A unit which is a standard measure of the distance between two points.
length unit
mass unit
A unit which is a standard measure of the amount of matter/energy of a physical object.
mass unit
time unit
A unit which is a standard measure of the dimension in which events occur in sequence.
time unit
temperature unit
temperature unit
substance unit
substance unit
concentration unit
concentration unit
volume unit
volume unit
frequency unit
frequency unit
volumetric flow rate unit
volumetric flow rate unit
rate unit
rate unit
A publisher role is a role borne by an organization or individual in which they are responsible for making software available to a particular consumer group. Such organizations or individuals do not need to be involved in the development of the software.
James Malone
publisher role
Software developer role is a role borne by an organization or individual in which they are responsible for authoring software.
James Malone
software developer role
An organization or legal entity (including single person) that is responsible for developing software. Developing includes aspects of design, coding and testing.
software developer organization
An organization or legal entity (including single person) that is responsible for publishing software. Publishing here includes tasks such as designing and producing physical products, technical customer support, licensing arrangements and marketing.
software publisher organization
Obsolete Class
Abstract object representing an RNN cell. This is the base class for implementing RNN cells with custom behavior.
AbstractRNNCell
Abstract object representing an RNN cell. This is the base class for implementing RNN cells with custom behavior.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AbstractRNNCell
Applies an activation function to an output.
Activation Layer
Applies an activation function to an output.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Activation
Methods which can interactively query a user (or some other information source) to label new data points with the desired outputs.
Query Learning
Active Learning
Methods which can interactively query a user (or some other information source) to label new data points with the desired outputs.
https://en.wikipedia.org/wiki/Active_learning_(machine_learning)
A type of selection bias that occurs when systems/platforms get their training data from their most active users, rather than those less active (or inactive).
Activity Bias
A type of selection bias that occurs when systems/platforms get their training data from their most active users, rather than those less active (or inactive).
https://doi.org/10.6028/NIST.SP.1270
Layer that applies an update to the cost function based on input activity.
ActivityRegularization Layer
Layer that applies an update to the cost function based on input activity.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ActivityRegularization
Applies a 1D adaptive average pooling over an input signal composed of several input planes.
AdaptiveAvgPool1D
AdaptiveAvgPool1d
AdaptiveAvgPool1D Layer
Applies a 1D adaptive average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 2D adaptive average pooling over an input signal composed of several input planes.
AdaptiveAvgPool2D
AdaptiveAvgPool2d
AdaptiveAvgPool2D Layer
Applies a 2D adaptive average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 3D adaptive average pooling over an input signal composed of several input planes.
AdaptiveAvgPool3D
AdaptiveAvgPool3d
AdaptiveAvgPool3D Layer
Applies a 3D adaptive average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
AdaptiveMaxPool1D
AdaptiveMaxPool1d
AdaptiveMaxPool1D Layer
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 2D adaptive max pooling over an input signal composed of several input planes.
AdaptiveMaxPool2D
AdaptiveMaxPool2d
AdaptiveMaxPool2D Layer
Applies a 2D adaptive max pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 3D adaptive max pooling over an input signal composed of several input planes.
AdaptiveMaxPool3D
AdaptiveMaxPool3d
AdaptiveMaxPool3D Layer
Applies a 3D adaptive max pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Layer that adds a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Add Layer
Layer that adds a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Add
Additive attention layer, a.k.a. Bahdanau-style attention.
AdditiveAttention Layer
Additive attention layer, a.k.a. Bahdanau-style attention.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AdditiveAttention
An adversarial-resistant LLM is engineered to withstand or mitigate the effects of adversarial attacks, ensuring reliable performance even in the presence of deliberately misleading input designed to confuse the model.
Robust LLM
adversarial attacks
robustness
Adversarial-Resistant LLM
An adversarial-resistant LLM is engineered to withstand or mitigate the effects of adversarial attacks, ensuring reliable performance even in the presence of deliberately misleading input designed to confuse the model.
TBD
Applies Alpha Dropout to the input. Alpha Dropout is a Dropout that keeps mean and variance of inputs to their original values, in order to ensure the self-normalizing property even after this dropout. Alpha Dropout fits well to Scaled Exponential Linear Units by randomly setting activations to the negative saturation value.
AlphaDropout Layer
Applies Alpha Dropout to the input. Alpha Dropout is a Dropout that keeps mean and variance of inputs to their original values, in order to ensure the self-normalizing property even after this dropout. Alpha Dropout fits well to Scaled Exponential Linear Units by randomly setting activations to the negative saturation value.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AlphaDropout
Arises when the distribution over prediction outputs is skewed in comparison to the prior distribution of the prediction target.
Amplification Bias
Arises when the distribution over prediction outputs is skewed in comparison to the prior distribution of the prediction target.
https://doi.org/10.6028/NIST.SP.1270
A cognitive bias, the influence of a particular reference point or anchor on people’s decisions. Often more fully referred to as anchoring-and-adjustment, or anchoring-and-adjusting: after an anchor is set, people adjust insufficiently from that anchor point to arrive at a final answer. Decision makers are biased towards an initially presented value.
Anchoring Bias
A cognitive bias, the influence of a particular reference point or anchor on people’s decisions. Often more fully referred to as anchoring-and-adjustment, or anchoring-and-adjusting: after an anchor is set, people adjust insufficiently from that anchor point to arrive at a final answer. Decision makers are biased towards an initially presented value.
https://doi.org/10.6028/NIST.SP.1270
When users rely on automation as a heuristic replacement for their own information seeking and processing. A form of individual bias but often discussed as a group bias, or the larger effects on natural language processing models.
Annotator Reporting Bias
When users rely on automation as a heuristic replacement for their own information seeking and processing. A form of individual bias but often discussed as a group bias, or the larger effects on natural language processing models.
https://doi.org/10.6028/NIST.SP.1270
An abstract parent class grouping LLMs based on model application focus.
Application Focus
An abstract parent class grouping LLMs based on model application focus.
TBD
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives a signal then processes it and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as Learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
ANN
NN
Artificial Neural Network
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives a signal then processes it and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as Learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
https://en.wikipedia.org/wiki/Artificial_neural_network
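The neuron model described above (weighted sum of inputs plus a bias, passed through a non-linear function, with signals flowing layer to layer) can be sketched directly. The sigmoid non-linearity, the weights, and the tiny two-layer topology are illustrative assumptions, not part of the definition.

```python
# Minimal sketch of the artificial neuron described above.
import math

def neuron(inputs, weights, bias):
    """Output = non-linear function of the weighted sum of inputs."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid non-linearity

def tiny_network(x):
    """Signals travel input layer -> hidden layer -> output layer."""
    h1 = neuron(x, [1.0, -1.0], 0.0)    # hidden neuron 1 (weights are edges)
    h2 = neuron(x, [-1.0, 1.0], 0.0)    # hidden neuron 2
    return neuron([h1, h2], [2.0, 2.0], -2.0)

out = tiny_network([0.5, 0.25])
print(0.0 < out < 1.0)  # True: a sigmoid output always lies in (0, 1)
```

Training would adjust the weights as learning proceeds; that step is omitted here.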
A rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness.
Association Rule Learning
A rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness.
https://en.wikipedia.org/wiki/Association_rule_learning
Dot-product attention layer, a.k.a. Luong-style attention.
Attention Layer
Dot-product attention layer, a.k.a. Luong-style attention.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Attention
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised Learning). The encoding is validated and refined by attempting to regenerate the input from the encoding. The autoencoder learns a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore insignificant data (“noise”). (https://en.wikipedia.org/wiki/Autoencoder)
AE
Input, Hidden, Matched Output-Input
Auto Encoder Network
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised Learning). The encoding is validated and refined by attempting to regenerate the input from the encoding. The autoencoder learns a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore insignificant data (“noise”). (https://en.wikipedia.org/wiki/Autoencoder)
https://en.wikipedia.org/wiki/Autoencoder
When humans over-rely on automated systems or have their skills attenuated by such over-reliance (e.g., spelling and autocorrect or spellcheckers).
Automation Complacency
Automation Complacency Bias
When humans over-rely on automated systems or have their skills attenuated by such over-reliance (e.g., spelling and autocorrect or spellcheckers).
https://doi.org/10.6028/NIST.SP.1270
An autoregressive language model is a type of language model that generates text sequentially, predicting one token at a time based on the previously generated tokens. It excels at natural language generation tasks by modeling the probability distribution over sequences of tokens.
Autoregressive Language Model
generative language model
sequence-to-sequence model
Autoregressive Language Model
An autoregressive language model is a type of language model that generates text sequentially, predicting one token at a time based on the previously generated tokens. It excels at natural language generation tasks by modeling the probability distribution over sequences of tokens.
TBD
A mental shortcut whereby people tend to overweight what comes easily or quickly to mind, meaning that what is easier to recall—e.g., more “available”—receives greater emphasis in judgement and decision-making.
Availability Bias
Availability Heuristic
Availability Heuristic Bias
A mental shortcut whereby people tend to overweight what comes easily or quickly to mind, meaning that what is easier to recall—e.g., more “available”—receives greater emphasis in judgement and decision-making.
https://doi.org/10.6028/NIST.SP.1270
Layer that averages a list of inputs element-wise. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Average Layer
Layer that averages a list of inputs element-wise. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Average
Average pooling for temporal data. Downsamples the input representation by taking the average value over the window defined by pool_size. The window is shifted by strides. The resulting output when using "valid" padding option has a shape of: output_shape = (input_shape - pool_size + 1) / strides. The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides.
AvgPool1D
AvgPool1d
AveragePooling1D Layer
Average pooling for temporal data. Downsamples the input representation by taking the average value over the window defined by pool_size. The window is shifted by strides. The resulting output when using "valid" padding option has a shape of: output_shape = (input_shape - pool_size + 1) / strides. The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling1D
Average pooling operation for spatial data. Downsamples the input along its spatial dimensions (height and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. The resulting output when using "valid" padding option has a shape (number of rows or columns) of: output_shape = math.floor((input_shape - pool_size) / strides) + 1 (when input_shape >= pool_size). The resulting output shape when using the "same" padding option is: output_shape = math.floor((input_shape - 1) / strides) + 1.
AvgPool2D
AvgPool2d
AveragePooling2D Layer
Average pooling operation for spatial data. Downsamples the input along its spatial dimensions (height and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. The resulting output when using "valid" padding option has a shape (number of rows or columns) of: output_shape = math.floor((input_shape - pool_size) / strides) + 1 (when input_shape >= pool_size). The resulting output shape when using the "same" padding option is: output_shape = math.floor((input_shape - 1) / strides) + 1.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D
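The two padding formulas quoted above can be checked with a small pure-Python helper. This is a sketch of the shape arithmetic only, not of the pooling computation itself, and the function name is ours:

```python
import math

def pooled_length(input_len, pool_size, strides, padding="valid"):
    """Output length along one spatial axis, per the formulas quoted above."""
    if padding == "valid":
        # output_shape = math.floor((input_shape - pool_size) / strides) + 1
        return math.floor((input_len - pool_size) / strides) + 1
    if padding == "same":
        # output_shape = math.floor((input_shape - 1) / strides) + 1
        return math.floor((input_len - 1) / strides) + 1
    raise ValueError(f"unknown padding: {padding}")
```

For example, a 5-pixel axis with pool_size=2 and strides=2 yields 2 under "valid" padding and 3 under "same" padding.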
Average pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension.
AvgPool3D
AvgPool3d
AveragePooling3D Layer
Average pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling3D
Applies a 1D average pooling over an input signal composed of several input planes.
AvgPool1D
AvgPool1d
AvgPool1D Layer
Applies a 1D average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 2D average pooling over an input signal composed of several input planes.
AvgPool2D
AvgPool2d
AvgPool2D Layer
Applies a 2D average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 3D average pooling over an input signal composed of several input planes.
AvgPool3D
AvgPool3d
AvgPool3D Layer
Applies a 3D average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies Batch Normalization over a 2D or 3D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
BatchNorm1D
BatchNorm1d
BatchNorm1D Layer
Applies Batch Normalization over a 2D or 3D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
https://pytorch.org/docs/stable/nn.html#normalization-layers
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
BatchNorm2D
BatchNorm2d
BatchNorm2D Layer
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
https://pytorch.org/docs/stable/nn.html#normalization-layers
Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
BatchNorm3D
BatchNorm3d
BatchNorm3D Layer
Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
https://pytorch.org/docs/stable/nn.html#normalization-layers
Layer that normalizes its inputs. Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1. Importantly, batch normalization works differently during training and during inference. During training (i.e. when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta, where: epsilon is a small constant (configurable as part of the constructor arguments); gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor; beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor. During inference (i.e. when using evaluate() or predict(), or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta. self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum); moving_var = moving_var * momentum + var(batch) * (1 - momentum).
BatchNormalization Layer
Layer that normalizes its inputs. Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1. Importantly, batch normalization works differently during training and during inference. During training (i.e. when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta, where: epsilon is a small constant (configurable as part of the constructor arguments); gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor; beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor. During inference (i.e. when using evaluate() or predict(), or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta. self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum); moving_var = moving_var * momentum + var(batch) * (1 - momentum).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
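The training-mode formula and the moving-average updates quoted above can be sketched in NumPy. This is a minimal sketch, not the library implementation; the default momentum and epsilon values mirror Keras defaults, and the function name is ours:

```python
import numpy as np

def batchnorm_train(batch, gamma, beta, moving_mean, moving_var,
                    momentum=0.99, epsilon=1e-3):
    """One training-mode pass: normalize with batch statistics, update moving averages."""
    mean = batch.mean(axis=0)
    var = batch.var(axis=0)
    out = gamma * (batch - mean) / np.sqrt(var + epsilon) + beta
    # moving statistics are what inference mode will use later
    moving_mean = moving_mean * momentum + mean * (1 - momentum)
    moving_var = moving_var * momentum + var * (1 - momentum)
    return out, moving_mean, moving_var
```

With gamma=1 and beta=0, the output of a large batch has mean close to 0 and standard deviation close to 1, as the definition states.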
A probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).
Bayesian Network
A probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).
https://en.wikipedia.org/wiki/Bayesian_network
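The factorization a Bayesian network encodes, the joint as a product of each variable's conditional given its parents in the DAG, can be sketched with the classic rain/sprinkler/wet-grass example. The CPT values below are illustrative:

```python
from itertools import product

# DAG: Rain -> Sprinkler, and (Rain, Sprinkler) -> GrassWet (illustrative CPTs)
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: {True: 0.01, False: 0.99},   # given rain=True
               False: {True: 0.4, False: 0.6}}    # given rain=False
P_WET = {(True, True): 0.99, (True, False): 0.9,  # given (sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) = P(rain) * P(sprinkler | rain) * P(wet | sprinkler, rain)."""
    p_wet = P_WET[(sprinkler, rain)] if wet else 1.0 - P_WET[(sprinkler, rain)]
    return P_RAIN[rain] * P_SPRINKLER[rain][sprinkler] * p_wet
```

Summing the joint over all eight assignments returns 1, confirming the factorization defines a valid distribution.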
Systematic distortions in user behavior across platforms or contexts, or across users represented in different datasets.
Behavioral Bias
Systematic distortions in user behavior across platforms or contexts, or across users represented in different datasets.
https://doi.org/10.6028/NIST.SP.1270
Systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.
Bias
Systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.
https://www.merriam-webster.com/dictionary/bias
Methods that simultaneously cluster the rows and columns of a matrix.
Block Clustering
Co-clustering
Joint Clustering
Two-mode Clustering
Two-way Clustering
Biclustering
Methods that simultaneously cluster the rows and columns of a matrix.
https://en.wikipedia.org/wiki/Biclustering
Bidirectional wrapper for RNNs.
Bidirectional Layer
Bidirectional wrapper for RNNs.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional
Methods that classify the elements of a set into two groups (each called class) on the basis of a classification rule.
Binary Classification
Methods that classify the elements of a set into two groups (each called class) on the basis of a classification rule.
https://en.wikipedia.org/wiki/Binary_classification
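A minimal sketch of such a classification rule: thresholding a single score, with accuracy as a simple evaluation (the names and the 0.5 default are ours):

```python
def classify(score, threshold=0.5):
    """Binary classification rule: assign the positive class iff score exceeds threshold."""
    return 1 if score > threshold else 0

def accuracy(scores, labels, threshold=0.5):
    """Fraction of examples whose predicted class matches the label."""
    preds = [classify(s, threshold) for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```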
A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field. It was translated from statistical physics for use in cognitive science. The Boltzmann machine is based on a stochastic spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model that is a stochastic Ising model, applied to machine learning.
BM
Sherrington–Kirkpatrick model with external field
stochastic Hopfield network with hidden units
stochastic Ising-Lenz-Little model
Backfed Input, Probabilistic Hidden
Boltzmann Machine Network
A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field. It was translated from statistical physics for use in cognitive science. The Boltzmann machine is based on a stochastic spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model that is a stochastic Ising model, applied to machine learning.
https://en.wikipedia.org/wiki/Boltzmann_machine
A layer that performs categorical data preprocessing operations.
Categorical Features Preprocessing Layer
A layer that performs categorical data preprocessing operations.
https://keras.io/guides/preprocessing_layers/
A preprocessing layer which encodes integer features. This layer provides options for condensing data into a categorical encoding when the total number of tokens are known in advance. It accepts integer values as inputs, and it outputs a dense or sparse representation of those inputs. For integer inputs where the total number of tokens is not known, use tf.keras.layers.IntegerLookup instead.
CategoryEncoding Layer
A preprocessing layer which encodes integer features. This layer provides options for condensing data into a categorical encoding when the total number of tokens are known in advance. It accepts integer values as inputs, and it outputs a dense or sparse representation of those inputs. For integer inputs where the total number of tokens is not known, use tf.keras.layers.IntegerLookup instead.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/CategoryEncoding
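The layer's dense "multi_hot" output mode can be sketched in pure Python; the real layer also supports other modes such as "one_hot" and "count", which this sketch omits:

```python
def multi_hot(tokens, num_tokens):
    """Dense multi-hot encoding of integer tokens when num_tokens is known in advance."""
    vec = [0.0] * num_tokens
    for t in tokens:
        if not 0 <= t < num_tokens:
            raise ValueError(f"token {t} out of range for num_tokens={num_tokens}")
        vec[t] = 1.0
    return vec
```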
Probabilistic graphical models used to encode assumptions about the data-generating process.
Causal Bayesian Network
Causal Graph
DAG
Directed Acyclic Graph
Path Diagram
Causal Graphical Model
Probabilistic graphical models used to encode assumptions about the data-generating process.
https://en.wikipedia.org/wiki/Causal_graph
A causal LLM only attends to previous tokens in the sequence when generating text, modeling the probability distribution autoregressively from left-to-right or causally.
Causal LLM
autoregressive
unidirectional
Causal LLM
A causal LLM only attends to previous tokens in the sequence when generating text, modeling the probability distribution autoregressively from left-to-right or causally.
TBD
A preprocessing layer which crops images. This layer crops the central portion of the images to a target size. If an image is smaller than the target size, it will be resized and cropped so as to return the largest possible window in the image that matches the target aspect ratio. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
CenterCrop Layer
A preprocessing layer which crops images. This layer crops the central portion of the images to a target size. If an image is smaller than the target size, it will be resized and cropped so as to return the largest possible window in the image that matches the target aspect ratio. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/CenterCrop
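The central-crop path can be sketched in NumPy. This covers only the case where the image is at least the target size; the resize-then-crop branch for smaller images is omitted, and the names are ours:

```python
import numpy as np

def center_crop(image, target_height, target_width):
    """Return the central target_height x target_width window of an H x W x C image."""
    h, w = image.shape[0], image.shape[1]
    if h < target_height or w < target_width:
        raise ValueError("resize branch for smaller images is not sketched here")
    top = (h - target_height) // 2
    left = (w - target_width) // 2
    return image[top:top + target_height, left:left + target_width]
```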
Methods that distinguish and distribute kinds of "things" into different groups.
Classification
Methods that distinguish and distribute kinds of "things" into different groups.
https://en.wikipedia.org/wiki/Classification_(general_theory)
Removing irrelevant data, correcting typos, and standardizing text to reduce noise and ensure consistency in the data.
Data Cleansing
Standardization
Data cleaning
Text normalization
Cleaning And Normalization
Removing irrelevant data, correcting typos, and standardizing text to reduce noise and ensure consistency in the data.
TBD
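A minimal sketch of such cleaning and normalization; the particular steps (lowercasing, whitespace collapse, punctuation stripping) are illustrative choices, and real pipelines vary:

```python
import re
import string

def normalize_text(text):
    """Lowercase, collapse whitespace runs, and strip ASCII punctuation."""
    text = text.lower().strip()
    text = re.sub(r"\s+", " ", text)  # collapse runs of whitespace
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text
```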
Methods that group a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
Cluster analysis
Clustering
Methods that group a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
https://en.wikipedia.org/wiki/Cluster_analysis
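The grouping idea can be sketched with a tiny k-means loop in NumPy. Initial centroids are passed in for reproducibility; in practice a library implementation such as scikit-learn's KMeans would be used:

```python
import numpy as np

def kmeans(points, centroids, n_iter=10):
    """Plain k-means: assign each point to its nearest centroid, then re-average."""
    points = np.asarray(points, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    for _ in range(n_iter):
        # distance from every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([points[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    return labels, centroids
```

On two well-separated blobs, points in the same blob end up in the same cluster, matching the "more similar within a group than across groups" definition.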
A broad term referring generally to a systematic pattern of deviation from rational judgement and decision-making. A large variety of cognitive biases have been identified over many decades of research in judgement and decision-making, some of which are adaptive mental shortcuts known as heuristics.
Cognitive Bias
A broad term referring generally to a systematic pattern of deviation from rational judgement and decision-making. A large variety of cognitive biases have been identified over many decades of research in judgement and decision-making, some of which are adaptive mental shortcuts known as heuristics.
https://doi.org/10.6028/NIST.SP.1270
A compositional generalization LLM is trained to understand and recombine the underlying compositional structures in language, enabling better generalization to novel combinations and out-of-distribution examples.
Compositional Generalization LLM
out-of-distribution generalization
systematic generalization
Compositional Generalization LLM
A compositional generalization LLM is trained to understand and recombine the underlying compositional structures in language, enabling better generalization to novel combinations and out-of-distribution examples.
TBD
A systematic tendency which causes differences between results and facts. The bias can arise at many stages of the data-analysis process, including the source of the data, the choice of estimator, and the way the data are analyzed.
Statistical Bias
Computational Bias
A systematic tendency which causes differences between results and facts. The bias can arise at many stages of the data-analysis process, including the source of the data, the choice of estimator, and the way the data are analyzed.
https://en.wikipedia.org/wiki/Bias_(statistics)
Layer that concatenates a list of inputs. It takes as input a list of tensors, all of the same shape except for the concatenation axis, and returns a single tensor that is the concatenation of all inputs.
Concatenate Layer
Layer that concatenates a list of inputs. It takes as input a list of tensors, all of the same shape except for the concatenation axis, and returns a single tensor that is the concatenation of all inputs.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate
Use of a system outside the planned domain of application, and a common cause of performance gaps between laboratory settings and the real world.
Concept Drift
Concept Drift Bias
Use of a system outside the planned domain of application, and a common cause of performance gaps between laboratory settings and the real world.
https://doi.org/10.6028/NIST.SP.1270
A cognitive bias where people tend to prefer information that aligns with, or confirms, their existing beliefs. People can exhibit confirmation bias in the search for, interpretation of, and recall of information. In the famous Wason selection task experiments, participants repeatedly showed a preference for confirmation over falsification. They were tasked with identifying an underlying rule that applied to number triples they were shown, and they overwhelmingly tested triples that confirmed rather than falsified their hypothesized rule.
Confirmation Bias
A cognitive bias where people tend to prefer information that aligns with, or confirms, their existing beliefs. People can exhibit confirmation bias in the search for, interpretation of, and recall of information. In the famous Wason selection task experiments, participants repeatedly showed a preference for confirmation over falsification. They were tasked with identifying an underlying rule that applied to number triples they were shown, and they overwhelmingly tested triples that confirmed rather than falsified their hypothesized rule.
https://doi.org/10.6028/NIST.SP.1270
Arises when an algorithm or platform provides users with a new venue within which to express their biases, and may occur from either side, or party, in a digital interaction.
Consumer Bias
Arises when an algorithm or platform provides users with a new venue within which to express their biases, and may occur from either side, or party, in a digital interaction.
https://doi.org/10.6028/NIST.SP.1270
Arises from structural, lexical, semantic, and syntactic differences in the contents generated by users.
Content Production Bias
Arises from structural, lexical, semantic, and syntactic differences in the contents generated by users.
https://doi.org/10.6028/NIST.SP.1270
A concept to learn a model for a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks, where data from the old tasks are no longer available when training on new ones.
Incremental Learning
Life-Long Learning
Continual Learning
A concept to learn a model for a large number of tasks sequentially without forgetting knowledge obtained from the preceding tasks, where data from the old tasks are no longer available when training on new ones.
https://paperswithcode.com/task/continual-learning
A continual learning LLM is designed to continually acquire new knowledge and skills over time, without forgetting previously learned information. This allows the model to adapt and expand its capabilities as new data becomes available.
CL-LLM
Continual Learning LLM
catastrophic forgetting
lifelong learning
Continual Learning LLM
A continual learning LLM is designed to continually acquire new knowledge and skills over time, without forgetting previously learned information. This allows the model to adapt and expand its capabilities as new data becomes available.
TBD
Learning that encourages augmentations (views) of the same input to have more similar representations compared to augmentations of different inputs.
Contrastive Learning
Learning that encourages augmentations (views) of the same input to have more similar representations compared to augmentations of different inputs.
https://arxiv.org/abs/2202.14037
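A hedged NumPy sketch of an InfoNCE-style loss for one anchor: views of the same input are pulled together because the loss is lowest when the positive's similarity dominates the negatives'. The names and the temperature value are ours:

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.5):
    """Cross-entropy over cosine similarities, with the positive view as the target."""
    def cosine(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    # -log softmax probability assigned to the positive (index 0)
    return float(-logits[0] + np.log(np.exp(logits).sum()))
```

The loss is smaller when the positive is aligned with the anchor than when a negative is, which is exactly the pull-together/push-apart behavior in the definition.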
A contrastive learning LLM is trained to pull semantically similar samples closer together and push dissimilar samples apart in the representation space, learning high-quality features useful for downstream tasks.
Contrastive Learning LLM
Representation learning
Contrastive Learning LLM
A contrastive learning LLM is trained to pull semantically similar samples closer together and push dissimilar samples apart in the representation space, learning high-quality features useful for downstream tasks.
TBD
A controllable LLM allows for explicit control over certain attributes of the generated text, such as style, tone, topic, or other desired characteristics, through conditioning or specialized training objectives.
Controllable LLM
conditional generation
guided generation
Controllable LLM
A controllable LLM allows for explicit control over certain attributes of the generated text, such as style, tone, topic, or other desired characteristics, through conditioning or specialized training objectives.
TBD
1D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
ConvLSTM1D Layer
1D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM1D
2D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
ConvLSTM2D Layer
2D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM2D
3D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
ConvLSTM3D Layer
3D Convolutional LSTM. Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM3D
1D convolution layer (e.g. temporal convolution).
Conv1D Layer
Conv1d
Convolution1D
Convolution1d
nn.Conv1d
Convolution1D Layer
1D convolution layer (e.g. temporal convolution).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D
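The core operation can be sketched as a naive "valid" 1D convolution (cross-correlation, as deep-learning libraries implement it) in pure Python:

```python
def conv1d_valid(signal, kernel):
    """Slide the kernel over the signal; one dot product per position ("valid" padding)."""
    out_len = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(out_len)]
```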
Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 3) for data with 128 time steps and 3 channels.
Conv1DTranspose Layer
ConvTranspose1d
Convolution1DTranspose
Convolution1dTranspose
nn.ConvTranspose1d
Convolution1DTranspose Layer
Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 3) for data with 128 time steps and 3 channels.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1DTranspose
2D convolution layer (e.g. spatial convolution over images). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures in data_format="channels_last". You can use None when a dimension has variable size.
Conv2D Layer
Conv2d
Convolution2D
Convolution2d
nn.Conv2d
Convolution2D Layer
2D convolution layer (e.g. spatial convolution over images). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures in data_format="channels_last". You can use None when a dimension has variable size.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D
Transposed convolution layer (sometimes called Deconvolution).
Conv2DTranspose Layer
ConvTranspose2d
Convolution2DTranspose
Convolution2dTranspose
nn.ConvTranspose2d
Convolution2DTranspose Layer
Transposed convolution layer (sometimes called Deconvolution).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose
3D convolution layer (e.g. spatial convolution over volumes). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes with a single channel, in data_format="channels_last".
Conv3D Layer
Conv3d
Convolution3D
Convolution3d
nn.Conv3d
Convolution3D Layer
3D convolution layer (e.g. spatial convolution over volumes). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes with a single channel, in data_format="channels_last".
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3D
Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels if data_format="channels_last".
Conv3DTranspose Layer
ConvTranspose3d
Convolution3DTranspose
Convolution3dTranspose
nn.ConvTranspose3d
Convolution3DTranspose Layer
Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels if data_format="channels_last".
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3DTranspose
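The "transformation going in the opposite direction" can be sketched as a naive 1D transposed convolution: each input element scatters a scaled copy of the kernel into the output, so a length-n input with stride s and kernel size k yields length (n - 1) * s + k, inverting the "valid" convolution shape arithmetic. A sketch with names of our choosing:

```python
def conv1d_transpose(signal, kernel, stride=1):
    """Scatter-add a scaled kernel copy for every input element ("valid" shapes)."""
    out = [0.0] * ((len(signal) - 1) * stride + len(kernel))
    for i, v in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i * stride + j] += v * k
    return out
```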
A convolutional layer is the main building block of a CNN. It contains a set of filters (or kernels), parameters of which are to be learned throughout the training. The size of the filters is usually smaller than the actual image. Each filter convolves with the image and creates an activation map.
Convolutional Layer
A convolutional layer is the main building block of a CNN. It contains a set of filters (or kernels), parameters of which are to be learned throughout the training. The size of the filters is usually smaller than the actual image. Each filter convolves with the image and creates an activation map.
https://www.sciencedirect.com/topics/engineering/convolutional-layer
Cropping layer for 1D input (e.g. temporal sequence). It crops along the time dimension (axis 1).
Cropping1D Layer
Cropping layer for 1D input (e.g. temporal sequence). It crops along the time dimension (axis 1).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Cropping1D
Cropping layer for 2D input (e.g. picture). It crops along spatial dimensions, i.e. height and width.
Cropping2D Layer
Cropping layer for 2D input (e.g. picture). It crops along spatial dimensions, i.e. height and width.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Cropping2D
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Cropping3D Layer
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Cropping3D
A cross-domain LLM is capable of performing well across a wide range of domains without significant loss in performance, facilitated by advanced domain adaptation techniques.
Domain-General LLM
cross-domain transfer
domain adaptation
Cross-Domain LLM
A cross-domain LLM is capable of performing well across a wide range of domains without significant loss in performance, facilitated by advanced domain adaptation techniques.
TBD
Training the model on simpler tasks or easier data first, then gradually introducing more complex tasks to improve learning efficiency and performance.
Sequential Learning
Structured Learning
Complexity grading
Sequential learning
Curriculum Learning
Training the model on simpler tasks or easier data first, then gradually introducing more complex tasks to improve learning efficiency and performance.
TBD
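In its simplest form this is just ordering training data by a difficulty measure before feeding it to the trainer; the sketch below uses sentence length as a toy proxy for difficulty:

```python
def curriculum_order(examples, difficulty):
    """Sort training examples from easiest to hardest under the given measure."""
    return sorted(examples, key=difficulty)

# Toy proxy: shorter sentences are treated as "easier"
sentences = ["a much longer and harder training sentence", "short one", "hi"]
ordered = curriculum_order(sentences, difficulty=lambda s: len(s.split()))
```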
A curriculum learning LLM is trained by presenting learning examples in a meaningful order from simple to complex, mimicking the learning trajectory followed by humans.
Curriculum Learning LLM
Learning progression
Curriculum Learning LLM
A curriculum learning LLM is trained by presenting learning examples in a meaningful order from simple to complex, mimicking the learning trajectory followed by humans.
TBD
A data-to-text LLM generates natural language descriptions from structured data sources like tables, graphs, knowledge bases, etc., which requires grounding the generated text in meaning representations.
Data-to-Text LLM
Meaning representation
Data-to-Text LLM
A data-to-text LLM generates natural language descriptions from structured data sources like tables, graphs, knowledge bases, etc., which requires grounding the generated text in meaning representations.
TBD
Expanding the training dataset artificially by modifying existing data points to improve the model's robustness and generalization ability.
Data Enrichment
Data Expansion
Paraphrasing
Synonym replacement
Data Augmentation
Expanding the training dataset artificially by modifying existing data points to improve the model's robustness and generalization ability.
TBD
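Synonym replacement, one of the techniques listed above, can be sketched with a toy hand-made lexicon; the lexicon and names are illustrative, and real systems draw on resources such as WordNet:

```python
import random

SYNONYMS = {"quick": ["fast", "speedy"], "happy": ["glad", "joyful"]}  # toy lexicon

def synonym_augment(sentence, rng):
    """Replace each word that has known synonyms with a randomly chosen synonym."""
    return " ".join(rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
                    for w in sentence.split())
```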
A statistical bias in which testing huge numbers of hypotheses of a dataset may appear to yield statistical significance even when the results are statistically nonsignificant.
Data Dredging
Data Dredging Bias
A statistical bias in which testing huge numbers of hypotheses of a dataset may appear to yield statistical significance even when the results are statistically nonsignificant.
https://doi.org/10.6028/NIST.SP.1270
The processes and techniques used to improve data quality and value for better decision-making, analysis, and AI model training.
Data Enhancement
The processes and techniques used to improve data quality and value for better decision-making, analysis, and AI model training.
TBD
Arises from the addition of synthetic or redundant data samples to a dataset.
Data Generation Bias
Arises from the addition of synthetic or redundant data samples to a dataset.
https://doi.org/10.6028/NIST.SP.1270
Methods that replace missing data with substituted values.
Data Imputation
Methods that replace missing data with substituted values.
https://en.wikipedia.org/wiki/Imputation_(statistics)
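One common imputation strategy, mean substitution, can be sketched as follows (a minimal illustration; `None` stands in for a missing value):

```python
def mean_impute(values):
    # Replace missing entries (None) with the mean of the observed values.
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]
```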
Techniques focused on preparing raw data for training, including cleaning, normalization, and tokenization.
Data Assembly
Data Curation
Data Processing
Data Preparation
Techniques focused on preparing raw data for training, including cleaning, normalization, and tokenization.
TBD
A decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility.
Decision Tree
A decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility.
https://en.wikipedia.org/wiki/Decision_tree
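The tree-like decision process can be sketched with a tiny recursive walker (an illustrative assumption; internal nodes are `(feature_index, threshold, left, right)` tuples and leaves are outcomes):

```python
def decide(node, x):
    # Walk the tree: at each internal node, compare one feature of x
    # against a threshold and descend; a leaf holds the decision.
    if not isinstance(node, tuple):
        return node
    feature, threshold, left, right = node
    return decide(left if x[feature] <= threshold else right, x)
```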
In the decoder-only architecture, the model consists of only a decoder, which is trained to predict the next token in a sequence given the previous tokens. The critical difference between the Decoder-only architecture and the Encoder-Decoder architecture is that the Decoder-only architecture does not have an explicit encoder to summarize the input information. Instead, the information is encoded implicitly in the hidden state of the decoder, which is updated at each step of the generation process.
LLM
Decoder LLM
In the decoder-only architecture, the model consists of only a decoder, which is trained to predict the next token in a sequence given the previous tokens. The critical difference between the Decoder-only architecture and the Encoder-Decoder architecture is that the Decoder-only architecture does not have an explicit encoder to summarize the input information. Instead, the information is encoded implicitly in the hidden state of the decoder, which is updated at each step of the generation process.
https://www.practicalai.io/understanding-transformer-model-architectures/
Deconvolutional Networks, a framework that permits the unsupervised construction of hierarchical image representations. These representations can be used for both low-level tasks such as denoising, as well as providing features for object recognition. Each level of the hierarchy groups information from the level beneath to form more complex features that exist over a larger scale in the image. (https://ieeexplore.ieee.org/document/5539957)
DN
Input, Kernel, Convolutional/Pool, Output
Deconvolutional Network
Deconvolutional Networks, a framework that permits the unsupervised construction of hierarchical image representations. These representations can be used for both low-level tasks such as denoising, as well as providing features for object recognition. Each level of the hierarchy groups information from the level beneath to form more complex features that exist over a larger scale in the image. (https://ieeexplore.ieee.org/document/5539957)
https://ieeexplore.ieee.org/document/5539957
The combination of deep learning and active learning, where active learning attempts to maximize a model’s performance gain while annotating the fewest samples possible.
DeepAL
Deep Active Learning
The combination of deep learning and active learning, where active learning attempts to maximize a model’s performance gain while annotating the fewest samples possible.
https://arxiv.org/pdf/2009.00236.pdf
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification. DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set). The observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. (https://en.wikipedia.org/wiki/Deep_belief_network)
DBN
Backfed Input, Probabilistic Hidden, Hidden, Matched Output-Input
Deep Belief Network
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification. DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set). The observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. (https://en.wikipedia.org/wiki/Deep_belief_network)
https://en.wikipedia.org/wiki/Deep_belief_network
A Deep Convolution Inverse Graphics Network (DC-IGN) is a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. (https://arxiv.org/abs/1503.03167)
DCIGN
Input, Kernel, Convolutional/Pool, Probabilistic Hidden, Convolutional/Pool, Kernel, Output
Deep Convolutional Inverse Graphics Network
A Deep Convolution Inverse Graphics Network (DC-IGN) is a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. (https://arxiv.org/abs/1503.03167)
TBD
A convolutional neural network (CNN, or ConvNet) is a class of artificial neural network, most commonly applied to analyze visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation equivariant responses known as feature maps. CNNs are regularized versions of multilayer perceptrons. (https://en.wikipedia.org/wiki/Convolutional_neural_network)
CNN
ConvNet
Convolutional Neural Network
DCN
Input, Kernel, Convolutional/Pool, Hidden, Output
Deep Convolutional Network
A convolutional neural network (CNN, or ConvNet) is a class of artificial neural network, most commonly applied to analyze visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation equivariant responses known as feature maps. CNNs are regularized versions of multilayer perceptrons. (https://en.wikipedia.org/wiki/Convolutional_neural_network)
https://en.wikipedia.org/wiki/Convolutional_neural_network
The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
DFF
FFN
Feedforward Network
MLP
Multilayer Perceptron
Input, Hidden, Output
Deep FeedForward
The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
https://en.wikipedia.org/wiki/Feedforward_neural_network
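The one-directional input-to-hidden-to-output flow can be sketched in a few lines (a minimal illustration with a ReLU nonlinearity; each layer is a `(weights, biases)` pair where `weights[j]` holds unit j's input weights):

```python
def feedforward(x, layers):
    # Information moves one way: input -> hidden layers -> output,
    # with no cycles or loops.
    for weights, biases in layers:
        x = [max(0.0, sum(xi * w for xi, w in zip(x, ws)) + b)  # ReLU unit
             for ws, b in zip(weights, biases)]
    return x
```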
A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions. (https://en.wikipedia.org/wiki/Deep_learning)
DNN
Deep Neural Network
A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions. (https://en.wikipedia.org/wiki/Deep_learning)
TBD
Deep transfer learning methods relax the hypothesis that the training data must be independent and identically distributed (i.i.d.) with the test data, motivating the use of transfer learning to address the problem of insufficient training data.
Deep Transfer Learning
Deep transfer learning methods relax the hypothesis that the training data must be independent and identically distributed (i.i.d.) with the test data, motivating the use of transfer learning to address the problem of insufficient training data.
https://arxiv.org/abs/1808.01974
Denoising Auto Encoders (DAEs) take a partially corrupted input and are trained to recover the original undistorted input. In practice, the objective of denoising autoencoders is that of cleaning the corrupted input, or denoising. (https://en.wikipedia.org/wiki/Autoencoder)
DAE
Denoising Autoencoder
Noisy Input, Hidden, Matched Output-Input
Denoising Auto Encoder
Denoising Auto Encoders (DAEs) take a partially corrupted input and are trained to recover the original undistorted input. In practice, the objective of denoising autoencoders is that of cleaning the corrupted input, or denoising. (https://en.wikipedia.org/wiki/Autoencoder)
https://doi.org/10.1145/1390156.1390294
A layer that produces a dense Tensor based on given feature_columns. Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single Tensor. This layer can be called multiple times with different features. This is the V2 version of this layer that uses name_scopes to create variables instead of variable_scopes. But this approach currently lacks support for partitioned variables. In that case, use the V1 version instead.
DenseFeatures Layer
A layer that produces a dense Tensor based on given feature_columns. Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single Tensor. This layer can be called multiple times with different features. This is the V2 version of this layer that uses name_scopes to create variables instead of variable_scopes. But this approach currently lacks support for partitioned variables. In that case, use the V1 version instead.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/DenseFeatures
Just your regular densely-connected NN layer.
Dense Layer
Just your regular densely-connected NN layer.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense
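A dense layer computes `output = activation(dot(input, kernel) + bias)`; a minimal pure-Python sketch (illustrative only, with `weights[j]` as output unit j's weight vector):

```python
def dense(x, weights, bias, activation=None):
    # output_j = activation(dot(x, weights[j]) + bias[j])
    out = [sum(xi * w for xi, w in zip(x, ws)) + b
           for ws, b in zip(weights, bias)]
    if activation is not None:
        out = [activation(v) for v in out]
    return out
```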
Arises when systems are used as decision aids for humans, since the human intermediary may act on predictions in ways that are typically not modeled in the system. However, it is still individuals using the deployed system.
Deployment Bias
Arises when systems are used as decision aids for humans, since the human intermediary may act on predictions in ways that are typically not modeled in the system. However, it is still individuals using the deployed system.
https://doi.org/10.6028/NIST.SP.1270
Depthwise 1D convolution. Depthwise convolution is a type of convolution in which each input channel is convolved with a different kernel (called a depthwise kernel). You can understand depthwise convolution as the first step in a depthwise separable convolution. It is implemented via the following steps: Split the input into individual channels. Convolve each channel with an individual depthwise kernel with depth_multiplier output channels. Concatenate the convolved outputs along the channels axis. Unlike a regular 1D convolution, depthwise convolution does not mix information across different input channels. The depth_multiplier argument determines how many filter are applied to one input channel. As such, it controls the amount of output channels that are generated per input channel in the depthwise step.
DepthwiseConv1D Layer
Depthwise 1D convolution. Depthwise convolution is a type of convolution in which each input channel is convolved with a different kernel (called a depthwise kernel). You can understand depthwise convolution as the first step in a depthwise separable convolution. It is implemented via the following steps: Split the input into individual channels. Convolve each channel with an individual depthwise kernel with depth_multiplier output channels. Concatenate the convolved outputs along the channels axis. Unlike a regular 1D convolution, depthwise convolution does not mix information across different input channels. The depth_multiplier argument determines how many filter are applied to one input channel. As such, it controls the amount of output channels that are generated per input channel in the depthwise step.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/DepthwiseConv1D
Depthwise 2D convolution.
DepthwiseConv2D Layer
Depthwise 2D convolution.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/DepthwiseConv2D
Systematic differences between groups in how outcomes are determined and may cause an over- or underestimation of the size of the effect.
Detection Bias
Systematic differences between groups in how outcomes are determined and may cause an over- or underestimation of the size of the effect.
https://doi.org/10.6028/NIST.SP.1270
A dialogue LLM is optimized for engaging in multi-turn conversations, understanding context and generating relevant, coherent responses continuously over many dialogue turns.
Dialogue LLM
conversational AI
multi-turn dialogue
Dialogue LLM
A dialogue LLM is optimized for engaging in multi-turn conversations, understanding context and generating relevant, coherent responses continuously over many dialogue turns.
TBD
A differentiable LLM has an architecture amenable to full end-to-end training via backpropagation, without relying on teacher forcing or unlikelihood training objectives.
Differentiable LLM
end-to-end training
fully backpropagable
Differentiable LLM
A differentiable LLM has an architecture amenable to full end-to-end training via backpropagation, without relying on teacher forcing or unlikelihood training objectives.
TBD
The transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.
Dimension Reduction
Dimensionality Reduction
The transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.
https://en.wikipedia.org/wiki/Dimensionality_reduction
A preprocessing layer which buckets continuous features by ranges.
Discretization Layer
A preprocessing layer which buckets continuous features by ranges.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Discretization
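The bucketing behavior can be sketched in plain Python (a minimal illustration of the semantics, not the Keras implementation): with boundaries `[b0, ..., bn]`, values map to bucket indices for the ranges `(-inf, b0), [b0, b1), ..., [bn, +inf)`.

```python
def bucketize(values, boundaries):
    # Map each continuous value to the index of the range it falls into.
    def bucket(v):
        idx = 0
        for b in boundaries:
            if v >= b:
                idx += 1
        return idx
    return [bucket(v) for v in values]
```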
Knowledge distillation involves training a smaller model to replicate the behavior of a larger model, aiming to compress the knowledge into a more compact form without significant loss of performance.
Purification
Refining
Knowledge compression
Teacher-student model
Distillation
Knowledge distillation involves training a smaller model to replicate the behavior of a larger model, aiming to compress the knowledge into a more compact form without significant loss of performance.
https://doi.org/10.48550/arXiv.2105.13093
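The teacher-student objective can be sketched as a cross-entropy between temperature-softened output distributions (a minimal illustration; real distillation typically mixes this with a hard-label loss):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student against the teacher's softened
    # distribution: the student learns to mimic the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```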
A domain-adapted LLM is first pre-trained on a broad corpus, then fine-tuned on domain-specific data to specialize its capabilities for particular domains or applications, like scientific literature or code generation.
Domain-Adapted LLM
domain robustness
transfer learning
Domain-Adapted LLM
A domain-adapted LLM is first pre-trained on a broad corpus, then fine-tuned on domain-specific data to specialize its capabilities for particular domains or applications, like scientific literature or code generation.
TBD
Layer that computes a dot product between samples in two tensors. E.g. if applied to a list of two tensors a and b of shape (batch_size, n), the output will be a tensor of shape (batch_size, 1) where each entry i will be the dot product between a[i] and b[i].
Dot Layer
Layer that computes a dot product between samples in two tensors. E.g. if applied to a list of two tensors a and b of shape (batch_size, n), the output will be a tensor of shape (batch_size, 1) where each entry i will be the dot product between a[i] and b[i].
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dot
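The sample-wise dot product described above can be sketched directly (illustrative pure Python for two batches of equal-length vectors):

```python
def batch_dot(a, b):
    # For two batches shaped (batch_size, n), return (batch_size, 1)
    # where entry i is dot(a[i], b[i]).
    return [[sum(x * y for x, y in zip(ai, bi))] for ai, bi in zip(a, b)]
```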
Applies Dropout to the input. The Dropout layer randomly sets input units to 0 with a frequency of rate at each step during training time, which helps prevent overfitting. Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over all inputs is unchanged. Note that the Dropout layer only applies when training is set to True such that no values are dropped during inference. When using model.fit, training will be appropriately set to True automatically, and in other contexts, you can set the kwarg explicitly to True when calling the layer. (This is in contrast to setting trainable=False for a Dropout layer. trainable does not affect the layer's behavior, as Dropout does not have any variables/weights that can be frozen during training.)
Dropout Layer
Applies Dropout to the input. The Dropout layer randomly sets input units to 0 with a frequency of rate at each step during training time, which helps prevent overfitting. Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over all inputs is unchanged. Note that the Dropout layer only applies when training is set to True such that no values are dropped during inference. When using model.fit, training will be appropriately set to True automatically, and in other contexts, you can set the kwarg explicitly to True when calling the layer. (This is in contrast to setting trainable=False for a Dropout layer. trainable does not affect the layer's behavior, as Dropout does not have any variables/weights that can be frozen during training.)
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout
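The drop-and-rescale behavior can be sketched as follows (a minimal illustration of the semantics, not the Keras implementation; the `seed` parameter is added here for reproducibility):

```python
import random

def dropout(inputs, rate, training=True, seed=0):
    # Zero each input with probability `rate`; scale survivors by
    # 1/(1 - rate) so the expected sum is unchanged. Identity at inference.
    if not training or rate == 0.0:
        return list(inputs)
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - rate)
    return [0.0 if rng.random() < rate else x * scale for x in inputs]
```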
The tendency of people with low ability in a given area or task to overestimate their self-assessed ability. Typically measured by comparing self-assessment with objective performance, often called subjective ability and objective ability, respectively.
Dunning-Kruger Effect
Dunning-Kruger Effect Bias
The tendency of people with low ability in a given area or task to overestimate their self-assessed ability. Typically measured by comparing self-assessment with objective performance, often called subjective ability and objective ability, respectively.
https://doi.org/10.6028/NIST.SP.1270
The exponential linear unit (ELU) with alpha > 0 is: x if x > 0 and alpha * (exp(x) - 1) if x < 0. The ELU hyperparameter alpha controls the value to which an ELU saturates for negative net inputs. ELUs diminish the vanishing gradient effect. ELUs have negative values, which push the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative, which decreases the variation and the information that is propagated to the next layer.
ELU
Exponential Linear Unit
ELU Function
The exponential linear unit (ELU) with alpha > 0 is: x if x > 0 and alpha * (exp(x) - 1) if x < 0. The ELU hyperparameter alpha controls the value to which an ELU saturates for negative net inputs. ELUs diminish the vanishing gradient effect. ELUs have negative values, which push the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative, which decreases the variation and the information that is propagated to the next layer.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/elu
Exponential Linear Unit.
ELU Layer
Exponential Linear Unit.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ELU
The echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.
ESN
Input, Recurrent, Output
Echo State Network
The echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.
https://en.wikipedia.org/wiki/Echo_state_network
Occurs when an inference is made about an individual based on their membership within a group.
Ecological Fallacy
Ecological Fallacy Bias
Occurs when an inference is made about an individual based on their membership within a group.
https://doi.org/10.6028/NIST.SP.1270
Turns positive integers (indexes) into dense vectors of fixed size.
Embedding Layer
Turns positive integers (indexes) into dense vectors of fixed size.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding
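The lookup behind an embedding layer is just table indexing (a minimal sketch; the small random initialization range mimics common practice and is an assumption here):

```python
import random

def make_embedding(vocab_size, dim, seed=0):
    # A lookup table: one dense vector of length `dim` per integer index.
    rng = random.Random(seed)
    return [[rng.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(vocab_size)]

def embed(table, indices):
    # Turn positive integer indexes into dense vectors of fixed size.
    return [table[i] for i in indices]
```

In a trained model these vectors are learned parameters rather than fixed random values.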
An embodied LLM integrates language with other modalities like vision, audio, robotics to enable grounded language understanding in real-world environments.
Embodied LLM
multimodal grounding
Embodied LLM
An embodied LLM integrates language with other modalities like vision, audio, robotics to enable grounded language understanding in real-world environments.
TBD
Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts.
Emergent Bias
Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts.
https://doi.org/10.6028/NIST.SP.1270
The Encoder-Decoder architecture was the original transformer architecture introduced in the Attention Is All You Need (https://arxiv.org/abs/1706.03762) paper. The encoder processes the input sequence and generates a hidden representation that summarizes the input information. The decoder uses this hidden representation to generate the desired output sequence. The encoder and decoder are trained end-to-end to maximize the likelihood of the correct output sequence given the input sequence.
LLM
Encoder-Decoder LLM
The Encoder-Decoder architecture was the original transformer architecture introduced in the Attention Is All You Need (https://arxiv.org/abs/1706.03762) paper. The encoder processes the input sequence and generates a hidden representation that summarizes the input information. The decoder uses this hidden representation to generate the desired output sequence. The encoder and decoder are trained end-to-end to maximize the likelihood of the correct output sequence given the input sequence.
https://www.practicalai.io/understanding-transformer-model-architectures/
The Encoder-only architecture is used when only encoding the input sequence is required and the decoder is not necessary. The input sequence is encoded into a fixed-length representation and then used as input to a classifier or a regressor to make a prediction. These models have a pre-trained general-purpose encoder but will require fine-tuning of the final classifier or regressor.
LLM
Encoder LLM
The Encoder-only architecture is used when only encoding the input sequence is required and the decoder is not necessary. The input sequence is encoded into a fixed-length representation and then used as input to a classifier or a regressor to make a prediction. These models have a pre-trained general-purpose encoder but will require fine-tuning of the final classifier or regressor.
https://www.practicalai.io/understanding-transformer-model-architectures/
An energy-based LLM models the explicit probability density over token sequences using an energy function, rather than an autoregressive factorization. This can improve modeling of long-range dependencies and global coherence.
Energy-Based LLM
energy scoring
explicit density modeling
Energy-Based LLM
An energy-based LLM models the explicit probability density over token sequences using an energy function, rather than an autoregressive factorization. This can improve modeling of long-range dependencies and global coherence.
TBD
An abstract parent class grouping LLMs based on model enhancement strategies.
Enhancement Strategies
An abstract parent class grouping LLMs based on model enhancement strategies.
TBD
Ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
Ensemble Learning
Ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
https://en.wikipedia.org/wiki/Ensemble_learning
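One simple ensemble combiner, hard majority voting, can be sketched as follows (illustrative only; `models` are any callables returning class labels):

```python
from collections import Counter

def majority_vote(models, x):
    # Combine several classifiers by hard voting: the most common
    # predicted label wins.
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```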
The effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them.
Error Propagation
Error Propagation Bias
The effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them.
https://doi.org/10.6028/NIST.SP.1270
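For the common special case of a sum of independent variables, the propagated uncertainty adds in quadrature (a minimal sketch of one standard formula; correlated errors need the full covariance treatment):

```python
import math

def propagate_uncertainty_sum(sigmas):
    # Uncertainty of f = x1 + ... + xn with independent errors:
    # sigma_f = sqrt(sigma_1^2 + ... + sigma_n^2)
    return math.sqrt(sum(s * s for s in sigmas))
```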
An ethical LLM is trained to uphold certain ethical principles, values or rules in its language generation to increase safety and trustworthiness.
Ethical LLM
constitutional AI
value alignment
Ethical LLM
An ethical LLM is trained to uphold certain ethical principles, values or rules in its language generation to increase safety and trustworthiness.
TBD
Arises when the testing or external benchmark populations do not equally represent the various parts of the user population or from the use of performance metrics that are not appropriate for the way in which the model will be used.
Evaluation Bias
Arises when the testing or external benchmark populations do not equally represent the various parts of the user population or from the use of performance metrics that are not appropriate for the way in which the model will be used.
https://doi.org/10.6028/NIST.SP.1270
An evolutionary LLM applies principles of evolutionary computation to optimize its structure and parameters, evolving over time to improve performance.
Evolutionary Language Model
evolutionary algorithms
genetic programming
Evolutionary LLM
An evolutionary LLM applies principles of evolutionary computation to optimize its structure and parameters, evolving over time to improve performance.
TBD
When specific groups of user populations are excluded from testing and subsequent analyses.
Exclusion Bias
When specific groups of user populations are excluded from testing and subsequent analyses.
https://doi.org/10.6028/NIST.SP.1270
An explainable LLM is designed to provide insights into its decision-making process, making it easier for users to understand and trust the model's outputs. It incorporates mechanisms for interpreting and explaining its predictions in human-understandable terms.
Explainable Language Model
XAI LLM
interpretability
model understanding
Explainable LLM
An explainable LLM is designed to provide insights into its decision-making process, making it easier for users to understand and trust the model's outputs. It incorporates mechanisms for interpreting and explaining its predictions in human-understandable terms.
TBD
The exponential function is a mathematical function denoted by f(x) = exp(x) or e^x.
Exponential Function
The exponential function is a mathematical function denoted by f(x) = exp(x) or e^x.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/exponential
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are random projections but with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are usually learned in a single step, which essentially amounts to learning a linear model. (https://en.wikipedia.org/wiki/Extreme_Learning_machine)
ELM
Input, Hidden, Output
Extreme Learning Machine
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are random projections but with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are usually learned in a single step, which essentially amounts to learning a linear model. (https://en.wikipedia.org/wiki/Extreme_Learning_machine)
https://en.wikipedia.org/wiki/Extreme_Learning_machine
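The single-step output-weight solve can be sketched in pure Python on a toy regression task. The data, the two tanh hidden units, and the least-squares solve via Cramer's rule are illustrative assumptions; real ELMs use larger hidden layers and a pseudo-inverse:

```python
import math, random

random.seed(0)

# Toy regression task: y = 2x.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2.0 * x for x in xs]

# Hidden layer: 2 randomly assigned (and never updated) tanh units.
w = [random.uniform(-1, 1) for _ in range(2)]   # input-to-hidden weights
b = [random.uniform(-1, 1) for _ in range(2)]   # hidden biases

def hidden(x):
    return [math.tanh(w[j] * x + b[j]) for j in range(2)]

# Output weights learned in a single step via the 2x2 normal equations
# (H^T H) beta = H^T y, solved with Cramer's rule.
H = [hidden(x) for x in xs]
a11 = sum(h[0] * h[0] for h in H)
a12 = sum(h[0] * h[1] for h in H)
a22 = sum(h[1] * h[1] for h in H)
c1 = sum(h[0] * y for h, y in zip(H, ys))
c2 = sum(h[1] * y for h, y in zip(H, ys))
det = a11 * a22 - a12 * a12
beta = [(c1 * a22 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det]

def predict(x):
    h = hidden(x)
    return beta[0] * h[0] + beta[1] * h[1]
```

The hidden weights are never trained; only the linear output map is fitted, which is the defining trait of the method.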
A factorized LLM decomposes the full language modeling task into multiple sub-components or experts that each focus on a subset of the information. This enables more efficient scaling.
Factorized LLM
Conditional masking
Product of experts
Factorized LLM
A factorized LLM decomposes the full language modeling task into multiple sub-components or experts that each focus on a subset of the information. This enables more efficient scaling.
TBD
Extracting specific features or patterns from the text before training to guide the model's learning process, such as syntactic information or semantic embeddings.
Attribute Extraction
Feature Isolation
Semantic embeddings
Syntactic information
Feature Extraction
Extracting specific features or patterns from the text before training to guide the model's learning process, such as syntactic information or semantic embeddings.
TBD
A federated LLM is trained in a decentralized manner across multiple devices or silos, without directly sharing private data. This enables collaborative training while preserving data privacy and security.
Federated LLM
decentralized training
privacy-preserving
Federated LLM
A federated LLM is trained in a decentralized manner across multiple devices or silos, without directly sharing private data. This enables collaborative training while preserving data privacy and security.
TBD
A technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging them.
Federated Learning
A technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging them.
https://en.wikipedia.org/wiki/Federated_learning
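A minimal sketch of the central aggregation step (FedAvg-style weighted averaging) under assumed toy client weights and dataset sizes; only model weights, never the local data, are shared with the server:

```python
# Each client trains locally and shares only its model weights.
client_weights = [
    [0.9, 1.1],   # client A's locally trained weights
    [1.1, 0.9],   # client B
    [1.0, 1.0],   # client C
]
sizes = [100, 50, 50]  # local dataset sizes, used as averaging weights

total = sum(sizes)
global_weights = [
    sum(w[i] * n for w, n in zip(client_weights, sizes)) / total
    for i in range(len(client_weights[0]))
]
print(global_weights)
```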
Effects that may occur when an algorithm learns from user behavior and feeds that behavior back into the model.
Feedback Loop Bias
Effects that may occur when an algorithm learns from user behavior and feeds that behavior back into the model.
https://doi.org/10.6028/NIST.SP.1270
A feedback-based approach in which the representation is formed iteratively, based on feedback received from the previous iteration's output. (https://arxiv.org/abs/1612.09508)
FBN
Input, Hidden, Output, Hidden
Feedback Network
A feedback-based approach in which the representation is formed iteratively, based on feedback received from the previous iteration's output. (https://arxiv.org/abs/1612.09508)
TBD
A statistical model in which the model parameters are fixed or non-random quantities.
FEM
Fixed Effects Model
A statistical model in which the model parameters are fixed or non-random quantities.
https://en.wikipedia.org/wiki/Fixed_effects_model
Flattens the input. Does not affect the batch size.
Flatten Layer
Flattens the input. Does not affect the batch size.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten
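The layer's behavior can be sketched in pure Python: every sample is collapsed to 1-D while the batch dimension is left untouched. The nested-list representation of tensors is an illustrative assumption:

```python
def flatten(batch):
    """Flatten each sample to 1-D; the batch dimension is unaffected."""
    def flat(x):
        if isinstance(x, list):
            return [v for item in x for v in flat(item)]
        return [x]
    return [flat(sample) for sample in batch]

# batch of 2 samples, each of shape (2, 2) -> each becomes shape (4,)
batch = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(flatten(batch))
```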
Applies a 2D fractional max pooling over an input signal composed of several input planes.
FractionalMaxPool2D
FractionalMaxPool2d
FractionalMaxPool2D Layer
Applies a 2D fractional max pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 3D fractional max pooling over an input signal composed of several input planes.
FractionalMaxPool3D
FractionalMaxPool3d
FractionalMaxPool3D Layer
Applies a 3D fractional max pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Function parent class
Function
Function parent class
TBD
Arises when biased results are reported in order to support or satisfy the funding agency or financial supporter of a research study; the same pressure can also originate with the individual researcher.
Funding Bias
Arises when biased results are reported in order to support or satisfy the funding agency or financial supporter of a research study; the same pressure can also originate with the individual researcher.
https://doi.org/10.6028/NIST.SP.1270
The Gaussian error linear unit (GELU) computes x * P(X <= x), where P(X) ~ N(0, 1). The GELU nonlinearity weights inputs by their value, rather than gating inputs by their sign as in ReLU.
GELU
Gaussian Error Linear Unit
GELU Function
The Gaussian error linear unit (GELU) computes x * P(X <= x), where P(X) ~ N(0, 1). The GELU nonlinearity weights inputs by their value, rather than gating inputs by their sign as in ReLU.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/gelu
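The exact form of x * P(X <= x) can be computed with the error function, since the standard normal CDF is 0.5 * (1 + erf(x / sqrt(2))). A pure-Python sketch:

```python
import math

def gelu(x):
    """GELU: x * P(X <= x) where X ~ N(0, 1), via the error function."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Unlike ReLU's hard sign gate, GELU weights inputs smoothly by value.
print(round(gelu(1.0), 6))
```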
Cell class for the GRU layer. This class processes one step within the whole time sequence input, whereas tf.keras.layer.GRU processes the whole sequence.
GRUCell Layer
Cell class for the GRU layer. This class processes one step within the whole time sequence input, whereas tf.keras.layer.GRU processes the whole sequence.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRUCell
Gated Recurrent Unit - Cho et al. 2014. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize performance. If a GPU is available and all the arguments to the layer meet the requirements of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: activation == tanh, recurrent_activation == sigmoid, recurrent_dropout == 0, unroll is False, use_bias is True, and reset_after is True; inputs, if masking is used, are strictly right-padded; and eager execution is enabled in the outermost context. There are two variants of the GRU implementation. The default one is based on v3 and applies the reset gate to the hidden state before matrix multiplication. The other one is based on the original implementation and has the order reversed. The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU, so it has separate biases for kernel and recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation='sigmoid'.
GRU Layer
Gated Recurrent Unit - Cho et al. 2014. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize performance. If a GPU is available and all the arguments to the layer meet the requirements of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: activation == tanh, recurrent_activation == sigmoid, recurrent_dropout == 0, unroll is False, use_bias is True, and reset_after is True; inputs, if masking is used, are strictly right-padded; and eager execution is enabled in the outermost context. There are two variants of the GRU implementation. The default one is based on v3 and applies the reset gate to the hidden state before matrix multiplication. The other one is based on the original implementation and has the order reversed. The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU, so it has separate biases for kernel and recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation='sigmoid'.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a long short-term memory (LSTM) with a forget gate, but has fewer parameters than an LSTM, as it lacks an output gate. GRU performance on certain tasks of polyphonic music modeling, speech signal modeling, and natural language processing was found to be similar to that of the LSTM. GRUs have been shown to exhibit better performance on certain smaller and less frequent datasets.
GRU
Input, Memory Cell, Output
Gated Recurrent Unit
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a long short-term memory (LSTM) with a forget gate, but has fewer parameters than an LSTM, as it lacks an output gate. GRU performance on certain tasks of polyphonic music modeling, speech signal modeling, and natural language processing was found to be similar to that of the LSTM. GRUs have been shown to exhibit better performance on certain smaller and less frequent datasets.
https://en.wikipedia.org/wiki/Gated_recurrent_unit
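One GRU step for a scalar input and state can be sketched directly from the gating equations. The weights are illustrative, and the state update shown, h = (1 - z) * h_prev + z * h_cand, is one common convention (the original paper writes the convex combination with z and 1 - z swapped):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU step for scalar input/state; p holds the gate weights."""
    z = sigmoid(p["wz"] * x + p["uz"] * h_prev + p["bz"])  # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h_prev + p["br"])  # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h_prev) + p["bh"])
    return (1.0 - z) * h_prev + z * h_cand  # no separate output gate

params = {"wz": 1.0, "uz": 0.5, "bz": 0.0,
          "wr": 1.0, "ur": 0.5, "br": 0.0,
          "wh": 1.0, "uh": 1.0, "bh": 0.0}
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = gru_step(x, h, params)
```

Note there is no output gate: the new state is a convex combination of the previous state and the tanh candidate, which is why the GRU has fewer parameters than an LSTM.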
Apply multiplicative 1-centered Gaussian noise. As it is a regularization layer, it is only active at training time.
GaussianDropout Layer
Apply multiplicative 1-centered Gaussian noise. As it is a regularization layer, it is only active at training time.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GaussianDropout
Apply additive zero-centered Gaussian noise. This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise (GS) is a natural choice as corruption process for real valued inputs. As it is a regularization layer, it is only active at training time.
GaussianNoise Layer
Apply additive zero-centered Gaussian noise. This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise (GS) is a natural choice as corruption process for real valued inputs. As it is a regularization layer, it is only active at training time.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GaussianNoise
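The behavior of both Gaussian regularization layers hinges on being active only at training time. A pure-Python sketch of the additive variant (the list-of-floats input representation is an illustrative assumption):

```python
import random

def gaussian_noise(inputs, stddev, training):
    """Additive zero-centered Gaussian noise; active only while training."""
    if not training:
        return list(inputs)  # identity at inference time
    return [x + random.gauss(0.0, stddev) for x in inputs]

clean = gaussian_noise([1.0, 2.0], stddev=0.1, training=False)
noisy = gaussian_noise([1.0, 2.0], stddev=0.1, training=True)
```

The multiplicative GaussianDropout variant would instead multiply each input by 1-centered Gaussian noise, with the same train-only gating.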
Methods that can learn novel classes from only a few samples per class while preventing catastrophic forgetting of base classes and maintaining classifier calibration across novel and base classes.
GFSL
Generalized Few-shot Learning
Methods that can learn novel classes from only a few samples per class while preventing catastrophic forgetting of base classes and maintaining classifier calibration across novel and base classes.
https://paperswithcode.com/paper/generalized-and-incremental-few-shot-learning/review/
This model generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.
GLM
Generalized Linear Model
This model generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.
https://en.wikipedia.org/wiki/Generalized_linear_model
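The link-function relationship can be illustrated with a Poisson GLM, one common instance where the canonical log link relates the linear predictor to the response mean; the coefficients below are illustrative:

```python
import math

def poisson_glm_mean(x, beta):
    """Poisson GLM with the canonical log link: mu = exp(x . beta).

    The linear predictor eta = x . beta is related to the response mean mu
    through the link g, with g(mu) = log(mu) = eta.
    """
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    return math.exp(eta)

beta = [0.1, 0.5]  # toy coefficients
mu = poisson_glm_mean([1.0, 2.0], beta)
# For a Poisson response the variance equals the mean, so the variance of
# each measurement is a function of its predicted value, as the definition says.
variance = mu
```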
A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning. The core idea of a GAN is based on "indirect" training through the discriminator, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
GAN
Backfed Input, Hidden, Matched Output-Input, Hidden, Matched Output-Input
Generative Adversarial Network
A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning. The core idea of a GAN is based on "indirect" training through the discriminator, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
https://en.wikipedia.org/wiki/Generative_adversarial_network
A GAN-augmented LLM incorporates a generative adversarial network (GAN) into its training process, using a discriminator network to provide a signal for generating more realistic and coherent text. This adversarial training can improve the quality and diversity of generated text.
GAN-LLM
Generative Adversarial Network-Augmented LLM
adversarial training
text generation
Generative Adversarial Network-Augmented LLM
A GAN-augmented LLM incorporates a generative adversarial network (GAN) into its training process, using a discriminator network to provide a signal for generating more realistic and coherent text. This adversarial training can improve the quality and diversity of generated text.
TBD
A generative commonsense LLM is trained to understand and model basic physics, causality and common sense about how the real world works.
Generative Commonsense LLM
causal modeling
physical reasoning
Generative Commonsense LLM
A generative commonsense LLM is trained to understand and model basic physics, causality and common sense about how the real world works.
TBD
A generative language interface enables users to engage in an interactive dialogue with an LLM, providing feedback to guide and refine the generated outputs iteratively.
Generative Language Interface
Interactive generation
Generative Language Interface
A generative language interface enables users to engage in an interactive dialogue with an LLM, providing feedback to guide and refine the generated outputs iteratively.
TBD
Global average pooling operation for temporal data.
GlobalAvgPool1D
GlobalAvgPool1d
GlobalAveragePooling1D Layer
Global average pooling operation for temporal data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D
Global average pooling operation for spatial data.
GlobalAvgPool2D
GlobalAvgPool2d
GlobalAveragePooling2D Layer
Global average pooling operation for spatial data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling2D
Global average pooling operation for 3D data.
GlobalAvgPool3D
GlobalAvgPool3d
GlobalAveragePooling3D Layer
Global average pooling operation for 3D data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling3D
Global max pooling operation for 1D temporal data.
GlobalMaxPool1D
GlobalMaxPool1d
GlobalMaxPooling1D Layer
Global max pooling operation for 1D temporal data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool1D
Global max pooling operation for spatial data.
GlobalMaxPool2D
GlobalMaxPool2d
GlobalMaxPooling2D Layer
Global max pooling operation for spatial data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool2D
Global max pooling operation for 3D data.
GlobalMaxPool3D
GlobalMaxPool3d
GlobalMaxPooling3D Layer
Global max pooling operation for 3D data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalMaxPool3D
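All of the global pooling layers above share one operation: collapsing every temporal/spatial position of each channel to a single value per channel. A pure-Python sketch for one sample (the channels-first list layout is an illustrative assumption):

```python
def global_avg_pool(sample):
    """sample: list of channels, each a flat list of positions."""
    return [sum(ch) / len(ch) for ch in sample]

def global_max_pool(sample):
    return [max(ch) for ch in sample]

sample = [[1.0, 2.0, 3.0], [4.0, 8.0, 0.0]]  # 2 channels, 3 positions
print(global_avg_pool(sample))  # [2.0, 4.0]
print(global_max_pool(sample))  # [3.0, 8.0]
```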
GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information. (https://arxiv.org/abs/1609.02907)
GCN
Input, Hidden, Hidden, Output
Graph Convolutional Network
GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information. (https://arxiv.org/abs/1609.02907)
https://arxiv.org/abs/1609.02907
A Graph Convolutional Policy Network (GCPN) is a general graph-convolutional-network-based model for goal-directed graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules.
GCPN
Input, Hidden, Hidden, Policy, Output
Graph Convolutional Policy Network
A Graph Convolutional Policy Network (GCPN) is a general graph-convolutional-network-based model for goal-directed graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules.
https://arxiv.org/abs/1806.02473
A graph LLM operates over structured inputs/outputs represented as graphs, enabling reasoning over explicit relational knowledge representations during language tasks.
Graph LLM
Structured representations
Graph LLM
A graph LLM operates over structured inputs/outputs represented as graphs, enabling reasoning over explicit relational knowledge representations during language tasks.
https://doi.org/10.48550/arXiv.2311.12399
A pattern of favoring members of one's in-group over out-group members. This can be expressed in evaluation of others, in allocation of resources, and in many other ways.
In-group Favoritism
In-group bias
In-group preference
In-group–out-group Bias
Intergroup bias
Group Bias
A pattern of favoring members of one's in-group over out-group members. This can be expressed in evaluation of others, in allocation of resources, and in many other ways.
https://en.wikipedia.org/wiki/In-group_favoritism
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.
GroupNorm
GroupNorm Layer
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.
https://pytorch.org/docs/stable/nn.html#normalization-layers
A psychological phenomenon that occurs when people in a group tend to make non-optimal decisions based on their desire to conform to the group, or fear of dissenting with the group. In groupthink, individuals often refrain from expressing their personal disagreement with the group, hesitating to voice opinions that do not align with the group.
Groupthink
Groupthink Bias
A psychological phenomenon that occurs when people in a group tend to make non-optimal decisions based on their desire to conform to the group, or fear of dissenting with the group. In groupthink, individuals often refrain from expressing their personal disagreement with the group, hesitating to voice opinions that do not align with the group.
https://doi.org/10.6028/NIST.SP.1270
A faster approximation of the sigmoid activation. Piecewise linear approximation of the sigmoid function. Ref: 'https://en.wikipedia.org/wiki/Hard_sigmoid'
Hard Sigmoid Function
A faster approximation of the sigmoid activation. Piecewise linear approximation of the sigmoid function. Ref: 'https://en.wikipedia.org/wiki/Hard_sigmoid'
https://www.tensorflow.org/api_docs/python/tf/keras/activations/hard_sigmoid
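A sketch of one common parameterization of the piecewise linear approximation (the Keras-style slope 0.2 and cutoffs at ±2.5 are assumptions; other libraries use different constants):

```python
def hard_sigmoid(x):
    """Piecewise linear approximation of the sigmoid (Keras-style slopes)."""
    if x < -2.5:
        return 0.0
    if x > 2.5:
        return 1.0
    return 0.2 * x + 0.5

print([hard_sigmoid(v) for v in (-3.0, 0.0, 3.0)])  # [0.0, 0.5, 1.0]
```

Avoiding the exponential makes this cheaper to evaluate than the true sigmoid, at the cost of a non-smooth approximation.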
A preprocessing layer which hashes and bins categorical features. This layer transforms categorical inputs to hashed output. It converts individual ints or strings to ints in a fixed range, element-wise. The stable hash function uses tensorflow::ops::Fingerprint to produce the same output consistently across all platforms. This layer uses FarmHash64 by default, which provides a consistent hashed output across different platforms and is stable across invocations, regardless of device and context, by mixing the input bits thoroughly. If you want to obfuscate the hashed output, you can also pass a random salt argument in the constructor. In that case, the layer will use the SipHash64 hash function, with the salt value serving as additional input to the hash function.
Hashing Layer
A preprocessing layer which hashes and bins categorical features. This layer transforms categorical inputs to hashed output. It converts individual ints or strings to ints in a fixed range, element-wise. The stable hash function uses tensorflow::ops::Fingerprint to produce the same output consistently across all platforms. This layer uses FarmHash64 by default, which provides a consistent hashed output across different platforms and is stable across invocations, regardless of device and context, by mixing the input bits thoroughly. If you want to obfuscate the hashed output, you can also pass a random salt argument in the constructor. In that case, the layer will use the SipHash64 hash function, with the salt value serving as additional input to the hash function.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Hashing
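The hash-and-bin idea can be sketched in pure Python. Note the stand-in hash: the real layer uses FarmHash64/SipHash64, which are not in the standard library, so md5 is used here purely to demonstrate stable, deterministic binning:

```python
import hashlib

def hash_bin(value, num_bins, salt=""):
    """Map a string/int feature to a bin in [0, num_bins) with a stable hash.

    Stand-in for the layer's FarmHash64 (or salted SipHash64) using md5.
    """
    digest = hashlib.md5((salt + str(value)).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_bins

features = ["cat", "dog", "cat", 42]
binned = [hash_bin(f, num_bins=8) for f in features]
```

Identical inputs always land in the same bin, and the salt changes the mapping without changing this property.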
A hidden layer is located between the input and output of the network; it applies weights to its inputs and directs them through an activation function as its output. In short, hidden layers perform nonlinear transformations of the inputs entered into the network. Hidden layers vary depending on the function of the neural network, and similarly, the layers may vary depending on their associated weights.
Hidden Layer
A hidden layer is located between the input and output of the network; it applies weights to its inputs and directs them through an activation function as its output. In short, hidden layers perform nonlinear transformations of the inputs entered into the network. Hidden layers vary depending on the function of the neural network, and similarly, the layers may vary depending on their associated weights.
https://deepai.org/machine-Learning-glossary-and-terms/hidden-layer-machine-Learning
Methods that group things according to a hierarchy.
Hierarchical Classification
Methods that group things according to a hierarchy.
https://en.wikipedia.org/wiki/Hierarchical_classification
Methods that seek to build a hierarchy of clusters.
HCL
Hierarchical Clustering
Methods that seek to build a hierarchy of clusters.
https://en.wikipedia.org/wiki/Hierarchical_clustering
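A minimal sketch of agglomerative (bottom-up) hierarchical clustering with single linkage on 1-D points; the data and the stop-at-k criterion are illustrative assumptions:

```python
def single_linkage(points, k):
    """Agglomeratively merge the two closest clusters until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest pair of members
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

print(single_linkage([0.0, 0.2, 5.0, 5.1, 9.0], k=3))
```

The sequence of merges forms the hierarchy (dendrogram); cutting it at different depths yields different numbers of clusters.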
A hierarchical LLM models language at multiple levels of granularity, learning hierarchical representations that can capture both low-level patterns and high-level abstractions.
Hierarchical LLM
multi-scale representations
Hierarchical LLM
A hierarchical LLM models language at multiple levels of granularity, learning hierarchical representations that can capture both low-level patterns and high-level abstractions.
TBD
Referring to the long-standing biases encoded in society over time. Related to, but distinct from, biases in historical description, or the interpretation, analysis, and explanation of history. A common example of historical bias is the tendency to view the larger world from a Western or European view.
Historical Bias
Referring to the long-standing biases encoded in society over time. Related to, but distinct from, biases in historical description, or the interpretation, analysis, and explanation of history. A common example of historical bias is the tendency to view the larger world from a Western or European view.
https://doi.org/10.6028/NIST.SP.1270
A Hopfield network is a form of recurrent artificial neural network and a type of spin glass system, popularised by John Hopfield in 1982, described earlier by Little in 1974, and based on Ernst Ising's work with Wilhelm Lenz on the Ising model. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, or with continuous variables. Hopfield networks also provide a model for understanding human memory. (https://en.wikipedia.org/wiki/Hopfield_network)
HN
Ising model of a neural network
Ising–Lenz–Little model
Backfed input
Hopfield Network
A Hopfield network is a form of recurrent artificial neural network and a type of spin glass system, popularised by John Hopfield in 1982, described earlier by Little in 1974, and based on Ernst Ising's work with Wilhelm Lenz on the Ising model. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, or with continuous variables. Hopfield networks also provide a model for understanding human memory. (https://en.wikipedia.org/wiki/Hopfield_network)
https://en.wikipedia.org/wiki/Hopfield_network
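The content-addressable recall can be sketched with Hebbian storage of a single binary pattern; the pattern and corruption below are illustrative:

```python
# Hebbian storage of one binary (+1/-1) pattern and associative recall.
pattern = [1, -1, 1, -1, 1]
n = len(pattern)
# weight matrix W[i][j] = p_i * p_j, with zero diagonal (no self-connections)
W = [[(pattern[i] * pattern[j] if i != j else 0) for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=5):
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):  # asynchronous binary threshold updates
            s = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

noisy = [1, -1, -1, -1, 1]  # stored pattern with one bit flipped
print(recall(noisy))
```

Starting from a corrupted cue, the threshold dynamics settle into the stored pattern, which is what makes the memory content-addressable.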
A bias wherein individuals perceive benign or ambiguous behaviors as hostile.
Hostile Attribution Bias
A bias wherein individuals perceive benign or ambiguous behaviors as hostile.
https://en.wikipedia.org/wiki/Interpretive_bias
Systematic errors in human thought that arise from relying on a limited number of heuristic principles, which reduce complex judgments of probability and value to simpler judgmental operations.
Human Bias
Systematic errors in human thought that arise from relying on a limited number of heuristic principles, which reduce complex judgments of probability and value to simpler judgmental operations.
https://doi.org/10.6028/NIST.SP.1270
When users rely on automation as a heuristic replacement for their own information seeking and processing.
Human Reporting Bias
When users rely on automation as a heuristic replacement for their own information seeking and processing.
https://doi.org/10.6028/NIST.SP.1270
A layer that performs image data preprocessing augmentations.
Image Augmentation Layer
A layer that performs image data preprocessing augmentations.
https://keras.io/guides/preprocessing_layers/
A layer that performs image data preprocessing operations.
Image Preprocessing Layer
A layer that performs image data preprocessing operations.
https://keras.io/guides/preprocessing_layers/
An unconscious belief, attitude, feeling, association, or stereotype that can affect the way in which humans process information, make decisions, and take actions.
Confirmatory Bias
Implicit Bias
An unconscious belief, attitude, feeling, association, or stereotype that can affect the way in which humans process information, make decisions, and take actions.
https://doi.org/10.6028/NIST.SP.1270
An implicit language model uses an energy function to score full sequences instead of factorizing probabilities autoregressively. This can better capture global properties and long-range dependencies.
Implicit Language Model
Energy-based models
Token-level scoring
Implicit Language Model
An implicit language model uses an energy function to score full sequences instead of factorizing probabilities autoregressively. This can better capture global properties and long-range dependencies.
TBD
Methods that train a network on a base set of classes and then present it with several novel classes, each with only a few labeled examples.
IFSL
Incremental Few-shot Learning
Methods that train a network on a base set of classes and then present it with several novel classes, each with only a few labeled examples.
https://arxiv.org/abs/1810.07218
Individual bias is a persistent point of view, or a limited list of such points of view, that one applies (e.g., "parent", "academic", "professional").
Individual Bias
Individual bias is a persistent point of view, or a limited list of such points of view, that one applies (e.g., "parent", "academic", "professional").
https://develop.consumerium.org/wiki/Individual_bias
Arises when applications that are built with machine learning are used to generate inputs for other machine learning algorithms. If the output is biased in any way, this bias may be inherited by systems using the output as input to learn other models.
Inherited Bias
Arises when applications that are built with machine learning are used to generate inputs for other machine learning algorithms. If the output is biased in any way, this bias may be inherited by systems using the output as input to learn other models.
https://doi.org/10.6028/NIST.SP.1270
The input layer of a neural network is composed of artificial input neurons, and brings the initial data into the system for further processing by subsequent layers of artificial neurons. The input layer is the very beginning of the workflow for the artificial neural network.
Input Layer
The input layer of a neural network is composed of artificial input neurons, and brings the initial data into the system for further processing by subsequent layers of artificial neurons. The input layer is the very beginning of the workflow for the artificial neural network.
https://www.techopedia.com/definition/33262/input-layer-neural-networks
Layer to be used as an entry point into a Network (a graph of layers).
InputLayer Layer
Layer to be used as an entry point into a Network (a graph of layers).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/InputLayer
Specifies the rank, dtype and shape of every input to a layer. Layers can expose (if appropriate) an input_spec attribute: an instance of InputSpec, or a nested structure of InputSpec instances (one per input tensor). These objects enable the layer to run input compatibility checks for input structure, input rank, input shape, and input dtype. A None entry in a shape is compatible with any dimension; a None shape is compatible with any shape.
InputSpec Layer
Specifies the rank, dtype and shape of every input to a layer. Layers can expose (if appropriate) an input_spec attribute: an instance of InputSpec, or a nested structure of InputSpec instances (one per input tensor). These objects enable the layer to run input compatibility checks for input structure, input rank, input shape, and input dtype. A None entry in a shape is compatible with any dimension; a None shape is compatible with any shape.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/InputSpec
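The None-wildcard shape compatibility rule described above can be sketched as a small check function (the tuple representation of shapes is an illustrative assumption):

```python
def shape_compatible(spec_shape, input_shape):
    """None entry matches any dimension; a None spec shape matches any shape."""
    if spec_shape is None:
        return True
    if len(spec_shape) != len(input_shape):
        return False
    return all(s is None or s == d for s, d in zip(spec_shape, input_shape))

print(shape_compatible((None, 32), (8, 32)))   # batch size is a wildcard
print(shape_compatible((None, 32), (8, 64)))   # feature dim mismatch
```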
Applies Instance Normalization over a 2D (unbatched) or 3D (batched) input as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
InstanceNorm1D
InstanceNorm1d
InstanceNorm1d Layer
Applies Instance Normalization over a 2D (unbatched) or 3D (batched) input as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
https://pytorch.org/docs/stable/nn.html#normalization-layers
Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
InstanceNorm2D
InstanceNorm2d
InstanceNorm2d Layer
Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
https://pytorch.org/docs/stable/nn.html#normalization-layers
Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
InstanceNorm3D
InstanceNorm3d
InstanceNorm3d Layer
Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
https://pytorch.org/docs/stable/nn.html#normalization-layers
In contrast to biases exhibited at the level of individual persons, institutional bias refers to a tendency exhibited at the level of entire institutions, where practices or norms result in the favoring or disadvantaging of certain social groups. Common examples include institutional racism and institutional sexism.
Institutional Bias
In contrast to biases exhibited at the level of individual persons, institutional bias refers to a tendency exhibited at the level of entire institutions, where practices or norms result in the favoring or disadvantaging of certain social groups. Common examples include institutional racism and institutional sexism.
https://doi.org/10.6028/NIST.SP.1270
An instruction-tuned LLM is fine-tuned to follow natural language instructions accurately and safely, learning to map from instructions to desired model behavior in a more controlled and principled way.
Instruction-Tuned LLM
constitutional AI
natural language instructions
Instruction-Tuned LLM
An instruction-tuned LLM is fine-tuned to follow natural language instructions accurately and safely, learning to map from instructions to desired model behavior in a more controlled and principled way.
TBD
A preprocessing layer which maps integer features to contiguous ranges.
IntegerLookup Layer
A preprocessing layer which maps integer features to contiguous ranges.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/IntegerLookup
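To illustrate the mapping this layer performs, here is a minimal pure-Python sketch (the function names `build_vocab` and `lookup` are illustrative, not part of the Keras API). It assumes the Keras defaults of a frequency-sorted vocabulary and index 0 reserved for out-of-vocabulary values:

```python
from collections import Counter

def build_vocab(samples):
    """Assign each distinct integer a contiguous index, most frequent first
    (mirroring the frequency-sorted vocabulary Keras builds by default)."""
    counts = Counter(x for row in samples for x in row)
    # Index 0 is reserved for out-of-vocabulary values, as in the Keras layer.
    return {val: i + 1 for i, (val, _) in enumerate(counts.most_common())}

def lookup(vocab, row):
    """Map raw integers to their contiguous indices; unknown values map to 0."""
    return [vocab.get(x, 0) for x in row]
```

Calling `build_vocab([[12, 1138, 42], [42, 1000, 36]])` assigns the most frequent value (42) index 1, and `lookup` then sends any unseen integer to the OOV index 0.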
An abstract parent class grouping LLMs based on model interfaces and integration.
Interface and Integration
An abstract parent class grouping LLMs based on model interfaces and integration.
TBD
An abstract parent class grouping LLMs based on model interpretability and ethics.
Interpretability and Ethics
An abstract parent class grouping LLMs based on model interpretability and ethics.
TBD
An interpretable LLM prioritizes transparency and ease of understanding in its operations, making its decision-making processes clear and rational to human users.
Interpretable Language Model
interpretability
model transparency
Interpretable LLM
An interpretable LLM prioritizes transparency and ease of understanding in its operations, making its decision-making processes clear and rational to human users.
TBD
A form of information processing bias that can occur when users interpret algorithmic outputs according to their internalized biases and views.
Interpretation Bias
A form of information processing bias that can occur when users interpret algorithmic outputs according to their internalized biases and views.
https://doi.org/10.6028/NIST.SP.1270
An algorithm to group objects by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors.
K-NN
KNN
K-nearest Neighbor Algorithm
An algorithm to group objects by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors.
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
An algorithm to classify objects by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors.
K-NN
KNN
K-nearest Neighbor Classification Algorithm
An algorithm to classify objects by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors.
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
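The plurality vote described above can be sketched in a few lines of pure Python (the name `knn_classify` is illustrative; real projects would typically use a library implementation such as scikit-learn's):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by plurality vote of its k nearest training points.

    `train` is a list of (point, label) pairs; points are equal-length tuples.
    """
    # Sort training points by Euclidean distance to the query, keep the k nearest.
    neighbors = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    # The predicted class is the most common label among those neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

With two well-separated clusters labeled 'a' and 'b', a query near either cluster is assigned that cluster's label.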
An algorithm to assign the average of the values of k nearest neighbors to objects.
K-NN
KNN
K-nearest Neighbor Regression Algorithm
An algorithm to assign the average of the values of k nearest neighbors to objects.
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
A knowledge-grounded LLM incorporates external knowledge sources or knowledge bases into the model architecture, enabling it to generate more factually accurate and knowledge-aware text.
Knowledge-Grounded LLM
factual grounding
knowledge integration
Knowledge-Grounded LLM
A knowledge-grounded LLM incorporates external knowledge sources or knowledge bases into the model architecture, enabling it to generate more factually accurate and knowledge-aware text.
TBD
Starting the training from a model already trained on a related task to reduce training time and improve performance on tasks with limited data.
Inductive Transfer
Skill Acquisition
Adaptation
Pretrained models
Knowledge Transfer
Starting the training from a model already trained on a related task to reduce training time and improve performance on tasks with limited data.
https://doi.org/10.1016/j.knosys.2015.01.010
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s and morphogenesis models dating back to Alan Turing in the 1950s.
KN
SOFM
SOM
Self-Organizing Feature Map
Self-Organizing Map
Input, Hidden
Kohonen Network
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s and morphogenesis models dating back to Alan Turing in the 1950s.
https://en.wikipedia.org/wiki/Self-organizing_map
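The competitive-learning update described above can be sketched in pure Python. This is a toy 1-D map, not a faithful Kohonen implementation; the function name and the decay schedules for the learning rate and neighborhood radius are illustrative choices:

```python
import math
import random

def train_som(data, grid_size=4, dim=2, epochs=50, lr=0.5, radius=1.5, seed=0):
    """Train a tiny 1-D self-organizing map with competitive learning.

    Nodes live on a line (indices 0..grid_size-1); each node holds a weight
    vector in input space.
    """
    rng = random.Random(seed)
    nodes = [[rng.random() for _ in range(dim)] for _ in range(grid_size)]
    for t in range(epochs):
        frac = t / epochs
        cur_lr = lr * (1 - frac)                 # learning rate decays over time
        cur_rad = max(radius * (1 - frac), 0.5)  # neighborhood shrinks over time
        for x in data:
            # Best-matching unit: the node whose weights are closest to the input.
            bmu = min(range(grid_size), key=lambda i: math.dist(nodes[i], x))
            for i in range(grid_size):
                # Neighborhood function: nodes near the BMU on the grid
                # are pulled toward the input as well.
                h = math.exp(-((i - bmu) ** 2) / (2 * cur_rad ** 2))
                nodes[i] = [w + cur_lr * h * (xj - w)
                            for w, xj in zip(nodes[i], x)]
    return nodes
```

After training on two well-separated clusters, different ends of the node chain specialize to different clusters, preserving topology.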
Applies a 1D power-average pooling over an input signal composed of several input planes.
LPPool1D
LPPool1d
LPPool1D Layer
Applies a 1D power-average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Applies a 2D power-average pooling over an input signal composed of several input planes.
LPPool2D
LPPool2d
LPPool2D Layer
Applies a 2D power-average pooling over an input signal composed of several input planes.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Cell class for the LSTM layer.
LSTMCell Layer
Cell class for the LSTM layer.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTMCell
Long Short-Term Memory layer - Hochreiter 1997. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: 1. activation == tanh, 2. recurrent_activation == sigmoid, 3. recurrent_dropout == 0, 4. unroll is False, 5. use_bias is True, 6. Inputs, if masking is used, are strictly right-padded, 7. Eager execution is enabled in the outermost context.
LSTM Layer
Long Short-Term Memory layer - Hochreiter 1997. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: 1. activation == tanh, 2. recurrent_activation == sigmoid, 3. recurrent_dropout == 0, 4. unroll is False, 5. use_bias is True, 6. Inputs, if masking is used, are strictly right-padded, 7. Eager execution is enabled in the outermost context.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM
Wraps arbitrary expressions as a Layer object.
Lambda Layer
Wraps arbitrary expressions as a Layer object.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda
A language interface LLM supports interactive semantic parsing, enabling users to provide feedback/corrections which are used to dynamically refine and update the language model.
Language Interface LLM
Interactive learning
Language Interface LLM
A language interface LLM supports interactive semantic parsing, enabling users to provide feedback/corrections which are used to dynamically refine and update the language model.
TBD
A large language model (LLM) is a language model consisting of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning.
LLM
Large Language Model
A large language model (LLM) is a language model consisting of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning.
https://en.wikipedia.org/wiki/Large_language_model
A regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model.
Lasso Regression
A regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model.
https://en.wikipedia.org/wiki/Lasso_(statistics)
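A small worked example of how the lasso performs both selection and shrinkage: under an orthonormal design, the lasso solution is soft-thresholding of the least-squares coefficients (the function name `soft_threshold` is illustrative):

```python
def soft_threshold(beta_ols, lam):
    """Lasso solution for one coefficient under an orthonormal design.

    Shrinks the least-squares estimate toward zero by lam (regularization)
    and snaps it to exactly zero when |beta| <= lam -- this exact zeroing
    is what performs variable selection.
    """
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0
```

For example, with penalty lam = 1.0 a coefficient of 3.0 shrinks to 2.0, while a small coefficient of 0.5 is eliminated entirely.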
Network layer parent class
Layer
Network layer parent class
TBD
This is the class from which all layers inherit. A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created in various places, at the convenience of the subclass implementer: in __init__(); in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time; in the first invocation of call(), with some caveats discussed below. Users will just instantiate a layer and then treat it as a callable.
Layer Layer
This is the class from which all layers inherit. A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created in various places, at the convenience of the subclass implementer: in __init__(); in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time; in the first invocation of call(), with some caveats discussed below. Users will just instantiate a layer and then treat it as a callable.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer
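The deferred-build lifecycle described above (state created in build(), which runs on the first __call__ once the input shape is known) can be mimicked in plain Python without TensorFlow; the `Scale` subclass below is a hypothetical toy layer, not part of Keras:

```python
class Layer:
    """Minimal sketch of the Keras Layer lifecycle: __call__ triggers a
    one-time build() with the input shape before delegating to call()."""
    def __init__(self):
        self.built = False

    def build(self, input_shape):
        pass  # subclasses create their weight variables here

    def __call__(self, inputs):
        if not self.built:
            self.build(len(inputs))  # the input "shape" is just the length here
            self.built = True
        return self.call(inputs)

    def call(self, inputs):
        raise NotImplementedError

class Scale(Layer):
    """Toy layer: one weight per input feature, created on first call."""
    def build(self, n):
        self.w = [2.0] * n

    def call(self, inputs):
        return [w * x for w, x in zip(self.w, inputs)]
```

The point of the pattern is that `Scale()` can be instantiated before anyone knows how many features it will receive; the weights only materialize when data first flows through it.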
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization.
LayerNorm
LayerNorm Layer
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization.
https://pytorch.org/docs/stable/nn.html#normalization-layers
Layer normalization layer (Ba et al., 2016). Normalize the activations of the previous layer for each given example in a batch independently, rather than across a batch like Batch Normalization. i.e. applies a transformation that maintains the mean activation within each example close to 0 and the activation standard deviation close to 1. Given a tensor inputs, moments are calculated and normalization is performed across the axes specified in axis.
LayerNormalization Layer
Layer normalization layer (Ba et al., 2016). Normalize the activations of the previous layer for each given example in a batch independently, rather than across a batch like Batch Normalization. i.e. applies a transformation that maintains the mean activation within each example close to 0 and the activation standard deviation close to 1. Given a tensor inputs, moments are calculated and normalization is performed across the axes specified in axis.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LayerNormalization
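The per-example transformation described above reduces to a few lines when written out by hand (a sketch for a single example with scalar gamma and beta; the framework versions learn per-feature gamma and beta):

```python
import math

def layer_norm(x, eps=1e-5, gamma=1.0, beta=0.0):
    """Normalize one example's activations to ~zero mean / unit variance,
    independently of any other example in the batch."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    # eps guards against division by zero for constant inputs.
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]
```

Applied to [1.0, 2.0, 3.0] this yields activations with mean ~0 and variance ~1, regardless of what the rest of the batch contains.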
A torch.nn.BatchNorm1d module with lazy initialization of the num_features argument of the BatchNorm1d that is inferred from the input.size(1).
LazyBatchNorm1D
LazyBatchNorm1d
LazyBatchNorm1D Layer
A torch.nn.BatchNorm1d module with lazy initialization of the num_features argument of the BatchNorm1d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
A torch.nn.BatchNorm2d module with lazy initialization of the num_features argument of the BatchNorm2d that is inferred from the input.size(1).
LazyBatchNorm2D
LazyBatchNorm2d
LazyBatchNorm2D Layer
A torch.nn.BatchNorm2d module with lazy initialization of the num_features argument of the BatchNorm2d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
A torch.nn.BatchNorm3d module with lazy initialization of the num_features argument of the BatchNorm3d that is inferred from the input.size(1).
LazyBatchNorm3D
LazyBatchNorm3d
LazyBatchNorm3D Layer
A torch.nn.BatchNorm3d module with lazy initialization of the num_features argument of the BatchNorm3d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
A torch.nn.InstanceNorm1d module with lazy initialization of the num_features argument of the InstanceNorm1d that is inferred from the input.size(1).
LazyInstanceNorm1D
LazyInstanceNorm1d
LazyInstanceNorm1d Layer
A torch.nn.InstanceNorm1d module with lazy initialization of the num_features argument of the InstanceNorm1d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
A torch.nn.InstanceNorm2d module with lazy initialization of the num_features argument of the InstanceNorm2d that is inferred from the input.size(1).
LazyInstanceNorm2D
LazyInstanceNorm2d
LazyInstanceNorm2d Layer
A torch.nn.InstanceNorm2d module with lazy initialization of the num_features argument of the InstanceNorm2d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
A torch.nn.InstanceNorm3d module with lazy initialization of the num_features argument of the InstanceNorm3d that is inferred from the input.size(1).
LazyInstanceNorm3D
LazyInstanceNorm3d
LazyInstanceNorm3d Layer
A torch.nn.InstanceNorm3d module with lazy initialization of the num_features argument of the InstanceNorm3d that is inferred from the input.size(1).
https://pytorch.org/docs/stable/nn.html#normalization-layers
Leaky version of a Rectified Linear Unit.
LeakyReLU Layer
Leaky version of a Rectified Linear Unit.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LeakyReLU
An abstract parent class grouping LLMs based on model learning paradigms.
Learning Paradigms
An abstract parent class grouping LLMs based on model learning paradigms.
TBD
A standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation.
Least-squares Analysis
A standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation.
https://en.wikipedia.org/wiki/Least_squares
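For the simplest overdetermined case, fitting a line y = a + b*x to more than two points, the least-squares solution has a closed form obtained by setting the derivatives of the squared-residual sum to zero (the function name is illustrative):

```python
def least_squares_fit(xs, ys):
    """Fit y = a + b*x by minimizing the sum of squared residuals.

    The closed form follows from setting d/da and d/db of
    sum((y - a - b*x)^2) to zero.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```

On points lying exactly on y = 1 + 2x the residuals can all be driven to zero, so the fit recovers a = 1, b = 2.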
A lifelong learning LLM can continually acquire new knowledge over time without forgetting previously learned information, maintaining a balance between plasticity and stability.
Continual Learning LLM
Lifelong Learning LLM
Catastrophic forgetting
Plasticity-Stability balance
Lifelong Learning LLM
A lifelong learning LLM can continually acquire new knowledge over time without forgetting previously learned information, maintaining a balance between plasticity and stability.
TBD
A linear function has the form f(x) = a + bx.
Linear Function
A linear function has the form f(x) = a + bx.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/linear
A linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).
Linear Regression
A linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).
https://en.wikipedia.org/wiki/Linear_regression
Arises when network attributes obtained from user connections, activities, or interactions differ and misrepresent the true behavior of the users.
Linking Bias
Arises when network attributes obtained from user connections, activities, or interactions differ and misrepresent the true behavior of the users.
https://doi.org/10.6028/NIST.SP.1270
A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time varying input into a spatio-temporal pattern of activations in the network nodes. The spatio-temporal patterns of activation are read out by linear discriminant units. The soup of recurrently connected nodes will end up computing a large variety of nonlinear functions on the input. Given a large enough variety of such nonlinear functions, it is theoretically possible to obtain linear combinations (using the read out units) to perform whatever mathematical operation is needed to perform a certain task, such as speech recognition or computer vision. The word liquid in the name comes from the analogy drawn to dropping a stone into a still body of water or other liquid. The falling stone will generate ripples in the liquid. The input (motion of the falling stone) has been converted into a spatio-temporal pattern of liquid displacement (ripples).
LSM
Input, Spiking Hidden, Output
Liquid State Machine Network
A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time varying input into a spatio-temporal pattern of activations in the network nodes. The spatio-temporal patterns of activation are read out by linear discriminant units. The soup of recurrently connected nodes will end up computing a large variety of nonlinear functions on the input. Given a large enough variety of such nonlinear functions, it is theoretically possible to obtain linear combinations (using the read out units) to perform whatever mathematical operation is needed to perform a certain task, such as speech recognition or computer vision. The word liquid in the name comes from the analogy drawn to dropping a stone into a still body of water or other liquid. The falling stone will generate ripples in the liquid. The input (motion of the falling stone) has been converted into a spatio-temporal pattern of liquid displacement (ripples).
https://en.wikipedia.org/wiki/Liquid_state_machine
Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension.
LocalResponseNorm
LocalResponseNorm Layer
Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension.
https://pytorch.org/docs/stable/nn.html#normalization-layers
The LocallyConnected1D layer works similarly to the Convolution1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
Locally-connected Layer
The LocallyConnected1D layer works similarly to the Convolution1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
https://faroit.com/keras-docs/1.2.2/layers/local/
Locally-connected layer for 1D inputs. The LocallyConnected1D layer works similarly to the Conv1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
LocallyConnected1D Layer
Locally-connected layer for 1D inputs. The LocallyConnected1D layer works similarly to the Conv1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LocallyConnected1D
Locally-connected layer for 2D inputs. The LocallyConnected2D layer works similarly to the Conv2D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
LocallyConnected2D Layer
Locally-connected layer for 2D inputs. The LocallyConnected2D layer works similarly to the Conv2D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/LocallyConnected2D
A statistical model that models the probability of an event taking place by having the log-odds for the event be a linear combination of one or more independent variables.
Logistic Regression
A statistical model that models the probability of an event taking place by having the log-odds for the event be a linear combination of one or more independent variables.
https://en.wikipedia.org/wiki/Logistic_regression
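The defining relationship, log-odds as a linear combination of the inputs, means prediction is just a dot product pushed through the sigmoid (the function name `predict_proba` is illustrative, borrowed from scikit-learn convention):

```python
import math

def predict_proba(x, weights, bias):
    """Probability of the event given features x: the log-odds are a
    linear combination of the inputs, mapped to (0, 1) by the sigmoid."""
    log_odds = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Zero log-odds correspond to even odds, probability 0.5; large positive log-odds push the probability toward 1.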
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition and anomaly detection in network traffic or IDSs (intrusion detection systems). A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
LSTM
Input, Memory Cell, Output
Long Short Term Memory
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition and anomaly detection in network traffic or IDSs (intrusion detection systems). A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
https://en.wikipedia.org/wiki/Long_short-term_memory
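One step of the cell-and-gates mechanism described above can be written out for a single scalar unit (a sketch of the standard LSTM equations; the weight layout `W` is an illustrative convention, not any library's API):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell with scalar state.

    W maps each gate name to (w_x, w_h, b). The input, forget, and output
    gates regulate what enters, stays in, and leaves the memory cell c.
    """
    gate = lambda name, act: act(W[name][0] * x + W[name][1] * h_prev + W[name][2])
    i = gate("i", sigmoid)      # input gate: how much new content to write
    f = gate("f", sigmoid)      # forget gate: how much old content to keep
    o = gate("o", sigmoid)      # output gate: how much of the cell to expose
    g = gate("g", math.tanh)    # candidate cell value
    c = f * c_prev + i * g      # cell state: keep + write
    h = o * math.tanh(c)        # hidden state: gated read-out
    return h, c
```

With all weights 1 and biases 0, a zero input gives gates of 0.5 and a zero candidate, so exactly half of the previous cell state survives the step.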
When automation leads to humans being unaware of their situation such that, when control of a system is given back to them in a situation where humans and machines cooperate, they are unprepared to assume their duties. This can be a loss of awareness over what automation is and isn’t taking care of.
Loss Of Situational Awareness Bias
When automation leads to humans being unaware of their situation such that, when control of a system is given back to them in a situation where humans and machines cooperate, they are unprepared to assume their duties. This can be a loss of awareness over what automation is and isn’t taking care of.
https://doi.org/10.6028/NIST.SP.1270
A low-resource LLM is optimized for performance in scenarios with limited data, computational resources, or for languages with sparse datasets.
Low-Resource Language Model
low-resource languages
resource-efficient
Low-Resource LLM
A low-resource LLM is optimized for performance in scenarios with limited data, computational resources, or for languages with sparse datasets.
TBD
A field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks.
Machine Learning
A field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks.
https://en.wikipedia.org/wiki/Machine_learning
Methods based on the assumption that one's observed data lie on a low-dimensional manifold embedded in a higher-dimensional space.
Manifold Learning
Methods based on the assumption that one's observed data lie on a low-dimensional manifold embedded in a higher-dimensional space.
https://arxiv.org/abs/2011.01307
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
MC
MP
Markov Process
Probabilistic Hidden
Markov Chain
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
https://en.wikipedia.org/wiki/Markov_chain
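A discrete-time chain is fully specified by its transition matrix, and evolving a state distribution is one matrix-vector product per step. A minimal sketch with a hypothetical two-state weather chain (the transition probabilities are made up for illustration):

```python
def step(dist, P):
    """One discrete time step: the next-state distribution is dist @ P,
    where P[i][j] = P(next state = j | current state = i)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical two-state chain: state 0 = sunny, state 1 = rainy.
P = [[0.9, 0.1],
     [0.5, 0.5]]
dist = [1.0, 0.0]       # start: certainly sunny
for _ in range(50):     # repeated steps approach the stationary distribution
    dist = step(dist, P)
```

For this matrix the stationary distribution solves pi = pi P, giving pi = [5/6, 1/6], and the iteration converges to it regardless of the starting state.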
A masked language model is a type of language model that is trained to predict randomly masked tokens in a sequence, based on the remaining unmasked tokens. This allows it to build deep bidirectional representations that can be effectively transferred to various NLP tasks via fine-tuning.
Masked Language Model
bidirectional encoder
denoising autoencoder
Masked Language Model
A masked language model is a type of language model that is trained to predict randomly masked tokens in a sequence, based on the remaining unmasked tokens. This allows it to build deep bidirectional representations that can be effectively transferred to various NLP tasks via fine-tuning.
TBD
Masks a sequence by using a mask value to skip timesteps. For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking). If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.
Masking Layer
Masks a sequence by using a mask value to skip timesteps. For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking). If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Masking
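The masking rule, skip a timestep only when every feature equals mask_value, is easy to state in code (a pure-Python sketch of the mask computation, not the Keras implementation; the function name is illustrative):

```python
def compute_mask(batch, mask_value=0.0):
    """Per-timestep mask for a batch shaped [samples][timesteps][features]:
    a timestep is kept (True) only if any of its feature values differs
    from mask_value, matching the Keras Masking rule."""
    return [[any(v != mask_value for v in timestep) for timestep in sample]
            for sample in batch]
```

A timestep of all zeros is masked out, while a timestep with even one nonzero feature passes through.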
Max pooling operation for 1D temporal data. Downsamples the input representation by taking the maximum value over a spatial window of size pool_size. The window is shifted by strides. The resulting output, when using the "valid" padding option, has a shape of: output_shape = (input_shape - pool_size + 1) / strides. The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides.
MaxPool1D
MaxPool1d
MaxPooling1D
MaxPooling1d
MaxPooling1D Layer
Max pooling operation for 1D temporal data. Downsamples the input representation by taking the maximum value over a spatial window of size pool_size. The window is shifted by strides. The resulting output, when using the "valid" padding option, has a shape of: output_shape = (input_shape - pool_size + 1) / strides. The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool1D
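The windowed-maximum operation with "valid" padding can be sketched for a single 1-D sequence (the function name is illustrative; the real layer operates on batched, multi-channel tensors):

```python
def max_pool_1d(seq, pool_size=2, strides=2):
    """"valid"-padded 1-D max pooling: slide a window of pool_size along
    the sequence in steps of strides and keep the max of each window."""
    return [max(seq[start:start + pool_size])
            for start in range(0, len(seq) - pool_size + 1, strides)]
```

For example, pooling [1, 3, 2, 5, 4, 6] with pool_size=2, strides=2 yields one maximum per non-overlapping pair.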
Max pooling operation for 2D spatial data.
MaxPool2D
MaxPool2d
MaxPooling2D
MaxPooling2d
MaxPooling2D Layer
Max pooling operation for 2D spatial data.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D
Max pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the maximum value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension.
MaxPool3D
MaxPool3d
MaxPooling3D
MaxPooling3d
MaxPooling3D Layer
Max pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the maximum value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool3D
Computes a partial inverse of MaxPool1d.
MaxUnpool1D
MaxUnpool1d
MaxUnpool1D Layer
Computes a partial inverse of MaxPool1d.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Computes a partial inverse of MaxPool2d.
MaxUnpool2D
MaxUnpool2d
MaxUnpool2D Layer
Computes a partial inverse of MaxPool2d.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Computes a partial inverse of MaxPool3d.
MaxUnpool3D
MaxUnpool3d
MaxUnpool3D Layer
Computes a partial inverse of MaxPool3d.
https://pytorch.org/docs/stable/nn.html#pooling-layers
Layer that computes the element-wise maximum of a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Maximum Layer
Layer that computes the element-wise maximum of a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Maximum
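For 1-D inputs the element-wise reduction is a one-liner (a sketch; the Keras layer handles arbitrary same-shape tensors):

```python
def maximum(tensors):
    """Element-wise maximum of a list of same-shape 1-D tensors."""
    return [max(vals) for vals in zip(*tensors)]
```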
Arises when features and labels are proxies for desired quantities, potentially leaving out important factors or introducing group or input-dependent noise that leads to differential performance.
Measurement Bias
Arises when features and labels are proxies for desired quantities, potentially leaving out important factors or introducing group or input-dependent noise that leads to differential performance.
https://doi.org/10.6028/NIST.SP.1270
A memory-augmented LLM incorporates external writeable and readable memory components, allowing it to store and retrieve information over long contexts.
Memory-Augmented LLM
external memory
Memory-Augmented LLM
A memory-augmented LLM incorporates external writeable and readable memory components, allowing it to store and retrieve information over long contexts.
TBD
A layer used to merge a list of inputs.
Merging Layer
A layer used to merge a list of inputs.
https://www.tutorialspoint.com/keras/keras_merge_layer.htm
Automatic learning algorithms applied to metadata about machine learning experiments.
Meta-Learning
Automatic learning algorithms applied to metadata about machine learning experiments.
https://en.wikipedia.org/wiki/Meta_learning_(computer_science)
A meta-learning LLM is trained in a way that allows it to quickly adapt to new tasks or datasets through only a few examples or fine-tuning steps, leveraging meta-learned priors about how to efficiently learn.
Meta-Learning LLM
few-shot adaptation
learning to learn
Meta-Learning LLM
A meta-learning LLM is trained in a way that allows it to quickly adapt to new tasks or datasets through only a few examples or fine-tuning steps, leveraging meta-learned priors about how to efficiently learn.
TBD
Methods which can learn a representation function that maps objects into an embedded space.
Distance Metric Learning
Metric Learning
Methods which can learn a representation function that maps objects into an embedded space.
https://paperswithcode.com/task/metric-learning
Layer that computes the minimum (element-wise) of a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Minimum Layer
Layer that computes the minimum (element-wise) of a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Minimum
A Mixture-of-Experts LLM dynamically selects and combines outputs from multiple expert submodels, allowing for efficient scaling by conditionally activating only a subset of model components for each input.
Mixture-of-Experts LLM
MoE LLM
conditional computation
model parallelism
Mixture-of-Experts LLM
A Mixture-of-Experts LLM dynamically selects and combines outputs from multiple expert submodels, allowing for efficient scaling by conditionally activating only a subset of model components for each input.
TBD
When modal interfaces confuse human operators, who misunderstand which mode the system is using, taking actions which are correct for a different mode but incorrect for their current situation. This is the cause of many deadly accidents, but also a source of confusion in everyday life.
Mode Confusion Bias
When modal interfaces confuse human operators, who misunderstand which mode the system is using, taking actions which are correct for a different mode but incorrect for their current situation. This is the cause of many deadly accidents, but also a source of confusion in everyday life.
https://doi.org/10.6028/NIST.SP.1270
An abstract parent class grouping LLMs based on model architecture.
Model Architecture
An abstract parent class grouping LLMs based on model architecture.
TBD
Techniques aimed at making models more efficient, such as knowledge distillation.
Computational Efficiency
Model Optimization
Model Efficiency
Techniques aimed at making models more efficient, such as knowledge distillation.
https://doi.org/10.1145/3578938
The bias introduced while using the data to select a single seemingly “best” model from a large set of models employing many predictor variables. Model selection bias also occurs when an explanatory variable has a weak relationship with the response variable.
Model Selection Bias
The bias introduced while using the data to select a single seemingly “best” model from a large set of models employing many predictor variables. Model selection bias also occurs when an explanatory variable has a weak relationship with the response variable.
https://doi.org/10.6028/NIST.SP.1270
A modular LLM consists of multiple specialized components or skills that can be dynamically composed and recombined to solve complex tasks, mimicking the modular structure of human cognition.
Modular LLM
component skills
skill composition
Modular LLM
A modular LLM consists of multiple specialized components or skills that can be dynamically composed and recombined to solve complex tasks, mimicking the modular structure of human cognition.
TBD
A multi-task LLM is trained jointly on multiple language tasks simultaneously, learning shared representations that transfer across tasks.
Multi-Task LLM
transfer learning
Multi-Task LLM
A multi-task LLM is trained jointly on multiple language tasks simultaneously, learning shared representations that transfer across tasks.
TBD
MultiHeadAttention layer. This is an implementation of multi-headed attention as described in the paper "Attention Is All You Need" (Vaswani et al., 2017). If query, key, and value are the same, then this is self-attention. Each timestep in query attends to the corresponding sequence in key, and returns a fixed-width vector. This layer first projects query, key and value. These are (effectively) a list of tensors of length num_attention_heads, where the corresponding shapes are (batch_size, <query dimensions>, key_dim), (batch_size, <key/value dimensions>, key_dim), (batch_size, <key/value dimensions>, value_dim). Then, the query and key tensors are dot-producted and scaled. These are softmaxed to obtain attention probabilities. The value tensors are then interpolated by these probabilities, then concatenated back to a single tensor. Finally, the result tensor, with value_dim as its last dimension, can take a linear projection and be returned. When using MultiHeadAttention inside a custom Layer, the custom Layer must implement build() and call MultiHeadAttention's _build_from_signature(). This enables weights to be restored correctly when the model is loaded.
MultiHeadAttention Layer
MultiHeadAttention layer. This is an implementation of multi-headed attention as described in the paper "Attention Is All You Need" (Vaswani et al., 2017). If query, key, and value are the same, then this is self-attention. Each timestep in query attends to the corresponding sequence in key, and returns a fixed-width vector. This layer first projects query, key and value. These are (effectively) a list of tensors of length num_attention_heads, where the corresponding shapes are (batch_size, <query dimensions>, key_dim), (batch_size, <key/value dimensions>, key_dim), (batch_size, <key/value dimensions>, value_dim). Then, the query and key tensors are dot-producted and scaled. These are softmaxed to obtain attention probabilities. The value tensors are then interpolated by these probabilities, then concatenated back to a single tensor. Finally, the result tensor, with value_dim as its last dimension, can take a linear projection and be returned. When using MultiHeadAttention inside a custom Layer, the custom Layer must implement build() and call MultiHeadAttention's _build_from_signature(). This enables weights to be restored correctly when the model is loaded.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention
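The scaled dot-product at the core of each head can be sketched with NumPy. This is a single head only, omitting the learned query/key/value projections, the per-head split, and the final concatenation and output projection; the function name is illustrative.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V -- the core of one attention head."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                # (num_queries, num_keys)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # attention probabilities
    return weights @ v                             # values interpolated by weights
```

Each output row is a convex combination of the value rows, so the attention weights in each row sum to 1.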
Methods that classify instances into one of three or more classes (classifying instances into one of two classes is called binary classification).
Multinomial Classification
Multiclass Classification
Methods that classify instances into one of three or more classes (classifying instances into one of two classes is called binary classification).
https://en.wikipedia.org/wiki/Multiclass_classification
A method that translates information about the pairwise distances among a set of objects or individuals into a configuration of points mapped into an abstract Cartesian space.
MDS
Multidimensional Scaling
A method that translates information about the pairwise distances among a set of objects or individuals into a configuration of points mapped into an abstract Cartesian space.
https://en.wikipedia.org/wiki/Multidimensional_scaling
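Classical (Torgerson) MDS can be sketched with NumPy: square the distances, double-center them, and embed via the top eigenvectors of the resulting Gram matrix. The function name is illustrative.

```python
import numpy as np

def classical_mds(distances, n_components=2):
    """Embed points from a pairwise distance matrix (classical MDS)."""
    d2 = np.asarray(distances, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # Gram matrix of centered points
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Coordinates: eigenvectors scaled by sqrt of (non-negative) eigenvalues.
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
```

For points that truly lie in a Euclidean space of the chosen dimension, the recovered configuration reproduces the input distances exactly (up to rotation and reflection).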
A multilingual LLM is trained on text from multiple languages, learning shared representations that enable zero-shot or few-shot transfer to new languages.
Multilingual LLM
cross-lingual transfer
Multilingual LLM
A multilingual LLM is trained on text from multiple languages, learning shared representations that enable zero-shot or few-shot transfer to new languages.
TBD
Methods which can create models that can process and link information using various modalities.
Multimodal Deep Learning
Methods which can create models that can process and link information using various modalities.
https://arxiv.org/abs/2105.11087
A multimodal fusion LLM learns joint representations across different modalities like text, vision and audio in an end-to-end fashion for better cross-modal understanding and generation.
Multimodal Fusion LLM
cross-modal grounding
Multimodal Fusion LLM
A multimodal fusion LLM learns joint representations across different modalities like text, vision and audio in an end-to-end fashion for better cross-modal understanding and generation.
TBD
Methods which learn joint representations of different modalities.
Multimodal Learning
Methods which learn joint representations of different modalities.
TBD
A multimodal transformer is a transformer architecture that can process and relate information from different modalities, such as text, images, and audio. It uses a shared embedding space and attention mechanism to learn joint representations across modalities.
Multimodal Transformer
unified encoder
vision-language model
Multimodal Transformer
A multimodal transformer is a transformer architecture that can process and relate information from different modalities, such as text, images, and audio. It uses a shared embedding space and attention mechanism to learn joint representations across modalities.
TBD
Layer that multiplies (element-wise) a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
Multiply Layer
Layer that multiplies (element-wise) a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Multiply
A subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.
NLP
Natural Language Processing
A subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.
https://en.wikipedia.org/wiki/Natural_language_processing
Network parent class
Network
Network parent class
TBD
A Neural Turing machine (NTM) is a recurrent neural network model. The approach was published by Alex Graves et al. in 2014. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.
NTM
Input, Hidden, Spiking Hidden, Output
Neural Turing Machine Network
A Neural Turing machine (NTM) is a recurrent neural network model. The approach was published by Alex Graves et al. in 2014. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.
https://en.wikipedia.org/wiki/Neural_Turing_machine
A neuro-symbolic LLM combines neural language modeling with symbolic reasoning components, leveraging structured knowledge representations and logical inferences to improve reasoning capabilities.
Neuro-Symbolic LLM
knowledge reasoning
symbolic grounding
Neuro-Symbolic LLM
A neuro-symbolic LLM combines neural language modeling with symbolic reasoning components, leveraging structured knowledge representations and logical inferences to improve reasoning capabilities.
TBD
Noisy dense layer that injects random noise into the weights of a dense layer. Noisy dense layers are fully connected layers whose weights and biases are augmented by factorised Gaussian noise. The factorised Gaussian noise is controlled through gradient descent by a second weights layer. A NoisyDense layer implements the operation: $$ \mathrm{NoisyDense}(x) = \mathrm{activation}(\mathrm{dot}(x, \mu + (\sigma \cdot \epsilon)) + \mathrm{bias}) $$ where mu is the standard weights layer, epsilon is the factorised Gaussian noise, and sigma is a second weights layer which controls epsilon.
Noise Dense Layer
Noisy dense layer that injects random noise into the weights of a dense layer. Noisy dense layers are fully connected layers whose weights and biases are augmented by factorised Gaussian noise. The factorised Gaussian noise is controlled through gradient descent by a second weights layer. A NoisyDense layer implements the operation: $$ \mathrm{NoisyDense}(x) = \mathrm{activation}(\mathrm{dot}(x, \mu + (\sigma \cdot \epsilon)) + \mathrm{bias}) $$ where mu is the standard weights layer, epsilon is the factorised Gaussian noise, and sigma is a second weights layer which controls epsilon.
https://www.tensorflow.org/addons/api_docs/python/tfa/layers/NoisyDense
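The operation above can be sketched with NumPy. Note this sketch draws independent Gaussian noise per weight rather than TFA's factorised noise, and all names are illustrative; with sigma set to zero it reduces to an ordinary dense layer.

```python
import numpy as np

def noisy_dense(x, mu, sigma, bias, activation=np.tanh, rng=None):
    """y = activation(x @ (mu + sigma * eps) + bias), eps ~ N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(mu.shape)  # fresh noise on every call
    return activation(x @ (mu + sigma * eps) + bias)
```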
A preprocessing layer which normalizes continuous features.
Normalization Layer
A preprocessing layer which normalizes continuous features.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization
A layer that performs numerical data preprocessing operations.
Numerical Features Preprocessing Layer
A layer that performs numerical data preprocessing operations.
https://keras.io/guides/preprocessing_layers/
A method which aims to classify objects from one, or only a few, examples.
OSL
One-shot Learning
A method which aims to classify objects from one, or only a few, examples.
https://en.wikipedia.org/wiki/One-shot_learning
An ordinal LLM is trained to model ordinal relationships and rank outputs, rather than model probability distributions over text sequences directly.
Ordinal LLM
preference modeling
ranking
Ordinal LLM
An ordinal LLM is trained to model ordinal relationships and rank outputs, rather than model probability distributions over text sequences directly.
TBD
The output layer in an artificial neural network is the last layer of neurons that produces given outputs for the program. Though they are made much like other artificial neurons in the neural network, output layer neurons may be built or observed in a different way, given that they are the last “actor” nodes on the network.
Output Layer
The output layer in an artificial neural network is the last layer of neurons that produces given outputs for the program. Though they are made much like other artificial neurons in the neural network, output layer neurons may be built or observed in a different way, given that they are the last “actor” nodes on the network.
https://www.techopedia.com/definition/33263/output-layer-neural-networks
Parametric Rectified Linear Unit.
PReLU Layer
Parametric Rectified Linear Unit.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/PReLU
The perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. (https://en.wikipedia.org/wiki/Perceptron)
SLP
Single Layer Perceptron
Input, Output
Perceptron
The perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. (https://en.wikipedia.org/wiki/Perceptron)
TBD
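The perceptron learning rule can be sketched in plain Python: on each misclassified sample, move the weights toward (label 1) or away from (label 0) its feature vector. The names are illustrative.

```python
def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Train a single-layer perceptron with the classic update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0

    def predict(x):
        # Linear predictor followed by a hard threshold.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(x)  # -1, 0, or +1
            if error:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b, predict
```

On linearly separable data (e.g. the AND function) the rule is guaranteed to converge in a finite number of updates.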
Permutes the dimensions of the input according to a given pattern. Useful, e.g., for connecting RNNs and convnets.
Permute Layer
Permutes the dimensions of the input according to a given pattern. Useful, e.g., for connecting RNNs and convnets.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Permute
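In NumPy terms a Permute layer is an axis transposition; Keras' pattern is 1-indexed and excludes the batch axis, so pattern (2, 1) corresponds to swapping the last two axes below:

```python
import numpy as np

batch = np.zeros((32, 10, 64))             # (batch, timesteps, features)
permuted = np.transpose(batch, (0, 2, 1))  # Keras Permute((2, 1)) equivalent
```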
A personalized LLM adapts its language modeling and generation to the preferences, style and persona of individual users or audiences.
Personalized LLM
user adaptation LLM
Personalized LLM
A personalized LLM adapts its language modeling and generation to the preferences, style and persona of individual users or audiences.
TBD
Pooling layers serve the dual purposes of mitigating the sensitivity of convolutional layers to location and of spatially downsampling representations.
Pooling Layer
Pooling layers serve the dual purposes of mitigating the sensitivity of convolutional layers to location and of spatially downsampling representations.
https://d2l.ai/chapter_convolutional-neural-networks/pooling.html
A form of selection bias that occurs when items that are more popular are more exposed and less popular items are under-represented.
Popularity Bias
A form of selection bias that occurs when items that are more popular are more exposed and less popular items are under-represented.
https://doi.org/10.6028/NIST.SP.1270
Systematic distortions in demographics or other user characteristics between a population of users represented in a dataset or on a platform and some target population.
Population Bias
Systematic distortions in demographics or other user characteristics between a population of users represented in a dataset or on a platform and some target population.
https://doi.org/10.6028/NIST.SP.1270
A range of techniques and processes applied to data before it is used in machine learning models or AI algorithms.
Preprocessing
A range of techniques and processes applied to data before it is used in machine learning models or AI algorithms.
https://doi.org/10.1109/ICDE.2019.00245
A layer that performs data preprocessing operations.
Preprocessing Layer
A layer that performs data preprocessing operations.
https://www.tensorflow.org/guide/keras/preprocessing_layers
Biases arising from how information is presented on the Web, via a user interface, due to rating or ranking of output, or through users’ own self-selected, biased interaction.
Presentation Bias
Biases arising from how information is presented on the Web, via a user interface, due to rating or ranking of output, or through users’ own self-selected, biased interaction.
https://doi.org/10.6028/NIST.SP.1270
A method for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data.
PCA
Principal Component Analysis
A method for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data.
https://en.wikipedia.org/wiki/Principal_component_analysis
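A minimal NumPy sketch, computing principal components via the SVD of the centered data (the function name is illustrative):

```python
import numpy as np

def pca(x, n_components):
    """Project centered data onto the top principal directions (via SVD)."""
    x = x - x.mean(axis=0)                    # center each feature
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    components = vt[:n_components]            # rows are principal directions
    return x @ components.T, components       # (scores, components)
```

If the data actually lies in an n_components-dimensional subspace, projecting and then reconstructing (scores @ components + mean) recovers it exactly.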
A probabilistic model for which a graph expresses the conditional dependence structure between random variables.
Graphical Model
PGM
Structured Probabilistic Model
Probabilistic Graphical Model
A probabilistic model for which a graph expresses the conditional dependence structure between random variables.
https://en.wikipedia.org/wiki/Graphical_model
Methods that use statistical methods to analyze the words in each text to discover common themes, how those themes are connected to each other, and how they change over time.
Probabilistic Topic Model
Methods that use statistical methods to analyze the words in each text to discover common themes, how those themes are connected to each other, and how they change over time.
https://pyro.ai/examples/prodlda.html
Judgement modulated by affect, which is influenced by the level of efficacy and efficiency in information processing; in cognitive sciences, processing bias is often referred to as an aesthetic judgement.
Validation Bias
Processing Bias
Judgement modulated by affect, which is influenced by the level of efficacy and efficiency in information processing; in cognitive sciences, processing bias is often referred to as an aesthetic judgement.
https://royalsocietypublishing.org/doi/10.1098/rspb.2019.0165#d1e5237
A prompt-tuned LLM is fine-tuned on a small number of examples or prompts, rather than full task datasets. This allows for rapid adaptation to new tasks with limited data, leveraging the model's few-shot learning capabilities.
Prompt-based Fine-Tuning LLM
Prompt-tuned LLM
few-shot learning
in-context learning
Prompt-based Fine-Tuning LLM
A prompt-tuned LLM is fine-tuned on a small number of examples or prompts, rather than full task datasets. This allows for rapid adaptation to new tasks with limited data, leveraging the model's few-shot learning capabilities.
TBD
A survival modeling method where the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate.
Proportional Hazards Model
A survival modeling method where the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate.
https://en.wikipedia.org/wiki/Proportional_hazards_model
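The multiplicative structure can be shown in a one-line sketch: under h(t|x) = h0(t) * exp(beta * x), a unit increase in the covariate multiplies the hazard by exp(beta) regardless of t or the baseline hazard. The function name is illustrative.

```python
import math

def hazard_ratio(beta, delta=1.0):
    """Hazard multiplier for a covariate increase of delta units."""
    return math.exp(beta * delta)
```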
Base class for recurrent layers.
RNN Layer
Base class for recurrent layers.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN
A radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters.
RBFN
RBN
Radial Basis Function Network
Input, Hidden, Output
Radial Basis Network
A radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters.
https://en.wikipedia.org/wiki/Radial_basis_function_network
A preprocessing layer which randomly adjusts brightness during training. This layer will randomly increase/reduce the brightness for the input RGB images. At inference time, the output will be identical to the input. Call the layer with training=True to adjust the brightness of the input. Note that different brightness adjustment factors will be applied to each image in the batch.
RandomBrightness Layer
A preprocessing layer which randomly adjusts brightness during training. This layer will randomly increase/reduce the brightness for the input RGB images. At inference time, the output will be identical to the input. Call the layer with training=True to adjust the brightness of the input. Note that different brightness adjustment factors will be applied to each image in the batch.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomBrightness
A preprocessing layer which randomly adjusts contrast during training. This layer will randomly adjust the contrast of an image or images by a random factor. Contrast is adjusted independently for each channel of each image during training. For each channel, this layer computes the mean of the image pixels in the channel and then adjusts each component x of each pixel to (x - mean) * contrast_factor + mean. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and in integer or floating point dtype. By default, the layer will output floats. The output value will be clipped to the range [0, 255], the valid range of RGB colors.
RandomContrast Layer
A preprocessing layer which randomly adjusts contrast during training. This layer will randomly adjust the contrast of an image or images by a random factor. Contrast is adjusted independently for each channel of each image during training. For each channel, this layer computes the mean of the image pixels in the channel and then adjusts each component x of each pixel to (x - mean) * contrast_factor + mean. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and in integer or floating point dtype. By default, the layer will output floats. The output value will be clipped to the range [0, 255], the valid range of RGB colors.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomContrast
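The per-channel formula quoted above, (x - mean) * contrast_factor + mean with clipping, can be sketched with NumPy (the function name is illustrative; the Keras layer draws the factor at random during training):

```python
import numpy as np

def adjust_contrast(channel, contrast_factor):
    """Scale a channel's deviation from its mean, clipped to [0, 255]."""
    mean = channel.mean()
    return np.clip((channel - mean) * contrast_factor + mean, 0.0, 255.0)
```

A factor below 1 pulls pixel values toward the channel mean; a factor above 1 pushes them apart, subject to clipping.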
A preprocessing layer which randomly crops images during training. During training, this layer will randomly choose a location to crop images down to a target size. The layer will crop all the images in the same batch to the same cropping location. At inference time, and during training if an input image is smaller than the target size, the input will be resized and cropped so as to return the largest possible window in the image that matches the target aspect ratio. If you need to apply random cropping at inference time, set training to True when calling the layer. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
RandomCrop Layer
A preprocessing layer which randomly crops images during training. During training, this layer will randomly choose a location to crop images down to a target size. The layer will crop all the images in the same batch to the same cropping location. At inference time, and during training if an input image is smaller than the target size, the input will be resized and cropped so as to return the largest possible window in the image that matches the target aspect ratio. If you need to apply random cropping at inference time, set training to True when calling the layer. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomCrop
A statistical model where the model parameters are random variables.
REM
Random Effects Model
A statistical model where the model parameters are random variables.
https://en.wikipedia.org/wiki/Random_effects_model
A preprocessing layer which randomly flips images during training. This layer will flip the images horizontally and/or vertically based on the mode attribute. During inference time, the output will be identical to the input. Call the layer with training=True to flip the input. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
RandomFlip Layer
A preprocessing layer which randomly flips images during training. This layer will flip the images horizontally and/or vertically based on the mode attribute. During inference time, the output will be identical to the input. Call the layer with training=True to flip the input. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomFlip
An ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time.
Random Forest
An ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time.
https://en.wikipedia.org/wiki/Random_forest
A preprocessing layer which randomly varies image height during training. This layer adjusts the height of a batch of images by a random factor. The input should be a 3D (unbatched) or 4D (batched) tensor in the "channels_last" image data format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. By default, this layer is inactive during inference.
RandomHeight Layer
A preprocessing layer which randomly varies image height during training. This layer adjusts the height of a batch of images by a random factor. The input should be a 3D (unbatched) or 4D (batched) tensor in the "channels_last" image data format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. By default, this layer is inactive during inference.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomHeight
A preprocessing layer which randomly rotates images during training.
RandomRotation Layer
A preprocessing layer which randomly rotates images during training.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomRotation
A preprocessing layer which randomly translates images during training. This layer will apply random translations to each image during training, filling empty space according to fill_mode. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
RandomTranslation Layer
A preprocessing layer which randomly translates images during training. This layer will apply random translations to each image during training, filling empty space according to fill_mode. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomTranslation
A preprocessing layer which randomly varies image width during training. This layer randomly adjusts the width of a batch of images by a random factor. The input should be a 3D (unbatched) or 4D (batched) tensor in the "channels_last" image data format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. By default, this layer is inactive during inference.
RandomWidth Layer
A preprocessing layer which randomly varies image width during training. This layer randomly adjusts the width of a batch of images by a random factor. The input should be a 3D (unbatched) or 4D (batched) tensor in the "channels_last" image data format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. By default, this layer is inactive during inference.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomWidth
A preprocessing layer which randomly zooms images during training. This layer will randomly zoom in or out on each axis of an image independently, filling empty space according to fill_mode. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
RandomZoom Layer
A preprocessing layer which randomly zooms images during training. This layer will randomly zoom in or out on each axis of an image independently, filling empty space according to fill_mode. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomZoom
The idea that top-ranked results are the most relevant and important and will result in more clicks than other results.
Ranking Bias
The idea that top-ranked results are the most relevant and important and will result in more clicks than other results.
https://doi.org/10.6028/NIST.SP.1270
Refers to differences in perspective, memory and recall, interpretation, and reporting on the same event from multiple persons or witnesses.
Rashomon Effect
Rashomon Principle
Rashomon Effect Bias
Refers to differences in perspective, memory and recall, interpretation, and reporting on the same event from multiple persons or witnesses.
https://doi.org/10.6028/NIST.SP.1270
A rational LLM incorporates explicit reasoning capabilities, leveraging logical rules, axioms or external knowledge to make deductive inferences during language tasks.
Rational LLM
logical inferences
reasoning
Rational LLM
A rational LLM incorporates explicit reasoning capabilities, leveraging logical rules, axioms or external knowledge to make deductive inferences during language tasks.
TBD
The ReLU activation function returns: max(x, 0), the element-wise maximum of 0 and the input tensor.
ReLU
Rectified Linear Unit
ReLU Function
The ReLU activation function returns: max(x, 0), the element-wise maximum of 0 and the input tensor.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/relu
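The element-wise rule above can be sketched in plain Python; this is an illustrative sketch, not the TensorFlow implementation:

```python
def relu(x):
    """Rectified Linear Unit: element-wise max(x, 0)."""
    return [max(v, 0.0) for v in x]

# Negative inputs are clamped to zero; positive inputs pass through unchanged.
print(relu([-2.0, -0.5, 0.0, 1.5]))  # → [0.0, 0.0, 0.0, 1.5]
```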
Rectified Linear Unit activation function. With default values, it returns element-wise max(x, 0).
ReLU Layer
Rectified Linear Unit activation function. With default values, it returns element-wise max(x, 0).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU
A layer of an RNN, composed of recurrent units; the number of such units is the hidden size of the layer.
Recurrent Layer
A layer of an RNN, composed of recurrent units; the number of such units is the hidden size of the layer.
https://docs.nvidia.com/deeplearning/performance/dl-performance-recurrent/index.html#recurrent-layer
A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs.
RN
RecNN
Recurrent Network
Input, Memory Cell, Output
Recurrent Neural Network
A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs.
https://en.wikipedia.org/wiki/Recurrent_neural_network
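The internal-state idea can be sketched with a single toy recurrence; the scalar weights here are arbitrary illustrative values, not from any real model:

```python
import math

def rnn_step(x_t, h_prev, w=0.5, u=0.8, b=0.0):
    # One recurrence step: the new hidden state mixes the current
    # input with the previous hidden state (the network's "memory").
    return math.tanh(w * x_t + u * h_prev + b)

def run_rnn(xs):
    h = 0.0  # initial hidden state
    for x in xs:          # the same weights are reused at every time step,
        h = rnn_step(x, h)  # which is what lets RNNs handle variable-length input
    return h
```

Because the loop reuses one set of weights, the same function handles sequences of any length.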
A recursive or self-attending LLM incorporates recursive self-attention mechanisms, allowing it to iteratively refine its own outputs and capture long-range dependencies more effectively.
Recursive LLM
Self-Attending LLM
iterative refinement
self-attention
Recursive LLM
A recursive or self-attending LLM incorporates recursive self-attention mechanisms, allowing it to iteratively refine its own outputs and capture long-range dependencies more effectively.
TBD
A recursive language model uses recursive neural network architectures like TreeLSTMs to learn syntactic composition functions, improving systematic generalization abilities.
RLM
Compositional generalization
Recursive Language Model
A recursive language model uses recursive neural network architectures like TreeLSTMs to learn syntactic composition functions, improving systematic generalization abilities.
https://doi.org/10.1609/aaai.v33i01.33017450
A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embedding.
RecuNN
RvNN
Recursive Neural Network
A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embedding.
https://en.wikipedia.org/wiki/Recursive_neural_network
A set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features').
Regression analysis
Regression model
Regression Analysis
A set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features').
https://en.wikipedia.org/wiki/Regression_analysis
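The simplest instance of the relationship-estimation described above is ordinary least squares with a single explanatory variable; a minimal sketch (the helper name fit_line is hypothetical):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x for one predictor."""
    n = len(xs)
    mx = sum(xs) / n   # mean of the predictor
    my = sum(ys) / n   # mean of the response
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx    # intercept
    return a, b

# Exact linear data recovers the generating coefficients of y = 1 + 2x.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```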
Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes. Regularization penalties are applied on a per-layer basis.
Regularization Layer
Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes. Regularization penalties are applied on a per-layer basis.
https://keras.io/api/layers/regularizers/
Methods that do not need labelled input/output pairs to be presented, nor sub-optimal actions to be explicitly corrected. Instead, they focus on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
Reinforcement Learning
Methods that do not need labelled input/output pairs to be presented, nor sub-optimal actions to be explicitly corrected. Instead, they focus on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
https://en.wikipedia.org/wiki/Reinforcement_learning
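The exploration/exploitation trade-off mentioned above is often illustrated with an epsilon-greedy action rule; a minimal sketch (the arm-value estimates here are made up for illustration):

```python
import random

def epsilon_greedy(values, epsilon, rng):
    """With probability epsilon pick a random arm (exploration);
    otherwise pick the arm with the highest estimated value (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))                       # explore
    return max(range(len(values)), key=values.__getitem__)      # exploit

rng = random.Random(0)
estimates = [0.1, 0.9, 0.4]          # hypothetical value estimates per action
choice = epsilon_greedy(estimates, epsilon=0.0, rng=rng)  # epsilon=0: pure exploitation
```

Raising epsilon shifts the agent toward uncharted actions; lowering it leans on current knowledge.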
An RL-LLM is a language model that is fine-tuned using reinforcement learning, where the model receives rewards for generating text that satisfies certain desired properties or objectives. This can improve the quality, safety, or alignment of generated text.
RL-LLM
Reinforcement Learning LLM
decision transformers
reward modeling
Reinforcement Learning LLM
An RL-LLM is a language model that is fine-tuned using reinforcement learning, where the model receives rewards for generating text that satisfies certain desired properties or objectives. This can improve the quality, safety, or alignment of generated text.
TBD
Repeats the input n times.
RepeatVector Layer
Repeats the input n times.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/RepeatVector
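The repeat-n-times behavior is easy to mimic on plain lists; this sketch is illustrative and not the Keras implementation:

```python
def repeat_vector(vec, n):
    """Repeat a feature vector n times, turning shape (d,) into (n, d)."""
    return [list(vec) for _ in range(n)]

print(repeat_vector([1, 2], 3))  # → [[1, 2], [1, 2], [1, 2]]
```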
Arises due to non-random sampling of subgroups, causing trends estimated for one population to not be generalizable to data collected from a new population.
Representation Bias
Arises due to non-random sampling of subgroups, causing trends estimated for one population to not be generalizable to data collected from a new population.
https://doi.org/10.6028/NIST.SP.1270
Methods that allow a system to discover the representations required for feature detection or classification from raw data.
Feature Learning
Representation Learning
Methods that allow a system to discover the representations required for feature detection or classification from raw data.
https://en.wikipedia.org/wiki/Feature_learning
A preprocessing layer which rescales input values to a new range.
Rescaling Layer
A preprocessing layer which rescales input values to a new range.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Rescaling
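The rescaling rule is output = input * scale + offset; a plain-Python sketch of the same arithmetic (not the Keras layer itself):

```python
def rescale(x, scale, offset=0.0):
    """Map input values to a new range: output = x * scale + offset."""
    return [v * scale + offset for v in x]

# Map [0, 255] pixel values into [0, 1].
print(rescale([0, 127.5, 255], scale=1 / 255))
```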
Layer that reshapes inputs into the given shape.
Reshape Layer
Layer that reshapes inputs into the given shape.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape
Reshape layers are used to change the shape of the input.
Reshape Layer
Reshaping Layer
Reshape layers are used to change the shape of the input.
https://keras.io/api/layers/reshaping_layers/reshape/
A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers. Typical ResNet models are implemented with double- or triple-layer skips that contain nonlinearities (ReLU) and batch normalization in between. An additional weight matrix may be used to learn the skip weights; these models are known as HighwayNets. Models with several parallel skips are referred to as DenseNets. In the context of residual neural networks, a non-residual network may be described as a 'plain network'.
DRN
Deep Residual Network
ResNN
ResNet
Input, Weight, BN, ReLU, Weight, BN, Addition, ReLU
Residual Neural Network
A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers. Typical ResNet models are implemented with double- or triple-layer skips that contain nonlinearities (ReLU) and batch normalization in between. An additional weight matrix may be used to learn the skip weights; these models are known as HighwayNets. Models with several parallel skips are referred to as DenseNets. In the context of residual neural networks, a non-residual network may be described as a 'plain network'.
https://en.wikipedia.org/wiki/Residual_neural_network
A preprocessing layer which resizes images. This layer resizes an image input to a target height and width. The input should be a 4D (batched) or 3D (unbatched) tensor in "channels_last" format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. This layer can be called on tf.RaggedTensor batches of input images of distinct sizes, and will resize the outputs to dense tensors of uniform size.
Resizing Layer
A preprocessing layer which resizes images. This layer resizes an image input to a target height and width. The input should be a 4D (batched) or 3D (unbatched) tensor in "channels_last" format. Input pixel values can be of any range (e.g. [0., 1.) or [0, 255]) and of integer or floating point dtype. By default, the layer will output floats. This layer can be called on tf.RaggedTensor batches of input images of distinct sizes, and will resize the outputs to dense tensors of uniform size.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Resizing
A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
RBM
Backfed Input, Probabilistic Hidden
Restricted Boltzmann Machine
A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine
A retrieval-augmented LLM combines a pre-trained language model with a retrieval system that can access external knowledge sources. This allows the model to condition its generation on relevant retrieved knowledge, improving factual accuracy and knowledge grounding.
Retrieval-Augmented LLM
knowledge grounding
open-book question answering
Retrieval-Augmented LLM
A retrieval-augmented LLM combines a pre-trained language model with a retrieval system that can access external knowledge sources. This allows the model to condition its generation on relevant retrieved knowledge, improving factual accuracy and knowledge grounding.
TBD
A method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering.
Ridge Regression
A method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering.
https://en.wikipedia.org/wiki/Ridge_regression
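In the general case the ridge estimate is (XᵀX + λI)⁻¹Xᵀy; for a single centred predictor this collapses to a one-line formula, sketched below (illustrative special case, not a general implementation):

```python
def ridge_slope(xs, ys, lam):
    """Ridge slope for centred one-predictor data:
    b = sum(x*y) / (sum(x^2) + lambda). lambda > 0 shrinks b toward 0."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs, ys = [-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0]   # true slope 2 on centred data
print(ridge_slope(xs, ys, 0.0))  # → 2.0 (no penalty, plain least squares)
print(ridge_slope(xs, ys, 2.0))  # → 1.0 (the penalty shrinks the estimate)
```

The shrinkage is exactly what stabilizes estimates when predictors are highly correlated.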
The SELU activation function multiplies scale (> 1) with the output of the ELU function to ensure a slope larger than one for positive inputs.
SELU
Scaled Exponential Linear Unit
SELU Function
The SELU activation function multiplies scale (> 1) with the output of the ELU function to ensure a slope larger than one for positive inputs.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/selu
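A pure-Python sketch of the piecewise rule, using the canonical SELU constants (as listed in the Keras docs); illustrative, not the TensorFlow implementation:

```python
import math

# Canonical SELU constants from the original formulation.
ALPHA = 1.67326324
SCALE = 1.05070098

def selu(x):
    """scale * x for x > 0; scale * alpha * (exp(x) - 1) otherwise."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)
```

For positive inputs the slope is SCALE > 1, which is the property the definition above describes.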
Bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed.
Sampling Bias
Selection Bias
Selection Effect
Selection And Sampling Bias
Bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed.
https://en.wikipedia.org/wiki/Selection_bias
Decision-makers’ inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
Selective Adherence Bias
Decision-makers’ inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
https://doi.org/10.6028/NIST.SP.1270
A self-supervised LLM learns rich representations by solving pretext tasks that involve predicting parts of the input from other observed parts of the data, without relying on human-annotated labels.
Self-Supervised LLM
Pretext tasks
Self-Supervised LLM
A self-supervised LLM learns rich representations by solving pretext tasks that involve predicting parts of the input from other observed parts of the data, without relying on human-annotated labels.
TBD
Regarded as an intermediate form between supervised and unsupervised learning.
Self-supervised Learning
Regarded as an intermediate form between supervised and unsupervised learning.
https://en.wikipedia.org/wiki/Self-supervised_learning
A semi-supervised LLM combines self-supervised pretraining on unlabeled data with supervised fine-tuning on labeled task data.
Semi-Supervised LLM
self-training
Semi-Supervised LLM
A semi-supervised LLM combines self-supervised pretraining on unlabeled data with supervised fine-tuning on labeled task data.
TBD
Depthwise separable 1D convolution. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
SeparableConv1D Layer
SeparableConvolution1D Layer
Depthwise separable 1D convolution. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SeparableConv1D
Depthwise separable 2D convolution. Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.
SeparableConv2D Layer
SeparableConvolution2D Layer
Depthwise separable 2D convolution. Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SeparableConv2D
Applies the sigmoid activation function sigmoid(x) = 1 / (1 + exp(-x)). For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1. Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1.
Sigmoid Function
Applies the sigmoid activation function sigmoid(x) = 1 / (1 + exp(-x)). For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1. Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/sigmoid
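The formula above translates directly to plain Python; an illustrative sketch rather than the TensorFlow implementation:

```python
import math

def sigmoid(x):
    """1 / (1 + exp(-x)); the output always lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# sigmoid(0) is exactly 0.5; large |x| saturates toward 0 or 1.
print(sigmoid(0.0))  # → 0.5
```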
Cell class for SimpleRNN. This class processes one step within the whole time sequence input, whereas tf.keras.layer.SimpleRNN processes the whole sequence.
SimpleRNNCell Layer
Cell class for SimpleRNN. This class processes one step within the whole time sequence input, whereas tf.keras.layer.SimpleRNN processes the whole sequence.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SimpleRNNCell
Fully-connected RNN where the output is to be fed back to input.
SimpleRNN Layer
Fully-connected RNN where the output is to be fed back to input.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SimpleRNN
Can be positive or negative, and take a number of different forms, but is typically characterized as being for or against groups or individuals based on social identities, demographic factors, or immutable physical characteristics. Societal or social biases are often stereotypes. Common examples of societal or social biases are based on concepts like race, ethnicity, gender, sexual orientation, socioeconomic status, education, and more. Societal bias is often recognized and discussed in the context of NLP (Natural Language Processing) models.
Social Bias
Societal Bias
Can be positive or negative, and take a number of different forms, but is typically characterized as being for or against groups or individuals based on social identities, demographic factors, or immutable physical characteristics. Societal or social biases are often stereotypes. Common examples of societal or social biases are based on concepts like race, ethnicity, gender, sexual orientation, socioeconomic status, education, and more. Societal bias is often recognized and discussed in the context of NLP (Natural Language Processing) models.
https://doi.org/10.6028/NIST.SP.1270
The elements of the output vector are in range (0, 1) and sum to 1. Each vector is handled independently. The axis argument sets which axis of the input the function is applied along. Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution. The softmax of each vector x is computed as exp(x) / tf.reduce_sum(exp(x)). The input values are the log-odds of the resulting probability.
Softmax Function
The elements of the output vector are in range (0, 1) and sum to 1. Each vector is handled independently. The axis argument sets which axis of the input the function is applied along. Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution. The softmax of each vector x is computed as exp(x) / tf.reduce_sum(exp(x)). The input values are the log-odds of the resulting probability.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/softmax
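The exp(x) / sum(exp(x)) computation can be sketched for a single vector; subtracting the maximum first is a standard numerical-stability trick that leaves the result unchanged (illustrative sketch, not the TensorFlow implementation):

```python
import math

def softmax(x):
    """exp(x_i) / sum(exp(x)), computed with the max-subtraction trick."""
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

p = softmax([1.0, 2.0, 3.0])
# p sums to 1 and preserves the ordering of the inputs.
```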
Softmax activation function.
Softmax Layer
Softmax activation function.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Softmax
softplus(x) = log(exp(x) + 1)
Softplus Function
softplus(x) = log(exp(x) + 1)
https://www.tensorflow.org/api_docs/python/tf/keras/activations/softplus
softsign(x) = x / (abs(x) + 1)
Softsign Function
softsign(x) = x / (abs(x) + 1)
https://www.tensorflow.org/api_docs/python/tf/keras/activations/softsign
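The softplus and softsign formulas above are both one-liners in plain Python (illustrative sketches, not the TensorFlow implementations):

```python
import math

def softplus(x):
    """softplus(x) = log(exp(x) + 1); a smooth approximation of ReLU."""
    return math.log(math.exp(x) + 1.0)

def softsign(x):
    """softsign(x) = x / (abs(x) + 1); output lies in (-1, 1)."""
    return x / (abs(x) + 1.0)
```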
Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time (thus, sparse). This constraint forces the model to respond to the unique statistical features of the training data. (https://en.wikipedia.org/wiki/Autoencoder)
SAE
Sparse AE
Sparse Autoencoder
Input, Hidden, Matched Output-Input
Sparse Auto Encoder
Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time (thus, sparse). This constraint forces the model to respond to the unique statistical features of the training data. (https://en.wikipedia.org/wiki/Autoencoder)
TBD
A sparse LLM uses techniques like pruning or quantization to reduce the number of non-zero parameters in the model, making it more parameter-efficient and easier to deploy on resource-constrained devices.
Sparse LLM
model compression
parameter efficiency
Sparse LLM
A sparse LLM uses techniques like pruning or quantization to reduce the number of non-zero parameters in the model, making it more parameter-efficient and easier to deploy on resource-constrained devices.
TBD
Methods which aim to find sparse representations of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.
Sparse coding
Sparse dictionary Learning
Sparse Learning
Methods which aim to find sparse representations of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.
https://en.wikipedia.org/wiki/Sparse_dictionary_learning
Spatial 1D version of Dropout. This version performs the same function as Dropout, however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead.
SpatialDropout1D Layer
Spatial 1D version of Dropout. This version performs the same function as Dropout, however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout1D
Spatial 2D version of Dropout. This version performs the same function as Dropout, however, it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead.
SpatialDropout2D Layer
Spatial 2D version of Dropout. This version performs the same function as Dropout, however, it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout2D
Spatial 3D version of Dropout. This version performs the same function as Dropout, however, it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead.
SpatialDropout3D Layer
Spatial 3D version of Dropout. This version performs the same function as Dropout, however, it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout3D
Regression method used to model spatial relationships.
Spatial Regression
Regression method used to model spatial relationships.
https://gisgeography.com/spatial-regression-models-arcgis/
Wrapper allowing a stack of RNN cells to behave as a single cell. Used to implement efficient stacked RNNs.
StackedRNNCells Layer
Wrapper allowing a stack of RNN cells to behave as a single cell. Used to implement efficient stacked RNNs.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/StackedRNNCells
A bias whereby people tend to search only where it is easiest to look.
Streetlight Effect
Streetlight Effect Bias
A bias whereby people tend to search only where it is easiest to look.
https://doi.org/10.6028/NIST.SP.1270
A preprocessing layer which maps string features to integer indices.
StringLookup Layer
A preprocessing layer which maps string features to integer indices.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/StringLookup
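The string-to-index mapping can be sketched with a plain dict; reserving index 0 for out-of-vocabulary tokens mirrors the layer's default single OOV slot (illustrative sketch, not the Keras implementation):

```python
def build_lookup(vocabulary):
    """Map each vocabulary string to an integer index, starting at 1;
    index 0 is reserved for out-of-vocabulary tokens."""
    return {token: i + 1 for i, token in enumerate(vocabulary)}

def lookup(table, tokens):
    """Unknown tokens fall back to the OOV index 0."""
    return [table.get(t, 0) for t in tokens]

table = build_lookup(["a", "b", "c"])
print(lookup(table, ["a", "c", "z"]))  # → [1, 3, 0]
```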
Layer that subtracts two inputs. It takes as input a list of tensors of size 2, both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]), also of the same shape.
Subtract Layer
Layer that subtracts two inputs. It takes as input a list of tensors of size 2, both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]), also of the same shape.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Subtract
Utilizing techniques like Byte Pair Encoding (BPE) or SentencePiece to break down words into smaller units, allowing the model to handle a wide vocabulary with a fixed-size token inventory.
Fragmentation
Part-word Division
Byte Pair Encoding
SentencePiece
Subword Segmentation
Utilizing techniques like Byte Pair Encoding (BPE) or SentencePiece to break down words into smaller units, allowing the model to handle a wide vocabulary with a fixed-size token inventory.
TBD
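One BPE training step, counting adjacent symbol pairs and merging the most frequent, can be sketched as follows (a toy illustration of the core merge operation, not a full BPE trainer):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all tokenised words."""
    pairs = Counter()
    for symbols in words:
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of the pair with one merged symbol."""
    merged = []
    for symbols in words:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged.append(out)
    return merged

words = [list("lower"), list("lowest")]
# ('l','o'), ('o','w') and ('w','e') each occur twice; ties break by first-seen order.
pair = most_frequent_pair(words)
words = merge_pair(words, pair)
```

Repeating this pick-and-merge loop builds the subword vocabulary the definition above describes.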
A human tendency where people opt to continue with an endeavor or behavior due to previously spent or invested resources, such as money, time, and effort, regardless of whether costs outweigh benefits. For example, in AI, the sunk cost fallacy could lead development teams and organizations to feel that because they have already invested so much time and money into a particular AI application, they must pursue it to market rather than deciding to end the effort, even in the face of significant technical debt and/or ethical debt.
Sunk Cost Fallacy
Sunk Cost Fallacy Bias
A human tendency where people opt to continue with an endeavor or behavior due to previously spent or invested resources, such as money, time, and effort, regardless of whether costs outweigh benefits. For example, in AI, the sunk cost fallacy could lead development teams and organizations to feel that because they have already invested so much time and money into a particular AI application, they must pursue it to market rather than deciding to end the effort, even in the face of significant technical debt and/or ethical debt.
https://doi.org/10.6028/NIST.SP.1270
Methods that simultaneously cluster the rows and columns of a labeled matrix, also taking into account the data label contributions to cluster coherence.
Supervised Block Clustering
Supervised Co-clustering
Supervised Joint Clustering
Supervised Two-mode Clustering
Supervised Two-way Clustering
Supervised Biclustering
Methods that simultaneously cluster the rows and columns of a labeled matrix, also taking into account the data label contributions to cluster coherence.
https://en.wikipedia.org/wiki/Biclustering
Methods that group a set of labeled objects in such a way that objects in the same group (called a cluster) are more similarly labeled (in some sense) relative to those in other groups (clusters).
Cluster analysis
Supervised Clustering
Methods that group a set of labeled objects in such a way that objects in the same group (called a cluster) are more similarly labeled (in some sense) relative to those in other groups (clusters).
https://en.wikipedia.org/wiki/Cluster_analysis
Methods that can learn a function that maps an input to an output based on example input-output pairs.
Supervised Learning
Methods that can learn a function that maps an input to an output based on example input-output pairs.
https://en.wikipedia.org/wiki/Supervised_learning
In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik with colleagues (Boser et al., 1992, Guyon et al., 1993, Vapnik et al., 1997), SVMs are one of the most robust prediction methods, being based on statistical learning frameworks or VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximise the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
SVM
SVN
Support Vector Network
Input, Hidden, Output
Support Vector Machine
In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik and colleagues (Boser et al., 1992; Guyon et al., 1993; Vapnik et al., 1997), SVMs are one of the most robust prediction methods, being based on statistical learning frameworks or VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximise the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
https://en.wikipedia.org/wiki/Support-vector_machine
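The margin-maximizing idea above can be sketched in plain NumPy as a linear SVM trained by subgradient descent on the regularized hinge loss. This is a minimal illustration under assumed toy data (two synthetic clusters), not a production SVM solver such as the SMO algorithm used by common libraries.

```python
import numpy as np

# Minimal linear SVM via subgradient descent on the hinge loss:
#   minimize  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * (w.x_i + b))
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),   # class -1 cluster (toy data)
               rng.normal(+2.0, 0.5, (20, 2))])  # class +1 cluster (toy data)
y = np.array([-1.0] * 20 + [1.0] * 20)

w, b = np.zeros(2), 0.0
lam, lr, n = 0.01, 0.1, len(X)
for _ in range(200):
    viol = y * (X @ w + b) < 1  # margin violators contribute to the subgradient
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
    grad_b = -y[viol].sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

# Classify by which side of the separating hyperplane a point falls on
acc = np.mean(np.sign(X @ w + b) == y)
```

On well-separated data like this, the learned hyperplane classifies the training set essentially perfectly.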
Methods for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems.
Survival Analysis
Methods for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems.
https://en.wikipedia.org/wiki/Survival_analysis
Tendency for people to focus on the items, observations, or people that “survive” or make it past a selection process, while overlooking those that did not.
Survivorship Bias
Tendency for people to focus on the items, observations, or people that “survive” or make it past a selection process, while overlooking those that did not.
https://doi.org/10.6028/NIST.SP.1270
x*sigmoid(x). It is a smooth, non-monotonic function that consistently matches or outperforms ReLU on deep networks; it is unbounded above and bounded below.
Swish Function
x*sigmoid(x). It is a smooth, non-monotonic function that consistently matches or outperforms ReLU on deep networks; it is unbounded above and bounded below.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/swish
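The definition above is just the formula x*sigmoid(x); a minimal NumPy sketch (not the TensorFlow implementation) shows the stated properties — smooth, unbounded above, bounded below:

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x) = x / (1 + exp(-x))."""
    return x / (1.0 + np.exp(-x))

# Unbounded above: swish(x) -> x for large positive x.
# Bounded below: global minimum of about -0.278 near x = -1.278,
# and swish(x) -> 0 from below as x -> -inf.
xs = np.linspace(-5.0, 5.0, 11)
ys = swish(xs)
```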
Like recurrent networks, but the connections between units are symmetrical (they have the same weight in both directions).
SCN
Symmetrically Connected Network
Like recurrent networks, but the connections between units are symmetrical (they have the same weight in both directions).
https://ieeexplore.ieee.org/document/287176
Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
SyncBatchNorm
SyncBatchNorm Layer
Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
https://pytorch.org/docs/stable/nn.html#normalization-layers
Biases that result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued.
Institutional Bias
Societal Bias
Systemic Bias
Biases that result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued.
https://doi.org/10.6028/NIST.SP.1270
Hyperbolic tangent activation function.
hyperbolic tangent
Tanh Function
Hyperbolic tangent activation function.
https://www.tensorflow.org/api_docs/python/tf/keras/activations/tanh
Bias that arises from differences in populations and behaviors over time.
Temporal Bias
Bias that arises from differences in populations and behaviors over time.
https://doi.org/10.6028/NIST.SP.1270
A layer that performs text data preprocessing operations.
Text Preprocessing Layer
A layer that performs text data preprocessing operations.
https://keras.io/guides/preprocessing_layers/
A preprocessing layer which maps text features to integer sequences.
TextVectorization Layer
A preprocessing layer which maps text features to integer sequences.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/TextVectorization
Thresholded Rectified Linear Unit.
ThresholdedReLU Layer
Thresholded Rectified Linear Unit.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ThresholdedReLU
This wrapper allows you to apply a layer to every temporal slice of an input. Every input should be at least 3D, and the dimension at index one of the first input will be considered to be the temporal dimension. Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3). You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently.
TimeDistributed Layer
This wrapper allows you to apply a layer to every temporal slice of an input. Every input should be at least 3D, and the dimension at index one of the first input will be considered to be the temporal dimension. Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3). You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/TimeDistributed
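The shape bookkeeping above can be sketched in plain NumPy (an illustration of the semantics, not the Keras implementation). A shared dense projection stands in for the wrapped layer, and the shapes are small assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10, 8))   # (batch, timesteps, features) -- toy shapes
W = rng.normal(size=(8, 5))       # weights of the single shared layer

# TimeDistributed semantics: fold the time axis into the batch axis,
# apply the same layer once, then restore the time axis.
b, t, f = x.shape
y = (x.reshape(b * t, f) @ W).reshape(b, t, 5)

# Equivalent loop: the same weights applied independently to each timestep.
y_loop = np.stack([x[:, i] @ W for i in range(t)], axis=1)
```

Both routes give identical results because every timestep sees exactly the same layer weights.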
Methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data.
Time Series Analysis
Methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data.
https://en.wikipedia.org/wiki/Time_series
Methods that predict future values based on previously observed values.
Time Series Forecasting
Methods that predict future values based on previously observed values.
https://en.wikipedia.org/wiki/Time_series
Breaking down text data into manageable pieces called tokens and reducing the model's vocabulary to streamline processing.
Lexical Simplification
Vocabulary Condensation
Tokenization
Vocabulary size reduction
Tokenization And Vocabulary Reduction
Breaking down text data into manageable pieces called tokens and reducing the model's vocabulary to streamline processing.
TBD
Specific strategies or methodologies employed during model training.
Instructional Methods
Learning Techniques
Training Strategies
Specific strategies or methodologies employed during model training.
TBD
Methods which can reuse or transfer information from previously learned tasks for the learning of new tasks.
Transfer Learning
Methods which can reuse or transfer information from previously learned tasks for the learning of new tasks.
https://en.wikipedia.org/wiki/Transfer_learning
A transfer learning LLM leverages knowledge acquired during training on one task to improve performance on different but related tasks, facilitating more efficient learning and adaptation.
Transfer LLM
transfer learning
Transfer Learning LLM
A transfer learning LLM leverages knowledge acquired during training on one task to improve performance on different but related tasks, facilitating more efficient learning and adaptation.
TBD
A transformer is a deep learning model that adopts the mechanism of attention, differentially weighing the significance of each part of the input data. It is used primarily in the field of natural language processing (NLP) and in computer vision (CV). (https://en.wikipedia.org/wiki/Transformer_(machine_Learning_model))
Transformer Network
A transformer is a deep learning model that adopts the mechanism of attention, differentially weighing the significance of each part of the input data. It is used primarily in the field of natural language processing (NLP) and in computer vision (CV). (https://en.wikipedia.org/wiki/Transformer_(machine_Learning_model))
https://en.wikipedia.org/wiki/Transformer_(machine_Learning_model)
Arises when predictive algorithms favor groups that are better represented in the training data, since there will be less uncertainty associated with those predictions.
Uncertainty Bias
Arises when predictive algorithms favor groups that are better represented in the training data, since there will be less uncertainty associated with those predictions.
https://doi.org/10.6028/NIST.SP.1270
Unit normalization layer. Normalizes a batch of inputs so that each input in the batch has an L2 norm equal to 1 (across the axes specified in axis).
UnitNormalization Layer
Unit normalization layer. Normalizes a batch of inputs so that each input in the batch has an L2 norm equal to 1 (across the axes specified in axis).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/UnitNormalization
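The operation described above is a one-liner in NumPy; this sketch (not the Keras code, and with an assumed eps guard against zero vectors) normalizes each row of a batch to unit L2 norm:

```python
import numpy as np

def unit_normalize(x, axis=-1, eps=1e-12):
    # Scale each input so its L2 norm along `axis` is 1 (eps guards zero vectors)
    norm = np.linalg.norm(x, axis=axis, keepdims=True)
    return x / np.maximum(norm, eps)

batch = np.array([[3.0, 4.0],
                  [0.0, 5.0]])
out = unit_normalize(batch)  # each row now has L2 norm 1
```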
Methods that simultaneously cluster the rows and columns of an unlabeled input matrix.
Block Clustering
Co-clustering
Joint Clustering
Two-mode Clustering
Two-way Clustering
Unsupervised Biclustering
Methods that simultaneously cluster the rows and columns of an unlabeled input matrix.
https://en.wikipedia.org/wiki/Biclustering
Methods that group a set of objects in such a way that objects without labels in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
Cluster analysis
Unsupervised Clustering
Methods that group a set of objects in such a way that objects without labels in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).
https://en.wikipedia.org/wiki/Cluster_analysis
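The grouping described above can be illustrated with a minimal k-means (Lloyd's algorithm) in NumPy — one classic unsupervised clustering method, shown on assumed synthetic blobs rather than real data:

```python
import numpy as np

# Two well-separated synthetic blobs (toy data for the sketch)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),   # blob A
               rng.normal(5.0, 0.3, (30, 2))])  # blob B

k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
for _ in range(20):
    # Assignment step: each point joins its nearest center
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center becomes the mean of its assigned points
    # (an emptied cluster keeps its previous center)
    centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
```

With blobs this far apart, the algorithm recovers exactly the two groups: objects in the same cluster are far more similar to each other than to the other cluster.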
An unsupervised LLM is trained solely on unlabeled data using self-supervised objectives like masked language modeling, without any supervised fine-tuning.
Unsupervised LLM
self-supervised
Unsupervised LLM
An unsupervised LLM is trained solely on unlabeled data using self-supervised objectives like masked language modeling, without any supervised fine-tuning.
TBD
Algorithms that learn patterns from unlabeled data.
Unsupervised Learning
Algorithms that learn patterns from unlabeled data.
https://en.wikipedia.org/wiki/Unsupervised_learning
Unsupervised pre-training initializes a discriminative neural net from one which was trained using an unsupervised criterion, such as a deep belief network or a deep autoencoder. This method can sometimes help with both the optimization and the overfitting issues.
UPN
Unsupervised Pretrained Network
Unsupervised pre-training initializes a discriminative neural net from one which was trained using an unsupervised criterion, such as a deep belief network or a deep autoencoder. This method can sometimes help with both the optimization and the overfitting issues.
https://metacademy.org/graphs/concepts/unsupervised_pre_training#:~:text=Unsupervised%20pre%2Dtraining%20initializes%20a,optimization%20and%20the%20overfitting%20issues
Upsampling layer for 1D inputs. Repeats each temporal step size times along the time axis.
UpSampling1D Layer
Upsampling layer for 1D inputs. Repeats each temporal step size times along the time axis.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling1D
Upsampling layer for 2D inputs. Repeats the rows and columns of the data by size[0] and size[1] respectively.
UpSampling2D Layer
Upsampling layer for 2D inputs. Repeats the rows and columns of the data by size[0] and size[1] respectively.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D
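The row/column repetition described above maps directly onto NumPy's repeat; this is a sketch of the layer's semantics (assumed toy shapes, not the TensorFlow code):

```python
import numpy as np

x = np.arange(6.0).reshape(1, 2, 3, 1)  # (batch, rows, cols, channels)
# UpSampling2D semantics with size=(2, 3): repeat rows size[0] times
# and columns size[1] times; batch and channel axes are untouched.
up = x.repeat(2, axis=1).repeat(3, axis=2)
```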
Upsampling layer for 3D inputs.
UpSampling3D Layer
Upsampling layer for 3D inputs.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling3D
An information-processing bias, the tendency to inappropriately analyze ambiguous stimuli, scenarios and events.
Interpretive Bias
Use And Interpretation Bias
An information-processing bias, the tendency to inappropriately analyze ambiguous stimuli, scenarios and events.
https://en.wikipedia.org/wiki/Interpretive_bias
Arises when a user imposes their own self-selected biases and behavior during interaction with data, output, results, etc.
User Interaction Bias
Arises when a user imposes their own self-selected biases and behavior during interaction with data, output, results, etc.
https://doi.org/10.6028/NIST.SP.1270
Variational autoencoders are meant to compress the input information into a constrained multivariate latent distribution (encoding) to reconstruct it as accurately as possible (decoding). (https://en.wikipedia.org/wiki/Variational_autoencoder)
VAE
Input, Probabilistic Hidden, Matched Output-Input
Variational Auto Encoder
Variational autoencoders are meant to compress the input information into a constrained multivariate latent distribution (encoding) to reconstruct it as accurately as possible (decoding). (https://en.wikipedia.org/wiki/Variational_autoencoder)
TBD
Abstract wrapper base class. Wrappers take another layer and augment it in various ways. Do not use this class as a layer; it is only an abstract base class. Two usable wrappers are the TimeDistributed and Bidirectional wrappers.
Wrapper Layer
Abstract wrapper base class. Wrappers take another layer and augment it in various ways. Do not use this class as a layer; it is only an abstract base class. Two usable wrappers are the TimeDistributed and Bidirectional wrappers.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Wrapper
A zero-shot learning LLM is able to perform tasks or understand concepts it has not explicitly been trained on, demonstrating a high degree of generalization and understanding.
Zero-Shot LLM
zero-shot learning
Zero-Shot Learning LLM
A zero-shot learning LLM is able to perform tasks or understand concepts it has not explicitly been trained on, demonstrating a high degree of generalization and understanding.
TBD
Methods where, at test time, a learner observes samples from classes that were not observed during training and needs to predict the class that they belong to.
ZSL
Zero-shot Learning
Methods where, at test time, a learner observes samples from classes that were not observed during training and needs to predict the class that they belong to.
https://en.wikipedia.org/wiki/Zero-shot_learning
Zero-padding layer for 1D input (e.g. temporal sequence).
ZeroPadding1D Layer
Zero-padding layer for 1D input (e.g. temporal sequence).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding1D
Zero-padding layer for 2D input (e.g. picture). This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.
ZeroPadding2D Layer
Zero-padding layer for 2D input (e.g. picture). This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding2D
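The padding behavior described above corresponds to NumPy's pad restricted to the spatial axes; a sketch of the semantics (assumed toy shapes, not the TensorFlow code):

```python
import numpy as np

img = np.ones((1, 3, 4, 1))  # (batch, rows, cols, channels)
# ZeroPadding2D semantics with padding=((1, 1), (2, 2)): zero-pad only the
# row and column axes; batch and channel axes get no padding.
padded = np.pad(img, ((0, 0), (1, 1), (2, 2), (0, 0)))
```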
Zero-padding layer for 3D data (spatial or spatio-temporal).
ZeroPadding3D Layer
Zero-padding layer for 3D data (spatial or spatio-temporal).
https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding3D
In the continuous bag-of-words architecture, the model predicts the current node from a window of surrounding context nodes. The order of context nodes does not influence prediction (bag-of-words assumption).
N2V-CBOW
CBOW
Input, Hidden, Output
node2vec-CBOW
In the continuous bag-of-words architecture, the model predicts the current node from a window of surrounding context nodes. The order of context nodes does not influence prediction (bag-of-words assumption).
https://en.wikipedia.org/wiki/Word2vec
In the continuous skip-gram architecture, the model uses the current node to predict the surrounding window of context nodes. The skip-gram architecture weighs nearby context nodes more heavily than more distant context nodes. (https://en.wikipedia.org/wiki/Word2vec)
N2V-SkipGram
SkipGram
Input, Hidden, Output
node2vec-SkipGram
In the continuous skip-gram architecture, the model uses the current node to predict the surrounding window of context nodes. The skip-gram architecture weighs nearby context nodes more heavily than more distant context nodes. (https://en.wikipedia.org/wiki/Word2vec)
https://en.wikipedia.org/wiki/Word2vec
A statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map.
t-SNE
tSNE
t-Distributed Stochastic Neighbor Embedding
A statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map.
https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). (https://en.wikipedia.org/wiki/Word2vec)
W2V-CBOW
CBOW
Input, Hidden, Output
word2vec-CBOW
In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). (https://en.wikipedia.org/wiki/Word2vec)
https://en.wikipedia.org/wiki/Word2vec
In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words.
W2V-SkipGram
SkipGram
Input, Hidden, Output
word2vec-SkipGram
In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words.
https://en.wikipedia.org/wiki/Word2vec
A statistical phenomenon where the marginal association between two categorical variables is qualitatively different from the partial association between the same two variables after controlling for one or more other variables. For example, the statistical association or correlation that has been detected between two variables for an entire population disappears or reverses when the population is divided into subgroups.
Simpson's Paradox
Simpson's Paradox Bias
A statistical phenomenon where the marginal association between two categorical variables is qualitatively different from the partial association between the same two variables after controlling for one or more other variables. For example, the statistical association or correlation that has been detected between two variables for an entire population disappears or reverses when the population is divided into subgroups.
https://doi.org/10.6028/NIST.SP.1270
example to be eventually removed
example to be eventually removed
failed exploratory term
The term was used in an attempt to structure part of the ontology but in retrospect failed to do a good job
Person:Alan Ruttenberg
failed exploratory term
metadata complete
Class has all its metadata, but is either not guaranteed to be in its final location in the asserted IS_A hierarchy or refers to another class that is not complete.
metadata complete
organizational term
Term created to ease viewing/sort terms for development purpose, and will not be included in a release
PERSON:Alan Ruttenberg
organizational term
ready for release
Class has undergone final review, is ready for use, and will be included in the next release. Any class lacking "ready_for_release" should be considered likely to change place in hierarchy, have its definition refined, or be obsoleted in the next release. Those classes deemed "ready_for_release" will also be derived from a chain of ancestor classes that are also "ready_for_release."
ready for release
metadata incomplete
Class is being worked on; however, the metadata (including definition) are not complete or sufficiently clear to the branch editors.
metadata incomplete
uncurated
Nothing done yet beyond assigning a unique class ID and proposing a preferred term.
uncurated
pending final vetting
All definitions, placement in the asserted IS_A hierarchy and required minimal metadata are complete. The class is awaiting a final review by someone other than the term editor.
pending final vetting
placeholder removed
placeholder removed
terms merged
An editor note should explain which terms were merged and the reason for the merge.
terms merged
term imported
This is to be used when the original term has been replaced by a term imported from another ontology. An editor note should indicate the URI of the new term to use.
term imported
term split
This is to be used when a term has been split into two or more new terms. An editor note should indicate the reason for the split and indicate the URIs of the new terms created.
term split
universal
Hard to give a definition for. Intuitively a "natural kind" rather than a collection of any old things, which a class is able to be, formally. At the meta level, universals are defined as positives, are disjoint with their siblings, have single asserted parents.
Alan Ruttenberg
A Formal Theory of Substances, Qualities, and Universals, http://ontology.buffalo.edu/bfo/SQU.pdf
universal
defined class
A defined class is a class that is defined by a set of logically necessary and sufficient conditions but is not a universal.
"definitions", in some readings, always are given by necessary and sufficient conditions. So one must be careful (and this is difficult sometimes) to distinguish between defined classes and universal.
Alan Ruttenberg
defined class
named class expression
A named class expression is a logical expression that is given a name. The name can be used in place of the expression.
named class expressions are used in order to have more concise logical definitions, but their extensions may not be interesting classes on their own. In languages such as OWL, with no provisions for macros, these show up as actual classes. Tools may wish to not show them as such, and to replace uses of the macros with their expansions.
Alan Ruttenberg
named class expression
to be replaced with external ontology term
Terms with this status should eventually be replaced with a term from another ontology.
Alan Ruttenberg
group:OBI
to be replaced with external ontology term
requires discussion
A term that is metadata complete, has been reviewed, and problems have been identified that require discussion before release. Such a term requires editor note(s) to identify the outstanding issues.
Alan Ruttenberg
group:OBI
requires discussion
Transformation-ML
Transformation-ML file describing parameter transformations used in a GvHD experiment.
Transformation-ML is a format standard of a digital entity that is conformant with the Transformation-ML standard.(http://wiki.ficcs.org/ficcs/Transformation-ML?action=AttachFile&do=get&target=Transformation-ML_v1.0.26.pdf)
person:Jennifer Fostel
web-page:http://wiki.ficcs.org/ficcs/Transformation-ML?action=AttachFile&do=get&target=Transformation-ML_v1.0.26.pdf
Transformation-ML
ACS
d06.acs, ACS1.0 data file of well D06 of plate 2 of part 1 of a GvHD experiment.
ACS is a format standard of a digital entity that is conformant with the Analytical Cytometry Standard. (http://www.isac-net.org/content/view/607/150/)
person:Jennifer Fostel
web-page:http://www.isac-net.org/content/view/607/150/
ACS
XML
RDF/XML file, OWL file, Compensation-ML file, WSDL document, SVG document
XML is a format standard of a digital entity that is conformant with the W3C Extensible Markup Language Recommendation.(http://www.w3.org/XML/)
person:Jennifer Fostel
web-page:http://www.w3.org/XML/
XML
RDF
A FOAF file, a SKOS file, an OWL file.
RDF is a format standard of a digital entity that is conformant with the W3C Resource Description Framework RDF/XML Syntax specification.(http://www.w3.org/RDF/)
person:Jennifer Fostel
web-page:http://www.w3.org/RDF/
RDF
zip
MagicDraw MDZIP archive, Java JAR file.
zip is a format standard of a digital entity that is conformant with the PKWARE .ZIP file format specification (http://www.pkware.com/index.php?option=com_content&task=view&id=59&Itemid=103/)
person:Jennifer Fostel
web-page:http://www.pkware.com/index.php?option=com_content&task=view&id=59&Itemid=103/
zip
tar
Example.tar file.
tar is a format standard of a digital entity that is conformant with the tape archive file format as standardized by POSIX.1-1998, POSIX.1-2001, or any other tar format compliant with the GNU tar specification. (http://www.gnu.org/software/tar/manual/)
person:Jennifer Fostel
web-page:http://www.gnu.org/software/tar/manual/
tar
FCS
d01.fcs, FCS3 data file of well D06 of plate 2 of part 1 of a GvHD experiment.
FCS is a format standard of a digital entity that is conformant with the Flow Cytometry Data File Standard.(http://www.fcspress.com/)
person:Jennifer Fostel
web-page:http://www.fcspress.com/
FCS
Compensation-ML
compfoo.xml, Compensation-ML file describing compensation used in a GvHD experiment
Compensation-ML is a format standard of a digital entity that is conformant with the Compensation-ML standard. (http://wiki.ficcs.org/ficcs/Compensation-ML?action=AttachFile&do=get&target=Compensation-ML_v1.0.24.pdf)
person:Jennifer Fostel
web-page:http://wiki.ficcs.org/ficcs/Compensation-ML?action=AttachFile&do=get&target=Compensation-ML_v1.0.24.pdf
Compensation-ML
Gating-ML
foogate.xml, Gating-ML file describing gates used in a GvHD experiment.
Gating-ML is a format standard of a digital entity that is conformant with the Gating-ML standard. (http://www.flowcyt.org/gating/)
person:Jennifer Fostel
web-page:http://www.flowcyt.org/gating/
Gating-ML
OWL
OBI ontology file, Basic Formal Ontology file, BIRNLex file, BioPAX file.
OWL is a format standard of a digital entity that is conformant with the W3C Web Ontology Language specification.(http://www.w3.org/2004/OWL/)
person:Jennifer Fostel
web-page:http://www.w3.org/2004/OWL/
OWL
Affymetrix
Affymetrix supplied microarray
An organization which supplies technology, tools and protocols for use in high throughput applications
Affymetrix
Thermo
Philippe Rocca-Serra
Thermo
Waters
Philippe Rocca-Serra
Waters
BIO-RAD
Philippe Rocca-Serra
BIO-RAD
GenePattern hierarchical clustering
James Malone
GenePattern hierarchical clustering
Ambion
Philippe Rocca-Serra
Ambion
Helicos
Philippe Rocca-Serra
Helicos
Roche
Philippe Rocca-Serra
Roche
Illumina
Philippe Rocca-Serra
Illumina
GenePattern PCA
GenePattern PCA
GenePattern module SVM
GenePattern module SVM is a GenePattern software module which is used to run a support vector machine data transformation.
James Malone
Ryan Brinkman
GenePattern module SVM
GenePattern k-nearest neighbors
James Malone
GenePattern k-nearest neighbors
GenePattern LOOCV
GenePattern LOOCV
GenePattern k-means clustering
James Malone
GenePattern k-means clustering
Agilent
Philippe Rocca-Serra
Agilent
GenePattern module KMeansClustering
GenePattern module KMeansClustering is a GenePattern software module which is used to perform a k Means clustering data transformation.
James Malone
PERSON: James Malone
GenePattern module KMeansClustering
GenePattern CART
James Malone
GenePattern CART
GenePattern module CARTXValidation
GenePattern module CARTXValidation is a GenePattern software module which uses a CART decision tree induction with a leave-one-out cross validation data transformation.
GenePattern module CARTXValidation
Li-Cor
Philippe Rocca-Serra
Li-Cor
Bruker Corporation
Philippe Rocca-Serra
Bruker Corporation
GenePattern module KNNXValidation
GenePattern module KNNXValidation is a GenePattern software module which uses a k-nearest neighbours clustering with a leave-one-out cross validation data transformation.
James Malone
PERSON: James Malone
GenePattern module KNNXValidation
GenePattern module PeakMatch
GenePattern module PeakMatch
GenePattern module KNN
GenePattern module KNN is a GenePattern software module which perform a k-nearest neighbors data transformation.
James Malone
GenePattern module KNN
GenePattern module HierarchicalClustering
GenePattern module HierarchicalClustering is a GenePattern software module which is used to perform a hierarchical clustering data transformation.
James Malone
PERSON: James Malone
GenePattern module HierarchicalClustering
GenePattern SVM
James Malone
GenePattern SVM
Applied Biosystems
Philippe Rocca-Serra
Applied Biosystems
GenePattern module PCA
GenePattern module PCA is a GenePattern software module which is used to perform a principal components analysis dimensionality reduction data transformation.
James Malone
PERSON: James Malone
GenePattern module PCA
GenePattern peak matching
James Malone
Ryan Brinkman
GenePattern peak matching
Bruker Daltonics
Philippe Rocca-Serra
Bruker Daltonics
GenePattern HeatMapViewer data visualization
The GenePattern process of generating Heat Maps from clustered data.
James Malone
GenePattern HeatMapViewer data visualization
GenePattern HierarchicalClusteringViewer data visualization
The GenePattern process of generating hierarchical clustering visualization from clustered data.
James Malone
GenePattern HierarchicalClusteringViewer data visualization
GenePattern module HeatMapViewer
A GenePattern software module which is used to generate a heatmap view of data.
James Malone
GenePattern module HeatMapViewer
GenePattern module HierarchicalClusteringViewer
A GenePattern software module which is used to generate a view of data that has been hierarchically clustered.
James Malone
GenePattern module HierarchicalClusteringViewer
Sysmex Corporation, Kobe, Japan
WEB:http://www.sysmex.com/@2009/08/06
Sysmex Corporation, Kobe, Japan
U.S. Food and Drug Administration
FDA
U.S. Food and Drug Administration
right handed
right handed
ambidextrous
ambidextrous
left handed
left handed
Edinburgh handedness inventory
The Edinburgh Handedness Inventory is a set of questions used to assess the dominance of a person's right or left hand in everyday activities.
PERSON:Alan Ruttenberg
PERSON:Jessica Turner
PMID:5146491#Oldfield, R.C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97-113
WEB:http://www.cse.yorku.ca/course_archive/2006-07/W/4441/EdinburghInventory.html
Edinburgh handedness inventory
eBioscience
Karin Breuer
WEB:http://www.ebioscience.com/@2011/04/11
eBioscience
Cytopeia
Karin Breuer
WEB:http://www.cytopeia.com/@2011/04/11
Cytopeia
Exalpha Biological
Karin Breuer
WEB:http://www.exalpha.com/@2011/04/11
Exalpha Biological
Apogee Flow Systems
Karin Breuer
WEB:http://www.apogeeflow.com/@2011/04/11
Apogee Flow Systems
Exbio Antibodies
Karin Breuer
WEB:http://www.exbio.cz/@2011/04/11
Exbio Antibodies
Becton Dickinson (BD Biosciences)
Karin Breuer
WEB:http://www.bdbiosciences.com/@2011/04/11
Becton Dickinson (BD Biosciences)
Dako Cytomation
Karin Breuer
WEB:http://www.dakousa.com/@2011/04/11
Dako Cytomation
Millipore
Karin Breuer
WEB:http://www.guavatechnologies.com/@2011/04/11
Millipore
Antigenix
Karin Breuer
WEB:http://www.antigenix.com/@2011/04/11
Antigenix
Partec
Karin Breuer
WEB:http://www.partec.de/@2011/04/11
Partec
Beckman Coulter
Karin Breuer
WEB:http://www.beckmancoulter.com/@2011/04/11
Beckman Coulter
Advanced Instruments Inc. (AI Companies)
Karin Breuer
WEB:http://www.aicompanies.com/@2011/04/11
Advanced Instruments Inc. (AI Companies)
Miltenyi Biotec
Karin Breuer
WEB:http://www.miltenyibiotec.com/@2011/04/11
Miltenyi Biotec
AES Chemunex
Karin Breuer
WEB:http://www.aeschemunex.com/@2011/04/11
AES Chemunex
Bentley Instruments
Karin Breuer
WEB:http://bentleyinstruments.com/@2011/04/11
Bentley Instruments
Invitrogen
Karin Breuer
WEB:http://www.invitrogen.com/@2011/04/11
Invitrogen
Luminex
Karin Breuer
WEB:http://www.luminexcorp.com/@2011/04/11
Luminex
CytoBuoy
Karin Breuer
WEB:http://www.cytobuoy.com/@2011/04/11
CytoBuoy
Nimblegen
An organization that focuses on manufacturing target enrichment probe pools for DNA sequencing.
Person: Jie Zheng
Nimblegen
Pacific Biosciences
An organization that supplies tools for studying the synthesis and regulation of DNA, RNA and protein. It developed a powerful technology platform called single molecule real-time (SMRT) technology which enables real-time analysis of biomolecules with single molecule resolution.
Person: Jie Zheng
Pacific Biosciences
NanoString Technologies
An organization that supplies life science tools for translational research and molecular diagnostics based on a novel digital molecular barcoding technology. The NanoString platform can provide simple, multiplexed digital profiling of single molecules.
NanoString Technologies
Thermo Fisher Scientific
An organization that is an American multinational, biotechnology product development company, created in 2006 by the merger of Thermo Electron and Fisher Scientific.
Chris Stoeckert, Helena Ellis
https://en.wikipedia.org/wiki/Thermo_Fisher_Scientific
Thermo Fisher Scientific
G1: Well differentiated
A histologic grade according to AJCC 7th edition indicating that the tumor cells and the organization of the tumor tissue appear close to normal.
Chris Stoeckert, Helena Ellis
G1
https://www.cancer.gov/about-cancer/diagnosis-staging/prognosis/tumor-grade-fact-sheet
NCI BBRB
G1: Well differentiated
G2: Moderately differentiated
A histologic grade according to AJCC 7th edition indicating that the tumor cells are moderately differentiated and reflect an intermediate grade.
Chris Stoeckert, Helena Ellis
G2
https://www.cancer.gov/about-cancer/diagnosis-staging/prognosis/tumor-grade-fact-sheet
NCI BBRB
G2: Moderately differentiated
G3: Poorly differentiated
A histologic grade according to AJCC 7th edition indicating that the tumor cells are poorly differentiated and do not look like normal cells and tissue.
Chris Stoeckert, Helena Ellis
G3
https://www.cancer.gov/about-cancer/diagnosis-staging/prognosis/tumor-grade-fact-sheet
NCI BBRB
G3: Poorly differentiated
G4: Undifferentiated
A histologic grade according to AJCC 7th edition indicating that the tumor cells are undifferentiated and do not look like normal cells and tissue.
Chris Stoeckert, Helena Ellis
G4
https://www.cancer.gov/about-cancer/diagnosis-staging/prognosis/tumor-grade-fact-sheet
NCI BBRB
G4: Undifferentiated
G1 (Fuhrman)
A histologic grade according to the Fuhrman Nuclear Grading System indicating that nuclei are round, uniform, approximately 10um and that nucleoli are inconspicuous or absent.
Chris Stoeckert, Helena Ellis
Grade 1
NCI BBRB, OBI
NCI BBRB
G1 (Fuhrman)
G2 (Fuhrman)
A histologic grade according to the Fuhrman Nuclear Grading System indicating that nuclei are slightly irregular, approximately 15um and nucleoli are evident.
Chris Stoeckert, Helena Ellis
Grade 2
NCI BBRB, OBI
NCI BBRB
G2 (Fuhrman)
G3 (Fuhrman)
A histologic grade according to the Fuhrman Nuclear Grading System indicating that nuclei are very irregular, approximately 20um and nucleoli large and prominent.
Chris Stoeckert, Helena Ellis
Grade 3
NCI BBRB, OBI
NCI BBRB
G3 (Fuhrman)
G4 (Fuhrman)
A histologic grade according to the Fuhrman Nuclear Grading System indicating that nuclei are bizarre and multilobulated, 20um or greater, and that nucleoli are prominent and chromatin is clumped.
Chris Stoeckert, Helena Ellis
Grade 4
NCI BBRB, OBI
NCI BBRB
G4 (Fuhrman)
Low grade ovarian tumor
A histologic grade for ovarian tumor according to a two-tier grading system indicating that the tumor is low grade.
Chris Stoeckert, Helena Ellis
Low grade
NCI BBRB, OBI
NCI BBRB
Low grade ovarian tumor
High grade ovarian tumor
A histologic grade for ovarian tumor according to a two-tier grading system indicating that the tumor is high grade.
Chris Stoeckert, Helena Ellis
High grade
NCI BBRB, OBI
NCI BBRB
High grade ovarian tumor
G1 (WHO)
A histologic grade for ovarian tumor according to the World Health Organization indicating that the tumor is well differentiated.
Chris Stoeckert, Helena Ellis
G1
NCI BBRB, OBI
NCI BBRB
G1 (WHO)
G2 (WHO)
A histologic grade for ovarian tumor according to the World Health Organization indicating that the tumor is moderately differentiated.
Chris Stoeckert, Helena Ellis
G2
NCI BBRB, OBI
NCI BBRB
G2 (WHO)
G3 (WHO)
A histologic grade for ovarian tumor according to the World Health Organization indicating that the tumor is poorly differentiated.
Chris Stoeckert, Helena Ellis
G3
NCI BBRB, OBI
NCI BBRB
G3 (WHO)
G4 (WHO)
A histologic grade for ovarian tumor according to the World Health Organization indicating that the tumor is undifferentiated.
Chris Stoeckert, Helena Ellis
G4
NCI BBRB, OBI
NCI BBRB
G4 (WHO)
pT0 (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that there is no evidence of primary tumor.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT0 (colon)
pTis (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating carcinoma in situ (intraepithelial or invasion of lamina propria).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pTis (colon)
pT1 (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that the tumor invades submucosa.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT1 (colon)
pT2 (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that the tumor invades muscularis propria.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT2 (colon)
pT3 (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that the tumor invades subserosa or into non-peritonealized pericolic or perirectal tissues.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT3 (colon)
pT4a (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that the tumor perforates visceral peritoneum.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT4a (colon)
pT4b (colon)
A pathologic primary tumor stage for colon and rectum according to AJCC 7th edition indicating that the tumor directly invades other organs or structures.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_t/
NCI BBRB
pT4b (colon)
pT0 (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that there is no evidence of primary tumor.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT0 (lung)
pTis (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating carcinoma in situ.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pTis (lung)
pT1 (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is 3 cm or less in greatest dimension, surrounded by lung or visceral pleura without bronchoscopic evidence of invasion more proximal than the lobar bronchus (i.e., not in the main bronchus).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT1 (lung)
pT1a (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is 2 cm or less in greatest dimension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT1a (lung)
pT1b (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is more than 2 cm but not more than 3 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT1b (lung)
pT2 (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is more than 3 cm but not more than 7 cm in greatest dimension, or has any of the following features: involves the main bronchus 2 cm or more distal to the carina; invades visceral pleura; is associated with atelectasis or obstructive pneumonitis that extends to the hilar region but does not involve the entire lung.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT2 (lung)
pT2a (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is more than 3 cm but not more than 5 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT2a (lung)
pT2b (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is more than 5 cm but not more than 7 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT2b (lung)
pT3 (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating that the tumor is more than 7 cm or directly invades any of: parietal pleura, chest wall (including superior sulcus tumors), diaphragm, phrenic nerve, mediastinal pleura, or parietal pericardium; or the tumor is in the main bronchus less than 2 cm distal to the carina but without involvement of the carina; or there is associated atelectasis or obstructive pneumonitis of the entire lung; or there is separate tumor nodule(s) in the same lobe as the primary.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT3 (lung)
pT4 (lung)
A pathologic primary tumor stage for lung according to AJCC 7th edition indicating a tumor of any size that invades any of the following: mediastinum, heart, great vessels, trachea, recurrent laryngeal nerve, esophagus, vertebral body, or carina; or that there is separate tumor nodule(s) in a different ipsilateral lobe to that of the primary.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_t/
NCI BBRB
pT4 (lung)
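The lung pT definitions above include explicit size cut-points (pT1a <= 2 cm, pT1b 2-3 cm, pT2a 3-5 cm, pT2b 5-7 cm, pT3 > 7 cm). As a hedged illustration only, a minimal sketch of that size-based mapping follows; the function name is hypothetical, and real AJCC staging also depends on invasion, location, and nodule features that size alone cannot capture.

```python
def lung_pt_by_size(cm):
    """Map greatest tumor dimension in cm to the size-based AJCC 7th
    edition lung pT category. Illustrative sketch: ignores invasion,
    bronchial location, atelectasis, and satellite nodules, any of
    which can raise the category."""
    if cm <= 2.0:
        return "pT1a"
    if cm <= 3.0:
        return "pT1b"
    if cm <= 5.0:
        return "pT2a"
    if cm <= 7.0:
        return "pT2b"
    return "pT3"
```

A tumor qualifying for a higher category on non-size criteria (e.g., carina involvement) would override this size-only result.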
pT0 (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that there is no evidence of primary tumor.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT0 (kidney)
pT1 (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is 7 cm or less in greatest dimension and limited to the kidney.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT1 (kidney)
pT1a (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is 4 cm or less.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT1a (kidney)
pT1b (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is more than 4 cm but not more than 7 cm.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT1b (kidney)
pT2 (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is more than 7 cm in greatest dimension and limited to the kidney.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT2 (kidney)
pT2a (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is more than 7 cm but not more than 10 cm.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT2a (kidney)
pT2b (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor is more than 10 cm and limited to the kidney.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT2b (kidney)
pT3 (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor extends into major veins or perinephric tissues but not into the ipsilateral adrenal gland and not beyond the Gerota fascia.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT3 (kidney)
pT3a (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor grossly extends into the renal vein or its segmental (muscle containing) branches, or the tumor invades perirenal and/or renal sinus (peripelvic) fat but not beyond Gerota fascia.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT3a (kidney)
pT3b (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor grossly extends into vena cava below diaphragm.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT3b (kidney)
pT3c (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor grossly extends into the vena cava above the diaphragm or invades the wall of the vena cava.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT3c (kidney)
pT4 (kidney)
A pathologic primary tumor stage for kidney according to AJCC 7th edition indicating that the tumor invades beyond Gerota fascia (including contiguous extension into the ipsilateral adrenal gland).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_t/
NCI BBRB
pT4 (kidney)
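For tumors limited to the kidney, the pT1a/pT1b/pT2a/pT2b definitions above reduce to size thresholds at 4, 7, and 10 cm. A minimal sketch of that mapping, assuming the tumor is confined to the kidney (the function name is hypothetical):

```python
def kidney_pt_by_size(cm):
    """Map greatest tumor dimension in cm to the AJCC 7th edition
    kidney pT category, assuming the tumor is limited to the kidney.
    Extension into veins, perinephric tissue, or beyond Gerota
    fascia (pT3/pT4) is not modeled here."""
    if cm <= 4.0:
        return "pT1a"
    if cm <= 7.0:
        return "pT1b"
    if cm <= 10.0:
        return "pT2a"
    return "pT2b"
```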
pT0 (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that there is no evidence of primary tumor.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT0 (ovary)
pT1 (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor is limited to the ovaries (one or both).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT1 (ovary)
pT1a (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor is limited to one ovary; capsule intact, no tumor on ovarian surface and no malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT1a (ovary)
pT1b (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor is limited to both ovaries; capsule intact, no tumor on ovarian surface and no malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT1b (ovary)
pT1c (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor is limited to one or both ovaries with capsule ruptured, tumor on ovarian surface, or malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT1c (ovary)
pT2 (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor involves one or both ovaries with pelvic extension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT2 (ovary)
pT2a (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has extension and/or implants on uterus and/or tube(s) and no malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT2a (ovary)
pT2b (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has extension to other pelvic tissues and no malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT2b (ovary)
pT2c (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has pelvic extension with malignant cells in ascites or peritoneal washings.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT2c (ovary)
pT3 (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor involves one or both ovaries with microscopically confirmed peritoneal metastasis outside the pelvis and/or regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT3 (ovary)
pT3a (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has microscopic peritoneal metastasis beyond pelvis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT3a (ovary)
pT3b (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has macroscopic peritoneal metastasis beyond the pelvis, 2 cm or less in greatest dimension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT3b (ovary)
pT3c (ovary)
A pathologic primary tumor stage for ovary according to AJCC 7th edition indicating that the tumor has peritoneal metastasis beyond pelvis, more than 2 cm in greatest dimension and/or regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_t/
NCI BBRB
pT3c (ovary)
pN0 (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating no regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN0 (colon)
pN1 (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 1-3 regional lymph nodes.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN1 (colon)
pN1a (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 1 regional lymph node.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN1a (colon)
pN1b (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 2-3 regional lymph nodes.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN1b (colon)
pN1c (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating tumor deposit(s), i.e., satellites in the subserosa, or in non-peritonealized pericolic or perirectal soft tissue without regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN1c (colon)
pN2 (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 4 or more regional lymph nodes.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN2 (colon)
pN2a (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 4 to 6 regional lymph nodes.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN2a (colon)
pN2b (colon)
A pathologic lymph node stage for colon and rectum according to AJCC 7th edition indicating metastasis in 7 or more regional lymph nodes.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_n/
NCI BBRB
pN2b (colon)
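The colon pN categories above (except pN1c, which is defined by tumor deposits rather than node counts) partition the number of positive regional lymph nodes. As an illustrative sketch under that assumption (hypothetical function name):

```python
def colon_pn_by_count(positive_nodes):
    """Map the count of regional lymph nodes with metastasis to the
    AJCC 7th edition colon pN category. pN1c (tumor deposits without
    nodal metastasis) is not representable by a count alone and is
    excluded from this sketch."""
    if positive_nodes == 0:
        return "pN0"
    if positive_nodes == 1:
        return "pN1a"
    if positive_nodes <= 3:
        return "pN1b"
    if positive_nodes <= 6:
        return "pN2a"
    return "pN2b"
```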
pN0 (lung)
A pathologic lymph node stage for lung according to AJCC 7th edition indicating no regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_n/
NCI BBRB
pN0 (lung)
pN1 (lung)
A pathologic lymph node stage for lung according to AJCC 7th edition indicating metastasis in ipsilateral peribronchial and/or ipsilateral hilar lymph nodes and intrapulmonary nodes, including involvement by direct extension.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_n/
NCI BBRB
pN1 (lung)
pN2 (lung)
A pathologic lymph node stage for lung according to AJCC 7th edition indicating metastasis in ipsilateral mediastinal and/or subcarinal lymph node(s).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_n/
NCI BBRB
pN2 (lung)
pN3 (lung)
A pathologic lymph node stage for lung according to AJCC 7th edition indicating metastasis in contralateral mediastinal, contralateral hilar, ipsilateral or contralateral scalene, or supraclavicular lymph node(s).
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_n/
NCI BBRB
pN3 (lung)
pN0 (kidney)
A pathologic lymph node stage for kidney according to AJCC 7th edition indicating that there is no regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_n/
NCI BBRB
pN0 (kidney)
pN1 (kidney)
A pathologic lymph node stage for kidney according to AJCC 7th edition indicating that there is regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_n/
NCI BBRB
pN1 (kidney)
pN0 (ovary)
A pathologic lymph node stage for ovary according to AJCC 7th edition indicating that there is no regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_n/
NCI BBRB
pN0 (ovary)
pN1 (ovary)
A pathologic lymph node stage for ovary according to AJCC 7th edition indicating that there is regional lymph node metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_n/
NCI BBRB
pN1 (ovary)
cM0 (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that there are no symptoms or signs of distant metastasis.
Chris Stoeckert, Helena Ellis
https://en.wikipedia.org/wiki/Cancer_staging#Pathological_M_Categorization_.28cM_and_pM.29
NCI BBRB
cM0 (colon)
cM1 (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that there is clinical evidence of distant metastases by history, physical examination, imaging studies, or invasive procedures, but without microscopic evidence of the presumed distant metastases.
Chris Stoeckert, Helena Ellis
https://en.wikipedia.org/wiki/Cancer_staging#Pathological_M_Categorization_.28cM_and_pM.29
NCI BBRB
cM1 (colon)
cM1a (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that metastasis is confined to one organ based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_m/
NCI BBRB
cM1a (colon)
cM1b (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that metastasis is in more than one organ or the peritoneum based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_m/
NCI BBRB
cM1b (colon)
pM1 (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that there is microscopic evidence confirming distant metastatic disease.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_m/
NCI BBRB
pM1 (colon)
pM1a (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that metastasis is confined to one organ and histologically confirmed.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_m/
NCI BBRB
pM1a (colon)
pM1b (colon)
A pathologic distant metastases stage for colon according to AJCC 7th edition indicating that metastasis is in more than one organ or the peritoneum and histologically confirmed.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/colon/path_m/
NCI BBRB
pM1b (colon)
cM0 (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating that there is no distant metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
cM0 (lung)
cM1 (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating that there are distant metastases based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
cM1 (lung)
cM1a (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating, based on clinical assessment, separate tumor nodule(s) in a contralateral lobe, tumor with pleural nodules, or malignant pleural or pericardial effusion.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
cM1a (lung)
cM1b (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating that there is distant metastasis based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
cM1b (lung)
pM1 (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating that there is distant metastasis that is histologically confirmed.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
pM1 (lung)
pM1a (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating histologically confirmed separate tumor nodule(s) in a contralateral lobe, tumor with pleural nodules, or malignant pleural or pericardial effusion.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
pM1a (lung)
pM1b (lung)
A pathologic distant metastases stage for lung according to AJCC 7th edition indicating that there is distant metastasis that is histologically confirmed and associated with distant lymph nodes or carcinomatosis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/lung/path_m/
NCI BBRB
pM1b (lung)
cM0 (kidney)
A pathologic distant metastases stage for kidney according to AJCC 7th edition indicating that there is no distant metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_m/
NCI BBRB
cM0 (kidney)
cM1 (kidney)
A pathologic distant metastases stage for kidney according to AJCC 7th edition indicating that there are distant metastases based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_m/
NCI BBRB
cM1 (kidney)
pM1 (kidney)
A pathologic distant metastases stage for kidney according to AJCC 7th edition indicating that there is distant metastasis that is histologically confirmed.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/kidney_parenchyma/path_m/
NCI BBRB
pM1 (kidney)
cM0 (ovary)
A pathologic distant metastases stage for ovary according to AJCC 7th edition indicating that there is no distant metastasis.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_m/
NCI BBRB
cM0 (ovary)
cM1 (ovary)
A pathologic distant metastases stage for ovary according to AJCC 7th edition indicating that there is distant metastasis except peritoneal metastasis based on clinical assessment.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_m/
NCI BBRB
cM1 (ovary)
pM1 (ovary)
A pathologic distant metastases stage for ovary according to AJCC 7th edition indicating that there is distant metastasis except peritoneal metastasis that is histologically confirmed.
Chris Stoeckert, Helena Ellis
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_m/
NCI BBRB
pM1 (ovary)
Occult Carcinoma (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating a small carcinoma, either asymptomatic or giving rise to metastases without symptoms due to the primary carcinoma.
Chris Stoeckert, Helena Ellis
Occult Carcinoma
http://www.medilexicon.com/dictionary/14371
NCI BBRB
Occult Carcinoma (AJCC 7th)
Stage 0 (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating a carcinoma in situ (or melanoma in situ for melanoma of the skin, or germ cell neoplasia in situ for testicular germ cell tumors), which is generally considered to have no metastatic potential.
Chris Stoeckert, Helena Ellis
Stage 0
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage 0 (AJCC 7th)
Stage I (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers that are smaller or less deeply invasive without regional disease or nodes.
Chris Stoeckert, Helena Ellis
Stage I
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage I (AJCC 7th)
Stage IIA (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent but less than in Stage III and with differing characteristics from IIB and IIC.
Chris Stoeckert, Helena Ellis
Stage IIA
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIA (AJCC 7th)
Stage IIB (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent but less than in Stage III and with differing characteristics from IIA and IIC.
Chris Stoeckert, Helena Ellis
Stage IIB
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIB (AJCC 7th)
Stage IIC (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent but less than in Stage III and with differing characteristics from IIA and IIB.
Chris Stoeckert, Helena Ellis
Stage IIC
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIC (AJCC 7th)
Stage IIIA (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent greater than in Stage II and with differing characteristics from IIIB and IIIC.
Chris Stoeckert, Helena Ellis
Stage IIIA
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIIA (AJCC 7th)
Stage IIIB (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent greater than in Stage II and with differing characteristics from IIIA and IIIC.
Chris Stoeckert, Helena Ellis
Stage IIIB
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIIB (AJCC 7th)
Stage IIIC (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers with increasing tumor or nodal extent greater than in Stage II and with differing characteristics from IIIA and IIIB.
Chris Stoeckert, Helena Ellis
Stage IIIC
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IIIC (AJCC 7th)
Stage IVA (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers in patients who present with distant metastases at diagnosis and with differing characteristics from IVB.
Chris Stoeckert, Helena Ellis
Stage IVA
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IVA (AJCC 7th)
Stage IVB (AJCC 7th)
A clinical tumor stage group according to AJCC 7th edition indicating cancers in patients who present with distant metastases at diagnosis and with differing characteristics from IVA.
Chris Stoeckert, Helena Ellis
Stage IVB
https://en.wikipedia.org/wiki/Cancer_staging
NCI BBRB
Stage IVB (AJCC 7th)
Stage IA (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating invasive carcinoma which can be diagnosed only by microscopy, with deepest invasion <5 mm and the largest extension <7 mm.
Chris Stoeckert, Helena Ellis
Stage IA
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IA (FIGO)
Stage IA1 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating measured stromal invasion of <3.0 mm in depth and extension of <7.0 mm.
Chris Stoeckert, Helena Ellis
Stage IA1
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IA1 (FIGO)
Stage IA2 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating measured stromal invasion of >3.0 mm and not >5.0 mm with an extension of not >7.0 mm.
Chris Stoeckert, Helena Ellis
Stage IA2
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IA2 (FIGO)
Stage IB (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating clinically visible lesions limited to the cervix uteri or pre-clinical cancers greater than stage IA.
Chris Stoeckert, Helena Ellis
Stage IB
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IB (FIGO)
Stage IB1 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating clinically visible lesion limited to the cervix uteri or pre-clinical cancers greater than stage IA <4.0 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
Stage IB1
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IB1 (FIGO)
Stage IB2 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating clinically visible lesion limited to the cervix uteri or pre-clinical cancers greater than stage IA >4.0 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
Stage IB2
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IB2 (FIGO)
Stage IIA (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating cervical carcinoma invades beyond the uterus, but not to the pelvic wall or to the lower third of the vagina without parametrial invasion.
Chris Stoeckert, Helena Ellis
Stage IIA
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIA (FIGO)
Stage IIA1 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating cervical carcinoma invades beyond the uterus, but not to the pelvic wall or to the lower third of the vagina without parametrial invasion and clinically visible lesion <4.0 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
Stage IIA1
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIA1 (FIGO)
Stage IIA2 (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating cervical carcinoma invades beyond the uterus, but not to the pelvic wall or to the lower third of the vagina without parametrial invasion and clinically visible lesion >4.0 cm in greatest dimension.
Chris Stoeckert, Helena Ellis
Stage IIA2
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIA2 (FIGO)
Stage IIB (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating cervical carcinoma invades beyond the uterus, but not to the pelvic wall or to the lower third of the vagina with obvious parametrial invasion.
Chris Stoeckert, Helena Ellis
Stage IIB
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIB (FIGO)
Stage IIIA (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating tumour involves lower third of the vagina, with no extension to the pelvic wall.
Chris Stoeckert, Helena Ellis
Stage IIIA
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIIA (FIGO)
Stage IIIB (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating extension to the pelvic wall and/or hydronephrosis or non-functioning kidney.
Chris Stoeckert, Helena Ellis
Stage IIIB
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IIIB (FIGO)
Stage IVA (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating spread of the growth to adjacent organs.
Chris Stoeckert, Helena Ellis
Stage IVA
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IVA (FIGO)
Stage IVB (FIGO)
An International Federation of Gynecology and Obstetrics cervical cancer stage value specification indicating spread to distant organs.
Chris Stoeckert, Helena Ellis
Stage IVB
https://en.wikipedia.org/wiki/Cervical_cancer_staging
NCI BBRB
Stage IVB (FIGO)
Stage 1 (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T1, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 1
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 1 (FIGO)
Stage 1A (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T1a, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 1A
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 1A (FIGO)
Stage 1B (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T1b, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 1B
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 1B (FIGO)
Stage 1C (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T1c, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 1C
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 1C (FIGO)
Stage 2 (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T2, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 2
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 2 (FIGO)
Stage 2A (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T2a, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 2A
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 2A (FIGO)
Stage 2B (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T2b, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 2B
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 2B (FIGO)
Stage 2C (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T2c, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 2C
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 2C (FIGO)
Stage 3 (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of (T3, N0, and M0) or (T3,3a,3b, NX, and M0).
Chris Stoeckert, Helena Ellis
Stage 3
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 3 (FIGO)
Stage 3A (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T3a, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 3A
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 3A (FIGO)
Stage 3B (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of T3b, N0, and M0.
Chris Stoeckert, Helena Ellis
Stage 3B
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 3B (FIGO)
Stage 3C (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of (T3c, N0,X and M0) or (any T, N1 and M0).
Chris Stoeckert, Helena Ellis
Stage 3C
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 3C (FIGO)
Stage 4 (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of any T, any N, and M1.
Chris Stoeckert, Helena Ellis
Stage 4
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage 4 (FIGO)
Stage Unknown (FIGO)
An International Federation of Gynecology and Obstetrics ovarian cancer stage value specification associated with TNM stage values of (T0, N0, and M0) or (T1,1a-1c,2,2a-2c, NX, and M0) or (TX, N0,X, M0).
Chris Stoeckert, Helena Ellis
Stage Unknown
https://staging.seer.cancer.gov/tnm/input/1.0/ovary/path_stage_group_direct/
NCI BBRB
Stage Unknown (FIGO)
3: symptomatic in bed more than 50% of the day but not bed ridden
An Eastern Cooperative Oncology Group score value specification indicating a patient is symptomatic and in bed for more than 50% of the day but is not bed ridden.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
3: symptomatic in bed more than 50% of the day but not bed ridden
2: symptomatic but in bed less than 50% of the day
An Eastern Cooperative Oncology Group score value specification indicating a patient is symptomatic but is in bed for less than 50% of the day.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
2: symptomatic but in bed less than 50% of the day
4: bed ridden
An Eastern Cooperative Oncology Group score value specification indicating a patient is symptomatic and is bed ridden.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
4: bed ridden
0: asymptomatic
An Eastern Cooperative Oncology Group score value specification indicating a patient is asymptomatic.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
0: asymptomatic
1: symptomatic but fully ambulatory
An Eastern Cooperative Oncology Group score value specification indicating a patient is symptomatic but is fully ambulatory.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
1: symptomatic but fully ambulatory
100: asymptomatic
A Karnofsky score value specification indicating that a patient is asymptomatic.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
100: asymptomatic
80-90: symptomatic but fully ambulatory
A Karnofsky score value specification indicating that a patient is symptomatic but fully ambulatory.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
80-90: symptomatic but fully ambulatory
60-70: symptomatic but in bed less than 50% of the day
A Karnofsky score value specification indicating that a patient is symptomatic but in bed less than 50% of the day.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
60-70: symptomatic but in bed less than 50% of the day
40-50: symptomatic, in bed more than 50% of the day, but not bed ridden
A Karnofsky score value specification indicating that a patient is symptomatic, in bed more than 50% of the day, but not bed ridden.
Chris Stoeckert, Helena Ellis
NCI BBRB, OBI
NCI BBRB
40-50: symptomatic, in bed more than 50% of the day, but not bed ridden
Oxford Nanopore Technologies
A UK-based organization that develops and sells nanopore sequencing products.
James A. Overton
https://en.wikipedia.org/wiki/Oxford_Nanopore_Technologies
Oxford Nanopore Technologies
BioGents
An organization that manufactures mosquito traps and other mosquito control products.
John Judkins
WEB:https://eu.biogents.com/about-biogents/
BioGents
The term was added to the ontology on the assumption it was in scope, but it turned out later that it was not.
This obsolescence reason should be used conservatively. Typical valid examples are: un-necessary grouping classes in disease ontologies, a phenotype term added on the assumption it was a disease.
https://github.com/information-artifact-ontology/ontology-metadata/issues/77
https://orcid.org/0000-0001-5208-3432
out of scope
meter
A length unit which is equal to the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second.
m
meter
kilogram
A mass unit which is equal to the mass of the International Prototype Kilogram kept by the BIPM at Sèvres, France.
kg
kilogram
second
A time unit which is equal to the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
s
sec
second
centimeter
A length unit which is equal to one hundredth of a meter or 10^[-2] m.
cm
centimeter
millimeter
A length unit which is equal to one thousandth of a meter or 10^[-3] m.
mm
millimeter
micrometer
A length unit which is equal to one millionth of a meter or 10^[-6] m.
um
micrometer
nanometer
A length unit which is equal to one thousandth of one millionth of a meter or 10^[-9] m.
nm
nanometer
angstrom
A length unit which is equal to 10^[-10] m.
angstrom
gram
A mass unit which is equal to one thousandth of a kilogram or 10^[-3] kg.
g
gram
milligram
A mass unit which is equal to one thousandth of a gram or 10^[-3] g.
mg
milligram
microgram
A mass unit which is equal to one millionth of a gram or 10^[-6] g.
ug
microgram
nanogram
A mass unit which is equal to one thousandth of one millionth of a gram or 10^[-9] g.
ng
nanogram
picogram
A mass unit which is equal to 10^[-12] g.
pg
picogram
degree Celsius
A temperature unit which is equal in magnitude to one kelvin. However, the Celsius and Kelvin scales have their zeros at different points: the Celsius scale has its zero at 273.15 K.
C
degree C
degree Celsius
minute
A time unit which is equal to 60 seconds.
min
minute
hour
A time unit which is equal to 3600 seconds or 60 minutes.
h
hr
hour
day
A time unit which is equal to 24 hours.
day
week
A time unit which is equal to 7 days.
week
month
A time unit which is approximately equal to the length of one cycle of the moon's phases, which in science is taken to be equal to 30 days.
month
year
A time unit which is equal to 12 months, which in science is taken to be equal to 365.25 days.
year
micromole
A substance unit equal to a millionth of a mol or 10^[-6] mol.
umol
micromole
nanomole
A substance unit equal to one thousandth of one millionth of a mole or 10^[-9] mol.
nmol
nanomole
picomole
A substance unit equal to 10^[-12] mol.
pmol
picomole
molar
A unit of concentration which expresses a concentration of 1 mole of solute per liter of solution (mol/L).
M
molar
millimolar
A unit of molarity which is equal to one thousandth of a molar or 10^[-3] M.
mM
millimolar
micromolar
A unit of molarity which is equal to one millionth of a molar or 10^[-6] M.
uM
micromolar
nanomolar
A unit of molarity which is equal to one thousandth of one millionth of a molar or 10^[-9] M.
nM
nanomolar
picomolar
A unit of molarity which is equal to 10^[-12] M.
pM
picomolar
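The molarity entries above all relate a prefixed unit back to the molar by a power of ten. As an illustrative sketch only (the unit symbols mirror the entries above, but the dictionary and function name are ours, not part of the ontology), those relations can be expressed as a small conversion helper:

```python
# Illustrative sketch: SI-prefix factors from the molarity definitions above.
# Each factor is the unit's value expressed in molar (M).
PREFIX_FACTORS = {
    "M": 1.0,     # molar
    "mM": 1e-3,   # millimolar, 10^[-3] M
    "uM": 1e-6,   # micromolar, 10^[-6] M
    "nM": 1e-9,   # nanomolar, 10^[-9] M
    "pM": 1e-12,  # picomolar, 10^[-12] M
}

def convert_molarity(value, from_unit, to_unit):
    """Convert a concentration from one molarity unit to another."""
    return value * PREFIX_FACTORS[from_unit] / PREFIX_FACTORS[to_unit]
```

For example, 1 mM is 1000 uM, and 2500 nM is 2.5 uM.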
cubic centimeter
A volume unit which is equal to one millionth of a cubic meter or 10^[-6] m^[3], or to 1 ml.
cc
cubic centimeter
milliliter
A volume unit which is equal to one thousandth of a liter or 10^[-3] L, or to 1 cubic centimeter.
ml
milliliter
liter
A volume unit which is equal to one thousandth of a cubic meter or 10^[-3] m^[3], or to 1 cubic decimeter.
L
liter
cubic decimeter
A volume unit which is equal to one thousandth of a cubic meter or 10^[-3] m^[3], or to 1 L.
cubic decimeter
microliter
A volume unit which is equal to one millionth of a liter or 10^[-6] L.
ul
microliter
nanoliter
A volume unit which is equal to one thousandth of one millionth of a liter or 10^[-9] L.
nl
nanoliter
picoliter
A volume unit which is equal to 10^[-12] L.
pl
picoliter
hertz
A frequency unit which is equal to 1 complete cycle of a recurring phenomenon in 1 second.
hertz
mass percentage
A dimensionless concentration unit which denotes the mass of a substance in a mixture as a percentage of the mass of the entire mixture.
% w/w
percent weight per weight
mass percentage
mass volume percentage
A dimensionless concentration unit which denotes the mass of the substance in a mixture as a percentage of the volume of the entire mixture.
% w/v
percent weight per volume
mass volume percentage
volume percentage
A dimensionless concentration unit which denotes the volume of the solute in mL per 100 mL of the resulting solution.
% v/v
percent vol per vol
volume percentage
gram per liter
A mass density unit which is equal to the mass of an object in grams divided by the volume in liters.
g per L
g/L
gram per liter
milligram per milliliter
A mass density unit which is equal to the mass of an object in milligrams divided by the volume in milliliters.
mg per ml
mg/ml
milligram per milliliter
degree Fahrenheit
A temperature unit which is equal to 5/9ths of a kelvin. Negative 40 degrees Fahrenheit is equal to negative 40 degrees Celsius.
degree Fahrenheit
pH
A dimensionless concentration notation which denotes the acidity of a solution in terms of activity of hydrogen ions (H+).
pH
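The pH definition above amounts to the negative base-10 logarithm of hydrogen ion activity. A minimal sketch of that formula (the function name is ours, not from the ontology):

```python
import math

def ph_from_hydrogen_activity(activity):
    """pH = -log10(a(H+)), per the definition above."""
    return -math.log10(activity)
```

A neutral solution with a hydrogen ion activity of 1e-7 gives a pH of 7.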
milliliter per liter
A volume per unit volume unit which is equal to one thousandth of a liter of solute in one liter of solution.
ml per L
ml/l
milliliter per liter
gram per deciliter
A mass density unit which is equal to the mass of an object in grams divided by the volume in deciliters.
g/dl
gram per deciliter
colony forming unit per volume
A concentration unit which is a measure of viable bacterial numbers in a given volume.
colony forming unit per volume
microliters per minute
A volumetric flow rate unit which is equal to one microliter volume through a given surface in one minute.
microliters per minute
count per nanomolar second
A rate unit which is equal to one over one nanomolar second.
count per nanomolar second
count per molar second
A rate unit which is equal to one over one molar second.
count per molar second
count per nanomolar
A rate unit which is equal to one over one nanomolar.
count per nanomolar
count per molar
A rate unit which is equal to one over one molar.
count per molar
microgram per liter
A mass density unit which is equal to the mass of an object in micrograms divided by the volume in liters.
ng/ml
ug/L
microgram per liter
https://github.com/allysonlister/swo/issues/59
AL 5.4.23: We believe this was erroneously created some time ago and therefore are obsoleting it with no recommended alternative individual.
OBSOLETE OBI_0000245_1
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Affymetrix
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Agilent Technologies
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Applied Biosystems
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Applied Precision Life Science
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Bahler Lab
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Bio-Rad Laboratories
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Bio Discovery
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Bioconductor
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Biometric Research Branch
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete COSE
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Clontech Laboratories
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Dana-Farber Cancer Institute and Harvard School of Public Health
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete EMBL
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Fujifilm
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete GE Healthcare Life Sciences
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Genedata
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Genicon Sciences
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Grid grinder
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Havard School of Public Health
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Illumina
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Incyte Genomics
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Institute for Genomics and Bioinformatics Graz University of Technology
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete J. Craig Venter Institute
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete MWG Biotech
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Matforsk
true
Marked as obsolete by Allyson Lister.
0.3
MathWorks (SWO_0000291 and SWO_9000002). SWO_9000002 was retained, while SWO_0000291 became an instance of obsolete class. Please use SWO_9000002 instead.
obsolete_MathWorks
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Molecular Devices
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Molecular Neuroscience Core
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Molecular Dynamics
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Motorola Life Sciences
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete NIH
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete PerkinElmer
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Raytest
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Research Genetics
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Rosetta Biosoftware
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete SAS Institute Inc.
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Speed Berkeley Research Group
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete obsolete_Stanford University
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete TIBCO Software Inc
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Technological Advances for Genomics and Clinics
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete UC Irvine
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete University Of California
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Walter and Eliza Hall Institute
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Cambridge Bluegnome
true
https://github.com/allysonlister/swo/issues/59
AL 2023-03-05: Improperly created IRI (with 'efo' in the IRI) replaced by the correct form for SWO IRIs.
obsolete Strand Life Sciences
true
Agilent Technologies
Applied Biosystems
Applied Precision Life Science
Genedata
Genicon Sciences
Grid grinder
Havard School of Public Health
Illumina
Incyte Genomics
Institute for Genomics and Bioinformatics Graz University of Technology
J. Craig Venter Institute
MWG Biotech
Matforsk
Molecular Devices
Motorola Life Sciences
NIH
PerkinElmer
Raytest
Research Genetics
Rosetta Biosoftware
SAS Institute Inc.
Speed Berkeley Research Group
Marked as obsolete by Allyson Lister.
0.3
Stanford University (SWO_0000431 and SWO_9000003). SWO_9000003 was retained, while SWO_0000431 became an instance of obsolete class. Please use SWO_9000003 instead.
obsolete_Stanford University
TIBCO Software Inc
Technological Advances for Genomics and Clinics, France
UC Irvine
University Of California, Berkeley
Walter and Eliza Hall Institute
Cambridge Bluegnome
Strand Life Sciences
Affymetrix
Bahler Lab
Bio-Rad Laboratories, Inc.
Bio Discovery
Bioconductor
Biometric Research Branch
COSE, France
Clontech Laboratories, Inc
Dana-Farber Cancer Institute and Harvard School of Public Health
European Molecular Biology Laboratory
EMBL
Fujifilm
GE Healthcare Life Sciences
Molecular Neuroscience Core, Center for Behavioral Neuroscience, Atlanta
Molecular Dynamics
FCS Data Standard Version 3.0
Microsoft
MathWorks
Stanford University
Omni
PLT Scheme Inc
MicroPro International
JetBrains
Free Software Foundation
Adobe Systems
Andy Brown
An undefined, or ill-defined, group of people. May be used, for example, to denote software development as being done by 'the community', with contributions by many individuals, but no over-arching organisation.
The Community
University of New Hampshire
The National Archives
Eclipse Foundation
Dropbox
Thompson Reuters
OMII-UK
Apple Inc.
The University of Manchester
European Bioinformatics Institute
EPCC
The National Archives, Tessella
Spotify Ltd.
Mozilla Foundation
Altova
The GIMP Development Team
Canonical Ltd
Apache Software Foundation
European Patent Office
National Cancer Institute
Drive5
Conway Institute UCD Dublin
Centre for Genomic Regulation (CRG) of Barcelona
IBM
Allyson Lister
AL 8.10.2019: References https://github.com/allysonlister/swo/issues/19
Uppsala Molekylmekaniska HB
UC Santa Cruz Computational Biology Group
beta
alpha
Microsoft 98 version
Microsoft 2002 version
Microsoft 2003 version
Microsoft 95 version
Microsoft 2007 version
Microsoft 2010 version
version 3
version 4
Matlab R14
Matlab R12
R2011a
Microsoft XP
Excel 14
Windows 5.1
3.5.1
Adobe Acrobat 10.1
1
3.0.1
1.6.9
6.02
Helios Service Release 2
6.3.0
2.0.0
Allyson Lister
BLAST+ version 2.2.26
Allyson Lister
ClustalW version 2.1
Allyson Lister
ClustalX version 2.1
Allyson Lister
Clustal Omega version 1.1
Allyson Lister
MUSCLE version 3.8.31
CRG TCoffee version 9.02.r1228
4.2.2
7.2.0
Windows 6.0
true
MF(X)-directly_regulates->MF(Y)-enabled_by->GP(Z) => MF(X)-has_input->GP(Z) e.g. if 'protein kinase activity' (X) directly_regulates 'protein binding activity' (Y) and this is enabled by GP(Z) then X has_input Z
infer input from direct reg
GP(X)-enables->MF(Y)-has_part->MF(Z) => GP(X) enables MF(Z),
e.g. if GP X enables 'ATPase coupled transporter activity' and 'ATPase coupled transporter activity' has_part 'ATPase activity' then GP(X) enables 'ATPase activity'
enabling an MF enables its parts
true
GP(X)-enables->MF(Y)-part_of->BP(Z) => GP(X) involved_in BP(Z) e.g. if X enables 'protein kinase activity' and Y 'part of' 'signal transduction' then X involved in 'signal transduction'
involved in BP
If a molecular function (X) has a regulatory subfunction, then any gene product which is an input to that subfunction has an activity that directly_regulates X. Note: this is intended for cases where the regulatory subfunction is protein binding, so it could be tightened with an additional clause to specify this.
inferring direct reg edge from input to regulatory subfunction
inferring direct neg reg edge from input to regulatory subfunction
inferring direct positive reg edge from input to regulatory subfunction
effector input is compound function input
Input of effector is input of its parent MF
if effector directly regulates X, its parent MF directly regulates X
if effector directly positively regulates X, its parent MF directly positively regulates X
if effector directly negatively regulates X, its parent MF directly negatively regulates X
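The inference rules above can be sketched as rewrites over subject-predicate-object triples. A minimal illustration of the "enabling an MF enables its parts" rule, assuming a toy in-memory triple set rather than the actual GO-CAM reasoner or its data model:

```python
# Toy triple-set rule application; not the actual GO-CAM reasoner.
def infer_enables_parts(triples):
    """GP(X)-enables->MF(Y)-has_part->MF(Z) => GP(X)-enables->MF(Z)."""
    inferred = set(triples)
    for s, p, o in triples:
        if p != "enables":
            continue
        for s2, p2, o2 in triples:
            # Y has_part Z, so X also enables Z.
            if s2 == o and p2 == "has_part":
                inferred.add((s, "enables", o2))
    return inferred

triples = {
    ("GP_X", "enables", "ATPase coupled transporter activity"),
    ("ATPase coupled transporter activity", "has_part", "ATPase activity"),
}
# After applying the rule, GP_X also enables 'ATPase activity'.
```

The other rules listed above follow the same pattern with different predicate combinations.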