Physics (from Ancient Greek: φυσική (ἐπιστήμη), translit. phusikḗ (epistḗmē), lit. 'knowledge of nature', from φύσις phúsis "nature") is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves.Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Over the last two millennia, physics was a part of natural philosophy along with chemistry, biology, and certain branches of mathematics, but during the scientific revolution in the 17th century, the natural sciences emerged as unique research programs in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics and philosophy.Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. For example, advances in the understanding of electromagnetism or nuclear physics led directly to the development of new products that have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization, and advances in mechanics inspired the development of calculus. An aerometer is an instrument designed to measure the density (among other parameters) of the air and some gases.The word aerometer (or Ärometer, from Ancient Greek ἀήρ -aer "air" and μέτρον -métron "measure, scale") refers to various types of devices for mesuring or handling of gases. The instruments designated with this name can be used to find: the density, the flow, the amount or some other parameter of the air or a determined gas.Another instrument called areometer (from Ancient Greek ἀραιός -araiós "lightness" and μέτρον -métron "measure, scale"), also known as hydrometer, used for measuring liquids density, is often confused with the term aerometer here defined. Computational anatomy (CA) is a discipline within medical imaging focusing on the study of anatomical shape and form at the visible or gross anatomical scale of morphology. The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, including medical imaging, neuroscience, physics, probability, and statistics. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. The central focus of the sub-field of computational anatomy within medical imaging is mapping information across anatomical coordinate systems most often dense information measured within a magnetic resonance image (MRI). The introduction of flows into CA, which are akin to the equations of motion used in fluid dynamics, exploit the notion that dense coordinates in image analysis follow the Lagrangian and Eulerian equations of motion. In models based on Lagrangian and Eulerian flows of diffeomorphisms, the constraint is associated to topological properties, such as open sets being preserved, coordinates not crossing implying uniqueness and existence of the inverse mapping, and connected sets remaining connected. 
The use of diffeomorphic methods grew quickly to dominate the field of mapping methods post Christensen's original paper, with fast and symmetric methods becoming available. Group actions are central to Riemannian geometry and defining orbits (control theory). The orbits of computational anatomy consist of anatomical shapes and medical images; the anatomical shapes are submanifolds of differential geometry consisting of points, curves, surfaces and subvolumes,. This generalized the ideas of the more familiar orbits of linear algebra which are linear vector spaces. Medical images are scalar and tensor images from medical imaging. The group actions are used to define models of human shape which accommodate variation. These orbits are deformable templates as originally formulated more abstractly in pattern theory. Cabbeling is when two separate water parcels mix to form a third which sinks below both parents. The combined water parcel is denser than the original two water parcels.The two parent water parcels may have the same density, but they have different properties; for instance, different salinities and temperatures. Seawater almost always gets more dense if it gets either slightly colder or slightly saltier. But medium-warm, medium-salty water can be denser than both fresher, colder water and saltier, warmer water; in other words, the equation of state for seawater is monotonic, but non-linear. See diagram.Cabbeling may also occur in fresh water, since pure water is densest at about 4 °C (39 °F). A mixture of 1 °C water and 6 °C water, for instance, might have a temperature of 4 °C, making it denser than either parent. Ice is also less dense than water, so although ice floats in warm water, meltwater sinks in warm water.The densification of the new mixed water parcel is a result of a slight contraction upon mixing; a decrease in volume of the combined water parcel. A new water parcel that has the same mass, but is lower in volume, will be denser. Denser water sinks or downwells in the otherwise neutral surface of the water body, where the two initial water parcels originated. A camelback potential is potential energy curve that looks like a normal distribution with a distinct dip where the peak would be, so named because it resembles the humps on a camel's back. The term was applied to a configuration of a superconducting quantum interference device in 2009, and to an arrangement of magnets in 2014.The latter system consists of two parallel diametric cylindrical magnets, that is, magnets that are magnetized perpendicular to their axis, with the north and south poles located on the curved surface as opposed to either end. When a diamagnetic rod (usually graphite) is placed between the magnets, it will remain in place and move back and forth in harmonic motion when disturbed. This arrangement, also known as a "PDL trap" for "parallel dipole line", was the subject of the 2017 International Physics Olympiad.In the magnetic system, the camelback potential effect only occurs when the length of the diamagnetic rod is between two critical lengths. Below the minimum length, the magnet is hypothesized to align with magnetic field lines, hence not maintaining its orientation and touching the magnet. The maximum length is limited by the distance between the peaks of the camelback humps; thus, a rod longer than that will be unstable and fall out of the trap. Both the radius and the length of the rod determine the damping of the system. 
The damping is primarily due to Stokes drag, as damping is non-observable under vacuum.Possible practical uses of the concept include being a platform for custom-designed 1D potentials, a highly sensitive force-distance transducer or a trap for semiconductor nanowires. The Collaborative Computational Projects (CCP) group was responsible for the development of CCPForge, which is a software development tool produced through collaborations by the CCP community. CCPs allow experts in computational research to come together and develop scientific software which can be applied to numerous research fields. It is used as a tool in many research and development areas, and hosts a variety of projects. Every CCP project is the result of years of valuable work by computational researchers.It is advised for projects to have one application, this helps users to search a category and classification system so they can find the right project for their work. Furthermore, the project can be under up to three CCPs provided it is a collaboration. Each classification category will have sub-sections to filter the category further. CCPForge projects, such provide essential information which has been used in publications such as 'Recent developments in R-matrix applications to molecular processes' and 'Ab initio derivation of Hubbard models for cold atoms in optical lattices', in which codes from CCPQ were used.The Joint Information Systems Committee (JISC) and EPSRC both fund the CCPForge project. The Scientific Computing Department (SCD) of the Science and Technology Facilities Council is responsible for the development and maintenance of CCPForge, and this is funded by a long term support grant from EPSRC. The Center of Percussion is the point on an extended massive object attached to a pivot where a perpendicular impact will produce no reactive shock at the pivot. Translational and rotational motions cancel at the pivot when an impulsive blow is struck at the center of percussion. The center of percussion is often discussed in the context of a bat, racquet, door, sword or other extended object held at one end.The same point is called the center of oscillation for the object suspended from the pivot as a pendulum, meaning that a simple pendulum with all its mass concentrated at that point will have the same period of oscillation as the compound pendulum. In sports, the center of percussion of a bat or racquet is related to the so-called "sweet spot", but the latter is also related to vibrational bending of the object. A time crystal or space-time crystal is a structure that repeats periodically in time, as well as in space. Normal three-dimensional crystals have a repeating pattern in space, but remain unchanged with respect to time; time crystals repeat themselves in time as well, leading the crystal to change from moment to moment. A time crystal never reaches thermal equilibrium, as it is a type of non-equilibrium matter — a form of matter proposed in 2012, and first observed in 2017. This state of matter cannot be isolated from its environment – it is an open system in non-equilibrium.The idea of a time crystal was first described by Nobel laureate and MIT professor Frank Wilczek in 2012. Subsequent work developed a more precise definition for time crystals, ultimately leading to a proof that they cannot exist in equilibrium. 
Then in 2016, Norman Yao and colleagues at the University of California at Berkeley proposed a way to create non-equilibrium time crystals, which Christopher Monroe and Mikhail Lukin independently confirmed in their labs. Both experiments were published in Nature in 2017. From a biological standpoint, the goal-directed molecular motions inside living cells are carried out by biopolymers acting like molecular machines (e.g. myosin, RNA/DNA polymerase, ion pumps, etc.). These molecular machines are driven by conformons, that is sequence-specific mechanical strains generated by free energy released in chemical reactions or stress induced destabilisations in supercoiled biopolymer chains. Therefore, conformons can be defined as packets of conformational energy generated from substrate binding or chemical reactions and confined within biopolymers.On the other hand, from a physics standpoint, the conformon is a localization of elastic and electronic energy which may propagate in space with or without dissipation. The mechanism which involves dissipationless propagation is a form of molecular superconductivity. On quantum mechanical level both elastic/vibrational and electronic energy can be quantised, therefore the conformon carries a fixed portion of energy. This has led to the definition of quantum of conformation (shape). The Elitzur–Vaidman bomb-tester is a quantum mechanics thought experiment that uses interaction-free measurements to verify that a bomb is functional without having to detonate it. It was conceived in 1993 by Avshalom Elitzur and Lev Vaidman. Since their publication, real-world experiments have confirmed that their theoretical method works as predicted.The bomb tester takes advantage of two characteristics of elementary particles, such as photons or electrons: nonlocality and wave-particle duality. By placing the particle in a quantum superposition, the experiment can verify that the bomb works without ever triggering its detonation, although there is a 50% chance that the bomb will explode in the effort. In physics, an energy well describes a 'stable' equilibrium that is not at lowest possible energy.In general, modern physics holds the view that the universe - and systems therein - spontaneously drives toward a state of lower energy, if possible. For example, a bowling ball pitched atop a smooth hump (which has potential energy in the presence of gravity), will tend to roll down to the lowest point it possibly can. Once there, this reduces the total potential energy of the system.On the other hand, if the bowling ball is resting in a valley between two humps - no matter how big the drops outside the humps - it will stay there indefinitely. Even though the system could achieve a lower energy state, it cannot do so without external energy being applied: (locally) it is at its lowest energy state, and only a force from outside the system can 'push' it over one of the humps so a lower state can be achieved.The concept of an energy well is a key part of teaching basic physics, especially quantum mechanics. Here, students often solve the one-dimensional Schroedinger Equation for an electron trapped in a potential well from which it has insufficient energy to escape. The solution to this problem is a series of sinusoidal waves of fractional integral wavelengths determined by the width of the well. 
Epicatalysis is a newly identified class of gas-surface heterogeneous catalysis in which specific gas-surface reactions shift gas phase species concentrations away from those normally associated with gas-phase equilibrium. As a result, epicatalytically created gas phase concentrations (in a sealed, isothermal cavity) can remain in a stationary nonequilibrium state.Epicatalysis is predicted by standard kinetic theory when two criteria are met: 1) gas-surface interactions are appreciable; and 2) the mean free path for gas phase reactions is long compared with the distance between gas-surface collisions, usually taken to be size scale of the confining vessel. When these conditions are met, the catalytic effects of the gas-surface interactions outweigh the gas-phase interactions, resulting in a gas-phase in a non-equilibrium state.A traditional catalyst adheres to three general principles, namely: 1) it speeds up a chemical reaction; 2) it participates in, but is not consumed by, the reaction; and 3) it does not change the chemical equilibrium of the reaction. Epicatalysts overcome the third principle when gas-surface interactions are appreciable and gas-phase collisions are rare. Under these conditions the gas particles that desorb from an epicatalytic surface can retain characteristics of the gas-surface interactions far away from the surface because gas phase collisions are too infrequent to establish normal gas phase equilibrium. Because gas-surface interactions dominate over gas-phase interactions in determining the gas-phase concentrations, the gas phase can be held, continuously, far away from normal gas phase equilibrium.Several well-studied examples of epicatalysis have been hiding in plain sight in scientific literature for nearly a century, including plasmas created by surface ionization, notably in the Q-machine; and hydrogen dissociation on high-temperature transition metals (e.g., W, Re, Mo, Re, Ir, and Ta) Epicatalysis could enable a number of new applications, including chemical streams enriched in high-energy or desirable reactants; lower operating temperatures for chemical reactions; novel forms of alternative energy, and options for green chemistry.Until recently, epicatalysis has been observed only at high temperatures (T > 1500 K) owing to the nature of their gas-surface reactions. (Covalent chemical bond energies and ionization energies for atoms are usually several electron volts.) However, as of 2015, epicatalysis has been observed at room temperature with molecules involving much weaker hydrogen-bonds (e.g., methanol, formic acid, water). Epicatalysis has been invoked in experimental tests of Duncan's paradox. In rheology, the Farris Effect describes the decrease of the viscosity of a suspension upon increasing the dispersity of the solid additive, at constant volume fraction of the solid additive. That is, that a broader particle size distribution yields a lower viscosity than a narrow particle size distribution, for the same concentration of particles. The phenomenon is names after Richard J. Farris, who modeled the effect. The effect is relevant whenever suspensions are flowing, particularly for suspensions with high loading fractions. Examples include hydraulic fracturing fluids, metal injection molding feedstocks, cosmetics, and various geological processes including sedimentation and lava flows. A harmonic is any member of the harmonic series, a divergent infinite series. 
Its name derives from the concept of overtones, or harmonics in musical instruments: the wavelengths of the overtones of a vibrating string or a column of air (as with a tuba) are derived from the string's (or air column's) fundamental wavelength. Every term of the series (i.e., the higher harmonics) after the first is the "harmonic mean" of the neighboring terms. The phrase "harmonic mean" likewise derives from music.The term is employed in various disciplines, including music, physics, acoustics, electronic power transmission, radio technology, and other fields. It is typically applied to repeating signals, such as sinusoidal waves. A harmonic of such a wave is a wave with a frequency that is a positive integer multiple of the frequency of the original wave, known as the fundamental frequency. The original wave is also called the 1st harmonic, the following harmonics are known as higher harmonics. As all harmonics are periodic at the fundamental frequency, the sum of harmonics is also periodic at that frequency. For example, if the fundamental frequency is 50 Hz, a common AC power supply frequency, the frequencies of the first three higher harmonics are 100 Hz (2nd harmonic), 150 Hz (3rd harmonic), 200 Hz (4th harmonic) and any addition of waves with these frequencies is periodic at 50 Hz.In music, harmonics are used on string instruments and wind instruments as a way of producing sound on the instrument, particularly to play higher notes and, with strings, obtain notes that have a unique sound quality or "tone colour". On strings, harmonics that are bowed have a "glassy", pure tone. On stringed instruments, harmonics are played by touching (but not fully pressing down the string) at an exact point on the string while sounding the string (plucking, bowing, etc.); this allows the harmonic to sound, a pitch which is always higher than the fundamental frequency of the string. Homeokinetics is the study of self-organizing, complex systems. Standard physics studies systems at separate levels, such as atomic physics, nuclear physics, biophysics, social physics, and galactic physics. Homeokinetic physics studies the up-down processes that bind these levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide the key relationships. The subject, described as the physics and thermodynamics associated with the up down movement between levels of systems, originated in the late 1970s work of American physicists Harry Soodak and Arthur Iberall. Complex systems are universes, galaxies, social systems, people, or even those that seem as simple as gases. The basic premise is that the entire universe consists of atomistic-like units bound in interactive ensembles to form systems, level by level in a nested hierarchy. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity in studying how they work is reduced by the emergence of common languages in all complex systems. The Hopkinson effect is a feature of ferromagnetic or ferrimagnetic materials, in which an increase in magnetic susceptibility is observed at temperatures between the blocking temperature and the Curie temperature of the material. The Hopkinson effect can be observed as a peak in thermomagnetic curves that immediately precedes the susceptibility drop associated with the Curie temperature. 
It was first observed by John Hopkinson in 1889 in a study on iron.In single domain particles, a large Hopkinson peak results from a transient superparamagnetic particle domain state. In the physical sciences, an interface is the boundary between two spatial regions occupied by different matter, or by matter in different physical states. The interface between matter and air, or matter and vacuum, is called a surface, and studied in surface science. In thermal equilibrium, the regions in contact are called phases, and the interface is called a phase boundary. An example for an interface out of equilibrium is the grain boundary in polycrystalline matter.The importance of the interface depends on the type of system: the bigger the quotient area/volume, the greater the effect the interface will have. Consequently, interfaces are very important in systems with large interface area-to-volume ratios, such as colloids.Interfaces can be flat or curved. For example, oil droplets in a salad dressing are spherical but the interface between water and air in a glass of water is mostly flat.Surface tension is the physical property which rules interface processes involving liquids. For a liquid film on flat surfaces, the liquid-vapor interface keeps flat to minimize interfacial area and system free energy. For a liquid film on rough surfaces, the surface tension tends to keep the meniscus flat, while the disjoining pressure makes the film conformal to the substrate. The equilibrium meniscus shape is a result of the competition between the capillary pressure and disjoining pressure.Interfaces may cause various optical phenomena, such as refraction. Optical lenses serve as an example of a practical application of the interface between glass and air.One topical interface system is the gas-liquid interface between aerosols and other atmospheric molecules. Large deformation diffeomorphic metric mapping (LDDMM) is a specific suite of algorithms used for diffeomorphic mapping and manipulating dense imagery based on diffeomorphic metric mapping within the academic discipline of Computational anatomy, to be distinguished from its precursor based on diffeomorphic mapping. The distinction between the two is that diffeomorphic metric maps satisfy the property that the length associated to their flow away from the identity induces a metric on the group of diffeomorphisms, which in turn induces a metric on the orbit of Shapes and Forms within the field of Computational Anatomy. The study of shapes and forms with the metric of diffeomorphic metric mapping is called Diffeomorphometry.A diffeomorphic mapping system is a system designed to map, manipulate, and transfer information which is stored in many types of spatially distributed medical imagery.Diffeomorphic mapping is the underlying technology for mapping and analyzing information measured in human anatomical coordinate systems which have been measured via Medical imaging. Diffeomorphic mapping is a broad term that actually refers to a number of different algorithms, processes, and methods. It is attached to many operations and has many applications for analysis and visualization. Diffeomorphic mapping can be used to relate various sources of information which are indexed as a function of spatial position as the key index variable. Diffeomorphisms are by their Latin root structure preserving transformations, which are in turn differentiable and therefore smooth, allowing for the calculation of metric based quantities such as arc length and surface areas. 
Spatial location and extents in human anatomical coordinate systems can be recorded via a variety of Medical imaging modalities, generally termed multi-modal medical imagery, providing either scalar and or vector quantities at each spatial location. Examples are scalar T1 or T2 Magnetic resonance imagery, or as 3x3 diffusion tensor matrices Diffusion MRI and Diffusion-weighted imaging, to scalar densities associated to Computed Tomography (CT), or functional imagery such as temporal data of functional magnetic resonance imaging and scalar densities such as Positron emission tomography (PET).Computational anatomy is a subdiscipline within the broader field of Neuroinformatics within Bioinformatics and Medical imaging. The first algorithm for dense image mapping via diffeomorphic metric mapping was Beg's LDDMM for volumes and Joshi's landmark matching for point sets with correspondence, with LDDMM algorithms now available for computing diffeomorphic metric maps between non-corresponding landmarks and landmark matching intrinsic to spherical manifolds, curves, currents and surfaces, tensors, varifolds, and time-series. The term LDDMM was first established as part of the National Institutes of Health supported Biomedical Informatics Research Network.In a more general sense, diffeomorphic mapping is any solution that registers or builds correspondences between dense coordinate systems in Medical imaging by ensuring the solutions are diffeomorphic. There are now many codes organized around diffeomorphic registration including ANTS, DARTEL, DEMONS, StationaryLDDMM , FastLDDMM, as examples of actively used computational codes for constructing correspondences between coordinate systems based on dense images.The distinction between diffeomorphic metric mapping forming the basis for LDDMM and the earliest methods of diffeomorphic mapping is the introduction of a Hamilton principle of least-action in which large deformations are selected of shortest length corresponding to geodesic flows. This important distinction arises from the original formulation of the Riemannian metric corresponding to the right-invariance. The lengths of these geodesics give the metric in the metric space structure of human anatomy. Non-geodeisc formulations of diffeomorphic mapping in general does not correspond to any metric formulation. A physical law or scientific law is a theoretical statement "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present." Physical laws are typically conclusions based on repeated scientific experiments and observations over many years and which have become accepted universally within the scientific community. The production of a summary description of our environment in the form of such laws is a fundamental aim of science. These terms are not used the same way by all authors.The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature. Many-body theory (or many-body physics) is an area of physics which provides the framework for understanding the collective behavior of large numbers of interacting particles, often on the order of Avogadro's number. 
In general terms, many-body theory deals with effects that manifest themselves only in systems containing large numbers of constituents. While the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In some cases emergent phenomena may arise which bear little resemblance to the underlying elementary laws. Normal contact stiffness is a physical quantity related to the generalized force displacement behavior of rough surfaces in contact with a rigid body or a second similar rough surface. As two solild bodies of the same material approach one another, they transition from conditions of non-contact to homogeneous bulk type behaviour. The varying values of stiffness and true contact area that is exhibited at an interface during this transition is dependent on conditions of applied pressure and is of notable importance for the study of systems involving the physical interactions of multiple bodies including granular matter, electrode contacts, and thermal contacts, where the interface-localized structures govern overall system performance. The Northwest Nuclear Consortium is an organization based in Washington state which uses a research grade ion collider to teach a class of high school students nuclear engineering principles based on the Department of Energy curriculum. They won the 1st Place at WSU Imagine Tomorrow in 2012. They also won the 1st place at the Washington State Science Fair, and the 2nd place worldwide at ISEF in 2013. In 2014 they won two 2nd place at the Central Sound Regional Science Fair at Bellevue College and they won 1st place twice in category at the Washington State Science & Engineering Fair at Bremerton. In 2015, they won 14 1st-place trophies at the Washington State Science and Engineering Fair, over $250,000 in scholarships at two different colleges and 3 of the 5 available trips to ISEF, where they won 4th place in the world against 72 countries. Phase stretch transform (PST) is a computational approach to signal and image processing. One of its utilities is for feature detection and classification. PST is related to time stretch dispersive Fourier transform. It transforms the image by emulating propagation through a diffractive medium with engineered 3D dispersive property (refractive index). The operation relies on symmetry of the dispersion profile and can be understood in terms of dispersive eigenfunctions or stretch modes. PST performs similar functionality as phase contrast microscopy but on digital images. PST is also applicable to digital images as well as temporal, time series, data. The pseudo Jahn-Teller effect (PJTE), occasionally also known as second-order JTE, is a direct extension of the Jahn-Teller effect (JTE) where spontaneous symmetry breaking in polyatomic systems (molecules and solids) occurs even in nondegenerate electronic states under the influence of sufficiently low-lying excited states of appropriate symmetry. "The pseudo Jahn-Teller effect is the only source of instability and distortions of high-symmetry configurations of polyatomic systems in nondegenerate states, and it contributes significantly to the instability in degenerate states". 
Rogue waves (also known as freak waves, monster waves, episodic waves, killer waves, extreme waves, and abnormal waves) are large, unexpected and suddenly appearing surface waves that can be extremely dangerous, even to large ships such as ocean liners.Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. A 12-metre (39 ft) wave in the usual "linear" model would have a breaking pressure of 6 metric tons per square metre [t/m2] (8.5 psi). Although modern ships are designed to tolerate a breaking wave of 15 t/m2 (21 psi), a rogue wave can dwarf both of these figures with a breaking pressure of 100 t/m2 (140 psi).In oceanography, rogue waves are more precisely defined as waves whose height is more than twice the significant wave height (Hs or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Therefore, rogue waves are not necessarily the biggest waves found on the water; they are, rather, unusually large waves for a given sea state. Rogue waves seem not to have a single distinct cause, but occur where physical factors such as high winds and strong currents cause waves to merge to create a single exceptionally large wave.Rogue waves can occur in media other than water. They appear to be ubiquitous in nature and have also been reported in liquid helium, in nonlinear optics and in microwave cavities. Recent research has focused on optical rogue waves which facilitate the study of the phenomenon in the laboratory. A 2015 paper studied the wave behavior around a rogue wave, including optical, and the Draupner wave, and concluded that "rogue events do not necessarily appear without a warning, but are often preceded by a short phase of relative order". Gagik Shmavonyan (Armenian: Գագիկ Շմավոնյան, born May 12, 1963) is Full Professor (2017) at Microelectronics and Biomedical Devices Department, National Polytechnic University of Armenia He got his PhD in Physics in 1996 and D.Sc in Engineering in 2009 at the same University.He did postdoc at National Taiwan University, Taiwan (2001-2002).He was a Visiting Scholar/Professor at theUniversity of Hull, UK (2000, 2003),Polytechnic of Milan, Como, Italy (2004-2005),University of Bremen, Germany (2002, 2006),Free University Berlin, Germany (2011),Trinity College Dublin, CRANN, Ireland (2012),University of Santiago de Compostela, Spain (2013-2014) University of Cergy-Pontoise, France (2016, 2017).His current research interests are:Semiconductor nanostructured optoelectronic devices: Solar cells (photoelectrochemical, photovoltaic and thermophotovoltaic cells), semiconductor lasers and semiconductor optical amplifiers, etc.2D atomic materials (graphene, etc.) 
and their hybrid structures,2D devices, smart materials.2D flexible .He has authored/co-authored:more than 200 refereed papers,20 patents (USSR, Armenia, Spain (extended to US and WO)),4 books and a chapter in a textbook for European students.His research awards:Cleantech Oscar Award at UNIDO Cleantech Open Global Ideas Competition (2015, Silicon Valley, USA);The best research publication Award, SEUA, Yerevan, Armenia, December, 2005.Graduation paper awards:- 1st prize, Alexander Popov Diploma of Honor, Soviet Union Students’ Scientific Competition, Baku, Azerbaijan, 1986.- 1st prize for the best graduation paper, Republican Students Competition, Armenia, 1985.- 1st prize, Diploma for Students’ excellent research at 32nd Scientific Competition, SEUA, Yerevan, Armenia, Nov. 23, 1985.He is member ofCOST Action MP1302 Nanospectroscopy, Management Committee member with Observer status,COST Action MP0901 "Designing Novel Materials for Nanodevices - from Theory to Practice (NanoTP)", Management Committee Member,Institute of Physics , Chartered Physicist (UK),American Physical Society (USA) Athens Institute for Education and Research (Greece) St. Petersburg Scientific and Educational Society (Russia)LanguagesArmenian (native),English (fluent),Russian (fluent) andSpanish (poor) Silicon nanowires, also referred to as SiNWs, are a type of nanowire most often formed from a silicon precursor by etching of a solid or through catalyzed growth from a vapor or liquid phase. Initial synthesis is often accompanied by thermal oxidation steps to yield structures of accurately tailored size and morphology.SiNWs have unique properties that are not seen in bulk (three-dimensional) Silicon materials. These properties arise from an unusual quasi one-dimensional electronic structure and are the subject of research across numerous disciplines and applications. The reason that SiNWs are considered as one of the most important one-dimensional materials is they could have a function as building blocks for nanoscale electronics assembled without the need for complex and costly fabrication facilities. SiNWs are frequently studied towards applications including photovoltaics, nanowire batteries, thermoelectrics and non-volatile memory. Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system whose exact state is uncertain.Statistical mechanics is commonly used to explain the thermodynamic behaviour of large systems. This branch of statistical mechanics, which treats and extends classical thermodynamics, is known as statistical thermodynamics or equilibrium statistical mechanics. Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy; however, statistical mechanics shows how these concepts arise from the natural uncertainty about the state of a system when that system is prepared in practice. 
The benefit of using statistical mechanics is that it provides exact methods to connect thermodynamic quantities (such as heat capacity) to microscopic behaviour, whereas, in classical thermodynamics, the only available option would be to just measure and tabulate such quantities for various materials. Statistical mechanics also makes it possible to extend the laws of thermodynamics to cases which are not considered in classical thermodynamics, such as microscopic systems and other mechanical systems with few degrees of freedom.Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions or flows of particles and heat. Unlike with equilibrium, there is no exact formalism that applies to non-equilibrium statistical mechanics in general, and so this branch of statistical mechanics remains an active area of theoretical research. In physics, the total position-spread (TPS) tensor is a quantity originally introduced in the modern theory of electrical conductivity. In the case of molecular systems, this tensor measures the fluctuation of the electrons around their mean positions, which corresponds to the delocalization of the electronic charge within a molecular system. The TPS can discriminate between metals and insulators taking information from the ground state wave function. This quantity can be very useful as an indicator to characterize Intervalence charge transfer processes, the bond nature of molecules (covalent, ionic or weakly bonded), and Metal–insulator transition. Transparent wood composites are novel wood materials which have up to 90% transparency and higher mechanical properties than wood itself, made for the first time in 1992.When these materials are commercially available, a significant benefit is expected due to their inherent biodegradable properties since it is wood. These materials are significantly more biodegradable than glass and plastics. On the other hand, concerns may be relevant due to the use of non-biodegradable plastics for long lasting purpose, such as in building. Upconverting nanoparticles (UCNPs) are nanoscale particles (1–100 nm) that exhibit photon upconversion. In photon upconversion, two or more incident photons of relatively low energy are absorbed and converted into one emitted photon with higher energy. Generally, absorption occurs in the infrared, while emission occurs in the visible or ultraviolet regions of the electromagnetic spectrum. UCNPs are usually composed of lanthanide- or actinide-doped transition metals and are of particular interest for their applications in bio-imaging and bio-sensing at the deep tissue level. They also have potential applications in photovoltaics and security, such as infrared detection of hazardous materials.Before 1959, the anti-Stokes shift was believed to describe all situations in which emitted photons have higher energies than the corresponding incident photons. An anti-Stokes shift occurs when a thermally excited ground state is electronically excited, leading to a shift of only a few kBT, where kB is the Boltzmann constant, and T is temperature. At room temperature, kBT is 25.7 meV. In 1959, Nicolaas Bloembergen proposed an energy diagram for crystals containing ionic impurities (Figure 1). 
Bloembergen described the system as having excited-state emissions with energy differences much greater than kBT, in contrast to the anti-Stokes shift.Advances in laser technology in the 1960s allowed the observation of non-linear optical effects such as upconversion. This led to the experimental discovery of photon upconversion in 1966 by François Auzel. Auzel showed that a photon of infrared light could be upconverted into a photon of visible light in ytterbium–erbium and ytterbium–thulium systems. In a transition-metal lattice doped with rare-earth metals, an excited-state charge transfer exists between two excited ions. Auzel observed that this charge transfer allows an emission of photon with much higher energy than the corresponding absorbed photon. Thus, upconversion can occur through a stable and real excited state, supporting Bloembergen's earlier work. This result catapulted upconversion research in lattices doped with rare-earth metals. One of the first examples of efficient lanthanide doping, the Yb/Er-doped fluoride lattice, was achieved in 1972 by Menyuk et al. The Verwey transition is a low-temperature phase transition in the mineral magnetite near 125 kelvins associated with changes in its magnetic, electrical, and thermal properties. Upon warming through the Verwey transition temperature (TV), the magnetite crystal lattice changes from a monoclinic structure to the cubic inverse spinel structure that persists at room temperature. The phenomenon is named after Evert Verwey, a Dutch chemist who first recognized the connection between the structural transition and the changes in the physical properties of magnetite.The Verwey transition is near in temperature, but distinct from, a magnetic isotropic point in magnetite, at which the first magnetocrystalline anisotropy constant changes sign from positive to negative. The temperature and physical expression of the Verwey transition are highly sensitive to the stress state of magnetite and the stoichiometry. Non-stoichiometry in the form of metal cation substitution or partial oxidation can lower the transition temperature or suppress it entirely. In theoretical physics, the composition of two non-collinear Lorentz boosts results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation. This rotation is called Thomas rotation, Thomas–Wigner rotation or Wigner rotation. The rotation was discovered by Llewellyn Thomas in 1926, and derived by Wigner in 1939. If a sequence of non-collinear boosts returns an object to its initial velocity, then the sequence of Wigner rotations can combine to produce a net rotation called the Thomas precession.There are still ongoing discussions about the correct form of equations for the Thomas rotation in different reference systems with contradicting results. Goldstein:The spatial rotation resulting from the successive application of two non-collinear Lorentz transformations have been declared every bit as paradoxical as the more frequently discussed apparent violations of common sense, such as the twin paradox.Einstein's principle of velocity reciprocity (EPVR) readsWe postulate that the relation between the coordinates of the two systems is linear. Then the inverse transformation is also linear and the complete non-preference of the one or the other system demands that the transformation shall be identical with the original one, except for a change of v to −vWith less careful interpretation, the EPVR is seemingly violated in some models. 
There is, of course, no true paradox present. Zero-point energy (ZPE) or ground state energy is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state due to the Heisenberg uncertainty principle. As well as atoms and molecules, the empty space of the vacuum has these properties. According to Quantum Field Theory the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics, since some systems can detect the existence of this energy. However this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity.Physics currently lacks a full theoretical model for understanding zero-point energy, in particular the discrepancy between theorized and observed vacuum energy is a source of major contention. Physicists Richard Feynman and John Wheeler calculated the zero-point radiation of the vacuum to be an order of magnitude greater than nuclear energy, with a single light bulb containing enough energy to boil all the world's oceans. Yet according to Einstein's theory of general relativity any such energy would gravitate and the experimental evidence from both the expansion of the universe, dark energy and the Casimir effect show any such energy to be exceptionally weak. A popular proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy while the boson field has positive zero-point energy and thus these energies somehow cancel each other out. This idea would be true if supersymmetry were an exact symmetry of nature. However, the LHC at CERN has so far found no evidence to support supersymmetry. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature". Cycling, also called bicycling or biking, is the use of bicycles for transport, recreation, exercise or sport. Persons engaged in cycling are referred to as "cyclists", "bikers", or less commonly, as "bicyclists". Apart from two-wheeled bicycles, "cycling" also includes the riding of unicycles, tricycles, quadracycles, recumbent and similar human-powered vehicles (HPVs).Bicycles were introduced in the 19th century and now number approximately one billion worldwide. They are the principal means of transportation in many parts of the world.Cycling is widely regarded as a very effective and efficient mode of transportation optimal for short to moderate distances.Bicycles provide numerous benefits in comparison with motor vehicles, including the sustained physical exercise involved in cycling, easier parking, increased maneuverability, and access to roads, bike paths and rural trails. 
Cycling also offers a reduced consumption of fossil fuels, less air or noise pollution, and much reduced traffic congestion. These lead to less financial cost to the user as well as to society at large (negligible damage to roads, less road area required). By fitting bicycle racks on the front of buses, transit agencies can significantly increase the areas they can serve.Among the disadvantages of cycling are the requirement of bicycles (excepting tricycles or quadracycles) to be balanced by the rider in order to remain upright, the reduced protection in crashes in comparison to motor vehicles, often longer travel time (except in densely populated areas), vulnerability to weather conditions, difficulty in transporting passengers, and the fact that a basic level of fitness is required for cycling moderate to long distances. Current technological developments suggest that cars, as used today, will be replaced. Established alternatives to car use include public transit (buses, trolleybuses, trains, subways, monorails, tramways), cycling, walking, rollerblading and skateboarding.Bike-share systems have been implemented in over 1000 cities worldwide, and are especially common in many European and Chinese cities of all sizes. Similar programs have been implemented across the United States as well, including large cities like Washington, D.C., and New York City, as well as smaller cities like Buffalo, New York and Fort Collins, Colorado.Personal rapid transit is a scheme that has been discussed, in which small, automated vehicles would run on special elevated tracks spaced within walking distance throughout a city, and could provide direct service to a chosen station without stops. However, despite several concepts existing for decades personal rapid transit has failed to gain significant ground and several prototypes and experimental systems have been dismantled as failures. Another possibility is new forms of personal transport such as the Segway PT, which could serve as an alternative to cars and bicycles if they prove to be socially accepted.All of these alternative modes of transport pollute less than the conventional (petroleum-powered) car and contribute to transport sustainability. They also provide other significant benefits such as reduced traffic-related injuries and fatalities, reduced space requirements, both for parking and driving, reduced resource usage and pollution related to both production and driving, increased social inclusion, increased economic and social equity, and more livable streets and cities. Some alternative modes of transportation, especially cycling, also provide regular, low-impact exercise, tailored to the needs of human bodies. Public transport is also linked to increased exercise, because they are combined in a multi-modal transport chain that includes walking or cycling.According to the MIT Future Car Workshop, the benefits of possible future car technologies not yet in widespread use (such as zero-emissions vehicles) over these alternatives, would be:Increased mobility in rural settings and in some other areas where traffic jams are not severePossibly higher social statusOverall a better provision for privacyProfit for the multinational firms producing cars, and possibly for their employees Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics falls under a branch of physics known as classical mechanics. 
Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration. The study of these motions began in the late 19th century and continues today.Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from and more difficult to study than other wheeled vehicles such as dicycles, tricycles, and quadracycles. As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute in varying degrees to this self-stability, but long-standing hypotheses and claims that any single effect, such as gyroscopic or trail, is solely responsible for the stabilizing force have been discredited.While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, called countersteering. Countersteering skill is usually acquired by motor learning and executed via procedural memory rather than by conscious thought. Unlike other wheeled vehicles, the primary control input on bikes is steering torque, not position.Although longitudinally stable when stationary, bikes often have a high enough center of mass and a short enough wheelbase to lift a wheel off the ground under sufficient acceleration or deceleration. When braking, depending on the location of the combined center of mass of the bike and rider with respect to the point where the front wheel contacts the ground, bikes can either skid the front wheel or flip the bike and rider over the front wheel. A similar situation is possible while accelerating, but with respect to the rear wheel. As with many consumer products, early bicycles were purchased solely for their usefulness or fashionableness and discarded as they wore out or were replaced by newer models. Some items were thrown into storage and survived, but many others went to the scrapyard. Decades later, those with an interest in cycling and history began to seek out older bikes, collecting different varieties. Like other forms of collecting, bike collectors can be completists or specialists, and many have extensive holdings in bike parts or literature, in addition to complete bicycles. Bicycle poverty reduction is the concept that access to bicycles and the transportation infrastructure to support them can dramatically reduce poverty. This has been demonstrated in various pilot projects in South Asia and Africa. Experiments done in Africa (Uganda and Tanzania) and Sri Lanka on hundreds of households have shown that a bicycle can increase the income of a poor family by as much as 35%. Transport, if analyzed for the cost–benefit analysis for rural poverty alleviation, has given one of the best returns in this regard. 
For example, road investments in India were a staggering 3–10 times more effective than almost all other investments and subsidies in rural economy in the decade of the 1990s. What a road does at a macro level to increase transport, the bicycle supports at the micro level. The bicycle, in that sense, can be one of the best means to eradicate poverty in poor nations. A bicycle tree or cycle tree or bike tree is a bicycle parking system that resembles a tree in shape. There are a few types that have been developed.Some are manual, some use mechanical means to move the bike, assisting the bike by raising into a particular spot, they can handle between 5-20 bicycles depending on size. They are made by various companies in Europe and North America. Still others, like the one made by JFE Steel of Japan, are fully automated and computerized and can handle and locate some 9,400 bicycles for example, underneath a major train station or university. A bike bus, also known as a bike train or a cycle train, is a group of people who cycle together on a set route following a set timetable. Cyclists may join or leave the bike bus at various points along the route. Most bike buses are a form of collective bicycle commuting.A bike bus is often seen as a cyclist's version of a walking bus, although walking buses tend to be seen as exclusively for children travelling to school.Bike buses may have social, environmental, or political aims. One of the founders of the Aire Valley Bike Bus said "The Aire Valley Bike Bus was set up ... to encourage people to take up cycling and make the journey to work a more interesting and sociable experience.". The stated aim of the Central Florida Bike Bus is "bringing together cyclists who want to commute by bike using the same roads as every other vehicle" Bike lanes (US) or cycle lanes (UK) are types of bikeways (cycleways) with lanes on the roadway for cyclists only. In the United Kingdom, an on-road cycle-lane can be restricted [to cycles] (marked with a solid white line, entry by motor vehicles is prohibited) or advisory (marked with a broken white line, entry by motor vehicles is permitted). In the United States, a designated bicycle lane (1988 MUTCD) or class II bikeway (Caltrans) is always marked by a solid white stripe on the pavement and is for 'preferential use' by bicyclists. There is also a class III bicycle route, which has roadside signs suggesting a route for cyclists, and urging sharing the road.In France, segregated cycling facilities on the carriageway are called bande cyclable, those beside the carriageway or totally independent ones piste cyclable, all together voie cyclable. In Belgium, traffic laws do not distinguish cycle lanes from cyclepaths. Cycle lanes are marked by two parallel broken white lines, and they are defined as being "not wide enough to allow use by motor vehicles". There is some confusion possible here: both in French (piste cyclable) and in Dutch (fietspad) the term for these lanes can also denote a segregated cycle track, marked by a road sign; the cycle lane is therefore often referred to as a "piste cyclable marquée" (in French) or a "gemarkeerd fietspad" (in Dutch), i.e. a cycle lane/track which is "marked" (i.e. identified by road markings) rather than one which is identified by a road sign. In the Netherlands the cycle lane is normally called "fietsstrook" instead of "fietspad". 
Bike rage refers to acts of verbal or gestural anger or physical aggression between cyclists and other users of bike paths or roadways, including pedestrians, other cyclists, motorcyclists, or drivers. Bike rage can consist of shouting at other road users, making obscene gestures or threats, hitting or punching, or, in rare cases, even more violent acts. The term can refer either to acts committed by cyclists or by drivers. Bike rage is related to other explosive outbursts of anger such as road rage. Bike registries are databases of unique, identifying information about bicycles and their ownership. Most registration programs use the unique serial numbers which are permanently affixed to most bicycles during manufacture. Bicycle registration programs generally aim to reduce the prevalence of bike theft. Bicycle theft is one of the major factors that slow the development of utility cycling, since it discourages people from investing in a bicycle. Bicycle registration may be a public service provided by local, state or national government, or be provided by an independent organization. Some registration programs are designed exclusively for spreading the word after a bike has been stolen, while others focus on registering bikes before they are stolen. A bike rental or bike hire business is a bicycle shop or other business that rents bikes for short periods of time (usually for a few hours) for a fee. Most rentals are provided by bike shops as a sideline to their main businesses of sales and service, but some shops specialize in rentals. As with car rental, bike rental shops primarily serve people who don't have access to a vehicle, typically travellers and particularly tourists. Specialized bike rental shops thus typically operate at beaches, parks, or other locations that tourists frequent. In this case, the fees are set to encourage renting the bikes for a few hours at a time, rarely more than a day. Other bike rental shops rent by the day or week as well as by the hour, and these provide an excellent opportunity for those who would like to avoid shipping their own bikes but would like to do a multi-day bike tour of a particular area. Bikeability is the national programme for cycle training in England, Wales, and Scotland. The programme is purely voluntary: schools may sign up to host classes for children, and adults may also join classes. In England and Wales, the programme is based on the National Standard for Cycle Training, a UK Government standard run by the Department for Transport and approved by RoSPA, Road Safety GB, British Cycling, CTC, Sustrans and Cycling England. Bikeability is also a term for the extent to which an environment is friendly for bicycling. The bunny hop or bunnyhop is a bicycle trick that allows the rider to launch their bike into the air as if jumping off a ramp. The pedals seem to stick to the rider's feet as the bike hops, very much like the way the skateboard seems to stick to the feet of a skater performing an Ollie.
While the bunny hop can be quite challenging to learn, once mastered it opens up a whole new level of riding opportunities for BMX and mountain bike riders alike. The bunny hop is also a useful skill for an urban cyclist or commuter, allowing the avoidance of potholes and other hazards and the quick mounting of curbs. More often, bunny hops are done on BMX bikes, which are smaller and lighter than mountain bikes and so lend themselves to being lifted far more easily. There are two methods of performing a bunnyhop. The first, known simply as a bunnyhop, involves both wheels being lifted at once, and is typically easier to do when the rider is using clipless bicycle pedals. The second, known as a pro hop, involves the rider lifting the front wheel of the bike before the back wheel, and requires precise balance and body movements. If not done properly, a pro hop will very easily lead to a crash or forced dismount due to the physics involved. In cycling, cadence (or pedalling rate) is the number of revolutions of the crank per minute; this is the rate at which a cyclist is pedalling, or turning the pedals. Cadence is proportional to wheel speed in any given gear, but is a distinct measurement, and the relationship changes with gearing, which determines the ratio of crank rpm to wheel rpm. Cyclists typically have a cadence at which they feel most comfortable, and on bicycles with many gears it is possible to maintain a preferred cadence at a wide range of speeds. Recreational and utility cyclists typically cycle at around 60–80 rpm. According to cadence measurements of seven professional cyclists during three-week races, they pedal at about 90 rpm during flat, long (~190 km) group stages and individual time trials of ~50 km. During ~15 km climbs of high mountain passes they pedal at about 70 rpm. Cyclists choose cadence to minimise muscular fatigue rather than metabolic demand, since oxygen consumption is lower at cadences of 60–70 rpm. When cycling at 260 W, pedal force was lowest at 90 rpm, lower than at 60, 75, 105 or 120 rpm; this is primarily due to the increase in crank inertia with increasing cadence. While a fast cadence is also referred to as "spinning", a slow cadence is referred to as "mashing". Any particular cyclist has only a narrow range of preferred cadences, often smaller than the general ranges listed above. This in turn influences the number and range of gears which are appropriate for any particular cycling conditions. Certain cyclocomputers are able to measure cadence and relay the reading to the cyclist via a display, typically mounted on the bicycle's handlebars. Chainline is the angle of a bicycle chain relative to the centerline of the bicycle frame. A bicycle is said to have perfect chainline if the chain is parallel to the centerline of the frame, which means that the rear sprocket is directly behind the front chainring. Chainline also refers to the distance between a sprocket and the centerline of the frame. Bicycles without a straight chainline are slightly less efficient due to frictional losses incurred by running the chain at an angle between the front chainring and rear sprocket. This is the main reason that a single-speed bicycle can be more efficient than a derailleur-geared bicycle. Single-speed bicycles should have the straightest possible chainline. Cold-weather biking is the use of a bicycle during months when roads and paths are covered with ice, slush and snow. Cold-weather cyclists face a number of challenges.
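Because cadence and road speed are linked through the gear ratio and the wheel circumference, as described above, the relationship is easy to compute. The following sketch is only an illustration; the 50/17 gear and the 2.1 m wheel circumference are arbitrary example values, not figures taken from the text.

```python
# Minimal sketch: the road speed implied by a given cadence and gearing.
# The 50x17 gear and 2.1 m wheel circumference are illustrative values only.

def speed_kmh(cadence_rpm: float, chainring_teeth: int, sprocket_teeth: int,
              wheel_circumference_m: float) -> float:
    """Speed = cadence * (chainring / sprocket) * wheel circumference."""
    wheel_rpm = cadence_rpm * chainring_teeth / sprocket_teeth
    metres_per_minute = wheel_rpm * wheel_circumference_m
    return metres_per_minute * 60 / 1000  # convert m/min to km/h

if __name__ == "__main__":
    # A cyclist spinning at 90 rpm in a 50x17 gear with a ~2.1 m wheel:
    print(round(speed_kmh(90, 50, 17, 2.1), 1), "km/h")  # ~33.4 km/h
```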
Urban commuters on city streets may have to deal with "[s]now, slush, salt, and sand", which can cause rust and damage to metal bike components. Slush and ice can jam derailleurs. Some cyclists may bike differently in winter, by "...slow[ing] down on turns and brak[ing] gradually" in icy conditions. Gaining traction on snow and ice-covered roads can be difficult. Winter cyclists may use bikes with front and rear fenders, metal studded winter tires and flashing LED lights. Winter cyclists may wear layers of warm clothes and "ea[r], face, and han[d]" coverings may be used. Specialized winter bikes called fatbikes, which have wide, oversized tires that are typically inflated with low pressure, are used in snow trail riding and winter bike competitions. Countersteering is used by single-track vehicle operators, such as cyclists and motorcyclists, to initiate a turn toward a given direction by momentarily steering counter to the desired direction ("steer left to turn right"). To negotiate a turn successfully, the combined center of mass of the rider and the single-track vehicle must first be leaned in the direction of the turn, and steering briefly in the opposite direction causes that lean. The rider's action of countersteering is sometimes referred to as "giving a steering command".The scientific literature does not provide a clear and comprehensive definition of countersteering. In fact, "a proper distinction between steer torque and steer angle ... is not always made." Bicycle culture can refer to a mainstream culture that supports the use of bicycles or to a subculture. Although "bike culture" is often used to refer to various forms of associated fashion, it is erroneous to call fashion in and of itself a culture.Cycling culture refers to cities and countries which support a large percentage of utility cycling. Examples include Denmark, the Netherlands, Germany, Belgium (Flanders in particular), Sweden, China, Bangladesh and Japan. There are also towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. North American cities with strong bicycle cultures include Madison, Portland, San Francisco, Boston, Toronto, Montreal, Lincoln, Peoria, and the Twin Cities.A city with a strong bicycle culture usually has a well-developed cycling infrastructure, including segregated bike lanes and extensive facilities catering to urban bicycles, such as bike racks. Cycle sport is competitive physical activity using bicycles. There are several categories of bicycle racing including road bicycle racing, time trialling, cyclo-cross, mountain bike racing, track cycling, BMX, and cycle speedway. Non-racing cycling sports include artistic cycling, cycle polo, freestyle BMX and mountain bike trials. The Union Cycliste Internationale (UCI) is the world governing body for cycling and international competitive cycling events. The International Human Powered Vehicle Association is the governing body for human-powered vehicles that imposes far fewer restrictions on their design than does the UCI. The UltraMarathon Cycling Association is the governing body for many ultra-distance cycling races.Bicycle racing is recognised as an Olympic sport. Bicycle races are popular all over the world, especially in Europe. The countries most devoted to bicycle racing include Belgium, Denmark, France, Germany, Italy, the Netherlands, Spain and Switzerland. 
Other countries with international standing include Australia, Luxembourg, the United Kingdom and the United States. A cycle track, separated bike lane or protected bike lane (sometimes historically referred to as a sidepath), is an exclusive bikeway that has elements of a separated path and on-road bike lane. A cycle track is located within or next to the roadway, but is made distinct from both the sidewalk and general purpose roadway by vertical barriers or elevation differences.In urban planning, cycle tracks are designed to encourage bicycling in an effort to relieve automobile congestion and reduce pollution, reduce bicycling fatalities and injuries by eliminating the need for cars and bicycles to jockey for the same road space, and to reduce overall confusion and tension for all users of the road. In the United States, an academic analysis of eight cycle tracks found that they had increased bike traffic on the street by 75 percent within one year of installation. Rider surveys indicated that 10 percent of riders after installation would have chosen a different mode for that trip without the cycle track, and 25 percent said they were biking more in general since the installation of the cycle track.Cycle tracks may be one-way or two-way, and may be at road level, at sidewalk level, or at an intermediate level. They all have in common some separation from motor traffic with bollards, car parking, barriers or boulevards. Barriers may include curbs, concrete berms, posts, planting/median strips, walls, trenches, or fences. They are often accompanied by a curb extension or other features at intersections to simplify crossing.In the UK, cycle track is a roadway constructed specifically for use by cyclists, but not by any other vehicles. In Ireland cycle track also covers cycle lanes marked on the carriageway but only if accompanied by a specific sign. In the UK, a cycle track may be alongside a roadway (or carriageway) for all vehicles or it may be on its own alignment. The term does not include cycle lanes or other facilities within an all-vehicle carriageway. Cycling advocacy consists of activities that call for, promote or enable increased adoption and support for cycling and improved safety and convenience for cyclists, usually within urbanized areas or semi-urban regions. Issues of concern typically include policy, administrative and legal changes (the consideration of cycling in all governance); advocating and establishing better cycling infrastructure (including road and junction design and the creation, maintenance of bike lanes and separate bike paths, and bike parking); public education regarding the health, transportational and environmental benefits of cycling for both individuals and communities, cycling and motoring skills; and increasing public and political support for bicycling.There are many organisations worldwide whose primary mission is to advocate these goals. Most are non-profit organisations supported by donations, membership dues, and volunteers. Cycle competitions are carried out on an age basis until the age of 23. Espoirs are under 23 years old but over 18 (ie aged 19 – 22), juveniles are between 16 and 18.Juniors and Youths are defined as schoolchildren (under 16's). 
In British Cycling terms they are:
Youth E – U8
Youth D – U10
Youth C – U12
Youth B – U14
Youth A – U16
The British Cycling year runs from 1 January, so, as an example, if you are 14 now but have a birthday later in the year then you are classed as 15 years old and must therefore ride in the U16 race. The British Schools Cycling Association (BSCA), formerly the English Schools Cycling Association (ESCA) until it was joined by the Welsh Cycling Association, uses the school year to determine a rider's age. The school year (and the BSCA's year) runs from 1 September, but the age categories clash. The BSCA uses odd-numbered age categories:
U7 – school year 2 and below
U9 – school years 3 and 4
U11 – school years 5 and 6
U13 – school years 7 and 8
U15 – school years 9 and 10
O15 – school years 11 and above, up to age 19
At age 19 the rider becomes an espoir. Above the age of 22 the rider becomes a senior, and above the senior age bracket are the veteran age brackets, which are discipline specific. Cycling Time Trials describes a veteran as age 40 and above; veterans are then split into age brackets each spanning five years.
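The British Cycling rule quoted above (the racing year runs from 1 January, so a rider's racing age is the age they will be on 31 December) can be expressed as a small calculation. The sketch below is only an illustration of that rule, using the Youth E to Youth A category boundaries listed above; the example birth year is invented.

```python
# Illustrative sketch of the British Cycling youth-category rule described above:
# racing age is the age a rider will be on 31 December of the current year.

def racing_age(birth_year: int, current_year: int) -> int:
    return current_year - birth_year

def youth_category(age: int) -> str:
    # Category boundaries as listed above (Youth E to Youth A).
    if age < 8:
        return "Youth E (U8)"
    if age < 10:
        return "Youth D (U10)"
    if age < 12:
        return "Youth C (U12)"
    if age < 14:
        return "Youth B (U14)"
    if age < 16:
        return "Youth A (U16)"
    return "Junior or older"

if __name__ == "__main__":
    # A rider who is 14 now but turns 15 later in the year races as a 15-year-old:
    print(youth_category(racing_age(birth_year=2010, current_year=2025)))  # Youth A (U16)
```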
Cycling infrastructure refers to all infrastructure which may be used by cyclists. This includes the same network of roads and streets used by motorists, except those roads from which cyclists have been banned (e.g., many freeways/motorways), plus additional bikeways that are not available to motor vehicles, such as bike paths, bike lanes, cycle tracks and, where permitted, sidewalks, plus amenities like bike racks for parking and specialized traffic signs and signals. The manner in which the public road network is designed, built and managed can have a significant effect on the utility and safety of cycling. The cycling network may be able to provide users with direct, convenient routes minimizing unnecessary delay and effort in reaching their destinations. Settlements with a dense road network of interconnected streets tend to be viable utility cycling environments. Footdown is a group bicycle game in which the objective is to avoid putting your foot on the ground. Participants cycle around until there is only one person who has not put their foot down on the floor, whether it be the full foot or just a toe. Rules vary: sometimes hands may be used to knock opponents off their bikes, sometimes feet may be used, or, for a more polite game, hands and feet are kept on the bike. The playing area is usually a flat area such as a basketball or tennis court. Once a player has set a foot down on the ground and is eliminated, their bike is used as a border; the eliminated players' bikes form a circular border, making the area smaller and smaller as the final participants battle to the end. It is quite legitimate to steer a competitor into the curb so they have no choice but to put their foot down. Note that this game favors those with a good sense of balance. Footdown may be played on any type of bike. A variant of footdown called 'derby' is played by the SCUL bicycle chopper gang. On November 3, 2007, the Circuit BMX shop, located in Pawtucket, RI, hosted the 1st Footdown World Championships. Twenty-six contestants entered the event, with George Costa taking home the overall win and cash purse to become the 2007 Footdown World Champion. Hand signals are given by cyclists and some motorists to indicate their intentions to other traffic. Under the terms of the Vienna Convention on Traffic, bicycles are considered to be vehicles and cyclists are considered to be drivers. The traffic codes of most countries reflect this. In some countries (such as the Czech Republic, Canada, and the United States), hand signals are designated not only for cyclists, but for every vehicle that does not have signal lights or has damaged signal lights. For example, drivers of older cars and mopeds may be required to make hand signals. Similar to automobile signaling, there are three primary signals: left turn/overtaking, right turn, and stopping/braking. A hill climb is a cycling event, as well as a basic skill of the sport. As an event, a hill climb may be either an individual time trial (which forbids cooperation, drafting, or team tactics) or a regular road race. A hill climb usually represents an event which gains altitude continuously, usually terminating at a summit. Well-known hill climbs include the Mt. Evans Hill Climb and the Mount Washington Auto Road Bicycle Hillclimb. The Cycle to the Sun race is a young race run on a volcano in Maui, Hawaii. Hill climbs occasionally feature in major professional races, such as the Tour de France, but they are usually referred to as mountain time trials, and are not necessarily run from the bottom to the top of a hill, although they usually are (they can simply be a time trial over hilly terrain). In Great Britain there is an end-of-season tradition of cycling clubs promoting hillclimb time trials in October, for small cash prizes. The hills tend to be relatively short, usually taking between three and five minutes to complete, and the races attract many spectators, including locals not otherwise interested in cycling, who come to watch the pain in the faces of the competitors. Hill climbing is one of the key skills required to make cycling more enjoyable, and one of the best ways to learn it is through practice, such as solo intervals or group rides that focus on hill climbs. Being able to tackle hills efficiently can be a race winner for anyone: downhills are decided in seconds, whereas uphills take minutes, and being a good climber makes it possible to drop several riders. ISO 5775 is an international standard for labeling the size of bicycle tires and rims. The system used was originally developed by the European Tyre and Rim Technical Organisation (ETRTO). It is designed to make tire sizing consistent and clear, and replaces overlapping informal systems that distinguished between sizes ambiguously. For example, at least 6 different "26 inch" sizes exist (just by American notation), and "27 inch" wheels have a larger diameter than American "28 inch" (French "700C") wheels.
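As a rough illustration of why the ISO 5775/ETRTO scheme described above is unambiguous, the designation pairs a nominal tire width with the bead seat diameter, both in millimetres (for example 37-622). The sketch below parses such a designation and estimates the outer wheel diameter; the assumption that the tire's section height roughly equals its nominal width is only an approximation, and the example sizes are illustrative.

```python
# Rough illustration of ISO 5775 / ETRTO tire designations ("width-bead seat
# diameter" in millimetres). The outer-diameter estimate assumes the section
# height is roughly equal to the nominal width, which is only an approximation.

def parse_etrto(designation: str) -> tuple[int, int]:
    width, bead_seat_diameter = (int(part) for part in designation.split("-"))
    return width, bead_seat_diameter

def approx_outer_diameter_mm(designation: str) -> int:
    width, bead_seat_diameter = parse_etrto(designation)
    return bead_seat_diameter + 2 * width

if __name__ == "__main__":
    # e.g. a "700C" road size and a common "26 inch" mountain bike size:
    for size in ("37-622", "47-559"):
        print(size, "->", approx_outer_diameter_mm(size), "mm outer diameter (approx.)")
```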
Lane splitting is riding a bicycle or motorcycle between lanes or rows of slow-moving or stopped traffic moving in the same direction. It is sometimes called lane sharing, whitelining, filtering, or stripe-riding. This allows riders to save time by bypassing traffic congestion, and may also be safer than stopping behind stationary vehicles. Filtering or filtering forward describes moving through traffic that is stopped. Lane splitting means riding between two lanes of vehicles, while filtering can also refer to using space on the outside edge of same-direction traffic. A local bike shop or local bicycle shop is a small business specializing in bicycle sales, maintenance and parts. The expression, often abbreviated LBS, distinguishes small bicycle shops from large chains and mail-order or online vendors. In the UK and Ireland, the expression independent bicycle dealers (IBDs) is also used. The local bike shop is a key component of the bicycle industry and, in recognition of the value that local bike shops provide, some manufacturers only sell their bicycles through dealerships. Mamil or MAMIL (an acronym standing for "middle-aged man in lycra") is someone who rides an expensive racing bicycle for leisure, wearing professional-style body-hugging jerseys and shorts. The word was reportedly coined by the British marketing research firm Mintel in 2010. It gained further popularity in the United Kingdom with the success of Bradley Wiggins in the 2012 Tour de France and at the 2012 Summer Olympics, held in London. Victories at the UCI World Championships in recent years have also spurred interest. In Australia the popularity of this sort of cycling has been associated with the Tour Down Under and the 2011 Tour de France winner Cadel Evans. Former Prime Minister Tony Abbott has been described as a "mamil". Buying an expensive road bicycle has been described as a healthier response to a midlife crisis than buying an expensive sports car. Bicycle messengers (also known as bike or cycle couriers) are people who work for courier companies (also known as messenger companies) carrying and delivering items by bicycle. Bicycle messengers are most often found in the central business districts of metropolitan areas. Courier companies use bike messengers because bicycle travel is less subject to unexpected holdups in city traffic jams, and is not deterred by parking limitations, fees or fines in high-density development that can hinder or prevent delivery by motor vehicle, thereby offering a predictable delivery time. mobius Future Racing (NRS Team Code: MBS, also known as mFR) is an Australian amateur road cycling team based in Sydney, Australia. Established in January 2015, the team takes its name from the original and current title sponsor 'mobius Marketing and Design Consultants' (owned by cycling enthusiasts Jane Tribe and Guy Bicknell), and the racing team 'Future Racing' established by team director Tom Petty (not to be confused with the singer-songwriter Tom Petty). mFR competes primarily in the Australian National Road Series (NRS) as well as the UCI Oceania Tour. The team finished 2nd overall in the 2016 National Road Series behind Avanti IsoWhey Sports. Outside of the NRS, the team has raced UCI events in the USA, such as the Tour of the Gila in 2017, and has had riders represent Australia and New Zealand at World Championships on the track and the road. It has a well-established presence in various local racing scenes throughout Australia, as well as consistent success in State and National Open events. Several riders also compete in races in Europe and Asia throughout the year. Robin Morton is an American former cycling team manager and was the first and only female manager in men's professional cycling. She also created the first Union Cycliste Internationale (UCI) registered American professional road racing team, in 1984. Cycling in Europe is a traditionally male sport and includes rules prohibiting women from the race caravans. At managers' meetings prior to races in Europe, the race organization would vote on whether Morton would be allowed to ride in the team car. Morton was elected to the U.S. Bicycling Hall of Fame in 2016. Mountain pass cycling milestones are signposts that provide cyclists with information about their current position with regard to the summit of the mountain pass.
They always provide information for cyclists going uphill. Sometimes the signs are two-sided, thereby also providing information for cyclists going downhill. Mountain pass cycling milestones are essential for cyclists who are not familiar with the climbs. In general, they allow cyclists to schedule breaks and to plan food and liquid intake. They can furthermore serve as motivational landmarks. Local institutions invest in this cycling infrastructure to offer a service to cyclists, thus promoting tourism in their region. Ovarian Psycos is a bicycle brigade established in Boyle Heights, Los Angeles, in 2010, that supports young women of color in leadership and empowerment activities. The group was founded by Xela de la X, a mother, artist, activist and a survivor of sexual abuse. She formed the group as a feminist community sisterhood that feels comfortable taking up space as well as confronting the harassment of women. The women come from working-class communities in North and East L.A., and rides are organized monthly on the full moon. A documentary by Joanna Sokolowski and Kate Trumbull-LaValle premiered in 2016 at SXSW and was screened on March 27, 2017 on the KCET Independent Lens program. A rowbike is an example of a rowing cycle, a hybrid fitness/transport machine that combines a bicycle and a rowing machine. "Rowbike" is a trademark of the Rowbike company. The Rowbike was invented by Scott Olson, the creator of Rollerblade inline skates. "Rowling" is a combination of rowing and rolling and is sometimes used in place of rowing when describing a rowbike. A rowbike differs from a bicycle in that a bicycle is powered by pedals using the rider's legs, whereas a rowbike is powered by the rower's legs, back, core and arms as they engage in a back-and-forth rowing motion. Rowbikes are marketed to people who desire the zero-impact, total-body exercise provided by rowing. Although a rowbike could be classified as a human-powered vehicle, as opposed to a fitness machine, rowbikes are used in the United States almost exclusively for exercise and fitness rather than for transportation. Four-wheel variants also exist, and, like most bicycles, rowbikes can be used with a stand that permits use as a stationary bike or indoor rower. A saddle sore in humans is a skin ailment on the buttocks due to, or exacerbated by, horse riding or cycling on a bicycle saddle. It often develops in three stages: skin abrasion, folliculitis (which looks like small, reddish acne), and finally abscess. Because it most commonly starts with skin abrasion, it is desirable to reduce the factors which lead to skin abrasion. Some of these factors include:
Reducing friction. In equestrian activities, friction is reduced with a proper riding position and properly fitting clothing and equipment. In cycling, friction from bobbing or swinging motion while pedaling is reduced by setting the appropriate saddle height; angle and fore/aft position can also play a role, and different cyclists have different needs and preferences in relation to this.
Selecting an appropriate size and design of horse riding saddle or bicycle saddle.
Wearing proper clothing. In bicycling, this includes cycling shorts with chamois padding. For equestrian activity, long, closely fitted pants such as equestrian breeches or jodhpurs minimize chafing. For western riding, closely fitted jeans with no heavy inner seam, sometimes combined with chaps, are preferred. Padded cycling shorts worn under riding pants help some equestrians, and extra padding, particularly sheepskin, on the seat of the saddle may help in more difficult situations such as long-distance endurance riding.
Using petroleum jelly, chamois cream or lubricating gel to further reduce friction.
If left untreated over an extended period of time, saddle sores may need to be drained by a physician. In animals such as horses and other working animals, saddle sores often form on either side of the withers, which is the area where the front of a saddle rests, and also in the girth area behind the animal's elbow, where they are known as a girth gall. Saddle sores can occur over the loin, and occasionally in other locations. These sores are usually caused by ill-fitting gear, dirty gear, lack of proper padding, or unbalanced loads. Reducing friction is also of great help in preventing equine saddle sores. Where there is swelling but not yet open sores, the incidence of sore backs may be reduced by loosening the girth, but not immediately removing the saddle, after a long ride, thus allowing normal circulation to return slowly. A shared bus lane is a bus lane that allows cyclists to use it. Depending on the width of the lane, the speeds and number of buses, and other local factors, the safety and popularity of this arrangement vary. Research carried out by the Transport Research Laboratory (TRL) describes shared bus and cycle lanes as "generally very popular" with cyclists. Guidance produced for Cycling England endorses bus lanes because they provide cyclists with a "direct and barrier-free route into town centres" while avoiding complications related to shared-use footways. A French survey found that 42% of cyclists were "enthusiasts" for shared bus-bike lanes, versus 33% who had mixed opinions and 27% who opposed them. Many cycling activists view these as more attractive than cycle paths, while others object to being close to bus exhausts, a problem that can be avoided by replacing such buses with electric ones. In the Netherlands mixed bus/cycle lanes are uncommon. According to the Sustainable Safety guidelines, they would violate the principle of homogeneity by putting road users of very different masses and speed behaviour into the same lane, which is generally discouraged. As of 2003, mixed bus/cycle lanes accounted for 118 km of the 260 km of cycling facilities in Paris. The French city of Bordeaux has 40 km of shared bus and cycle lanes. It is reported that a showcase bus priority corridor in the city of Bristol, where road space was re-allocated along a 14 km stretch, also resulted in more space for cyclists and had the effect of increasing cycling. The reverse effect has also been suggested: a review carried out in London reports that cycling levels fell across Kew Bridge following the removal of a bus lane, despite a general increase in cycling in the city. In addition, it is arguably easier, politically speaking, to argue for funding of joint facilities than for the additional expense of both segregated cycling facilities and bus-only lanes. Bus lane proposals often run into opposition from cyclists, because creating space for bus lanes generally results in narrowing the other lanes shared by cars and cyclists. Incidentally, the TRL reports that cyclists and bus drivers tend to have low opinions of one another.
In some cities, such arrangements work successfully where bus companies and cyclists' groups ensure communication and understanding between the two groups of road users. Brownout in software engineering is a technique to increase the robustness of an application to computing capacity shortage. If too many users are simultaneously accessing an application hosted online, the underlying computing infrastructure may become overloaded, rendering the application unresponsive. Users are likely to abandon the application and switch to competing alternatives, hence incurring long-term revenue loss. To better deal with such a situation, the application can be given brownout capabilities: the application will disable certain features – e.g., an online shop will no longer display recommendations of related products – to avoid overload. Although reducing features generally has a negative impact on the short-term revenue of the application owner, long-term revenue loss can be avoided. The technique is inspired by brownouts in power grids, which consist in reducing the power grid's voltage when electricity demand exceeds production. Some consumers, such as incandescent light bulbs, will dim – hence the term – and draw less power, thus helping match demand with production. Similarly, a brownout application helps match its computing capacity requirements to what is available on the target infrastructure. Brownout complements elasticity. The former can help the application withstand short-term capacity shortage, but does so without changing the capacity available to the application. In contrast, elasticity consists in adding (or removing) capacity for the application, preferably in advance, so as to avoid capacity shortage altogether. The two techniques can be combined: for example, brownout is triggered when the number of users increases unexpectedly, until elasticity can be triggered, the latter usually requiring minutes to show an effect. Brownout is relatively non-intrusive for the developer; for example, it can be implemented as advice in aspect-oriented programming. However, surrounding components, such as load balancers, need to be made brownout-aware to distinguish between cases where an application is running normally and cases where the application maintains a low response time by triggering brownout. Coding conventions are a set of guidelines for a specific programming language that recommend programming style, practices, and methods for each aspect of a program written in that language. These conventions usually cover file organization, indentation, comments, declarations, statements, white space, naming conventions, programming practices, programming principles, programming rules of thumb, architectural best practices, etc. They are guidelines for software structural quality. Software programmers are strongly encouraged to follow these guidelines to help improve the readability of their source code and make software maintenance easier. Coding conventions are only applicable to the human maintainers and peer reviewers of a software project. Conventions may be formalized in a documented set of rules that an entire team or company follows, or may be as informal as the habitual coding practices of an individual. Coding conventions are not enforced by compilers. Since its beginnings in the 1960s, writing software has evolved into a profession concerned with how best to maximize the quality of software and how to create it.
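A minimal sketch of the brownout idea described above: the application monitors its own response time and temporarily disables an optional feature (here, product recommendations) when a latency target is exceeded, so mandatory content keeps being served under load. The class names, the threshold and the on/off policy are illustrative assumptions, not taken from any particular framework, which typically adapt a continuous "dimmer" value instead.

```python
import time

# Minimal brownout sketch: optional work is skipped while measured latency
# exceeds a target, so the mandatory part of a response stays fast under load.
# The names, threshold and feature are illustrative assumptions.

TARGET_LATENCY_S = 0.2

class BrownoutController:
    def __init__(self, target_s: float = TARGET_LATENCY_S):
        self.target_s = target_s
        self.recommendations_enabled = True

    def record(self, observed_latency_s: float) -> None:
        # Simple on/off policy; real controllers adjust a "dimmer" gradually.
        self.recommendations_enabled = observed_latency_s <= self.target_s

controller = BrownoutController()

def handle_request(product_id: str) -> dict:
    start = time.monotonic()
    response = {"product": product_id}            # mandatory content
    if controller.recommendations_enabled:
        response["recommended"] = ["p42", "p7"]   # optional content, shed under load
    controller.record(time.monotonic() - start)
    return response

if __name__ == "__main__":
    print(handle_request("p1"))
```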
Quality can refer to how maintainable software is, to its stability, speed, usability, testability, readability, size, cost, security, and number of flaws or "bugs", as well as to less measurable qualities like elegance, conciseness, and customer satisfaction, among many other attributes. How best to create high quality software is a separate and controversial problem covering software design principles, so-called "best practices" for writing code, as well as broader management issues such as optimal team size, process, how best to deliver software on time and as quickly as possible, work-place "culture", hiring practices, and so forth. All this falls under the broad rubric of software engineering. Meta-process modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined problems.Meta-process modeling supports the effort of creating flexible process models. The purpose of process models is to document and communicate processes and to enhance the reuse of processes. Thus, processes can be better taught and executed. Results of using meta-process models are an increased productivity of process engineers and an improved quality of the models they produce. A mixed criticality system is a system containing computer hardware and software that can execute several applications of different criticality, such as safety-critical and non-safety critical, or of different Safety Integrity Level (SIL). Different criticality applications are engineered to different levels of assurance, with high criticality applications being the most costly to design and verify. These kinds of systems are typically embedded in a machine such as an aircraft whose safety must be ensured. Prosa Structured Analysis Tool is a visual systems and software development environment which supports industry standard SA/SD/RT structured analysis and design with real-time extensions modeling method. Prosa supports data flow diagrams, state transition diagrams and entity relationship diagrams using Chen's and Bachmans ER notations. Prosa has integrated data dictionary.Prosa actively guides the designer to create correct and consistent graphic diagrams. Prosa offers interactive checking between diagrams. Concurrent documentation integration ensures real-time link from design to documentation.Prosa automates diagram creation and checking, and produces C++, C#, Java code headers and SQL DDL for implementation. Concurrent documentation ensures accurate documents which are consistent with the software design.Prosa has an established position in analysis and design tool business. Prosa is used in areas like system and software development, telecommunications, automation, car manufacturing, machinery, banking, insurance, defense/military, research, integrated circuit design, etc. In computer science and software engineering, reusability is the use of existing assets in some form within the software product development process. Assets are products and by-products of the software development life cycle and include code, software components, test suites, designs and documentation. Leverage is modifying existing assets as needed to meet specific system requirements. Because reuse implies the creation of a separately maintained version of the assets, it is preferred over leverage.Subroutines or functions are the simplest form of reuse. A chunk of code is regularly organized using modules or namespaces into layers. 
Proponents claim that objects and software components offer a more advanced form of reusability, although it has been tough to objectively measure and define levels or scores of reusability. The ability to reuse relies in an essential way on the ability to build larger things from smaller parts and to identify commonalities among those parts. Reusability is often a required characteristic of platform software. Reusability brings several aspects to software development that do not need to be considered when reusability is not required. Reusability implies some explicit management of build, packaging, distribution, installation, configuration, deployment, maintenance and upgrade issues. If these issues are not considered, software may appear to be reusable from a design point of view but will not be reused in practice. Software reusability more specifically refers to design features of a software element (or collection of software elements) that enhance its suitability for reuse. Many reuse design principles were developed at the WISR workshops. Candidate design features for software reuse include:
Adaptable
Brief: small size
Consistency
Correctness
Extensibility
Fast
Flexible
Generic
Localization of volatile (changeable) design assumptions (David Parnas)
Modularity
Orthogonality
Parameterization
Simple: low complexity
Stability under changing requirements
Consensus has not yet been reached on the relative importance of the entries in this list, nor on the issues which make each one important for a particular class of applications. Round-trip engineering (RTE) is a functionality of software development tools that synchronizes two or more related software artifacts, such as source code, models, configuration files, and even documentation. The need for round-trip engineering arises when the same information is present in multiple artifacts, and therefore an inconsistency may occur if not all artifacts are consistently updated to reflect a given change. For example, some piece of information was added to or changed in only one artifact and, as a result, it became missing in, or inconsistent with, the other artifacts. Round-trip engineering is closely related to traditional software engineering disciplines: forward engineering (creating software from specifications), reverse engineering (creating specifications from existing software), and reengineering (understanding existing software and modifying it). Round-trip engineering is often wrongly defined as simply supporting both forward and reverse engineering. In fact, the key characteristic of round-trip engineering that distinguishes it from forward and reverse engineering is the ability to synchronize existing artifacts that evolved concurrently by incrementally updating each artifact to reflect changes made to the other artifacts. Furthermore, forward engineering can be seen as a special instance of RTE in which only the specification is present, and reverse engineering can be seen as a special instance of RTE in which only the software is present. Many reengineering activities can also be understood as RTE when the software is updated to reflect changes made to the previously reverse-engineered specification. Another characteristic of round-trip engineering is automatic update of the artifacts in response to automatically detected inconsistencies. In that sense, it is different from forward and reverse engineering, which can be both manual (traditionally) and automatic (via automatic generation or analysis of the artifacts).
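As an illustration of the synchronization idea behind round-trip engineering described above, the sketch below compares the fields declared in a toy "model" artifact with those in a toy "code" artifact and incrementally adds whatever is missing on either side. It is only a schematic example under assumed artifact formats, not a description of any real tool, which would also have to detect and reconcile conflicting edits to the same element.

```python
# Schematic round-trip synchronization between two toy artifacts that each
# declare a set of fields. Changes made to either side are propagated to the
# other incrementally, rather than regenerating one artifact from scratch.

def synchronize(model_fields: set[str], code_fields: set[str]) -> tuple[set[str], set[str]]:
    missing_in_code = model_fields - code_fields
    missing_in_model = code_fields - model_fields
    # Incremental update in both directions.
    return model_fields | missing_in_model, code_fields | missing_in_code

if __name__ == "__main__":
    model = {"id", "name", "email"}        # field added in the model
    code = {"id", "name", "created_at"}    # field added independently in the code
    model, code = synchronize(model, code)
    print(model == code == {"id", "name", "email", "created_at"})  # True
```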
The automatic update can be either instantaneous or on-demand. In instantaneous RTE, all related artifacts are immediately updated after each change made to one of them. In on-demand RTE, authors of the artifacts may concurrently evolve the artifacts (even in a distributed setting) and at some point choose to execute matching to identify inconsistencies, propagate some of the changes, and reconcile potential conflicts. Round-trip engineering supports an iterative development process. After you have synchronized your model with revised code, you are still free to choose the best way to work – make further modifications to the code or make changes to your model. You can synchronize in either direction at any time, and you can repeat the cycle as many times as necessary. Runtime error detection is a software verification method that analyzes a software application as it executes and reports defects that are detected during that execution. It can be applied during unit testing, component testing, integration testing, system testing (automated/scripted or manual), or penetration testing. Runtime error detection can identify defects that manifest themselves only at runtime (for example, file overwrites) and can zero in on the root causes of the application crashing, running slowly, or behaving unpredictably. Defects commonly detected by runtime error detection include:
Race conditions
Exceptions
Resource leaks
Memory leaks
Security attack vulnerabilities (e.g., SQL injection)
Null pointers
Uninitialized memory
Buffer overflows
Runtime error detection tools can only detect errors in the executed control flow of the application. Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are mostly impractical for large-scale software engineering problems because of their computational complexity. Researchers and practitioners therefore use metaheuristic search techniques to find near-optimal or "good-enough" solutions. SBSE problems can be divided into two types:
black-box optimization problems, for example, assigning people to tasks (a typical combinatorial optimization problem);
white-box problems, where operations on source code need to be considered.
In software engineering, service virtualization is a method to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. It is used to provide software development and QA/testing teams access to dependent system components that are needed to exercise an application under test (AUT) but are unavailable or difficult to access for development and testing purposes. With the behavior of the dependent components "virtualized", testing and development can proceed without accessing the actual live components. Service virtualization is recognized by vendors, industry analysts, and industry publications as being different from mocking.
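To make the search-based software engineering approach described above concrete, the sketch below uses a simple random-restart hill climber to generate a test input that reaches a hard-to-hit branch, guided by a numeric "branch distance" fitness function. This is a common way SBSE is illustrated for test data generation; the program under test, the constant and the neighbourhood are invented for the example.

```python
import random

# Search-based test data generation: find an input that makes the rare branch
# in `under_test` execute, by minimizing a branch-distance fitness with a
# random-restart hill climber. The function under test is invented.

def under_test(x: int) -> str:
    if x == 31_337:          # hard to hit by purely random testing
        return "rare branch"
    return "common branch"

def fitness(x: int) -> int:
    # Branch distance: how far the input is from satisfying the condition.
    return abs(x - 31_337)

def hill_climb(max_steps: int = 100_000) -> int:
    x = random.randint(-1_000_000, 1_000_000)
    for _ in range(max_steps):
        if fitness(x) == 0:
            break
        neighbours = [x - 100, x - 1, x + 1, x + 100]
        best = min(neighbours, key=fitness)
        if fitness(best) >= fitness(x):
            x = random.randint(-1_000_000, 1_000_000)  # restart if no improvement
        else:
            x = best
    return x

if __name__ == "__main__":
    x = hill_climb()
    print(x, under_test(x))   # typically prints 31337 and "rare branch"
```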
Service-oriented Software Engineering (SOSE) is a software engineering methodology focused on the development of software systems by composition of reusable services (service-orientation), often provided by other service providers. Since it involves composition, it shares many characteristics of component-based software engineering, the composition of software systems from reusable components, but it adds the ability to dynamically locate necessary services at run-time. These services may be provided by others as web services, but the essential element is the dynamic nature of the connection between the service users and the service providers. Social software engineering (SSE) is a branch of software engineering that is concerned with the social aspects of software development and of the developed software. SSE focuses on the socialness of both software engineering and developed software. On the one hand, the consideration of social factors in software engineering activities, processes and CASE tools is deemed useful for improving the quality of both the development process and the produced software. Examples include the role of situational awareness and multi-cultural factors in collaborative software development. On the other hand, the dynamicity of the social contexts in which software could operate (e.g., in a cloud environment) calls for engineering social adaptability as a runtime iterative activity. Examples include approaches which enable software to gather users' quality feedback and use it to adapt autonomously or semi-autonomously. SSE studies and builds socially-oriented tools to support collaboration and knowledge sharing in software engineering. SSE also investigates the adaptability of software to the dynamic social contexts in which it could operate and the involvement of clients and end-users in shaping software adaptation decisions at runtime. Social context includes norms, culture, roles and responsibilities, stakeholders' goals and interdependencies, end-users' perception of the quality and appropriateness of each software behaviour, etc. The participants of the 1st International Workshop on Social Software Engineering and Applications (SoSEA 2008) proposed the following characterization:
Community-centered: software is produced and consumed by and/or for a community rather than focusing on individuals
Collaboration/collectiveness: exploiting the collaborative and collective capacity of human beings
Companionship/relationship: making explicit the various associations among people
Human/social activities: software is designed consciously to support human activities and to address social problems
Social inclusion: software should enable social inclusion, enforcing links and trust in communities
Thus, SSE can be defined as "the application of processes, methods, and tools to enable community-driven creation, management, deployment, and use of software in online environments". One of the main observations in the field of SSE is that the concepts, principles, and technologies made for social software applications are applicable to software development itself, as software engineering is inherently a social activity. SSE is not limited to specific activities of software development. Accordingly, tools have been proposed supporting different parts of SSE, for instance social system design or social requirements engineering. Consequently, vertical market software, such as software development tools, engineering tools, marketing tools or software that helps users in a decision-making process, can profit from social components. Such vertical social software differs strongly in its user base from traditional social software such as Yammer.
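To illustrate the run-time service location that, as noted above, distinguishes service-oriented software engineering from plain component composition, here is a deliberately small sketch in which a consumer resolves a capability through a registry at the moment of use rather than binding to a fixed implementation at build time. The registry, the service name and the provider class are all invented for the illustration.

```python
from typing import Callable, Dict

# Toy illustration of run-time service location: consumers ask a registry for a
# capability by name when they need it, so providers can be published, replaced
# or removed without rebuilding the consumer. All names are invented.

ServiceFactory = Callable[[], object]

class ServiceRegistry:
    def __init__(self) -> None:
        self._providers: Dict[str, ServiceFactory] = {}

    def publish(self, name: str, factory: ServiceFactory) -> None:
        self._providers[name] = factory

    def locate(self, name: str):
        # Resolution happens at run-time; a KeyError models an unavailable provider.
        return self._providers[name]()

class LocalTaxService:
    def rate(self, country: str) -> float:
        return {"SE": 0.25, "DE": 0.19}.get(country, 0.20)

registry = ServiceRegistry()
registry.publish("tax", LocalTaxService)          # another provider could replace this later

def price_with_tax(net: float, country: str) -> float:
    tax_service = registry.locate("tax")          # consumer binds to a provider only now
    return round(net * (1 + tax_service.rate(country)), 2)

if __name__ == "__main__":
    print(price_with_tax(100.0, "SE"))  # 125.0
```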
In a software development team, a software analyst is the person who studies the software application domain and prepares software requirements and specification (Software Requirements Specification) documents. The software analyst is the seam between the software users and the software developers, conveying the demands of software users to the developers. A software analyst is expected to have the following skills:
Working knowledge of software technology
Computer programming experience and expertise
General business knowledge
Problem solving and problem reduction skills
Interpersonal relation skills
Flexibility and adaptability
In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can determine how to replicate it across many hosts. The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator. In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming. Some people consider a life-cycle "model" a more general term for a category of methodologies and a software development "process" a more specific term that refers to a specific process chosen by a specific organization. For example, there are many specific software development processes that fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle. Software diagnosis (also: software diagnostics) refers to concepts, techniques, and tools that allow for obtaining findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, and evolution. It serves as a means to monitor, steer, observe and optimize software development, software maintenance, and software re-engineering in the sense of a business intelligence approach specific to software systems. It is generally based on the automatic extraction, analysis, and visualization of corresponding information sources of the software system, but it can also be carried out manually. A software map represents static, dynamic, and evolutionary information about software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis.
Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance. A software metric is a standard of measure of a degree to which a software system or process possesses some property. Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), often the two terms are used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. A specification language is a formal language in computer science used during systems analysis, requirements analysis and systems design to describe a system at a much higher level than a programming language, which is used to produce the executable code for a system.Specification languages are generally not directly executed. They are meant to describe the what, not the how. Indeed, it is considered as an error if a requirement specification is cluttered with unnecessary implementation detail.A common fundamental assumption of many specification approaches is that programs are modelled as algebraic or model-theoretic structures that include a collection of sets of data values together with functions over those sets. This level of abstraction coincides with the view that the correctness of the input/output behaviour of a program takes precedence over all its other properties.In the property-oriented approach to specification (taken e.g. by CASL), specifications of programs consist mainly of logical axioms, usually in a logical system in which equality has a prominent role, describing the properties that the functions are required to satisfy - often just by their interrelationship. This is in contrast to so-called model-oriented specification in frameworks like VDM and Z, which consist of a simple realization of the required behaviour.Specifications must be subject to a process of refinement (the filling-in of implementation detail) before they can actually be implemented. The result of such a refinement process is an executable algorithm, which is either formulated in a programming language, or in an executable subset of the specification language at hand. For example, Hartmann pipelines, when properly applied, may be considered a dataflow specification which is directly executable. Another example is the Actor model which has no specific application content and must be specialized to be executable.An important use of specification languages is enabling the creation of proofs of program correctness (see theorem prover). The Stevens Award is a software engineering lecture award given by the Reengineering Forum, an industry association. The international Stevens Award was created to recognize outstanding contributions to the literature or practice of methods for software and systems development. The first award was given in 1995. 
The presentations focus on the current state of software methods and their direction for the future.This award lecture is named in memory of Wayne Stevens (1944-1993), a consultant, author, pioneer, and advocate of the practical application of software methods and tools. The Stevens Award and lecture is managed by the Reengineering Forum. The award was founded by International Workshop on Computer Aided Software Engineering (IWCASE), an international workshop association of users and developers of computer-aided software engineering (CASE) technology, which merged into The Reengineering Forum. Wayne Stevens was a charter member of the IWCASE executive board. Structural synthesis of programs (SSP) is a special form of (automatic) program synthesis that is based on propositional calculus. More precisely, it uses intuitionistic logic for describing the structure of a program in such a detail that the program can be automatically composed from pieces like subroutines or even computer commands. It is assumed that these pieces have been implemented correctly, hence no correctness verification of these pieces is needed. SSP is well suited for automatic composition of services for service-oriented architectures and for synthesis of large simulation programs. A System Requirements Specification (abbreviated SyRS when need to be distinct from a Software Requirements Specification SRS) is a structured collection of information that embodies the requirements of a system.A business analyst, sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers. Systems modeling or system modeling is the interdisciplinary study of the use of models to conceptualize and construct systems in business and IT development.A common type of systems modeling is function modeling, with specific techniques such as the Functional Flow Block Diagram and IDEF0. These models can be extended using functional decomposition, and can be linked to requirements models for further systems partition.Contrasting the functional modeling, another type of systems modeling is architectural modeling which uses the systems architecture to conceptually model the structure, behavior, and more views of a system.The Business Process Modeling Notation (BPMN), a graphical representation for specifying business processes in a workflow, can also be considered to be a systems modeling language. A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of a whole system from the perspective of a related set of concerns.Since the early 1990s there have been a number of efforts to prescribe approaches for describing and analyzing system architectures. These recent efforts define a set of views (or viewpoints). They are sometimes referred to as architecture frameworks or enterprise architecture frameworks, but are not usually called "view models".Usually a view is a work product that presents specific architecture data for a given system. 
However, the same term is sometimes used to refer to a view definition, including the particular viewpoint and the corresponding guidance that defines each concrete view. The term view model is related to view definitions. The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behaviour and place some unique demands on their usability, performance, security, and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad-hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been developments towards addressing these considerations. Web engineering focuses on the methodologies, techniques, and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development. Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, information engineering, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications. Ballet (French: [balɛ]) is a type of performance dance that originated during the Italian Renaissance in the 15th century and later developed into a concert dance form in France and Russia. It has since become a widespread, highly technical form of dance with its own vocabulary based on French terminology. It has been globally influential and has defined the foundational techniques used in many other dance genres and cultures. Ballet has been taught in various schools around the world, which have historically incorporated their own cultures to evolve the art. A ballet, as a work, consists of the choreography and music for a ballet production. A well-known example of this is The Nutcracker, a two-act ballet originally choreographed by Marius Petipa and Lev Ivanov with a music score by Pyotr Ilyich Tchaikovsky. Ballets are choreographed and performed by trained ballet dancers. Traditional classical ballets usually are performed with classical music accompaniment and use elaborate costumes and staging, whereas modern ballets, such as the neoclassical works of American choreographer George Balanchine, often are performed in simple costumes (e.g., leotards and tights) and without the use of elaborate sets or scenery. Ballet as a music form progressed from simply a complement to dance, to a concrete compositional form that often had as much value as the dance that went along with it.
The dance form, originating in France during the 17th century, began as a theatrical dance. It was not until the 19th century that ballet gained status as a “classical” form. In ballet, the terms ‘classical’ and ‘romantic’ are chronologically reversed from musical usage. Thus, the 19th century classical period in ballet coincided with the 19th century Romantic era in music. Ballet music composers from the 17th–19th centuries, including the likes of Jean-Baptiste Lully and Pyotr Ilyich Tchaikovsky, were predominantly in France and Russia. Yet with the increased international renown seen in Tchaikovsky’s lifetime, ballet music composition and ballet in general spread across the western world. Cléopâtre is a ballet in one act with choreography by Mikhail Fokine and music by Arensky. It includes music from Taneyev, Rimsky-Korsakov, Glinka and Glazunov as well. Scenery and costumes were created by Léon Bakst. The first production opened at the Théâtre du Châtelet in Paris on June 2, 1909. It starred Anna Pavlova as Ta-hor and Ida Rubinstein as Cleopatra. Mikhail Fokine himself danced Amoun. The favourite slaves of Cleopatra were danced by Thamar Karsavina and Vaslav Nijinsky. Other characters included Servants of the Temple, Egyptian Dancers, Greeks, Satyrs, Jewish Dancers, Syrian Musicians and Slaves. Cyril W. Beaumont writes that Cléopâtre is largely based on a ballet called Une Nuit d'Égypte that was first produced by Fokine at the Mariinsky Theatre in St. Petersburg, Russia in 1908. The veil dance features Glinka's Danse Orientale from his opera Russlan and Ludmilla. Danse Persane from Mussorgsky's Khovanshchina was used as well. This ballet was revived by Diaghilev in 1918. La Fille de marbre is a ballet-pantomime in 2 acts by Arthur Saint-Léon, with music by Cesare Pugni, premiered on 20 October 1847 at the Opéra de Paris. The main roles were held by Fanny Cerrito and Arthur Saint-Léon, who were making their debut at the Opéra de Paris. The ballet master Germain Quériau was also part of the cast. Théophile Gautier, an uncompromising critic, pointed out many improbabilities, which did not prevent the public from giving a triumphant welcome to the couple, who would become famous. Grotesque dance (French: danse grotesque; Italian: ballo grottesco or danza grottesca) is a category of theatrical dance that became more clearly differentiated in the 18th century and was incorporated into ballet, although it had its roots in earlier centuries. As opposed to the danse noble or "noble dance" performed in royal courts, which emphasised beauty of movement and noble themes, grotesque dances were comic or lighthearted and created for buffoons and commedia dell'arte characters. In the 16th and 17th centuries grotesque dances were often presented as an anti-masque, performed between the acts of more serious courtly entertainments. Likewise, the 17th century ballet à entrées (a series of loosely connected tableaux rather than a continuous dramatic narrative) sometimes contained grotesque sequences, most notably those devised by the Duke of Nemours for the court of Louis XIII. Some of the grotesque performers were physically deformed, but the Italian tradition of ballo grottesco, typified by the dancer and choreographer Gennaro Magri, whose career was at its apex in the 1760s, involved a high degree of virtuosity and athleticism. Ballets which contain grotesque dances or consist solely of grotesque dance include Campra's Le jaloux trompé and Ravel's Daphnis et Chloé (Dorcon's dance in Part 1).
Dancers who excelled in the grotesque genre besides Magri included Margrethe Schall and John D'Auban. Metropolitan Ballet was a short-lived British ballet company. Founded in 1947 by Cecelia Blatch and Leon Hepner, the company performed in London and on tour in the provinces and abroad, staging shortened versions of the classics, some of the Diaghilev ballets, and new works by Victor Gsovsky (who was the company’s first ballet master), Andrée Howard, Frank Staff and John Taras. The company’s dancers included David Adams, Poul Gnatt, Sonia Arova, Colette Marchand and the 16-year-old Svetlana Beriosova. Guest artists included Alexandra Danilova, Erik Bruhn, Henry Danton, Frederic Franklin and Leonide Massine. The company disbanded at the end of 1949 after a final televised performance of Coppélia Act 2 and Pleasuredrome on 19 December. Its dancers dispersed. Beriosova joined Sadler’s Wells Theatre Ballet, moving to the Covent Garden branch of the company in 1952, where she stayed until her retirement in 1975. Olympus Festival (Greek: Φεστιβάλ Ολύμπου) is an annual festival of music and theatre and a major cultural event in Greece. It is the largest event of its kind in Northern Greece and takes place annually in the months of July and August. The aim is to provide both the local population and tourists with cultural entertainment and thus to promote contact between different cultures and tourism in the region. The festival is supported by the Greek Ministry of Culture and the Municipality of Dion as well as by other local authorities in Pieria. "Runaway" is a song by American hip hop recording artist Kanye West, released as the second single from his fifth studio album, My Beautiful Dark Twisted Fantasy (2010). It features Pusha T, who is signed to West's label GOOD Music. The production was handled by West, Emile, Jeff Bhasker, and Mike Dean. The composition features repetitive piano riffs, intricate samples and a production style with several similarities to West's album 808s & Heartbreak (2008). Described as deeply personal in nature, the song expresses West's thoughts on his failed relationships and his acceptance of the media's perception of him. Lyrically it explores criticism aimed at West in the past and serves as a "toast to the douchebags." Before the song's premiere at the 2010 MTV Video Music Awards, it generated substantial public interest due to what had happened the year prior at the 2009 MTV Video Music Awards. West's performance was met with positive reviews, and the full song was released online on October 4, 2010. The song received widespread acclaim from music critics and was listed amongst the best songs of the year by several publications, including MTV, Pitchfork Media, Rolling Stone, Complex, and the New York Post. Critics praised the song for its sincere subject matter, its openness and its soulful, clean production. Upon release, the song became one of the best-reviewed singles released by West, with several critics noting that the track solidified West's commercial comeback with the public. The song debuted and peaked on the Billboard Hot 100 at position 12 and is the centerpiece of Runaway, a 35-minute short film featuring the majority of songs from My Beautiful Dark Twisted Fantasy. The song's nearly ten-minute music video features ballet dancers performing elaborate choreography.
The music video received mostly positive reviews from music critics, who praised the scope of the video, the degree of creativity and the production design. Multiple cover artworks were designed by contemporary visual artist George Condo. Along with the MTV Video Music Awards, the song was performed on Saturday Night Live, on West's Watch the Throne Tour and at the Coachella Music Festival with a guest appearance by Pusha T. Additionally, "Runaway" was used in the trailer for 'The Hangover Part III' as well as in a scene in 'The Night Before'. A turn board (also known as a turning board) is a training device commonly used in the fields of ballet, dance, ice skating, and other athletics in order to aid in the development of various dance turns. It is believed that regular use of a turning board may increase confidence and comfort while performing various moves that involve turns. In dance and gymnastics, a turn is a rotation of the body about the vertical axis. It is usually a complete rotation of the body, although quarter (90°) and half (180°) turns are possible for some types of turns. The TurnBoard was developed by Ballet Is Fun, and it has become the world's most popular ballet training product. Most recently, a new turn training tool, the Turning Pointe, has been released by Je Ballet with a dedicated relevé platform. Friedemann Vogel (born 1 August 1979) is a German ballet dancer who performs with Stuttgart Ballet as a Principal Dancer and as a frequent guest artist at major ballet houses around the world, including La Scala in Milan and the Bolshoi Ballet Theatre in Moscow. He has been awarded several prestigious dance prizes, including the Prix de Lausanne (1997), the Prix de Luxembourg (1997), the Eurocity Competition in Italy, the USA International Ballet Competition (1998) and the Erik Bruhn Prize (2002). In September 2015, he was awarded the national title Kammertänzer - the highest honor in Germany that can be bestowed on a dancer. In the following year, 2016, he was awarded the "Prix Maya" for "Outstanding Dancer" alongside Aurélie Dupont and Diana Vishneva. World Ballet Day is an annual celebration of ballet held since 2014 in the first week of October. It is a collaboration between major ballet companies around the world, which stream live video of their behind-the-scenes preparations in their respective timezones. Other companies and schools hold local celebrations. The companies which contribute to the live stream are The Australian Ballet, the Bolshoi Ballet, The Royal Ballet, The National Ballet of Canada and San Francisco Ballet. The dates of World Ballet Day since its inception in 2014 have been 1 October 2014, 1 October 2015, 4 October 2016 and 5 October 2017. Literature, in its broadest sense, is any single body of written works. More restrictively, literature is writing considered to be an art form, or any single writing deemed to have artistic or intellectual value, often due to deploying language in ways that differ from ordinary usage. Its Latin root literatura/litteratura (derived itself from littera: letter or handwriting) was used to refer to all written accounts, though contemporary definitions extend the term to include texts that are spoken or sung (oral literature). The concept has changed meaning over time: nowadays it can broaden to include non-written verbal art forms, and thus it is difficult to agree on its origin, which can be paired with that of language or writing itself.
Developments in print technology have allowed an ever-growing distribution and proliferation of written works, culminating in electronic literature. Literature can be classified according to whether it is fiction or non-fiction, and whether it is poetry or prose. It can be further distinguished according to major forms such as the novel, short story or drama; and works are often categorized according to historical periods or their adherence to certain aesthetic features or expectations (genre). Literary adaptation is the adapting of a literary source (e.g. a novel, short story, poem) to another genre or medium, such as a film, stage play, or video game. It can also involve adapting the same literary work in the same genre or medium, just for different purposes, e.g. to work with a smaller cast, in a smaller venue (or on the road), or for a different demographic group (such as adapting a story for children). Sometimes the editing of these works without the approval of the author can lead to a court case. A literary source also appeals to adapters because it obviously works as a story; it has interesting characters, who say and do interesting things. This is particularly important when adapting to a dramatic work, e.g. film, stage play, teleplay, as dramatic writing is some of the most difficult. To get an original story to function well on all the necessary dimensions — concept, character, story, dialogue, and action — is an extremely rare event performed by a rare talent. Perhaps most importantly, especially for producers of the screen and stage, an adapted work is more bankable; it represents considerably less risk to investors, and poses the possibilities of huge financial gains. This is because it has already attracted a following; it clearly works as a literary piece in appealing to a broad group of people who care; and its title, author, characters, etc. may already be a franchise in and of themselves. Allusion is a figure of speech in which one refers covertly or indirectly to an object or circumstance from an external context. It is left to the audience to make the connection; where the connection is directly and explicitly stated (as opposed to indirectly implied) by the author, it is instead usually termed a reference. In the arts, a literary allusion puts the alluded text in a new context under which it assumes new meanings and denotations. It is not possible to predetermine the nature of all the new meanings and inter-textual patterns that an allusion will generate. Literary allusion is closely related to parody and pastiche, which are also "text-linking" literary devices. In a wider, more informal context, an allusion is a passing or casually short statement indicating broader meaning. It is an incidental mention of something, either directly or by implication, such as "In the stock market, he met his Waterloo." An anecdote is a brief, revealing account of an individual person or an incident. Occasionally humorous, anecdotes differ from jokes because their primary purpose is not simply to provoke laughter but to reveal a truth more general than the brief tale itself, such as to characterize a person by delineating a specific quirk or trait, to communicate an abstract idea about a person, place, or thing through the concrete details of a short narrative. An anecdote is "a story with a point." Anecdotes may be real or fictional; the anecdotal digression is a common feature of literary works, and even oral anecdotes typically involve subtle exaggeration and dramatic shape designed to entertain the listener.
However, an anecdote is always presented as the recounting of a real incident, involving actual persons and usually in an identifiable place. In the words of Jurgen Heine, they exhibit "a special realism" and "a claimed historical dimension."The word anecdote (in Greek: ἀνέκδοτον "unpublished", literally "not given out") comes from Procopius of Caesarea, the biographer of Justinian I, who produced a work entitled Ἀνέκδοτα (Anekdota, variously translated as Unpublished Memoirs or Secret History), which is primarily a collection of short incidents from the private life of the Byzantine court. Gradually, the term "anecdote" came to be applied to any short tale utilized to emphasize or illustrate whatever point the author wished to make. In the context of Estonian, Lithuanian, Bulgarian and Russian humor, an anecdote refers to any short humorous story without the need of factual or biographical origins. An aside is a dramatic device in which a character speaks to the audience. By convention the audience is to realize that the character's speech is unheard by the other characters on stage. It may be addressed to the audience expressly (in character or out) or represent an unspoken thought. An aside is usually a brief comment, rather than a speech, such as a monologue or soliloquy. Unlike a public announcement, it occurs within the context of the play. An aside is, by convention, a true statement of a character's thought; a character may be mistaken in an aside, but may not be dishonest. Clandestine literature, also called "underground literature", refers to a type of editorial and publishing process that involves self-publishing works, often in contradiction with the legal standards of a location. Clandestine literature is often an attempt to circumvent censorship, prosecution, or other suppression. In academic study, such literature may be referred to as heterodox publications (as opposed to officially sanctioned, orthodox publishing).Examples of clandestine literature include the Samizdat literature of Soviet dissidents; the Aljamiado literature of Al-Andalus Spain; and the nushu writing of some upper-class women in Hunan, China, from around the 10th century to the 19th century. Clandestine publications were plentiful during the Enlightenment era in 18th-century France, circulating as pamphlets or manuscripts, usually containing texts that would have been considered highly blasphemous by the Ancien Régime, or even straight out atheist. These clandestine manuscripts particularly flourished in the 1720s, and contained such controversial works as Treatise of the Three Impostors and the reverend Jean Mesliers Atheistic Testament. Both texts were later published in edited versions by Voltaire, but handwritten manuscript copies have been found in private libraries all over Europe. The clandestine literature of 18th century France also consisted of printed works produced in neighbouring Switzerland or the Netherlands and smuggled into France. 
These books were usually termed "philosophical works", but varied greatly in content from pornography, utopian novels, political slander and actual philosophical works by radical enlightenment philosophers like Baron d'Holbach, Julien Offray de La Mettrie and Jean-Jacques Rousseau.The willingness to break the law may be due to ideological reasons, when works are contrary to government positions or pose a threat to the institutions in power, but also for reasons at a formal level, when publications do not comply with legal regulations imposed for the circulation of printed works. Underground literature is a type of clandestine literature that does not necessarily have the evasion of the censorship of the time as its purpose; the goal of its writers may only be to lower publishing costs, often being funded by the authors themselves.Works that are originally published by clandestine means may eventually become established as canonical literature, such as Das Kapital and El Buscón.A legitimate publisher in one jurisdiction may assist writers from elsewhere to circumvent their own laws by enabling them to publish abroad. The Olympia Press in Paris published several 20th-century English-language writers, including Henry Miller, who were facing censorship and possible prosecution in their own country at the time. A classic is a book accepted as being exemplary or noteworthy, for example through an imprimatur such as being listed in a list of great books, or through a reader's personal opinion. Although the term is often associated with the Western canon, it can be applied to works of literature from all traditions, such as the Chinese classics or the Indian Vedas.What makes a book "classic" is a concern that has occurred to various authors ranging from Italo Calvino to Mark Twain and the related questions of "Why Read the Classics?" and "What Is a Classic?" have been essayed by authors from different genres and eras (including Calvino, T. S. Eliot, Charles Augustin Sainte-Beuve). The ability of a classic book to be reinterpreted, to seemingly be renewed in the interests of generations of readers succeeding its creation, is a theme that is seen in the writings of literary critics including Michael Dirda, Ezra Pound, and Sainte-Beuve.The terms "classic book" and "Western canon" are closely related concepts, but they are not necessarily synonymous. A "canon" refers to a list of books considered to be "essential" and is presented in a variety of ways. It can be published as a collection (such as Great Books of the Western World, Modern Library, or Penguin Classics), presented as a list with an academic’s imprimatur (such as Harold Bloom's) or be the official reading list of an institution of higher learning (such as "The Reading List" at St. John's College or Rutgers University. The conte cruel is, as The A to Z of Fantasy Literature by Brian Stableford states, a "short-story genre that takes its name from an 1883 collection by Villiers de l'Isle-Adam, although previous examples had been provided by such writers as Edgar Allan Poe. Some critics use the label to refer only to non-supernatural horror stories, especially those that have nasty climactic twists, but it is applicable to any story whose conclusion exploits the cruel aspects of the 'irony of fate.'" The collection from which the short-story genre of the conte cruel takes its name is Contes cruels (1883, tr. Sardonic Tales, 1927) by Villiers de l'Isle-Adam. 
Also taking its name from this collection is Contes cruels ("Cruel Tales"), a set of about 150 tales and short stories by the 19th-century French writer Octave Mirbeau, collected and edited by Pierre Michel and Jean-François Nivet and published in two volumes in 1990 by Librairie Séguier. Some noted writers in the conte cruel genre are Charles Birkin, Maurice Level, and Roald Dahl, the latter of whom originated Tales of the Unexpected. H. P. Lovecraft observed of Level's fiction in his essay Supernatural Horror in Literature (1927): "This type, however, is less a part of the weird tradition than a class peculiar to itself — the so-called conte cruel, in which the wrenching of the emotions is accomplished through dramatic tantalizations, frustrations, and gruesome physical horrors". Noted science fiction authors of contes cruels include Thomas M. Disch and John Sladek. The conte cruel was the standard narrative form of soft science fiction by the 1980s. Domestic realism normally refers to the genre of 19th-century novels popular with women readers. This body of writing is also known as "sentimental fiction" or "woman's fiction". The genre is mainly reflected in the novel, though short stories and non-fiction works such as Harriet Beecher Stowe's "Our Country Neighbors" and The New Housekeeper's Manual, written by Stowe and her sister-in-law Catharine Beecher, are also works of domestic realism. The style's particular characteristics are:
"1. Plot focuses on a heroine who embodies one of two types of exemplar: the angel and the practical woman (Reynolds), who sometimes exist in the same work. Baym says that this heroine is contrasted with the passive woman (incompetent, cowardly, ignorant; often the heroine's mother is this type) and the "belle," who is deprived of a proper education.
2. The heroine struggles for self-mastery, learning the pain of conquering her own passions (Tompkins, Sensational Designs, 172).
3. The heroine learns to balance society's demands for self-denial with her own desire for autonomy, a struggle often addressed in terms of religion.
4. She suffers at the hands of abusers of power before establishing a network of surrogate kin.
5. The plots "repeatedly identify immersion in feeling as one of the great temptations and dangers for a developing woman. They show that feeling must be controlled. . ." (Baym 25). Frances Cogan notes that the heroines thus undergo a full education within which to realize feminine obligations (The All-American Girl).
6. The tales generally end with marriage, usually one of two possible kinds:
A. Reforming the bad or "wild" male, as in Augusta Evans's St. Elmo (1867)
B. Marrying the solid male who already meets her qualifications. Examples: Maria Cummins, The Lamplighter (1854) and Susan Warner, The Wide, Wide World (1850)
7. The novels may use a "language of tears" that evokes sympathy from the readers.
8. Richard Brodhead (Cultures of Letters) sees class as an important issue, as the ideal family or heroine is poised between a lower-class family exemplifying poverty and domestic disorganization and upper-class characters exemplifying an idle, frivolous existence (94)."
An example of this style of novel is Jane Smiley's A Thousand Acres, in which the main character's confinement is emphasized in such a way. Some early exponents of the genre of domestic realism were Jane Austen and Elizabeth Barrett Browning.
The empirical study of literature is an interdisciplinary field of research which includes psychology, sociology, philosophy, the contextual study of literature, and the history of reading literary texts. The International Society for the Empirical Study of Literature and Media (IGEL) is one learned association which brings together experts in this field. Major journals in the field are Poetics: Journal of Empirical Research on Culture, the Media and the Arts, Poetics Today: International Journal for Theory and Analysis of Literature and Communication, and Scientific Study of Literature. The empirical study of literature attracts scholarship particularly in the areas of reception and audience studies and in cognitive psychology when it is concerned with questions of reading. In these two areas, research and studies based on the framework are steadily growing. Further fields where the framework, in various revised and expanded versions, attracts scholarship are (comparative) cultural studies and pedagogy. One of several dictionary definitions of the field is as follows: “Movement within the study of literature concerned with the study of literature as a social system of [inter]actions. The main question is what happens to literature: it is written, published, distributed, read, censored, imitated, etc. The empirical study of literature originated as a reaction to, and an attempt at solving, the basic problem of hermeneutics; that is, how the validation of literary interpretation can be demonstrated. From reception theory it had already become clear that interpretations are not only tied to the text, but also, and even to a great extent, to the reader — both in terms of the individual and of social conventions. This led to the theory of radical (cognitive) constructivism, based on the thesis that the subject largely construes its empirical world itself. The logical consequence of all this, to be seen in the work of Siegfried J. Schmidt, is the separation of interpretation and the strictly scientific study of literature based on radical constructivism. The literary system of actions is observed from the outside — not experienced — and roughly characterized as depending on two conventions (hypotheses) that are tested continually. These conventions are the aesthetic convention (as opposed to the convention of facts in the daily language of reference) and the polyvalence convention (as opposed to the monovalency in the daily empirical world). Thus, the object of study of the empirical study of literature is not only the text in itself, but the roles of action within the literary system, namely, production, distribution, reception, and the processing of texts. The methods used are primarily taken from the social sciences, reception theory, cognitive science, psychology, etc. In general the steps to be taken in empirical research are the formation of a hypothesis, putting it into practice, testing, and evaluation. More concretely, for the study of reader response a wide array of techniques are used, ranging from protocol techniques and thinking-aloud protocols to pre-structured techniques, such as the semantic seven-point scale (C. Osgood) and the classification technique (card sorting), and forms of content analysis, discourse analysis, association techniques, etc.
Some objections often raised to the empirical study of literature are the triviality of many of its research results such as confirmation of what was already known or suspected or its reductionism (artificiality of the framework and set-up, and limitation to reader response instead of the study of the text). It is clear, however, that the empirical study of literature by its specific approach of the object and its focus on methodology is an outstanding way to explore the socio-cultural aspects of the literary system. It makes an irreplaceable contribution to the development of a more rational, scientific, and socially relevant study of literature.” An epilogue or epilog (from Greek ἐπίλογος epílogos, "conclusion" from ἐπί- "in addition" and λέγειν légein, "to say") is a piece of writing at the end of a work of literature, usually used to bring closure to the work. It is presented from the perspective of within the story. When the author steps in and speaks indirectly to the reader, that is more properly considered an afterword. The opposite is a prologue—a piece of writing at the beginning of a work of literature or drama, usually used to open the story and capture interest. Some genres, for example television programs and video games, call the epilog an "outro" patterned on the use of "intro" for "introduction". Feminist bookstores are retail bookstores that sell material relating to women's issues, gender, and sexuality. These stores served as some of the earliest open spaces for feminist community building and organizing.Prior to the spread of feminist bookstores, bookselling was a trade dominated by white men in the United States. There was a lack of awareness and interest within this bookstore leadership to meet the demands for woman-centered literature being raised by feminists at the time. Though some bookstores featured small sections of women's literature or feminist books, these were limited and did not provide the range and depth representative of this category, treating topics not centered around men as an extra section of bookshops rather than an integral part. Fictional portrayals of psychopaths, or sociopaths, are some of the most notorious in film and literature but may only vaguely or partly relate to the concept of psychopathy, which is itself used with varying definitions by mental health professionals, criminologists and others. The character may be identified as a diagnosed/assessed psychopath or sociopath within the fictional work itself, or by its creator when discussing their intentions with the work, which might be distinguished from opinions of audiences or critics based only on a character appearing to show traits or behaviors associated with an undefined popular stereotype of psychopathy.Such characters are often portrayed in an exaggerated fashion and typically in the role of a villain or antihero, where the general characteristics of a psychopath are useful to facilitate conflict and danger. Because the definitions and criteria in the history of psychopathy have varied over the years and continue to change even now, many characters in notable films may have been designed to fall under the category of a psychopath at the time of the film's production or release, but not necessarily in subsequent years. 
There are several stereotypical images of psychopathy in both lay and professional accounts which only partly overlap and can involve contradictory traits: the charming con artist, the deranged serial killer, the successful corporate psychopath, or the chronic low-level offender with juvenile delinquency. The public concept reflects some combination of fear of the mythical bogeyman, fascination with human evil, and sometimes perhaps envy of people who might appear to go through life unencumbered by the same levels of guilt, anguish or insecurity. A film adaptation is the transfer of a written work, in whole or in part, to a feature film. Although often considered a type of derivative work, recent academic developments by scholars such as Robert Stam conceptualize film adaptation as a dialogic process.A common form of film adaptation is the use of a novel as the basis of a feature film. Other works adapted into films include non-fiction (including journalism), autobiography, comic books, scriptures, plays, historical sources, and even other films. From the earliest days of cinema, in nineteenth-century Europe, adaptation from such diverse resources has been a ubiquitous practice of filmmaking. A foreword is a (usually short) piece of writing sometimes placed at the beginning of a book or other piece of literature. Typically written by someone other than the primary author of the work, it often tells of some interaction between the writer of the foreword and the book's primary author or the story the book tells. Later editions of a book sometimes have a new foreword prepended (appearing before an older foreword if there was one), which might explain in what respects that edition differs from previous ones.When written by the author, the foreword may cover the story of how the book came into being or how the idea for the book was developed, and may include thanks and acknowledgments to people who were helpful to the author during the time of writing. Unlike a preface, a foreword is always signed.Information essential to the main text is generally placed in a set of explanatory notes, or perhaps in an introduction, rather than in the foreword or preface.The pages containing the foreword and preface (and other front matter) are typically not numbered as part of the main work, which usually uses Arabic numerals. If the front matter is paginated, it uses lowercase Roman numerals. If there is both a foreword and a preface, the foreword appears first; both appear before the introduction, which may be paginated either with the front matter or the main text.The word foreword was first used around the mid-17th century (originally used as a term in philology). It was possibly a loan translation of German Vorwort, themselves calques of Latin praefatio. Futurism is a modernist avant-garde movement in literature and part of the Futurism art movement that originated in Italy in the early 20th century. It made its official literature debut with the publication of Filippo Tommaso Marinetti's Manifesto of Futurism (1909). Futurist poetry is characterised by unexpected combinations of images and by its hyper-concision (in both economy of speech and actual length). Futurist theatre also played an important role within the movement and is distinguished by scenes that are only a few sentences long, an emphasis on nonsensical humour, and attempts to examine and subvert traditions of theatre via parody and other techniques. 
Longer forms of literature, such as the novel, have no place in the Futurist aesthetic of speed and compression. Futurist literature primarily focuses on seven aspects: intuition, analogy, irony, abolition of syntax, metrical reform, onomatopoeia, and essential/synthetic lyricism. In biblical studies, inclusio is a literary device based on a concentric principle, also known as bracketing or an envelope structure, which consists of creating a frame by placing similar material at the beginning and end of a section, although whether this material should consist of a word or a phrase, or whether greater amounts of text also qualify, and of what length the framed section should be, are matters of some debate. Inclusio is found in various sources, both antique and new. The purpose of an inclusio may be structural - to alert the reader to a particularly important theme - or it may serve to show how the material within the inclusio relates to the inclusio itself. An important case of this occurs in the Gospel of Mark's treatment of the "Cursing of the Fig Tree" and the "Cleansing of the Temple" (Chapter 11). By giving the first half of the story before the Cleansing of the Temple, and the conclusion after, Mark creates a "frame" that effectively highlights that he wants the Cleansing of the Temple to be seen in light of the Cursing of the Fig Tree - i.e. Jesus' actions in the Temple are not just a reform measure, but a judgment against it. In an essay, article, or book, an introduction (also known as a prolegomenon) is a beginning section which states the purpose and goals of the following writing. This is generally followed by the body and conclusion. The introduction typically describes the scope of the document and gives a brief explanation or summary of the document. It may also explain certain elements that are important to the essay if explanations are not part of the main text. The readers can thus have an idea about the following text before they actually start reading it. In technical writing, the introduction typically includes one or more standard subsections: abstract or summary, preface, acknowledgments, and foreword. Alternatively, the section labeled introduction itself may be a brief section found side-by-side with abstract, foreword, etc. (rather than containing them). In this case the set of sections that come before the body of the book are known as the front matter. When the book is divided into numbered chapters, by convention the introduction and any other front-matter sections are unnumbered and precede chapter 1. Keeping the concept of the introduction the same, different documents have different styles to introduce the written text. For example, the introduction of a Functional Specification consists of information that the whole document is yet to explain. If a user guide is written, the introduction is about the product. In a report, the introduction gives a summary about the report's contents. A literary language is a register or dialect of a language that is used in literary writing of the language. This may also include liturgical writing. A literary variety of a language often gives rise to a standard variety of the language. The difference between literary and non-literary forms is more marked in some languages than in others.
Where there is a strong divergence, the language is said to exhibit diglossia. In Latin, Classical Latin was the literary register used in writing from 75 BC to the 3rd century AD, while Vulgar Latin was the common, spoken variety used across the Roman Empire. The Latin brought by Roman soldiers to Gaul, Iberia, or Dacia was not identical to the Latin of Cicero, and differed from it in vocabulary, syntax, and grammar. Some literary works with low-register language from the Classical Latin period give a glimpse into the world of early Vulgar Latin. The works of Plautus and Terence, being comedies with many characters who were slaves, preserve some early basilectal Latin features, as does the recorded speech of the freedmen in the Cena Trimalchionis by Petronius Arbiter. At the third Council of Tours in 813, priests were ordered to preach in the vernacular language—either in the rustica lingua romanica (Vulgar Latin), or in the Germanic vernaculars—since the common people could no longer understand formal Latin. Literary fragments may comprise works inadvertently left unfinished or never completed by their authors; surviving extracts of larger works subsequently lost as wholes; and works deliberately constructed as fragmentary pieces. The deliberately undeveloped literary sort of fragment played an especially important role in literary Romanticism. German literature of the Romantic period has left many such fragments. In English literature, note Coleridge's unfinished (but published as a fragment in 1816) "Kubla Khan; or, A Vision in a Dream: A Fragment". In contemporary literature Dimitris Lyacos employs fragment sequences in order to develop an elliptical narrative alluding to a universe of unattainability and loss. A miscellany is a collection of various pieces of writing by different authors. Meaning a mixture, medley, or assortment, a miscellany can include pieces on many subjects and in a variety of different forms. In contrast to anthologies, whose aim is to give a selective and canonical view of literature, miscellanies were produced for the entertainment of a contemporary audience and so instead emphasise collectiveness and popularity. Laura Mandell and Rita Raley state: This last distinction is quite often visible in the basic categorical differences between anthologies on the one hand, and all other types of collections on the other, for it is in the one that we read poems of excellence, the "best of English poetry," and it is in the other that we read poems of interest. Out of the differences between a principle of selection (the anthology) and a principle of collection (miscellanies and beauties), then, comes a difference in aesthetic value, which is precisely what is at issue in the debates over the "proper" material for inclusion into the canon. Manuscript miscellanies are important in the Middle Ages, and are the sources for most surviving shorter medieval vernacular poetry. Medieval miscellanies often include completely different types of text, mixing poetry with legal documents, recipes, music, medical and devotional literature and other types of text, and in medieval contexts a mixture of types of text is often taken as a necessary condition for describing a manuscript as a miscellany. They may have been written as a collection, or represent manuscripts of different origins that were later bound together for convenience.
In the early modern period miscellanies remained significant in a more restricted literary context, both in manuscript and printed forms, mainly as a vehicle for collections of shorter pieces of poetry, but also other works. Their numbers increased until their peak of importance in the 18th century, when over 1000 English poetry miscellanies were published, before the rise of anthologies in the early 19th century. The printed miscellany gradually morphed into the format of the regularly published magazine, and many early magazines used the word in their titles. A nonsense word, unlike a sememe, may have no definition. Nonsense words can be classified depending on their orthographic and phonetic similarity with (meaningful) words. If it can be pronounced according to a language's phonotactics, it is a pseudoword. Nonsense words are used in literature for poetic or humorous effect. Proper names of real or fictional entities are sometimes nonsense words.A stunt word is a nonsense word used for a special effect, or to attract attention, as part of a performance. Such words are a feature of the work of Dr. Seuss ("Sometimes I am quite certain there's a Jertain in the curtain").The ability to infer the (hypothetical) meaning of a nonsense word from context is used to test for brain damage. Outdoor literature is a literature genre about or involving the outdoors. Outdoor literature encompasses several different subgenres including exploration literature, adventure literature, mountain literature and nature writing. Another subgenre is the guide book, an early example of which was Thomas West's guide to the Lake District published in 1778. The genres can include activities such as exploration, survival, sailing, hiking, mountaineering, whitewater boating, geocaching or kayaking, or writing about nature and the environment. Travel literature is similar to outdoor literature but differs in that it does not always deal with the out-of-doors, but there is a considerable overlap between these genres, in particular with regard to long journeys. A pan-national epic is a lengthy work of poetry or prose that is widely taken to be representative of the pan-national character of a large cultural grouping that exceeds the bounds of a single nation-state or even a specific language or language group. Pan-national epics can be subdivided into supranational epics, which are epics held dear to several national groups speaking more than one language, and language epics, which are more narrowly restricted to nations sharing the same language. A nation can have its own distinct national epic in addition to a supranational and/or a language epic. Examples of pan-national epics follow: Popular history is a broad and somewhat ill-defined genre of historiography that takes a popular approach, aims at a wide readership, and usually emphasizes narrative, personality and vivid detail over scholarly analysis. The term is used in contradistinction to professional academic or scholarly history writing which is usually more specialized and technical and, thus, less accessible to the general reader.Some popular historians are without academic affiliation while others are academics, or former academics, that have (according to one writer) "become somehow abstracted from the academic arena, becoming cultural commentators". 
Many worked as journalists, perhaps after taking an initial degree in history.Popular historians may become nationally renowned or best-selling authors and may or may not serve the interests of particular political viewpoints in their roles as "public historians". Many authors of "official histories" and "authorized biographies" would qualify as popular historians serving the interests of particular institutions or public figures.Popular historians aim to appear on the "general lists" of general publishers, rather than the university presses that have dominated academic publishing in recent years. Increasingly, popular historians have taken to television where they are able, often accompanying a series of documentaries with a tie-in book. In literature, the term portrait refers to a written description or analysis of a person or thing. A written portrait often gives deep insight, and offers an analysis that goes far beyond the superficial. For example, American author Patricia Cornwell wrote a best-selling book titled Portrait of a Killer about the personality, background, and possible motivations of Jack the Ripper, as well as the media coverage of his murders, and the subsequent police investigation of his crimes.Gertrude Stein also wrote literary portraits of European painters Henri Matisse and Pablo Picasso. Postcolonial literature is the literature of countries that were colonised, mainly by European countries. It exists on all continents except Antarctica. Postcolonial literature often addresses the problems and consequences of the decolonization of a country, especially questions relating to the political and cultural independence of formerly subjugated people, and themes such as racialism and colonialism. A range of literary theory has evolved around the subject.Migrant literature and postcolonial literature show some considerable overlap. However, not all migration takes place in a colonial setting, and not all postcolonial literature deals with migration. A question of current debate is the extent to which postcolonial theory also speaks to migration literature in non-colonial settings. Cognition is "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses". It encompasses processes such as knowledge, attention, memory and working memory, judgment and evaluation, reasoning and "computation", problem solving and decision making, comprehension and production of language. Human cognition is conscious and unconscious, concrete or abstract, as well as intuitive (like knowledge of a language) and conceptual (like a model of a language). Cognitive processes use existing knowledge and generate new knowledge.The processes are analyzed from different perspectives within different contexts, notably in the fields of linguistics, anesthesia, neuroscience, psychiatry, psychology, education, philosophy, anthropology, biology, systemics, logic, and computer science. These and other different approaches to the analysis of cognition are synthesised in the developing field of cognitive science, a progressively autonomous academic discipline. Within psychology and philosophy, the concept of cognition is closely related to abstract concepts such as mind and intelligence. 
It encompasses the mental functions, mental processes (thoughts), and states of intelligent entities (humans, collaborative groups, human organizations, highly autonomous machines, and artificial intelligences). Thus, the term's usage varies across disciplines; for example, in psychology and cognitive science, "cognition" usually refers to an information processing view of an individual's psychological functions. It is also used in a branch of social psychology called social cognition to explain attitudes, attribution, and group dynamics. In cognitive psychology and cognitive engineering, cognition is typically assumed to be information processing in a participant’s or operator’s mind or brain. Cognition can in some specific and abstract sense also be artificial. The term "cognition" is often incorrectly used to mean "cognitive abilities" or "cognitive skills". A-not-B error (also known as "stage 4 error" or "perseverative error") is a phenomenon uncovered by the work of Jean Piaget in his theory of cognitive development of children. The A-not-B error is a particular error made by infants during substage 4 of their sensorimotor stage. A typical A-not-B task goes like this: An experimenter hides an attractive toy under box "A" within the baby's reach. The baby searches for the toy, looks under box "A", and finds the toy. This activity is usually repeated several times (always with the researcher hiding the toy under box "A"). Then, in the critical trial, the experimenter moves the toy under box "B", also within easy reach of the baby. Babies of 10 months or younger typically make the perseveration error, meaning they look under box "A" even though they saw the researcher move the toy under box "B", and box "B" is just as easy to reach. This demonstrates a lack of, or an incomplete, schema of object permanence. Children of 12 months or older typically do not make this error. 'Activist knowledge' or 'dissident knowledge' refers to the ideological and ideational aspects of social movements, such as challenging or reformulating dominant political ideas and ideologies, and developing new concepts, thoughts and meanings through contentious interactions with social, political, cultural and economic authorities. The cognitive or ideational aspects of social movements have been theorized by a group of scholars such as Ron Eyerman and Andrew Jamison (from a cognitive approach), Hank Johnston, David Snow and others (from a framing perspective), and S. A. Hosseini (from an integrative approach). "The ‘ideational dimension’ of a social movement consists of the intellectual processes of how the movement actors understand, conceptualize, explain, and analyze social problems and the events they have experienced, and how they reflect on their own individual and collective practices. The ‘ideational landscape’ of a social movement is a space where movement actors translate their collective experiences of social reality into ideas... Activist knowledge is, by definition, a process of (trans)forming social consciousness through a certain course of socio-political contentions and communicative actions – mostly undertaken in ‘public spheres’, around a vital set of interrelated social issues, in order to explain and respond to them.
This kind of collective-networked cognition is a practical-ideational process which proceeds out of a social movement’s relations with (and contributions to) both existing knowledge spheres and social reality."The creation of new systems of meaning is an inseparable part of social movements. Especially in today's information society, as Manuel Castells points out, the real targets of the current mobilizations are the minds of people around the world; it is "by changing minds that they expect to put pressure on the institutions of governance and, ultimately, bring democracy and alternative social values to these institutions"As mentioned in the World Social Forum’s Charter of Principles, for instance, ‘… the World Social Forum is a movement of ideas that prompts reflection, and the transparent circulation of the results of that reflection, on the mechanisms and instruments of domination by capital … and on the alternatives proposed to solve the problems of exclusion and inequality’ (WSF 2001 Principle 11)The meaning and knowledge making processes in social movements are not however restricted to information acquisition and processing, social psychological cognitions, practical knowledge, deliberative contemplations in public spheres, discursive and ideological transformations, framing and so on.‘Activist/dissident knowledge’ "is formed through both strategic and communicative actions in confronting dominant social processes. Such knowledge is shaped at a very pragmatic level that differs in nature (despite some overlaps) from the academic level of knowledge production, the institutional level of political ideology construction, and even the routine interactional level of cultural reproduction". Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology.Due to its many-faceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation and location-based services. Alexithymia is a personality construct characterized by the subclinical inability to identify and describe emotions in the self. The core characteristics of alexithymia are marked dysfunction in emotional awareness, social attachment, and interpersonal relating. Furthermore, people with alexithymia have difficulty in distinguishing and appreciating the emotions of others, which is thought to lead to unempathic and ineffective emotional responding. Alexithymia occurs in approximately 10% of the population and can occur with a number of psychiatric conditions.The term alexithymia was coined by psychotherapist Peter Sifneos in 1973. The word comes from Greek α (a, "no", the negating alpha privative), λέξις (léxis, "word"), and θυμός (thymos, "emotions", but understood by Sifneos as having the meaning "mood"), literally meaning "no words for mood". Amodal perception is the perception of the whole of a physical structure when only parts of it affect the sensory receptors. 
For example, a table will be perceived as a complete volumetric structure even if only part of it—the facing surface—projects to the retina; it is perceived as possessing internal volume and hidden rear surfaces despite the fact that only the near surfaces are exposed to view. Similarly, the world around us is perceived as a surrounding plenum, even though only part of it is in view at any time. Another much quoted example is that of the "dog behind a picket fence" in which a long narrow object (the dog) is partially occluded by fence-posts in front of it, but is nevertheless perceived as a single continuous object. Albert Bregman noted an auditory analogue of this phenomenon: when a melody is interrupted by bursts of white noise, it is nonetheless heard as a single melody continuing "behind" the bursts of noise.Formulation of the theory is credited to the Belgian psychologist Albert Michotte and Fabio Metelli, an Italian psychologist, with their work developed in recent years by E.S. Reed and the Gestaltists.Modal completion is a similar phenomenon in which a shape is perceived to be occluding other shapes even when the shape itself is not drawn. Examples include the triangle that appears to be occluding three disks and an outlined triangle in the Kanizsa triangle and the circles and squares that appear in different versions of the Koffka cross. The analysis of competing hypotheses (ACH) allegedly provides an unbiased methodology for evaluating multiple competing hypotheses for observed data. It was developed by Richards (Dick) J. Heuer, Jr., a 45-year veteran of the Central Intelligence Agency, in the 1970s for use by the Agency. ACH is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.ACH was a step forward in intelligence analysis methodology, but it was first described in relatively informal terms. Producing the best available information from uncertain data remains the goal of researchers, tool-builders, and analysts in industry, academia and government. Their domains include data mining, cognitive psychology and visualization, probability and statistics, etc. Abductive reasoning is an earlier concept with similarities to ACH. In psychology, apprehension (Lat. ad, "to"; prehendere, "to seize") is a term applied to a model of consciousness in which nothing is affirmed or denied of the object in question, but the mind is merely aware of ("seizes") it."Judgment" (says Reid, ed. Hamilton, i. p. 414) "is an act of the mind, specifically different from simple apprehension or the bare conception of a thing". "Simple apprehension or conception can neither be true nor false." This distinction provides for the large class of mental acts in which we are simply aware of, or "take in" a number of familiar objects, about which we in general make no judgment, unless our attention is suddenly called by a new feature. Or again, two alternatives may be apprehended without any resultant judgment as to their respective merits.Similarly, G.F. Stout stated that while we have a very vivid idea of a character or an incident in a work of fiction, we can hardly be said in any real sense to have any belief or to make any judgment as to its existence or truth. 
With this mental state may be compared the purely aesthetic contemplation of music, wherein apart from, say, a false note, the faculty of judgment is for the time inoperative. To these examples may be added the fact that one can fully understand an argument in all its bearings, without in any way judging its validity. Without going into the question fully, it may be pointed out that the distinction between judgment and apprehension is relative. In every kind of thought, there is judgment of some sort in a greater or less degree of prominence.Judgment and thought are in fact psychologically distinguishable merely as different, though correlative, activities of consciousness. Professor Stout further investigates the phenomena of apprehension, and comes to the conclusion that "it is possible to distinguish and identify a whole without apprehending any of its constituent details." On the other hand, if the attention focuses itself for a time on the apprehended object, there is an expectation that such details will, as it were, emerge into consciousness. Hence, he describes such apprehension as "implicit", and insofar as the implicit apprehension determines the order of such emergence, he describes it as "schematic".A good example of this process is the use of formulae in calculations; ordinarily the formula is used without question; if attention is fixed upon it, the steps by which it is shown to be universally applicable emerge, and the "schema " is complete in detail. With this result may be compared Kant's theory of apprehension as a synthetic act (the "synthesis of apprehension") by which the sensory elements of a perception are subjected to the formal conditions of time and space. Approach-avoidance conflicts as elements of stress were first introduced by psychologist Kurt Lewin, one of the founders of modern social psychology.Approach-avoidance conflicts occur when there is one goal or event that has both positive and negative effects or characteristics that make the goal appealing and unappealing simultaneously. For example, marriage is a momentous decision that has both positive and negative aspects. The positive aspects, or approach portion, of marriage might be considered togetherness, sharing memories, and companionship while the negative aspects, or avoidance portions, might include financial considerations, arguments, and difficulty with in-laws. The negative effects of the decision help influence the decision maker to avoid the goal or event, while the positive effects influence the decision maker to want to approach or proceed with the goal or event. The influence of the negative and positive aspects create a conflict because the decision maker has to either proceed toward the goal or avoid the goal altogether. For example, the decision maker might approach proposing to a partner with excitement because of the positive aspects of marriage. On the other hand, he or she might avoid proposing due to the negative aspects of marriage.The decision maker might initiate approach toward the goal, but as awareness of the negative factors increases, the desire to avoid the goal may arise, producing indecision. If there are competing feelings to a goal, the stronger of the two will triumph. For instance, if a woman was thinking of starting a business she would be faced with positive and negative aspects. 
Before actually starting the business, the woman would be excited about the prospects of success for the new business and she would encounter (approach) the positive aspects first: she would attract investors, create interest in her upcoming ideas and it would be a new challenge. However, as she drew closer to actually launching the business, the negative aspects would become more apparent; the woman would acknowledge that it would require much effort, time, and energy from other aspects of her life. The increase in strength of these negative aspects (avoidance) would cause her to avoid the conflict or goal of starting the new business, which might result in indecision. Research pertaining to approach and avoidance conflicts has been extended into implicit motives, both abstract and social in nature. Attentional blink (AB) is a phenomenon that reflects the temporal cost of allocating selective attention. The AB is typically measured by using rapid serial visual presentation (RSVP) tasks, where participants often fail to detect a second salient target occurring in succession if it is presented between 180 and 450 ms after the first one. Also, the AB has been observed using two backward-masked targets and auditory stimuli. The term attentional blink was first used in 1992, although the phenomenon was probably known before. Augmented cognition is an interdisciplinary area of psychology and engineering, attracting researchers from the more traditional fields of human-computer interaction, psychology, ergonomics and neuroscience. Augmented cognition research generally focuses on tasks and environments where human-computer interaction and interfaces already exist. Developers, leveraging the tools and findings of neuroscience, aim to develop applications which capture the human user's cognitive state in order to drive real-time computer systems. In doing so, these systems are able to provide operational data specifically targeted for the user in a given context. Three major areas of research in the field are: Cognitive State Assessment (CSA), Mitigation Strategies (MS), and Robust Controllers (RC). A subfield of the science, Augmented Social Cognition, endeavours to enhance the "ability of a group of people to remember, think, and reason." An autonomous agent is an intelligent agent operating on an owner's behalf but without any interference of that ownership entity. An intelligent agent, however, is characterized as follows in a widely cited statement from an IBM white paper that is no longer accessible: Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. Such an agent is a system situated in, and part of, a technical or natural environment, which senses any or some status of that environment, and acts on it in pursuit of its own agenda. Such an agenda evolves from drives (or programmed goals). The agent acts to change part of the environment or of its status and influences what it sensed. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents, and many computer viruses. Biological examples are not yet defined. Basking in reflected glory (BIRGing) is a self-serving cognition whereby an individual associates themselves with known successful others such that the winner's success becomes the individual's own accomplishment.
The affiliation of another's success is enough to stimulate self glory. The individual does not need to be personally involved in the successful action. To BIRG, they must simply associate themselves with the success. Examples of BIRGing include anything from sharing a home state with a past or present famous person, to religious affiliations, to sports teams. For example, when a fan of a football team wears the team's jersey and boasts after a win, this fan is engaging in BIRGing. A parent with a bumper sticker reading "My child is an honor student" is basking in the reflected glory of their child. While many people have anecdotal accounts of BIRGing, social psychologists seek to find experimental investigations delving into BIRGing. Within social psychology, BIRGing is thought to enhance self-esteem and to be a component of self-management.BIRGing has connections to social identity theory, which explains how self-esteem and self-evaluation can be enhanced by the identification with another person's success by basking in reflected glory not earned. (The American Heritage Dictionary of the English Language: Fourth Edition, 2000.)Social identity is the individual's self-concept derived from perceived membership of social groups. High self-esteem is typically a perception of oneself as attractive, competent, likeable and a morally good person. The perception of having these attributes makes the person feel as if they are more attractive to the outside social world and thus more desirable to others to be in a social relationship.( Shavelson, Richard J.; Bolus, Roger (1982))BIRGing is a widespread and important impression management technique to counter any threats to self-esteem and maintain positive relations with others. Some positive effects of BIRGing include increasing individual self-esteem and a sense of accomplishment. It can show pride of self, and pride for the other person's success, which in turn boosts one's own self-esteem. BIRGing can be negative when done too extensively that the individual engaging in BIRGing becomes delusional or forgets the reality that they did not actually accomplish the successful event.The opposite of BIRGing is cutting off reflected failure (CORFing). This is the idea that people tend to disassociate themselves from lower-status individuals because they do not want their reputations affected by associating with the people who are considered failures. In the behaviorism approach to psychology, behavioral scripts are a sequence of expected behaviors for a given situation. Scripts include default standards for the actors, props, setting, and sequence of events that are expected to occur in a particular situation. The classic script example involves an individual dining at a restaurant. This script has several components: props including tables, menus, food, and money, as well as roles including customers, servers, chefs, and a cashier. The sequence of expected events for this script begins with a hungry customer entering the restaurant, ordering, eating, paying and then ends with the customer exiting. People continually follow scripts which are acquired through habit, practice and simple routine. Following a script can be useful because it could help to save the time and mental effort of deciding on appropriate behavior each time a situation is encountered. 
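The restaurant script above is, in effect, a small data structure: props, roles, and an ordered sequence of events. A minimal sketch, with hypothetical names chosen only for illustration, might encode it as follows.

```python
# Hypothetical encoding of the restaurant script described above:
# props, roles, and the expected sequence of events.
restaurant_script = {
    "props": ["tables", "menus", "food", "money"],
    "roles": ["customer", "server", "chef", "cashier"],
    "events": ["enter", "order", "eat", "pay", "exit"],
}

def next_expected_event(script, observed_events):
    """Return the event the script predicts after those observed so far."""
    events = script["events"]
    position = len(observed_events)
    return events[position] if position < len(events) else None

print(next_expected_event(restaurant_script, ["enter", "order"]))  # -> eat
```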
The Ben Franklin effect is a proposed psychological phenomenon: a person who has performed a favor for someone is more likely to do another favor for that person than they would be if they had received a favor from that person. An explanation for this would be that we internalize the reason that we helped them was because we liked them.The Benjamin Franklin effect, in other words, "is the result of your concept of self coming under attack. Every person develops a persona, and that persona persists because inconsistencies in your personal narrative get rewritten, redacted and misinterpreted". Binaural fusion or binaural integration is a cognitive process that involves the "fusion" of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.The process of binaural fusion is important for computing the location of sound sources in the horizontal plane (sound localization), and it is important for sound segregation. Sound segregation refers the ability to identify acoustic components from one or more sound sources. The binaural auditory system is highly dynamic and capable of rapidly adjusting tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares movements of both eardrums to extract physical cues and synthesize auditory objects.When stimulation from a sound reaches the ear, the eardrum deflects in a mechanical fashion, and the three middle ear bones (ossicles) transmit the mechanical signal to the cochlea, where hair cells transform the mechanical signal into an electrical signal. The auditory nerve, also called the cochlear nerve, then transmits action potentials to the central auditory nervous system.In binaural fusion, inputs from both ears integrate and fuse to create a complete auditory picture at the brainstem. Therefore, the signals sent to the central auditory nervous system are representative of this complete picture, integrated information from both ears instead of a single ear.The binaural squelch effect is a result of nuclei of the brainstem processing timing, amplitude, and spectral differences between the two ears. Sounds are integrated and then separated into auditory objects. For this effect to take place, neural integration from both sides is required. The binding problem is a term used at the interface between neuroscience, cognitive science and philosophy of mind that has multiple meanings.Firstly, there is the segregation problem: a practical computational problem of how brains segregate elements in complex patterns of sensory input so that they are allocated to discrete "objects". In other words, when looking at a blue square and a yellow circle, what neural mechanisms ensure that the square is perceived as blue and the circle as yellow, and not vice versa? The segregation problem is sometimes called BP1.Secondly, there is the combination problem: the problem of how objects, background and abstract or emotional features are combined into a single experience. The combination problem is sometimes called BP2.However, the difference between these two problems is not always clear. Moreover, the historical literature is often ambiguous as to whether it is addressing the segregation or the combination problem. 
Biological functionalism is an anthropological paradigm, asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways, the paradigm derives from the longer-established structural functionalism, yet the two diverge from one another significantly. While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. By that fact, biological functionalism maintains that our individual survival and health are the driving provocation of actions, and that the importance of social rigidity is negligible. Biological motion perception is the act of perceiving the fluid unique motion of a biological agent. The phenomenon was first documented by the Swedish perceptual psychologist Gunnar Johansson in 1973. There are many brain areas involved in this process, some similar to those used to perceive faces. While humans complete this process with ease, from a computational neuroscience perspective there is still much to be learned as to how this complex perceptual problem is solved. One tool which many research studies in this area use is a display stimulus called a point light walker. Point light walkers are coordinated moving dots that simulate biological motion in which each dot represents specific joints of a human performing an action. Currently a large topic of research, many different models of biological motion perception have been proposed. The following models have shown that both form and motion are important components of biological motion perception. However, the extent to which each component contributes differs across models. In neuroscience, a biological neural network is a series of interconnected neurons whose activation defines a recognizable linear pathway. The interface through which neurons interact with their neighbors usually consists of several axon terminals connected via synapses to dendrites on other neurons. If the sum of the input signals into one neuron surpasses a certain threshold, the neuron sends an action potential (AP) at the axon hillock and transmits this electrical signal along the axon. Biological neural networks have inspired the design of artificial neural networks. In typography, a bouma (BOH-mə) is the shape of a cluster of letters, often a whole word. It is a reduction of "Bouma-shape", which was probably first used in Paul Saenger's 1997 book Space between Words: The Origins of Silent Reading, although Saenger himself attributes it to Insup & Maurice Martin Taylor. Its origin is in reference to hypotheses by prominent vision researcher Herman Bouma, who studied the shapes and confusability of letters and letter strings. Some typographers believe that, when reading, people can recognize words by deciphering boumas, not just individual letters, or that the shape of the word is related to readability and/or legibility. The claim is that this is a natural strategy for increasing reading efficiency. However, considerable study and experimentation by cognitive psychologists led to their general acceptance of a different, and largely contradictory, theory by the end of the 1980s: parallel letterwise recognition. In recent years (starting from 2000) parallel letterwise recognition has been more evangelized to typographers by Microsoft's Dr Kevin Larson, via conference presentations and a widely read article.
Nonetheless, ongoing research (starting from 2009) often supports the bouma model of reading. In neuroscience the bridge locus for a particular sensory percept is a hypothetical set of neurons whose activity is the basis of that sensory percept. The term was introduced by D.N. Teller and E.Y. Pugh, Jr. in 1983, and has been sparingly used. Activity in the bridge locus neurons is postulated to be necessary and sufficient for sensory perception: if the bridge locus neurons are not active, then the sensory perception does not occur, regardless of the actual sensory input. Conversely if the bridge locus neurons are active, then sensory perception occurs, regardless of the actual sensory input. It is the highest neural level of a sensory perception. So, for example, retinal neurons are not considered a bridge locus for visual perception because stimulating visual cortex can give rise to visual percepts.Not all scholars believe in such a neural correlate of consciousness. Pessoa et al., for example, argue that there is no necessity for a bridge locus, basing their argument on the requirement of an isomorphism between neural states and conscious states. Thompson argues that there are good reasons to think that the notion of a bridge locus, which he calls a "localizationist approach", is misguided, questioning the premise that there has to be one particular neural stage whose activity forms the immediate substrate of perception. He argues, based upon work by Zeki & Shipp, DeYoe & Van Essen, and others, that brain regions are not independent stages or modules but have dense forward and backward projections that act reciprocally, and that visual processing is highly interactive and context-dependent. He also argues that cells in the visual cortex "are not mere 'feature detectors'", and that neuroscience has revealed that the brain in fact employs distributed networks, rather than centralized representations. He equates the notion of a bridge locus to a Cartesian theatre and suggests that as a notion it should be abandoned. Business activity monitoring (BAM) is software that aids in monitoring of business activities, as those activities are implemented in computer systems.The term was originally coined by analysts at Gartner, Inc. and refers to the aggregation, analysis, and presentation of real-time information about activities inside organizations and involving customers and partners. A business activity can either be a business process that is orchestrated by business process management (BPM) software, or a business process that is a series of activities spanning multiple systems and applications. BAM is an enterprise solution primarily intended to provide a real-time summary of business activities to operations managers and upper management. Categorization is the process in which ideas and objects are recognized, differentiated, and understood. Categorization implies that objects are grouped into categories, usually for some specific purpose. Ideally, a category illuminates a relationship between the subjects and objects of knowledge. Categorization is fundamental in language, prediction, inference, decision making and in all kinds of environmental interaction. It is indicated that categorization plays a major role in computer programming.There are many categorization theories and techniques. 
In a broader historical view, however, three general approaches to categorization may be identified: classical categorization, conceptual clustering, and prototype theory. "The Centipede's Dilemma" is a short poem that has lent its name to a psychological effect called the centipede effect or centipede syndrome. The centipede effect occurs when a normally automatic or unconscious activity is disrupted by consciousness of it or reflection on it. For example, a golfer thinking too closely about their swing or someone thinking too much about how they knot their tie may find their performance of the task impaired. The effect is also known as hyper-reflection or Humphrey's law after the English psychologist George Humphrey (1889–1966), who propounded it in 1923. As he wrote of the poem, "This is a most psychological rhyme. It contains a profound truth which is illustrated daily in the lives of all of us". The Centre for Cognitive Ageing and Cognitive Epidemiology (CCACE) is a "centre of excellence" to advance research into how ageing affects cognition, and how mental ability in youth affects health and longevity. Based at the University of Edinburgh and funded by the Medical Research Council (MRC), ESRC, BBSRC and EPSRC through the MRC's Lifelong Health and Wellbeing (LLHW) scheme, the Centre is led by Professor Ian Deary alongside 2 co-Directors and 8 Research Group Leaders spread across three University of Edinburgh sites (George Square, New Royal Infirmary of Edinburgh, Western General Hospital). Chunking in psychology is a process by which individual pieces of information are bound together into a meaningful whole (Neath & Surprenant, 2003). A chunk is defined as a familiar collection of more elementary units that have been inter-associated and stored in memory repeatedly and act as a coherent, integrated group when retrieved (Tulving & Craik, 2000). It is believed that individuals create higher order cognitive representations of the items on the list that are more easily remembered as a group than as individual items themselves. Representations of these groupings are highly subjective, as they depend critically on the individual's perception of the features of the items and the individual's semantic network. The size of the chunks generally ranges anywhere from two to six items, but differs based on language and culture. The phenomenon of chunking as a memory mechanism can be observed in the way individuals group numbers and information in day-to-day life. For example, when recalling a number such as 12101946, if numbers are grouped as 12, 10 and 1946, a mnemonic is created for this number as a day, month and year. Similarly, another illustration of the limited capacity of working memory as suggested by George Miller can be seen from the following example: While recalling a mobile phone number such as 9849523450, we might break this into 98 495 234 50. Thus, instead of remembering 10 separate digits, which is beyond the "seven plus-or-minus two" memory span, we are remembering four groups of numbers. A modality effect is present in chunking. That is, the mechanism used to convey the list of items to the individual affects how much "chunking" occurs. Experimentally, it has been found that auditory presentation results in a larger amount of grouping in the responses of individuals, as compared to visual presentation.
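A small illustrative sketch of the digit-grouping strategy described above; the chunk sizes are arbitrary and chosen only to reproduce the two examples in the text.

```python
# Regroup a digit string into chunks, as in the examples above
# (9849523450 -> 98 495 234 50, and 12101946 -> 12 10 1946).
def chunk_digits(digits, sizes):
    """Split a digit string into consecutive chunks of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

print(chunk_digits("9849523450", [2, 3, 3, 2]))  # ['98', '495', '234', '50']
print(chunk_digits("12101946", [2, 2, 4]))       # ['12', '10', '1946'] (day, month, year)
```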
Previous literature, such as George Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information (1956) have shown that the probability of recall is greater when the "chunking" strategy is used. As stated above, the grouping of the responses occurs as individuals place them into categories according to their inter-relatedness based on semantic and perceptual properties. Lindley (1966) showed that the groups produced have meaning to the participant, therefore; this strategy makes it easier for an individual to recall and maintain information in memory during studies and testing. Therefore, when "chunking" is used as a strategy, one can expect a higher proportion of correct recalls.Various kinds of memory training systems and mnemonics include training and drill in specially-designed recoding or chunking schemes. Such systems existed before Miller's paper, but there was no convenient term to describe the general strategy or substantive and reliable research. The term "chunking" is now often used in reference to these systems. As an illustration, patients with Alzheimer's disease typically experience working memory deficits; chunking is an effective method to improve patients' verbal working memory performance (Huntley, Bor, Hampshire, Owen, & Howard, 2011). Another classic example of chunking is discussed in the "Expertise and skill memory effects" section below. A cognitive bias refers to the systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion. Individuals create their own "subjective social reality" from their perception of the input. An individual's construction of social reality, not the objective input, may dictate their behaviour in the social world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context. Furthermore, cognitive biases enable faster decisions when timeliness is more valuable than accuracy, as illustrated in heuristics. Other cognitive biases are a "by-product" of human processing limitations, resulting from a lack of appropriate mental mechanisms (bounded rationality), or simply from a limited capacity for information processing.A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Kahneman and Tversky (1996) argue that cognitive biases have efficient practical implications for areas including clinical judgment, entrepreneurship, finance, and management. Cognitive biology is an emerging science that regards natural cognition as a biological function. It is based on the theoretical assumption that every organism—whether a single cell or multicellular—is continually engaged in systematic acts of cognition coupled with intentional behaviors, i.e., a sensory-motor coupling. That is to say, if an organism can sense stimuli in its environment and respond accordingly, it is cognitive. Any explanation of how natural cognition may manifest in an organism is constrained by the biological conditions in which its genes survives from one generation to the next. 
And since by Darwinian theory the species of every organism is evolving from a common root, three further elements of cognitive biology are required: (i) the study of cognition in one species of organism is useful, through contrast and comparison, to the study of another species' cognitive abilities; (ii) it is useful to proceed from organisms with simpler to those with more complex cognitive systems; and (iii) the greater the number and variety of species studied in this regard, the more we understand the nature of cognition. Cognitive deficit or cognitive impairment is an inclusive term to describe any characteristic that acts as a barrier to the cognition process. The term may describe deficits in overall intelligence (as with intellectual disabilities), specific and restricted deficits in cognitive abilities (such as in learning disorders like dyslexia), neuropsychological deficits (such as in attention, working memory or executive function), or drug-induced impairment in cognition and memory (such as that seen with alcohol, glucocorticoids, and the benzodiazepines). It usually refers to a durable characteristic, as opposed to altered level of consciousness, which may be acute and reversible. Cognitive deficits may be inborn or caused by environmental factors such as brain injuries, neurological disorders, or mental illness. In the field of psychology, cognitive dissonance is the mental discomfort (psychological stress) experienced by a person who simultaneously holds two or more contradictory beliefs, ideas, or values. The occurrence of cognitive dissonance is a consequence of a person performing an action that contradicts personal beliefs, ideals, and values; and also occurs when confronted with new information that contradicts said beliefs, ideals, and values. In A Theory of Cognitive Dissonance (1957), Leon Festinger proposed that human beings strive for internal psychological consistency in order to mentally function in the real world. A person who experiences internal inconsistency tends to become psychologically uncomfortable and is motivated to reduce the cognitive dissonance. This is done by changing parts of the cognition to justify the stressful behavior, by adding new parts to the cognition that causes the psychological dissonance, or by actively avoiding social situations and contradictory information that are likely to increase the magnitude of the cognitive dissonance. Cognitive distortions are exaggerated or irrational thought patterns that are believed to perpetuate the effects of psychopathological states, especially depression and anxiety. Psychiatrist Aaron T. Beck laid the groundwork for the study of these distortions, and his student David D. Burns continued research on the topic. Burns's The Feeling Good Handbook (1989) describes these thought patterns and how to eliminate them. Cognitive distortions are thoughts that cognitive therapists believe cause individuals to perceive reality inaccurately. These thinking patterns often are said to reinforce negative thoughts or emotions. Cognitive distortions tend to interfere with the way a person perceives an event. Because the way a person feels intervenes with how they think, these distorted thoughts can feed negative emotions and lead an individual affected by cognitive distortions towards an overall negative outlook on the world and consequently a depressive or anxious mental state. The cognitive elite of a society, according to Richard J.
Herrnstein and Charles Murray, are those having higher intelligence levels and thus better prospects for success in life. The development of a cognitive elite during the 20th century is presented in their 1994 book The Bell Curve. In this book, Herrnstein and Murray propose that the cognitive elite has been produced by a more technological society which offers enough high skill jobs for those with a higher intelligence to fill. They also propose that by removing race, gender or class as criteria the main criteria of success in academic and professional life is becoming primarily based on cognitive ability.Educational psychologist Linda Gottfredson wrote: Cognitive ergonomics, defined by the International Ergonomics Association "is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. The relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system design." Cognitive ergonomics studies cognition in work and operational settings, in order to optimize human well-being and system performance. It is a subset of the larger field of human factors and ergonomics. In cognitive psychology, cognitive load refers to the total amount of mental effort being used in the working memory. Cognitive load theory was developed out of the study of problem solving by John Sweller in the late 1980s. Sweller argued that instructional design can be used to reduce cognitive load in learners. Cognitive load theory differentiates cognitive load into three types: intrinsic, extraneous, and germane.Intrinsic cognitive load is the effort associated with a specific topic. Extraneous cognitive load refers to the way information or tasks are presented to a learner. And, germane cognitive load refers to the work put into creating a permanent store of knowledge, or a schema.Researchers Paas and Van Merriënboer developed a way to measure perceived mental effort which is indicative of cognitive load. Task-invoked pupillary response is a reliable and sensitive measurement of cognitive load that is directly related to working memory. Measuring humans' pupil responses has the potential to improve human–computer interaction and adaptive decision support systems. Heavy cognitive load can have negative effects on task completion, and it is important to note that the experience of cognitive load is not the same in everyone. The elderly, students, and children experience different, and more often higher, amounts of cognitive load.High cognitive load in the elderly has been shown to affect their center of balance. With increased distractions and cell phone use students are more prone to experiencing high cognitive load which can reduce academic success. Children have less general knowledge than adults which increases their cognitive load. Recent theoretical advances include the incorporation of embodied cognition in order to predict the cognitive load resulting from embodied interactions. In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and more effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending computational effort. 
The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain how and why people are cognitive misers. The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences including, but not limited to, economics and political science. Cognitive neuropsychology is a branch of cognitive psychology that aims to understand how the structure and function of the brain relates to specific psychological processes. Cognitive psychology is the science that looks at how mental processes are responsible for our cognitive abilities to store and produce new memories, produce language, recognize people and objects, as well as our ability to reason and problem solve. Cognitive neuropsychology places a particular emphasis on studying the cognitive effects of brain injury or neurological illness with a view to inferring models of normal cognitive functioning. Evidence is based on case studies of individual brain damaged patients who show deficits in brain areas and on patients who exhibit double dissociations. Double dissociations involve two patients and two tasks. One patient is impaired at one task but normal on the other, while the other patient is normal on the first task and impaired on the other. For example, patient A would be poor at reading printed words while still being normal at understanding spoken words, while patient B would be normal at understanding written words and be poor at understanding spoken words. Scientists can interpret this information as evidence that word comprehension relies on more than a single cognitive module. From studies like these, researchers infer that different areas of the brain are highly specialised. Cognitive neuropsychology can be distinguished from cognitive neuroscience, which is also interested in brain damaged patients, but is particularly focused on uncovering the neural mechanisms underlying cognitive processes. Cognitive polyphasia is where different kinds of knowledge, possessing different rationalities, live side by side in the same individual or collective. From Greek: polloi "many", phasis "appearance". In his research on popular representations of psychoanalysis in France, Serge Moscovici observed that different and even contradictory modes of thinking about the same issue often co-exist. In contemporary societies people are "speaking" medical, psychological, technical, and political languages in their daily affairs. By extending this phenomenon to the level of thought, he suggests that "the dynamic co-existence—interference or specialization—of the distinct modalities of knowledge, corresponding to definite relations between man and his environment, determines a state of cognitive polyphasia". Cognitive shifting is the mental process of consciously redirecting one's attention away from one fixation to another. In contrast, if this process happened unconsciously then it is referred to as task switching. Both are forms of cognitive flexibility. In the general framework of cognitive therapy and awareness management, cognitive shifting refers to the conscious choice to take charge of one's mental habits—and redirect one's focus of attention in helpful, more successful directions.
In the term's specific usage in corporate awareness methodology, cognitive shifting is a performance-oriented technique for refocusing attention in more alert, innovative, charismatic and empathic directions. Cognitive functioning is a term referring to an individual's ability to process thoughts; it should not deplete on a large scale in healthy individuals. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem solving. Examples include verbal, spatial, psychomotor, and processing-speed ability." Cognition mainly refers to things like memory, the ability to learn new information, speech, and the understanding of written material. The brain is usually capable of learning new skills in the aforementioned areas, typically in early childhood, and of developing personal thoughts and beliefs about the world. Old age and disease may affect cognitive function, causing memory loss and trouble thinking of the right words while speaking or writing ("drawing a blank"). Multiple sclerosis (MS), for example, can eventually cause memory loss, an inability to grasp new concepts or information, and depleted verbal fluency. Not all with the condition will experience this side effect, and most will retain their general intellect and abilities. Humans generally have a capacity for cognitive function once born, so almost every person is capable of learning or remembering. Cognitive function is commonly assessed with instruments such as the IQ test, although these have issues with accuracy and completeness. In these tests, the patient will be asked a series of questions or to perform tasks, with each measuring a cognitive skill, such as level of consciousness, memory, awareness, problem-solving, motor skills, analytical abilities, or other similar concepts. Early childhood is when most people are best able to absorb and use new information. In this period, children learn new words, concepts, and various methods to express themselves. Cognitive specialization suggests that certain behaviors, often in the domain of social communication, are passed on to offspring and refined to be maximally beneficial by the process of natural selection. Specializations serve an adaptive purpose for an organism by allowing the organism to be better suited for its habitat. Over time, specializations often become essential to the species' continued survival. Cognitive specialization in humans has been thought to underlie the acquisition, development, and evolution of language, theory of mind, and specific social skills such as trust and reciprocity. These specializations are considered to be critical to the survival of the species, even though there are successful individuals who lack certain specializations, including those diagnosed with autism spectrum disorder or who lack language abilities. Cognitive specialization is also believed to underlie adaptive behaviors such as self-awareness, navigation, and problem solving skills in several animal species such as chimpanzees and bottlenose dolphins. Cognitive style or "thinking style" is a concept used in cognitive psychology to describe the way individuals think, perceive and remember information. Cognitive style differs from cognitive ability (or level), the latter being measured by aptitude tests or so-called intelligence tests. There is controversy over the exact meaning of the term "cognitive style" and whether it is a single or multiple dimension of human personality.
However it remains a key concept in the areas of education and management. If a pupil has a cognitive style that is similar to that of his/her teacher, the chances are improved that the pupil will have a more positive learning experience. Likewise, team members with similar cognitive styles likely feel more positive about their participation with the team. While matching cognitive styles may make participants feel more comfortable when working with one another, this alone cannot guarantee the success of the outcome. Cognitive styles analysis (CSA) was developed by Richard J. Riding and is the most frequently used computerized measure of cognitive styles. Although CSA is not well known in North American institutions, it is quite popular among European universities and organizations.Rezaei and Katz (2004, p. 1318) state:"A number of different labels have been given to cognitive styles and, according to Riding, many of these are but different conceptions of the same dimensions (Riding & Sadler-Smith 1992). Riding and Cheema (Riding & Cheema 1991) surveyed the various (about 30) labels and, after reviewing the descriptions, correlations, methods of assessment, and effect on behavior, concluded that the styles may be grouped into two principal groups: the Wholist-Analytic and the Verbal-Imagery dimensions. It is argued that these dimensions of cognitive style are very fundamental because they develop early in life and are pervasive given their effect on social behavior, decision making, and learning."Unlike many other cognitive style measures, CSA has been the subject of much empirical investigation. Three experiments reported by Rezaei and Katz (2004) showed the reliability of CSA to be low. Considering the theoretical strength of CSA, and unsuccessful earlier attempts to create a more reliable parallel form of it (Peterson 2003), a revised version was made to improve its validity and reliability. Cognitive synonymy is a type of synonymy in which synonyms are so similar in meaning that they cannot be differentiated either denotatively or connotatively, that is, not even by mental associations, connotations, emotive responses, and poetic value. It is a stricter (more precise) technical definition of synonymy, specifically for theoretical (e.g., linguistic and philosophical) purposes. In usage employing this definition, synonyms with greater differences are often called near-synonyms rather than synonyms. Comparative cognition is the comparative study of the mechanisms and origins of cognition in various species. From a biological point of view, work is being done on the brains of fruit flies that should yield techniques precise enough to allow an understanding of the workings of the human brain on a scale appreciative of individual groups of neurons rather than the more regional scale previously used. Similarly, gene activity in the human brain is better understood through examination of the brains of mice by the Seattle-based Allen Institute for Brain Science (see link below), yielding the freely available Allen Brain Atlas. This type of study is related to comparative cognition, but better classified as one of comparative genomics. Increasing emphasis in psychology and ethology on the biological aspects of perception and behavior is bridging the gap between genomics and behavioral analysis. Event processing is a method of tracking and analyzing (processing) streams of information (data) about things that happen (events), and deriving a conclusion from them. 
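As a minimal sketch of deriving a conclusion from a stream of events, assuming nothing more than a single hypothetical threshold rule over numeric readings (real event-processing systems are considerably more elaborate):

```python
# Minimal sketch: turn raw readings into "events" when a hypothetical
# threshold is crossed upward. Values and threshold are made up.
THRESHOLD = 30.0

def detect_events(readings):
    """Yield an event dict whenever a reading rises above the threshold."""
    previous = None
    for timestamp, value in readings:
        if previous is not None and previous <= THRESHOLD < value:
            yield {"time": timestamp, "type": "threshold_exceeded", "value": value}
        previous = value

stream = [(1, 28.5), (2, 29.9), (3, 31.2), (4, 30.5), (5, 29.0), (6, 32.1)]
for event in detect_events(stream):
    print(event)  # events at time 3 and time 6
```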
Complex event processing, or CEP, is event processing that combines data from multiple sources to infer events or patterns that suggest more complicated circumstances. The goal of complex event processing is to identify meaningful events (such as opportunities or threats) and respond to them as quickly as possible.These events may be happening across the various layers of an organization as sales leads, orders or customer service calls. Or, they may be news items, text messages, social media posts, stock market feeds, traffic reports, weather reports, or other kinds of data. An event may also be defined as a "change of state," when a measurement exceeds a predefined threshold of time, temperature, or other value. Analysts suggest that CEP will give organizations a new way to analyze patterns in real-time and help the business side communicate better with IT and service departments.The vast amount of information available about events is sometimes referred to as the event cloud. Calculus (from Latin calculus, literally 'small pebble', used for counting and calculations, like on an abacus) is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus (concerning rates of change and slopes of curves), and integral calculus (concerning accumulation of quantities and the areas under and between curves). These two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering, and economics.Calculus is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits, broadly called mathematical analysis. Calculus has historically been called "the calculus of infinitesimals", or "infinitesimal calculus". The term calculus (plural calculi) is also used for naming specific methods of calculation or notation, and even some theories; such as, e.g., propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism.Continuity of functions is one of the core concepts of topology, which is treated in full generality below. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. A stronger form of continuity is uniform continuity. In addition, this article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article.As an example, consider the function h(t), which describes the height of a growing flower at time t. This function is continuous. 
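The informal description above ("sufficiently small changes in the input result in arbitrarily small changes in the output") corresponds, for real-valued functions of a real variable, to the standard ε–δ formulation:

```latex
% Standard epsilon-delta definition of continuity of f at a point x_0.
f \text{ is continuous at } x_{0}
\iff
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :
\; |x - x_{0}| < \delta \implies |f(x) - f(x_{0})| < \varepsilon .
```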
By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so the function M(t) is discontinuous. An Euler spiral is a curve whose curvature changes linearly with its curve length (the curvature of a circular curve is equal to the reciprocal of the radius). Euler spirals are also commonly referred to as spiros, clothoids, or Cornu spirals.Euler spirals have applications to diffraction computations. They are also widely used as transition curves in railroad engineering/highway engineering for connecting and transiting the geometry between a tangent and a circular curve. A similar application is also found in photonic integrated circuits. The principle of linear variation of the curvature of the transition curve between a tangent and a circular curve defines the geometry of the Euler spiral:Its curvature begins with zero at the straight section (the tangent) and increases linearly with its curve length.Where the Euler spiral meets the circular curve, its curvature becomes equal to that of the latter. In mathematics, infinitesimals are things so small that there is no way to measure them. The insight with exploiting infinitesimals was that entities could still retain certain specific properties, such as angle or slope, even though these entities were quantitatively small. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinite-th" item in a sequence. Infinitesimals are a basic ingredient in the procedures of infinitesimal calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective, "infinitesimal" means "extremely small". To give it a meaning, it usually must be compared to another infinitesimal object in the same context (as in a derivative). Infinitely many infinitesimals are summed to produce an integral.The concept of infinitesimals was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz. Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular calculation of area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. 
He exploited an infinitesimal denoted 1/∞ in area calculations.The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving inassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed non-standard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.Vladimir Arnold wrote in 1990:Nowadays, when teaching analysis, it is not very popular to talk about infinitesimal quantities. Consequently present-day students are not fully in command of this language. Nevertheless, it is still necessary to have command of it. In mathematics, a function or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations.See also List of types of functions In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest value of the function, either within a given range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema). Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. In mathematics, a multiplicative calculus is a system with two multiplicative operators, called a "multiplicative derivative" and a "multiplicative integral", which are inversely related in a manner analogous to the inverse relationship between the derivative and integral in the classical calculus of Newton and Leibniz. 
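As an illustration of the multiplicative derivative just mentioned, one common formulation (for the geometric calculus discussed below, assuming f is positive and differentiable) is:

```latex
% Geometric ("multiplicative") derivative of a positive function f,
% shown next to the classical additive derivative for comparison.
f^{*}(x) = \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h}
         = e^{(\ln f)'(x)} = e^{f'(x)/f(x)},
\qquad
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.
```

For pure exponential growth f(x) = C e^{cx}, the geometric derivative is the constant e^{c}, which is one way to see why this calculus suits the growth and decay phenomena mentioned below.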
The multiplicative calculi provide alternatives to the classical calculus, which has an additive derivative and an additive integral. There are infinitely many multiplicative non-Newtonian calculi, including the geometric calculus and the bigeometric calculus discussed below. These calculi all have a derivative and/or integral that is not a linear operator. The geometric calculus is useful in image analysis, and in the study of growth/decay phenomena (e.g., in economic growth, bacterial growth, and radioactive decay). The bigeometric calculus is useful in some applications of fractals, and in the theory of elasticity in economics. In mathematics, non-standard calculus is the modern application of infinitesimals, in the sense of non-standard analysis, to differential and integral calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic. Calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. (See history of calculus.) For almost one hundred years thereafter, mathematicians like Richard Courant viewed infinitesimals as being naive and vague or meaningless. Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś. According to Jerome Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century." In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator an integer) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. In mathematics, a reflection formula or reflection relation for a function f is a relationship between f(a − x) and f(x). It is a special case of a functional equation, and it is very common in the literature to use the term "functional equation" when "reflection formula" is meant. Reflection formulas are useful for numerical computation of special functions. In effect, an approximation that has greater accuracy or only converges on one side of a reflection point (typically in the positive half of the complex plane) can be employed for all arguments. In mathematics, Regiomontanus's angle maximization problem is a famous optimization problem posed by the 15th-century German mathematician Johannes Müller (also known as Regiomontanus). The problem is as follows: A painting hangs on a wall. Given the heights of the top and bottom of the painting above the viewer's eye level, how far from the wall should the viewer stand in order to maximize the angle subtended by the painting, whose vertex is at the viewer's eye? If the viewer stands too close to the wall or too far from the wall, the angle is small; somewhere in between it is as large as possible. The same approach applies to finding the optimal place from which to kick a ball in rugby.
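A short worked version of the optimization may make the statement concrete (the symbols here are introduced only for illustration: a and b, with b > a > 0, are the heights of the bottom and top of the painting above eye level, and x is the viewer's distance from the wall). The subtended angle and its critical point are

\[
\theta(x) \;=\; \arctan\frac{b}{x} \;-\; \arctan\frac{a}{x},
\qquad
\frac{d\theta}{dx} \;=\; \frac{a}{x^{2}+a^{2}} \;-\; \frac{b}{x^{2}+b^{2}} \;=\; 0
\;\;\Longrightarrow\;\; x \;=\; \sqrt{ab},
\]

so the optimal viewing distance is the geometric mean of the two heights.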
For that matter, it is not necessary that the alignment of the picture be at right angles: we might be looking at a window of the Leaning Tower of Pisa or a realtor showing off the advantages of a sky-light in a sloping attic roof. The solutions of a first-order differential equation for a scalar function y(x) can be drawn in a two-dimensional space, with x in the horizontal and y in the vertical direction. The possible solutions are functions y(x) drawn as solid curves. Sometimes it is too cumbersome to solve the differential equation analytically. Even then, one can still draw the tangents of the solution curves, e.g. on a regular grid; these tangents touch the solution curves at the grid points. However, the direction field reveals little about chaotic aspects of the differential equation. In mathematics, tensor calculus or tensor analysis is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime). Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, it was used by Albert Einstein to develop his theory of general relativity. Contrasted with the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold. Tensor calculus has many real-life applications in physics and engineering, including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), and general relativity (see mathematics of general relativity). In mathematics, time-scale calculus is a unification of the theory of difference equations with that of differential equations, unifying integral and differential calculus with the calculus of finite differences, offering a formalism for studying hybrid discrete–continuous dynamical systems. It has applications in any field that requires simultaneous modelling of discrete and continuous data. It gives a new definition of a derivative such that if one differentiates a function which acts on the real numbers then the definition is equivalent to standard differentiation, but if one uses a function acting on the integers then it is equivalent to the forward difference operator. In mathematics, a function f is uniformly continuous if, roughly speaking, it is possible to guarantee that f(x) and f(y) be as close to each other as we please by requiring only that x and y are sufficiently close to each other; unlike ordinary continuity, the maximum distance between f(x) and f(y) cannot depend on x and y themselves. For instance, any isometry (distance-preserving map) between metric spaces is uniformly continuous. Every uniformly continuous function between metric spaces is continuous. Uniform continuity, unlike continuity, relies on the ability to compare the sizes of neighbourhoods of distinct points of a given space. In an arbitrary topological space, comparing the sizes of neighborhoods may not be possible. Instead, uniform continuity can be defined on a metric space where such comparisons are possible, or more generally on a uniform space. We have the following chain of inclusions for functions over a compact subset of the real line: continuously differentiable ⊆ Lipschitz continuous ⊆ α-Hölder continuous ⊆ uniformly continuous = continuous. In elementary mathematics, a variable is an alphabetic character representing a number, called the value of the variable, which is either arbitrary, not fully specified, or unknown.
Making algebraic computations with variables as if they were explicit numbers allows one to solve a range of problems in a single computation. A typical example is the quadratic formula, which allows one to solve every quadratic equation by simply substituting the numeric values of the coefficients of the given equation to the variables that represent them.The concept of a variable is also fundamental in calculus. Typically, a function y = f(x) involves two variables, y and x, representing respectively the value and the argument of the function. The term "variable" comes from the fact that, when the argument (also called the "variable of the function") varies, then the value varies accordingly.In more advanced mathematics, a variable is a symbol that denotes a mathematical object, which could be a number, a vector, a matrix, or even a function. In this case, the original property of "variability" of a variable is not kept (except, sometimes, for informal explanations).Similarly, in computer science, a variable is a name (commonly an alphabetic character or a word) representing some value represented in computer memory. In mathematical logic, a variable is either a symbol representing an unspecified term of the theory, or a basic object of the theory, which is manipulated without referring to its possible intuitive interpretation. Nutrition is the science that interprets the interaction of nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism. It includes food intake, absorption, assimilation, biosynthesis, catabolism and excretion.The diet of an organism is what it eats, which is largely determined by the availability and palatability of foods. For humans, a healthy diet includes preparation of food and storage methods that preserve nutrients from oxidation, heat or leaching, and that reduce risk of foodborne illness.In humans, an unhealthy diet can cause deficiency-related diseases such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism, or nutrient excess health-threatening conditions such as obesity and metabolic syndrome; and such common chronic systemic diseases as cardiovascular disease, diabetes, and osteoporosis. Undernutrition can lead to the wasting of kwashiorkor in acute cases, and the stunting of marasmus in chronic cases of malnutrition. The Academy of Nutrition and Dietetics is the United States' largest organization of food and nutrition professionals, and represents over 100,000 credentialed practitioners — registered dietitian nutritionists, dietetic technicians, registered, and other dietetics professionals holding undergraduate and advanced degrees in nutrition and dietetics. After nearly 100 years as the American Dietetic Association, the organization officially changed its name to the Academy of Nutrition and Dietetics in 2012. The organization's members are primarily registered dietitian nutritionists (RDNs) and nutrition and dietetic technicians, registered (NDTR) as well as many researchers, educators, students, nurses, physicians, pharmacists, clinical and community dietetics professionals, consultants and food service managers.The Academy has faced controversy regarding corporate influence related to its relationship with the food industry and funding from corporate groups such as McDonald's, Coca-Cola, Mars, and others. 
The African Nutrition Leadership Programme (ANLP) is a 10-day training course that started in 2002 to assist the development of future leaders in the field of human nutrition in Africa. The emphasis of the programme is on understanding and developing the qualities and skills of leaders, team building, communication and understanding nutrition information in a broader context. The long-term aim of the ANLP is to meet the demands for leadership in Africa to solve its nutritional challenges. The programme is designed for individuals who have experience in various fields of nutrition. Preference will be given to candidates with a postgraduate qualification, postdoctoral fellows and candidates with comparable working experience in the broader human nutrition sciences, studying or working in Africa. Alliesthesia (from ἄλλος (állos) "other" and αἴσθησις (aísthēsis) "sensation, perception"; French: alliesthésie, German: Alliästhesie) describes the dependence of the pleasure or disgust perceived when consuming a stimulus on the "milieu intérieur" of the organism. Therefore, a stimulus capable of ameliorating the state of the interior milieu will be perceived as pleasant. In contrast, a stimulus disturbing the internal milieu of the organism will be perceived as unpleasant or even painful. The sensation elicited therefore depends not only on the quality or on the intensity of the stimulus, but also on internal receptors, and is subjective. Alliesthesia is a physiologic phenomenon and should not be confused with the pathologic symptom of allesthesia. Another phenomenon based on sensory cues and not to be confused with alliesthesia is "sensory-specific satiety". Alliin is a sulfoxide that is a natural constituent of fresh garlic. It is a derivative of the amino acid cysteine. When fresh garlic is chopped or crushed, the enzyme alliinase converts alliin into allicin, which is responsible for the aroma of fresh garlic. Garlic has been used since antiquity as a therapeutic remedy for certain conditions now associated with oxygen toxicity, and, when this was investigated, garlic did indeed show strong antioxidant and hydroxyl radical-scavenging properties, presumably owing to the alliin it contains. Alliin has also been found to affect immune responses in blood. Alliin was the first natural product found to have both carbon- and sulfur-centered stereochemistry. L-Alpha glycerylphosphorylcholine (alpha-GPC, choline alfoscerate) is a natural choline compound found in the brain. It is also a parasympathomimetic acetylcholine precursor which may have potential for the treatment of Alzheimer's disease and other dementias. Alpha-GPC rapidly delivers choline to the brain across the blood–brain barrier and is a biosynthetic precursor of acetylcholine. It is a non-prescription drug in most countries and in the United States it is classified as generally recognized as safe (GRAS). The American Society for Nutrition (ASN) is the principal United States society for professional researchers and practitioners in the field of nutrition. ASN members, located in 76 countries around the globe, are the leaders in nutrition science, clinical practice, education, and policy. ASN publishes the Nutrition Science Collection, and annually convenes the greatest minds and future leaders in nutrition to network, share information and discuss how to continue to advance global public health.
Animal nutrition focuses on the dietary needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums and wildlife management. There are seven major classes of nutrients: carbohydrates, fats, fibre, minerals, proteins, vitamins, and water. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, and in either case it is measured in joules or calories (sometimes called "kilocalories" and on other rare occasions written with a capital C to distinguish them from little 'c' calories). Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats provide 37 kJ (9 kcal) per gram, though the net energy from either depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids, some of which are essential in the sense that humans cannot make them internally. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This normally occurs only during prolonged starvation. Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact health in both positive and negative ways. Most foods contain a mix of some or all of the nutrient classes, together with other substances. Some nutrients can be stored internally (e.g., the fat soluble vitamins), while others are required more or less continuously. Poor health can be caused by a lack of required nutrients or, in extreme cases, too much of a required nutrient. For example, salt provides sodium and chloride, both essential nutrients, but will cause illness or even death in excessive amounts. Anthoxanthins (flavones and flavonols) are a type of flavonoid pigment in plants. Anthoxanthins are water-soluble pigments which range in color from white or colorless to a creamy yellow, often on the petals of flowers. These pigments are generally whiter in an acid medium and yellower in an alkaline medium. They are very susceptible to color changes with minerals and metal ions, similar to anthocyanins. As with all flavonoids, they exhibit antioxidant properties, are important in nutrition, and are sometimes used as food additives.
Darkening with iron is particularly prominent in food products. They are considered to have more variety than anthocyanins. An example is quercetin. The Association for Nutrition (AfN) is a learned society in the United Kingdom. The association is a registered charity and is custodian of the United Kingdom Voluntary Register of Nutritionists (UKVRN). Its purpose is to "Protect and benefit the public by defining and advancing standards of evidence-based practice across the field of nutrition and at all levels within the workforce". The Association for Nutrition and the UKVRN are acknowledged by Public Health England, NHS Careers, NHS Choices and the National Careers Service as the professional body for nutritionists in the UK. The Chief Executive is Leonie Milliner. The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but no real alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation. Auxology, sometimes called auxanology (from Greek αὔξω, auxō, or αὐξάνω, auxanō, "grow"; and -λογία, -logia), is a meta-term covering the study of all aspects of human physical growth (though it is also a fundamental of biology, generally speaking). Auxology is a highly multi-disciplinary science involving health sciences/medicine (pediatrics, general practice, endocrinology, neuroendocrinology, physiology, epidemiology), and to a lesser extent nutrition, genetics, anthropology, anthropometry, ergonomics, history, economic history, economics, socioeconomics, sociology, public health and psychology, among others. Amy Bentley is Professor of Food Studies in the Department of Nutrition and Food Studies at New York University's Steinhardt School of Culture, Education, and Human Development, and is co-founder of the NYU Urban Farm Lab and the Experimental Cuisine Collective. She completed her PhD in American Civilization at the University of Pennsylvania. Her research interests are wide ranging and include the social and cultural history of food, food systems, nutrition and health. Her diverse interests in food studies have resulted in multiple publications in journals, books and on social media, covering topics ranging from the politicisation of domesticity under American food rationing in World War II to a review of anthropologist Sidney Mintz's examination of the sugar industry. Biological value (BV) is a measure of the proportion of absorbed protein from a food which becomes incorporated into the proteins of the organism's body. It captures how readily the digested protein can be used in protein synthesis in the cells of the organism. Proteins are the major source of nitrogen in food. BV assumes protein is the only source of nitrogen and measures the proportion of this nitrogen absorbed by the body which is then excreted. The remainder must have been incorporated into the proteins of the organism's body.
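In symbols, a simple version of this nitrogen-balance accounting can be written as follows (a sketch that ignores the corrections for endogenous faecal and urinary nitrogen losses applied in practical protocols; the N terms are illustrative names for measured nitrogen amounts):

\[
\mathrm{BV} \;=\; \frac{N_{\text{retained}}}{N_{\text{absorbed}}} \times 100
\;=\; \frac{N_{\text{intake}} - N_{\text{faecal}} - N_{\text{urinary}}}{N_{\text{intake}} - N_{\text{faecal}}} \times 100 .
\]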
A ratio of nitrogen incorporated into the body over nitrogen absorbed gives a measure of protein "usability" – the BV.Unlike some measures of protein usability, biological value does not take into account how readily the protein can be digested and absorbed (largely by the small intestine). This is reflected in the experimental methods used to determine BV.BV uses two similar scales:The true percentage utilization (usually shown with a percent symbol).The percentage utilization relative to a readily utilizable protein source, often egg (usually shown as unitless).These two values will be similar but not identical.The BV of a food varies greatly, and depends on a wide variety of factors. In particular the BV value of a food varies depending on its preparation and the recent diet of the organism. This makes reliable determination of BV difficult and of limited use — fasting prior to testing is universally required in order to ascertain reliable figures.BV is commonly used in nutrition science in many mammalian organisms, and is a relevant measure in humans. It is a popular guideline in bodybuilding in protein choice. Blood lipids (or blood fats) are lipids in the blood, either free or bound to other molecules. They are mostly transported in a protein capsule, and the density of the lipids and type of protein determines the fate of the particle and its influence on metabolism. The concentration of blood lipids depends on intake and excretion from the intestine, and uptake and secretion from cells. Blood lipids are mainly fatty acids and cholesterol. Hyperlipidemia is the presence of elevated or abnormal levels of lipids and/or lipoproteins in the blood, and is a major risk factor for cardiovascular disease. The body fat percentage (BFP) of a human or other living being is the total mass of fat divided by total body mass; body fat includes essential body fat and storage body fat. Essential body fat is necessary to maintain life and reproductive functions. The percentage of essential body fat for women is greater than that for men, due to the demands of childbearing and other hormonal functions. The percentage of essential fat is 2–5% in men, and 10–13% in women (referenced through NASM). Storage body fat consists of fat accumulation in adipose tissue, part of which protects internal organs in the chest and abdomen. The minimum recommended total body fat percentage exceeds the essential fat percentage value reported above. A number of methods are available for determining body fat percentage, such as measurement with calipers or through the use of bioelectrical impedance analysis.The body fat percentage is a measure of fitness level, since it is the only body measurement which directly calculates a person's relative body composition without regard to height or weight. The widely used body mass index (BMI) provides a measure that allows the comparison of the adiposity of individuals of different heights and weights. While BMI largely increases as adiposity increases, due to differences in body composition, other indicators of body fat give more accurate results; for example, individuals with greater muscle mass or larger bones will have higher BMIs. As such, BMI is a useful indicator of overall fitness for a large group of people, but a poor tool for determining the health of an individual. The Callanetics exercise programme was created by Callan Pinckney in the early 1980s. 
It is a system of exercise involving frequent repetition of small muscular movements and squeezes, designed to improve muscle tone. The programme was developed by Pinckney from classical ballet exercises, to help ease a back problem that she was born with.The theory of callanetics is that the surface muscles of the body are supported by deeper muscles, but popular exercise programmes often exercise only the surface muscles. According to callanetics, deeper muscles are best exercised using small but precise movements. Exercising the deeper muscles also leads to improved posture, which may result in the appearance of weight loss even if very little weight was lost.Pinckney also recommends exercising with clothing that highlights (not flatters) the body's natural shape, and exercising in bright light, to show up the body's imperfections to the exerciser. "A calorie is a calorie" is a tautology used to convey the speaker's conviction that the concept of the "calorie" is in fact a sufficient way to describe energy content of food.It has been a commonly cited truism since the early 1960s. The tautological phrase means that regardless of the form of food calorie a person consumes (whether a carbohydrate, protein or fat calorie) the energy chemically extracted from the food, or the work necessary to burn such a calorie, is identical to any other. One dietary calorie contains 4.184 kilojoules of energy. With this knowledge, it is easy to assume that all calories have equal value. CAP-e (cell-based antioxidant protection in erythrocytes), is a novel in vitro bioassay for antioxidant activity developed by Alexander Schauss, Gitte Jensen, and associates at the American Institute for Biosocial and Medical Research (AIBMR), a private contract research organization (CRO) located in Puyallup, Washington, and Holger NIS, a private CRO located in Klamath Falls, Oregon.The CAP-e assay is performed by first incubating red blood cells (erythrocytes) with a test sample at a range of concentrations. The cells are then combined with dichloro fluorescein diacetate (DCFDA), which is oxidized in the presence of free radicals to form a green fluorescent byproduct (DCF). In the next step of the assay, exogenous hydrogen peroxide is added at a concentration of 167 mM to artificially induce severe oxidative stress. The antioxidant activity of varying concentrations of the test compound is measured based on the degree of inhibition of DCF-fluorescence, which is an indirect and nonspecific measure of reactive oxygen species production. To date, the assay has been used in 2 published studies, both conducted by AIBMR and Holger NIS. A carbohydrate is a biological molecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms, usually with a hydrogen–oxygen atom ratio of 2:1 (as in water); in other words, with the empirical formula Cm(H2O)n (where m could be different from n). This formula holds true for monosaccharides. Some exceptions exist; for example, deoxyribose, a sugar component of DNA, has the empirical formula C5H10O4. Carbohydrates are technically hydrates of carbon; structurally it is more accurate to view them as polyhydroxy aldehydes and ketones.The term is most common in biochemistry, where it is a synonym of 'saccharide', a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. 
Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. The word saccharide comes from the Greek word σάκχαρον (sákkharon), meaning "sugar". While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose. For example, grape sugar is the monosaccharide glucose, cane sugar is the disaccharide sucrose, and milk sugar is the disaccharide lactose.Carbohydrates perform numerous roles in living organisms. Polysaccharides serve for the storage of energy (e.g. starch and glycogen) and as structural components (e.g. cellulose in plants and chitin in arthropods). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g. ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.In food science and in many informal contexts, the term carbohydrate often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts).Often in lists of nutritional information, such as the USDA National Nutrient Database, the term "carbohydrate" (or "carbohydrate by difference") is used for everything other than water, protein, fat, ash, and ethanol. This will include chemical compounds such as acetic or lactic acid, which are not normally considered carbohydrates. It also includes dietary fiber which is a carbohydrate but which does not contribute much in the way of food energy (calories), even though it is often included in the calculation of total food energy just as though it were a sugar.Carbohydrates are found in a wide variety of foods. The important sources are cereals (wheat, maize, rice), potatoes, sugarcane, fruits, table sugar (sucrose), bread, milk, etc. Starch and sugar are the important carbohydrates in our diet. Starch is abundant in potatoes, maize, rice and other cereals. Sugar appears in our diet mainly as sucrose (table sugar), which is added to drinks and many prepared foods such as jam, biscuits and cakes, and glucose and fructose which occur naturally in many fruits and some vegetables.Glycogen is a carbohydrate found in the liver and muscles (as animal source). Cellulose in the cell wall of all plant tissue is a carbohydrate. It is important in our diet as fibre which helps to maintain a healthy digestive system. All carbohydrates absorbed in the small intestine must be hydrolyzed to monosaccharides prior to absorption. Hydrolysis precedes transport of monosaccharides in hamster intestine. From sucrose, glucose is taken up much faster than fructose. Monosaccharide transport saturates with D-glucose at 30 mM.Digestion of starch begins with the action of salivary alpha-amylase/ptyalin, although its activity is slight in comparison with that of pancreatic amylase in the small intestine. Amylase hydrolyzes starch to alpha-dextrin, which are then digested by gluco-amylase (alpha-dextrinases) to maltose and maltotriose. 
The products of digestion of alpha-amylase and alpha-dextrinase, along with dietary disaccharides, are hydrolyzed to their corresponding monosaccharides by enzymes (maltase, isomaltase, sucrase and lactase) present in the brush border of the small intestine. In the typical Western diet, digestion and absorption of carbohydrates is fast and takes place usually in the upper small intestine. However, when the diet contains carbohydrates that are not easily digestible, digestion and absorption take place mainly in the ileal portion of the intestine. Digestion of food continues while the simplest elements are absorbed. The absorption of most digested food occurs in the small intestine through the brush border of the epithelium covering the villi (small hair-like structures). It is not a simple diffusion of substances, but is active and requires energy use by the epithelial cells. During the phase of carbohydrate absorption, fructose is transported into the intestinal cell's cytosol, while glucose and galactose compete with each other for the sodium-dependent (Na+) transporter required for their uptake. From the cytosol, monosaccharides pass into the capillaries by simple or facilitated diffusion. Carbohydrates not digested in the small intestine, including resistant starch from foods such as potato, bean, oat and wheat flour, as well as various monosaccharides, oligosaccharides and starch, are digested to a variable extent when they reach the large intestine. The bacterial flora metabolize these compounds anaerobically, in the absence of oxygen. This produces gases (hydrogen, carbon dioxide and methane) and short-chain fatty acids (acetate, propionate, butyrate). The gases are absorbed and excreted by breathing or through the anus (flatulence). Fatty acids are rapidly metabolized. Butyrate is used mainly by cells in the colon, and acetate is absorbed into the blood and taken up by the liver, muscle and other tissue. Propionate is an important precursor of glucose in some animals, but not humans. Chelates in animal feed are organic forms of essential trace minerals such as copper, iron, manganese and zinc. Animals absorb, digest and use mineral chelates better than inorganic minerals. This means that lower concentrations can be used in animal feeds. In addition, animals fed chelated sources of essential trace minerals excrete lower amounts in their faeces, and so there is less environmental contamination. Mineral chelates also offer health and welfare benefits in animal nutrition. Every child has the right to adequate nutrition under the Universal Declaration of Human Rights. In New Zealand, an estimated 100,000 children go to school every day without breakfast. Article 11 of the International Covenant on Economic, Social and Cultural Rights recognises "the fundamental right of everyone to be free from hunger". In the Auckland region, there are approximately 43,000 children in decile 1 and 2 state schools. Of these, 57% are Pasifika, 30% are Māori, and 4% are European. The remaining 9% are Asian/Middle Eastern and other ethnic groups.
The Ministry of Health advised the Minister in 2006 that "decile one and two schools draw their students from our most vulnerable communities and cope with multiple issues related to poverty". A 2002 Ministry of Health survey found a high percentage of children between the ages of 5 and 14 who "sometimes or always ate nothing before school", compared to a New Zealand Health Survey that found around 15% of children leave for school without eating breakfast. The Children's Commissioner released a Framework for Food in Schools Programme stating "Children need to be fed adequately for a range of nutritional, educational and social reasons and should be fed regardless of their parent's income or status. To this end, breakfast should be made available to decile 1 and 2 primary, intermediate and primary intermediate combined schools." Cholesterol, from the Ancient Greek chole- (bile) and stereos (solid) followed by the chemical suffix -ol for an alcohol, is an organic molecule. It is a sterol (or modified steroid), a type of lipid molecule, and is biosynthesized by all animal cells because it is an essential structural component of all animal cell membranes, necessary to maintain both membrane structural integrity and fluidity. Cholesterol enables animal cells to dispense with a cell wall (which would otherwise be needed to protect membrane integrity and cell viability), thereby allowing animal cells to change shape rapidly and animals to move (unlike bacteria and plant cells, which are restricted by their cell walls). In addition to its importance for animal cell structure, cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acids, and vitamin D. Cholesterol is the principal sterol synthesized by all animals. In vertebrates, hepatic cells typically produce the greatest amounts. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth. François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. However, it was not until 1815 that the chemist Michel Eugène Chevreul named the compound "cholesterine". The Spanish Biomedical Research Centre in Physiopathology of Obesity and Nutrition (Centro de Investigación Biomédica en Red de Fisiopatología de la Obesidad y Nutrición: CIBERObn, www.ciberobn.com) is a public research consortium founded on November 28, 2006 and financed by the Instituto de Salud Carlos III (ISCIII) and the Ministerio de Ciencia e Innovación (MICINN). The CIBERObn gathers 25 research groups from different Spanish hospitals, universities and research centres. Its mission is to promote better knowledge of the mechanisms contributing to the development of obesity, in order to reduce its incidence, prevalence and complications, as well as those of nutrition-related diseases. The CIBERObn is structured into 8 scientific programs intended to increase collaboration between researchers, to strengthen synergies and to open up new lines of research.
The programs are as follows: Nutrition (effects of different types of diet and nutrients on human health); Adipobiology (identification of new signals released by the adipose tissue which are involved in the regulation of energy homeostasis); Obesity and Cancer (role of proteins associated with the cell cycle in metabolic control and obesity development); Obesity and Cardiovascular Risk (hemodynamic, metabolic and inflammatory factors associated with cardiac and vascular diseases in obesity); Neurocognitive and Environmental Factors (environmental and emotional factors in nutrition and obesity disorders); Obesity in the Childhood-Adolescence Period (biochemical, hormonal, metabolic, genetic, proteomic and body-composition studies in children and adolescents); Biomarkers (new strategies, therapeutic and prevention technologies, and biomarkers of obesity); and Biological Models and Therapeutic Targets (development and validation of experimental models and therapeutic targets in obesity). Additionally, CIBERObn places particular emphasis on translational research, especially focusing on the transfer of research to clinical applications and practice. To this end, two cross-cutting programs have been created: Staff Training and Recruitment, intended to train staff according to the consortium's research lines and priorities; and the "Fat Bank" Structural Program, a biobank infrastructure connecting the above-mentioned programs by contributing common solutions. The Fat Bank is a strategic platform of the CIBERobn which offers the scientific community different kinds of biological material associated with thorough metabolic phenotyping. This information is entered by means of tailor-made, individualised software. The Fat Bank, launched in 2009, currently contains 3,000 samples of biological material from more than 300 individuals. In 2009, 287 indexed articles were published. Their average impact factor is 4.05, which is very high for this subject area. Of them, 67 (23%) belong to the first decile and 105 more (a total of 172 papers, 60%) belong to the first quartile of the subject area of indexed journals. They accumulate a total impact factor of 1,165. Provisional data for 2010 show an increase of 10%, greatly improving the international visibility of the consortium. A complete protein (or whole protein) is a source of protein that contains an adequate proportion of all nine of the essential amino acids necessary for the dietary needs of humans or other animals. According to the Food and Nutrition Board of the National Academy of Medicine (NAM), formerly called the Institute of Medicine (IoM), complete proteins are supplied by meat, poultry, fish, eggs, milk, cheese, yogurt, quinoa, or soybean. Since the amino acid profile of protein in plant food may, except in a few cases, be deficient in one or more of the essential amino acids, plant proteins are said to be incomplete. Vegetarian meals may supply complete protein through the practice of protein combining, which raises the amino acid profile through plant variety. The following table lists the optimal profile of the essential amino acids, which comprises complete protein, as recommended by the Institute of Medicine's Food and Nutrition Board. The second column in the following table shows the amino acid requirements of adults as recommended by the World Health Organization, calculated for a 70-kilogram (155-pound) adult.
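As an aside on how such per-kilogram recommendations scale to an individual (the 30 mg figure below is purely illustrative, not a quoted WHO value), a requirement stated per kilogram of body weight is simply multiplied by body mass:

\[
30~\text{mg/kg/day} \times 70~\text{kg} \;=\; 2100~\text{mg/day} \;\approx\; 2.1~\text{g/day}.
\]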
Recommended Daily Intake is based on 2,000 kilocalories per day, which is also an appropriate daily calorie allowance for a fairly sedentary, 70-kilogram (155-pound) adult. The third column in the following table shows the amino acid profile of 2,466 kilocalories of baked potatoes (2,652 grams, or 5 lb 13.5 oz). While many plant proteins are lower in one or more essential amino acids than animal proteins, especially lysine, and to a lesser extent methionine and threonine, eating a variety of plants can serve as a well-balanced and complete source of amino acids. Consuming a mixture of plant-based protein sources can increase the biological value (BV) of food. For example, to obtain 25 grams of high-BV protein requires 492 grams (1 lb 1 oz) of canned pinto beans (USDA16044), for a total calorie intake of 423 kcal. When paired with 12 g (.5 oz) of Brazil nuts (USDA12078), we require only 364 g (13 oz) of canned pinto beans, for a total of 391 kcal. This small addition of Brazil nuts yields a 23% reduction in the total food mass and a 7.5% reduction in calories. Complementary proteins need not be eaten at the same meal for the body to use them together; the body can combine complementary proteins that are eaten over the course of the day. Conditioned satiety is one of the three known food-specific forms of suppression of appetite for food by effects of eating, along with alimentary alliesthesia and sensory-specific satiety. Conditioned satiety was first evidenced in 1955 in rats by the late French physiologist professor Jacques Le Magnen. The term was coined in 1972 by professor David Allenby Booth. Unlike the other two sorts of stimulus-specific satiety, this phenomenon is based on classical conditioning but is distinct from conditioned taste aversion (CTA) in its dependence on internal state towards the end of a meal. The CRNHs' goal is to improve knowledge of the functional properties of food, of metabolism and of human physiology, from basic research to the study of behaviours and their impact on health. France now has four CRNH centres with common tools and complementary scientific skills that allow them to develop multi-centre programs using their platforms and their specific skills. Nutritional issues are of major importance as populations around the world are considerably changing their diet and lifestyle, with large health consequences. Food-related diseases such as obesity, diabetes, cardiovascular diseases, malnutrition and cancers are becoming major public health issues. CRNHs is the network of French research centres for human nutrition. CRNHs has the ambition to provide scientific answers to nutrition-related health issues by promoting science and accelerating technology transfer to society. CRNHs aims to improve knowledge of the functional properties of food and particularly its effects on metabolism and on human physiology by developing multi-centre programs using its platforms and its specific skills. CRNHs contributes to technology transfer between hospital sectors, research laboratories and industries. CRNHs' expertise and its platforms for clinical exploration, analysis and data processing offer significant opportunities for collaborations. CRNHs develops research programs in nutrition within the framework of national, European and international research programs, working closely with industry partners and researchers worldwide.
To advise on strategic development, CRNHs has been endowed with an external scientific advisory board composed of experts from several European countries. 7-Dehydrocholesterol is a zoosterol that functions in the serum as a cholesterol precursor, and is converted to vitamin D3 in the skin, therefore functioning as provitamin-D3. The presence of this compound in human skin enables humans to manufacture vitamin D3 (cholecalciferol) from ultraviolet rays in sunlight, via an intermediate isomer, pre-vitamin D3. It is also found in the milk of several mammalian species. In insects it is a precursor for the hormone ecdysone, required for reaching adulthood. It was discovered by the Nobel laureate organic chemist Adolf Windaus. Dicopper chloride trihydroxide is the chemical compound with the formula Cu2(OH)3Cl. It is often referred to as tribasic copper chloride (TBCC), copper trihydroxyl chloride or copper hydroxychloride. It is a greenish crystalline solid encountered in mineral deposits, metal corrosion products, industrial products, art and archeological objects, and some living systems. It was originally manufactured on an industrial scale as a precipitated material used as either a chemical intermediate or a fungicide. Since 1994, a purified, crystallized product has been produced at the scale of thousands of tons per year, and used extensively as a nutritional supplement for animals. Dietary factors are recognized as having a significant effect on the risk of cancers, with different dietary elements both increasing and reducing risk. Diet and obesity may be related to up to 30–35% of cancer deaths, while physical inactivity appears to be related to 7% of cancer occurrence. One review in 2011 suggested that total caloric intake influences cancer incidence and possibly progression. While many dietary recommendations have been proposed to reduce the risk of cancer, few have significant supporting scientific evidence. Obesity and drinking alcohol are confirmed causes of cancer. Lowering the drinking of beverages sweetened with sugar is recommended as a measure to address obesity. A diet low in fruits and vegetables and high in red meat has been implicated but not confirmed, and the effect may be small for well-nourished people who maintain a healthy weight. Some specific foods are linked to specific cancers. Studies have linked eating red or processed meat to an increased risk of breast cancer, colon cancer, prostate cancer, and pancreatic cancer, which may be partially explained by the presence of carcinogens in foods cooked at high temperatures. Aflatoxin B1, a frequent food contaminant, causes liver cancer, but drinking coffee is associated with a reduced risk. Betel nut chewing causes oral cancer. The differences in dietary practices may partly explain differences in cancer incidence in different countries. For example, stomach cancer is more common in Japan due to its high-salt diet, while colon cancer is more common in the United States. Immigrant communities tend to develop the cancer risk of their new country, often within one generation, suggesting a substantial link between diet and cancer. Dietary recommendations for cancer prevention typically include weight management and eating "mainly vegetables, fruit, whole grains and fish, and a reduced intake of red meat, animal fat, and refined sugar." Dietary fiber or roughage is the indigestible portion of food derived from plants.
It has two main components:Soluble fiber, which dissolves in water, is readily fermented in the colon into gases and physiologically active byproducts, and can be prebiotic and viscous. This delays gastric emptying which, in humans, can result in an extended feeling of fullness.Insoluble fiber, which does not dissolve in water, is metabolically inert and provides bulking, or it can be prebiotic and metabolically ferment in the large intestine. Bulking fibers absorb water as they move through the digestive system, easing defecation.Dietary fibers can act by changing the nature of the contents of the gastrointestinal tract and by changing how other nutrients and chemicals are absorbed. Some types of soluble fiber absorb water to become a gelatinous, viscous substance which is fermented by bacteria in the digestive tract. Some types of insoluble fiber have bulking action and are not fermented. Lignin, a major dietary insoluble fiber source, may alter the rate and metabolism of soluble fibers. Other types of insoluble fiber, notably resistant starch, are fully fermented. Some but not all soluble plant fibers block intestinal mucosal adherence and translocation of potentially pathogenic bacteria and may therefore modulate intestinal inflammation, an effect that has been termed contrabiotic.Chemically, dietary fiber consists of non-starch polysaccharides such as arabinoxylans, cellulose, and many other plant components such as resistant starch, resistant dextrins, inulin, lignin, chitins, pectins, beta-glucans, and oligosaccharides. A position has been adopted by the US Department of Agriculture to include functional fibers as isolated fiber sources that may be included in the diet. The term "fiber" is something of a misnomer, since many types of so-called dietary fiber are not actually fibrous.Food sources of dietary fiber are often divided according to whether they provide (predominantly) soluble or insoluble fiber. Plant foods contain both types of fiber in varying degrees, according to the plant's characteristics.Advantages of consuming fiber are the production of healthful compounds during the fermentation of soluble fiber, and insoluble fiber's ability (via its passive hygroscopic properties) to increase bulk, soften stool, and shorten transit time through the intestinal tract. A disadvantage of a diet high in fiber is the potential for significant intestinal gas production and bloating. In the context of nutrition, a mineral is a chemical element required as an essential nutrient by organisms to perform functions necessary for life. Minerals originate in the earth and cannot be made by living organisms. Plants get minerals from soil. Most of the minerals in a human diet come from eating plants and animals or from drinking water. As a group, minerals are one of the four groups of essential nutrients, the others of which are vitamins, essential fatty acids, and essential amino acids.The five major minerals in the human body are calcium, phosphorus, potassium, sodium, and magnesium. All of the remaining elements in a human body are called "trace elements". The trace elements that have a specific biochemical function in the human body are sulfur, iron, chlorine, cobalt, copper, zinc, manganese, molybdenum, iodine and selenium.Most chemical elements that are ingested by organisms are in the form of simple compounds. Plants absorb dissolved elements in soils, which are subsequently ingested by the herbivores and omnivores that eat them, and the elements move up the food chain. 
Larger organisms may also consume soil (geophagia) or use mineral resources, such as salt licks, to obtain limited minerals unavailable through other dietary sources. Bacteria and fungi play an essential role in the weathering of primary elements that results in the release of nutrients for their own nutrition and for the nutrition of other species in the ecological food chain. One element, cobalt, is available for use by animals only after having been processed into complex molecules (e.g., vitamin B12) by bacteria. Minerals are used by animals and microorganisms for the process of mineralizing structures, called "biomineralization", used to construct bones, seashells, eggshells, exoskeletons and mollusc shells. The Dietary Reference Intake (DRI) is a system of nutrition recommendations from the Institute of Medicine (IOM) of the National Academies (United States). It was introduced in 1997 in order to broaden the existing guidelines known as Recommended Dietary Allowances (RDAs, see below). The DRI values differ from those used in nutrition labeling on food and dietary supplement products in the U.S. and Canada, which use Reference Daily Intakes (RDIs) and Daily Values (%DV); these were based on outdated RDAs from 1968 but were updated as of 2016. DRI provides several different types of reference values. The Estimated Average Requirement (EAR) is expected to satisfy the needs of 50% of the people in that age group, based on a review of the scientific literature. The Recommended Dietary Allowance (RDA) is the daily dietary intake level of a nutrient considered sufficient by the Food and Nutrition Board of the Institute of Medicine to meet the requirements of 97.5% of healthy individuals in each life-stage and sex group; the definition implies that the intake level would cause a harmful nutrient deficiency in just 2.5%. It is calculated based on the EAR and is usually approximately 20% higher than the EAR (see Calculating the RDA). The Adequate Intake (AI) is used where no RDA has been established; the amount established is somewhat less firmly believed to be adequate for everyone in the demographic group. Tolerable upper intake levels (UL) caution against excessive intake of nutrients (like vitamin A) that can be harmful in large amounts. The UL is the highest level of daily nutrient consumption that is considered to be safe for, and cause no side effects in, 97.5% of healthy individuals in each life-stage and sex group; the definition implies that the intake level would cause a harmful nutrient excess in just 2.5%. The European Food Safety Authority (EFSA) has also established ULs, which do not always agree with U.S. ULs; for example, the adult zinc UL is 40 mg in the U.S. and 25 mg for the EFSA. Acceptable Macronutrient Distribution Ranges (AMDR) are ranges of intake specified as a percentage of total energy intake, used for sources of energy such as fats and carbohydrates. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States, but values may differ. DRIs are used by both the United States and Canada, and are intended for the general public and health professionals.
Applications include the composition of diets for schools, prisons, hospitals or nursing homes; industries developing new foods and dietary supplements; and healthcare policy makers and public health officials. Dietary Reference Values (DRV) is the name of the nutritional requirements systems used by the United Kingdom Department of Health and the European Union's European Food Safety Authority. In 1991, the United Kingdom Department of Health published the Dietary Reference Values for Food Energy and Nutrients for the United Kingdom. This records Dietary Reference Values, which recommend nutritional intakes for the UK population. The DRVs can be divided into three types: the Reference Nutrient Intake (RNI, an intake that meets the requirement of 95% of the population), the Estimated Average Requirement (EAR, meeting the requirement of 50% of the population), and the Lower Reference Nutrient Intake (LRNI, meeting the requirement of only 5% of the population). RNI is not the same as RDA (Recommended Daily Allowance) or GDA, although they are often similar. Ecotrophology is a branch of nutritional science concerned with everyday practice. It is mainly in Germany that it is seen as a separate branch of health care, and the word is rare outside Germany. Ecotrophologists are specialists in nutrition, household management and economics. Their training includes physiological, economic and technological principles of healthy nutrition and their practical application. They work in many different fields: management of the above types of operations, development of new nutritional concepts in catering, quality control in food manufacturing and processing operations, and research within the food industry. Due to the interdisciplinary nature of the training, ecotrophologists often take a coordinating role in facility management companies. In human nutrition, the term empty calories applies to foods and beverages composed primarily or solely of sugar, fats or oils, or alcohol-containing beverages. Carbonated soft drinks are an example. These supply food energy but little or no other nutrition in the way of vitamins, minerals, protein, fiber, or essential fatty acids. Fat contributes nine calories per gram, ethanol seven, and sugar four. The U.S. Department of Agriculture (USDA) advises, "A small amount of empty calories is okay, but most people eat far more than is healthy." The phrase is derived from low nutrient density, which is the proportion of nutrients in a food relative to its energy content. The error of considering energy foods as adequate nutrition was first scientifically demonstrated by François Magendie by experiments on dogs and described in his Précis élementaire de Physiologie. He showed that a diet of only sugar, only olive oil, or only butter each led to the death of his test animals in 30 to 40 days. In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat. Energy homeostasis is an important aspect of bioenergetics.
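The bookkeeping behind this regulation is often summarized by a simple energy-balance relation (a standard accounting identity, written here with illustrative symbols):

\[
\Delta E_{\text{stores}} \;=\; E_{\text{intake}} \;-\; E_{\text{expenditure}},
\]

so a sustained positive balance is stored (largely as fat), while a sustained negative balance is met by drawing down body energy stores.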
An essential amino acid, or indispensable amino acid, is an amino acid that cannot be synthesized de novo (from scratch) by the organism, and thus must be supplied in its diet. The nine amino acids humans cannot synthesize are phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine (i.e., F V T W M L I K H).Six other amino acids are considered conditionally essential in the human diet, meaning their synthesis can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress. These six are arginine, cysteine, glycine, glutamine, proline, and tyrosine (i.e., R C G Q P Y). Five amino acids are dispensable in humans, meaning they can be synthesized in the body. These five are alanine, aspartic acid, asparagine, glutamic acid and serine (i.e., A D N E S). An essential nutrient is a nutrient required for normal physiological function that cannot be synthesized by the body, and thus must be obtained from a dietary source. Apart from water, which is universally required for the maintenance of homeostasis, essential nutrients are indispensable for the metabolic processes of cells, as well as the proper physiological functions of tissues and organs. In the case of humans, there are nine amino acids, two fatty acids, thirteen vitamins and fifteen minerals that are considered essential nutrients. In addition, there are several molecules that are considered conditionally essential nutrients since they are indispensable in certain developmental and pathological states. Neurology (from Greek: νεῦρον, neuron, and the suffix -λογία -logia "study of") is a branch of medicine dealing with disorders of the nervous system. Neurology deals with the diagnosis and treatment of all categories of conditions and disease involving the central and peripheral nervous system (and its subdivisions, the autonomic nervous system and the somatic nervous system); including their coverings, blood vessels, and all effector tissue, such as muscle. Neurological practice relies heavily on the field of neuroscience, which is the scientific study of the nervous system.A neurologist is a physician specializing in neurology and trained to investigate, or diagnose and treat neurological disorders. Neurologists may also be involved in clinical research, clinical trials, and basic or translational research. While neurology is a non-surgical specialty, its corresponding surgical specialty is neurosurgery.There is significant overlap between the fields of neurology and psychiatry, with the boundary between the two disciplines and the conditions they treat being somewhat nebulous. Adipsia, also known as hypodipsia, is a symptom of inappropriately decreased or absent feelings of thirst. It involves an increased osmolality or concentration of solute in the urine, which stimulates secretion of antidiuretic hormone (ADH) from the hypothalamus to the kidneys. This causes the person to retain water and ultimately become unable to feel thirst. Due to its rarity, the disorder has not been the subject of many research studies.Adipsia may be seen in conditions such as diabetes insipidus and may result in hypernatremia. 
It can occur as the result of abnormalities in the hypothalamus, pituitary and corpus callosum, as well as following pituitary/hypothalamic surgery.It is possible for hypothalamic dysfunction, which may result in adipsia, to be present without physical lesions in the hypothalamus, although there are only four reported cases of this. There are also some cases of patients experiencing adipsia due to a psychiatric disease. In these rare psychogenic cases, the patients have normal levels of urine osmolality as well as typical ADH activity. Alcohol-related brain damage is the damage that occurs to brain structures or function of the central nervous system as a result of the direct neurotoxic effects of alcohol intoxication or acute withdrawal. The frontal lobes are the most damaged region of the brains of alcohol abusers but other regions of the brain are also affected. The damage that occurs from heavy drinking/high blood alcohol levels causes impairments in judgement and decision making and social skills. These brain changes are linked to poor behavioural control and impulsivity, which tend to worsen the existing addiction problem.The problems of alcoholism are well known, such as memory disorders, liver disease, high blood pressure, muscle weakness, heart problems, anaemia, low immune function, disorders of the digestive system and pancreatic problems as well as depression, unemployment and family problems including child abuse. Recently attention has been increasingly focused on binge drinking by adolescents and young adults due to neurochemical changes and brain damage which, unlike with alcoholism, can occur after a relatively short period of time; the damage is particularly evident in the corticolimbic region. This brain damage increases the risk of abnormalities in mood and cognitive abilities, increases the risk of dementia and additionally binge drinkers have an increased risk of developing chronic alcoholism.Individuals who are impulsive are at high risk of addiction due to impaired behavioural control and increased sensation seeking behaviour. Alcohol abuse, especially during adolescence, causes a deterioration of executive functions in the frontal lobe. This brain damage from alcohol actually increases impulsivity and therefore worsens the addictive disorder. With prolonged abstinence neurogenesis occurs which can potentially reverse the damage from alcohol abuse. Astrogliosis (also known as astrocytosis or referred to as reactive astrocytosis) is an abnormal increase in the number of astrocytes due to the destruction of nearby neurons from CNS trauma, infection, ischemia, stroke, autoimmune responses, and neurodegenerative disease. In healthy neural tissue, astrocytes play critical roles in energy provision, regulation of blood flow, homeostasis of extracellular fluid, homeostasis of ions and transmitters, regulation of synapse function, and synaptic remodeling. Astrogliosis changes the molecular expression and morphology of astrocytes, causing scar formation and, in severe cases, inhibition of axon regeneration. Automated Neuropsychological Assessment Metrics (ANAM), is a library of computer-based assessments of cognitive domains including attention, concentration, reaction time, memory, processing speed, and decision-making. ANAM has been administered nearly two million times in a variety of applications and settings. 
ANAM provides clinicians and researchers with tests to evaluate changes in an individual’s cognitive status over time.Components of today’s ANAM design reflect the work of dozens of talented scientists, and ANAM development is guided by public and private sector research. Early research versions of ANAM were developed in the U.S. Department of Defense. This work was patented by the U.S. Army and exclusively licensed for development and commercialization to benefit the military and the public. Through its Technology Transition program, the U.S. Army licensed ANAM exclusively to the University of Oklahoma (OU).The OU Center for the Study of Human Operator Performance programmed and tested a robust new ANAM product, including 22 neurocognitive tests, statistical reporting and research support tools. Vista LifeSciences (vistalifesciences.com) holds an exclusive license to ANAM from the University of Oklahoma to commercialize the technology and continues to develop and support ANAM. Autoscopy is the experience in which an individual perceives the surrounding environment from a different perspective, from a position outside of his or her own body. Autoscopy comes from the ancient Greek αὐτός ("self") and σκοπός ("watcher").Autoscopy has been of interest to humankind from time immemorial and is abundant in the folklore, mythology, and spiritual narratives of most ancient and modern societies. Cases of autoscopy are commonly encountered in modern psychiatric practice. According to neurological research, autoscopic experiences are hallucinations. Beevor’s Axiom is the idea that the brain does not know muscles, only movements. In other words, the brain registers the movements that muscles combine to make, not the individual muscles that are making the movements. Hence, this is why one can sign their name (albeit poorly) with their foot. Beevor’s Axiom was coined by Dr. Charles Edward Beevor, an English neurologist.Dr. Beevor presented Beevor’s Axiom in a series of four lectures from June 3, 1903 to July 4, 1903 before the Royal College of Physicians of London as part of the Croonian Lectures. His experiments showed that when an area of the cortex was stimulated, the body responded with a movement, not just a single muscle. Dr. Beevor concluded that “only co-ordinated movements are represented in the excitable cortex” In relation to Beevor’s Axiom, it has been found that the brain encodes sequences, such as playing the piano, signing our name, wiping off a counter, and chopping vegetables, and once encoded and practiced, it takes less brain activity to perform them. This supports Beevor’s Axiom, because the brain can recall movements easier than it can learn them.Beevor’s Axiom is only partially true, however. Most behavior of muscles is encoded in the primary motor cortex (M1) and separated by muscle group. In an effort to understand the encoding in the M1, researchers observed commands of monkeys. Muscle cells changed firing rate according to the direction of the arm movements. Each neuron has one direction that elicits the greatest response. Some M1 neurons encode muscle contractions, while others react to particular movements, regardless of the muscles used to perform them. The key characteristic of the primary motor cortex is its dynamic nature; the M1 changes based on experience. The supplementary motor area (SMA) plays a key role in initiating motion sequences. The premotor cortex (PMA) plays a key role when motor sequences are guided by external events. 
They map behaviors, as opposed to the M1, which maps specific movements. This could cause an issue in brain–computer interface research: if a researcher tries to excite only a single muscle, it may be impossible to do so without evoking a full movement. Behavioral neurology is a subspecialty of neurology that studies the neurological basis of behavior, memory, and cognition, the impact of neurological damage and disease upon these functions, and the treatment thereof. Two fields associated with behavioral neurology are neuropsychiatry and neuropsychology. In the United States, 'Behavioral Neurology and Neuropsychiatry' has been recognized as a single subspecialty by the United Council for Neurologic Subspecialties (UCNS) since 2004. A wide range of syndromes and diseases are commonly studied by behavioral neurology. Beta-2 transferrin is a carbohydrate-free (desialated) isoform of transferrin, which is almost exclusively found in the cerebrospinal fluid. It is not found in blood, mucus or tears, thus making it a specific marker of cerebrospinal fluid, applied as an assay in cases where cerebrospinal fluid leakage is suspected. Beta-2 transferrin would also be positive in patients with perilymph fluid leaks, as it is also present in inner ear perilymph. Thus, beta-2 transferrin in otorrhea would be suggestive of either a CSF leak or a perilymph leak. Multilingualism is the use of more than one language, either by an individual speaker or by a community of speakers. It is believed that multilingual speakers outnumber monolingual speakers in the world's population. More than half of all Europeans claim to speak at least one language other than their mother tongue. Multilingualism is becoming a social phenomenon governed by the needs of globalization and cultural openness. Owing to the ease of access to information facilitated by the Internet, individuals' exposure to multiple languages is becoming increasingly frequent, thereby promoting a need to acquire additional languages. People who speak several languages are also called polyglots. Multilingual speakers have acquired and maintained at least one language during childhood, the so-called first language (L1). The first language (sometimes also referred to as the mother tongue) is acquired without formal education, by mechanisms that are heavily disputed. Children acquiring two languages in this way are called simultaneous bilinguals. Even in the case of simultaneous bilinguals, one language usually dominates the other. People who know more than one language have been reported to be more adept at language learning compared to monolinguals. Additionally, bilinguals often have important economic advantages over monolingual individuals, as bilingual people are able to carry out duties that monolinguals cannot, such as interacting with customers who only speak a minority language. Multilingualism in computing can be considered part of a continuum between internationalization and localization. Due to the status of English in computing, software development nearly always uses it (but see also Non-English-based programming languages), so almost all commercial software is initially available in an English version, and multilingual versions, if any, may be produced as alternative options based on the English original.
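As a minimal sketch of the internationalization/localization continuum mentioned just above, the snippet below shows one common pattern: a message catalog keyed by language code that falls back to the English original when no translation exists. The catalog contents and language codes are invented for illustration and do not refer to any particular product.

    # Hypothetical message catalogs: English is the source language,
    # translations are added per locale as they become available.
    CATALOGS = {
        "en": {"greeting": "Hello", "farewell": "Goodbye"},
        "de": {"greeting": "Hallo"},          # partial German translation
        "es": {"greeting": "Hola", "farewell": "Adiós"},
    }

    def translate(key, lang):
        """Look up a message for `lang`, falling back to the English original."""
        return CATALOGS.get(lang, {}).get(key, CATALOGS["en"][key])

    print(translate("greeting", "de"))   # "Hallo"
    print(translate("farewell", "de"))   # no German entry -> falls back to "Goodbye"
    print(translate("greeting", "fr"))   # no French catalog -> "Hello"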
Various aspects of multilingualism have been studied in the field of neurology. These include the representation of different language systems in the brain, the effects of multilingualism on the brain's structural plasticity, aphasia in multilingual individuals, and bimodal bilinguals (people who can speak one sign language and one oral language). Neurological studies of multilingualism are carried out with functional neuroimaging, electrophysiology, and through observation of people who have suffered brain damage.The brain contains areas that are specialized to deal with language, located in the perisylvian cortex of the left hemisphere. These areas are crucial for performing language tasks, but they are not the only areas that are used; disparate parts of both right and left brain hemispheres are active during language production. In multilingual individuals, there is a great deal of similarity in the brain areas used for each of their languages. Insights into the neurology of multilingualism have been gained by the study of multilingual individuals with aphasia, or the loss of one or more languages as a result of brain damage. Bilingual aphasics can show several different patterns of recovery; they may recover one language but not another, they may recover both languages simultaneously, or they may involuntarily mix different languages during language production during the recovery period. These patterns are explained by the dynamic view of bilingual aphasia, which holds that the language system of representation and control is compromised as a result of brain damage.Research has also been carried out into the neurology of bimodal bilinguals, or people who can speak one oral language and one sign language. Studies with bimodal bilinguals have also provided insight into the tip of the tongue phenomenon, working memory, and patterns of neural activity when recognizing facial expressions, signing, and speaking. The biochemistry of Alzheimer's disease (AD), one of the most common causes of adult dementia, is not yet very well understood. AD has been identified as a protein misfolding disease due to the accumulation of abnormally folded amyloid beta protein in the brains of Alzheimer's patients. Amyloid beta, also written Aβ, is a short peptide that is an abnormal proteolytic byproduct of the transmembrane protein amyloid precursor protein (APP), whose function is unclear but thought to be involved in neuronal development. The presenilins are components of proteolytic complex involved in APP processing and degradation.Amyloid beta monomers are soluble and contain short regions of beta sheet and polyproline II helix secondary structures in solution, though they are largely alpha helical in membranes; however, at sufficiently high concentration, they undergo a dramatic conformational change to form a beta sheet-rich tertiary structure that aggregates to form amyloid fibrils. These fibrils deposit outside neurons in dense formations known as senile plaques or neuritic plaques, in less dense aggregates as diffuse plaques, and sometimes in the walls of small blood vessels in the brain in a process called amyloid angiopathy or congophilic angiopathy.AD is also considered a tauopathy due to abnormal aggregation of the tau protein, a microtubule-associated protein expressed in neurons that normally acts to stabilize microtubules in the cell cytoskeleton. 
Like most microtubule-associated proteins, tau is normally regulated by phosphorylation; however, in AD patients, hyperphosphorylated tau accumulates as paired helical filaments that in turn aggregate into masses inside nerve cell bodies known as neurofibrillary tangles and as dystrophic neurites associated with amyloid plaques. Although little is known about the process of filament assembly, it has recently been shown that a depletion of a prolyl isomerase protein in the parvulin family accelerates the accumulation of abnormal tau.Neuroinflammation is also involved in the complex cascade leading to AD pathology and symptoms. Considerable pathological and clinical evidence documents immunological changes associated with AD, including increased pro-inflammatory cytokine concentrations in the blood and cerebrospinal fluid. Whether these changes may be a cause or consequence of AD remains to be fully understood, but inflammation within the brain, including increased reactivity of the resident microglia towards amyloid deposits, has been implicated in the pathogenesis and progression of AD. The primary symptoms of dyslexia were first identified by Oswald Berkhan in 1881. The term 'dyslexia' was coined in 1887 by Rudolf Berlin, an ophthalmologist practicing in Stuttgart, Germany. Since then generations of researchers have been investigating what dyslexia is and trying to identify the biological causes. (See History section of article "Dyslexia".) The theories of the etiology of dyslexia have and are evolving with each new generation of dyslexia researchers, and the more recent theories of dyslexia tend to enhance one or more of the older theories as understanding of the nature of dyslexia evolves.Theories should not be viewed as competing, but as attempting to explain the underlying causes of a similar set of symptoms from a variety of research perspectives and background. Blindsight is the ability of people who are cortically blind due to lesions in their striate cortex, also known as primary visual cortex or V1, to respond to visual stimuli that they do not consciously see. The majority of studies on blindsight are conducted on patients who have the conscious blindness on only one side of their visual field. Following the destruction of the striate cortex, patients are asked to detect, localize and discriminate amongst visual stimuli that are presented to their blind side, often in a forced-response or guessing situation, even though they don't consciously recognise the visual stimulus. Research shows that blind patients achieve a higher accuracy than would be expected from chance alone. Type 1 blindsight is the term given to this ability to guess—at levels significantly above chance—aspects of a visual stimulus (such as location or type of movement) without any conscious awareness of any stimuli. Type 2 blindsight occurs when patients claim to have a feeling that there has been a change within their blind area—e.g. movement—but that it was not a visual percept. Blindsight challenges the common belief that perceptions must enter consciousness to affect our behavior; it shows that our behavior can be guided by sensory information of which we have no conscious awareness. It may be thought of as a converse of the form of anosognosia known as Anton–Babinski syndrome, in which there is full cortical blindness along with the confabulation of visual experience. 
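The claim above that blindsight patients perform above chance in forced-choice tasks is typically assessed with a simple binomial calculation. The sketch below shows that arithmetic for an invented two-alternative example (the trial counts are made up, not data from any actual study).

    from math import comb

    def p_at_least(k, n, p=0.5):
        """Probability of k or more correct responses in n trials when guessing at rate p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Hypothetical two-alternative forced-choice session: 70 correct out of 100 trials
    # on stimuli the patient reports not seeing. Pure guessing would give about 50%.
    print(p_at_least(70, 100))   # ≈ 4e-05, far below conventional significance thresholds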
The blood–brain barrier (BBB) is a highly selective semipermeable membrane barrier that separates the circulating blood from the brain and extracellular fluid in the central nervous system (CNS). The blood–brain barrier is formed by brain endothelial cells and it allows the passage of water, some gases, and lipid-soluble molecules by passive diffusion, as well as the selective transport of molecules such as glucose and amino acids that are crucial to neural function. Furthermore, it prevents the entry of lipophilic potential neurotoxins by way of an active transport mechanism mediated by P-glycoprotein. Astrocytes have been claimed to be necessary to create the blood–brain barrier. A few regions in the brain, including the circumventricular organs, do not have a blood–brain barrier.The blood–brain barrier occurs along all capillaries and consists of tight junctions around the capillaries that do not exist in normal circulation. Endothelial cells restrict the diffusion of microscopic objects (e.g., bacteria) and large or hydrophilic molecules into the cerebrospinal fluid (CSF), while allowing the diffusion of hydrophobic molecules (O2, CO2, hormones). Cells of the barrier actively transport metabolic products such as glucose across the barrier with specific proteins. This barrier also includes a thick basement membrane and astrocytic endfeet. Cerebral circulation is the movement of blood through the network of cerebral arteries and veins supplying the brain. The rate of the cerebral blood flow in the adult is typically 750 milliliters per minute, representing 15% of the cardiac output. The arteries deliver oxygenated blood, glucose and other nutrients to the brain, and the veins carry deoxygenated blood back to the heart, removing carbon dioxide, lactic acid, and other metabolic products. Since the brain is very vulnerable to compromises in its blood supply, the cerebral circulatory system has many safeguards including autoregulation of the blood vessels and the failure of these safeguards can result in a stroke. The amount of blood that the cerebral circulation carries is known as cerebral blood flow. The presence of gravitational fields or accelerations also determine variations in the movement and distribution of blood in the brain, such as when suspended upside-down.The following description is based on idealized human cerebral circulation. The pattern of circulation and its nomenclature vary between organisms. Cerebrospinal fluid (CSF) is a clear, colorless body fluid found in the brain and spinal cord. It is produced in the choroid plexuses of the ventricles of the brain, and absorbed in the arachnoid granulations. There is about 125mL of CSF at any one time, and about 500mL is generated every day. CSF acts as a cushion or buffer for the brain, providing basic mechanical and immunological protection to the brain inside the skull. The CSF also serves a vital function in cerebral autoregulation of cerebral blood flow.The CSF occupies the subarachnoid space (between the arachnoid mater and the pia mater) and the ventricular system around and inside the brain and spinal cord. It fills the ventricles of the brain, cisterns, and sulci, as well as the central canal of the spinal cord. There is also a connection from the subarachnoid space to the bony labyrinth of the inner ear via the perilymphatic duct where the perilymph is continuous with the cerebrospinal fluid.A sample of CSF can be taken via lumbar puncture. 
This can reveal the intracranial pressure, as well as indicate diseases including infections of the brain or its surrounding meninges. Although noted by Hippocrates, it was only in the eighteenth century that Emanuel Swedenborg is credited with its rediscovery, and as late as 1914 that Harvey W. Cushing demonstrated CSF was secreted by the choroid plexus. The term cognitive reserve describes the mind's resistance to damage of the brain. The mind's resilience is evaluated behaviorally, whereas the neuropathological damage is evaluated histologically, although damage may be estimated using blood-based markers and imaging methods. There are two models that can be used when exploring the concept of "reserve": brain reserve and cognitive reserve. These terms, albeit often used interchangeably in the literature, provide a useful way of discussing the models. Using a computer analogy brain reserve can be seen as hardware and cognitive reserve as software. All these factors are currently believed to contribute to global reserve. Cognitive reserve is commonly used to refer to both brain and cognitive reserves in the literature.In 1988 a study published in Annals of Neurology reporting findings from post-mortem examinations on 137 elderly persons unexpectedly revealed that there was a discrepancy between the degree of Alzheimer's disease neuropathology and the clinical manifestations of the disease. This is to say that some participants whose brains had extensive Alzheimer's disease pathology, clinically had no or very little manifestations of the disease. Furthermore, the study showed that these persons had higher brain weights and greater number of neurons as compared to age-matched controls. The investigators speculated with two possible explanations for this phenomenon: these people may have had incipient Alzheimer's disease but somehow avoided the loss of large numbers of neurons, or alternatively, started with larger brains and more neurons and thus might be said to have had a greater "reserve". This is the first time this term has been used in the literature in this context.The study sparked off interest in this area, and to try to confirm these initial findings further studies were done. Higher reserve was found to provide a greater threshold before clinical deficit appears. Furthermore, those with higher capacity once they become clinically impaired show more rapid decline, probably indicating a failure of all compensatory systems and strategies put in place by the individual with greater reserve to cope with the increasing neuropathological damage. Cortical spreading depression (CSD) or spreading depolarization is a wave of electrophysiological hyperactivity followed by a wave of inhibition. Spreading depolarization describes a phenomenon characterized by the appearance of depolarization waves of the neurons and neuroglia that propagates across the cortex at a velocity of 2–5 mm/min. CSD can be induced by hypoxic conditions and facilitates neuronal death in energy-compromised tissue. CSD has also been implicated in migraine aura, where CSD is assumed to ascend in well-nourished tissue and is typically benign in most of the cases, although it may increase the probability in migraine patients to develop a stroke. 
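To give a sense of the timescale implied by the 2–5 mm/min propagation velocity quoted above for cortical spreading depression, the short sketch below computes how long a depolarization wave would take to cross a stretch of cortex; the 30 mm path length is an arbitrary illustrative figure, not a value taken from the text.

    def crossing_time_min(distance_mm, velocity_mm_per_min):
        """Time in minutes for a wave front to travel a given cortical distance."""
        return distance_mm / velocity_mm_per_min

    distance_mm = 30  # illustrative stretch of cortex
    for v in (2, 5):  # velocity range quoted above, in mm/min
        print(f"{v} mm/min -> {crossing_time_min(distance_mm, v):.0f} min")
    # 2 mm/min -> 15 min, 5 mm/min -> 6 min: a wave unfolding over minutes rather than seconds.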
Spreading depolarization within brainstem tissues regulating functions crucial for life has been implicated in sudden unexpected death in epilepsy, by way of ion channel mutations such as those seen in Dravet syndrome, a particularly severe form of childhood epilepsy that appears to carry an unusually high risk of SUDEP. Cranial ultrasound is a technique for scanning the brain using high-frequency sound waves. It is used almost exclusively in babies because their fontanelle (the soft spot on the skull) provides an "acoustic window". A different form of ultrasound-based brain scanning, transcranial Doppler, can be used in any age group. This uses Doppler ultrasound to assess blood flow through the major arteries in the brain, and can scan through bone. It is not usual for this technique to be referred to simply as "cranial ultrasound". Additionally, cranial ultrasound can be used for intra-operative imaging in adults undergoing neurosurgery once the skull has been opened, for example to help identify the margins of a tumour. Derealization (sometimes abbreviated as DR) is an alteration in the perception or experience of the external world so that it seems unreal. Other symptoms include feeling as though one's environment is lacking in spontaneity, emotional colouring, and depth. It is a dissociative symptom of many conditions.Derealization is a subjective experience of unreality of the outside world, while depersonalization is sense of unreality in one's personal self, although most authors currently do not regard derealization (surroundings) and depersonalization (self) as separate constructs.Chronic derealization may be caused by occipital–temporal dysfunction. These symptoms are common in the population, with a lifetime prevalence of up to 5% and 31–66% at the time of a traumatic event. A dermatome is an area of skin that is mainly supplied by a single spinal nerve. There are 8 cervical nerves (C1 being an exception with no dermatome), 12 thoracic nerves, 5 lumbar nerves and 5 sacral nerves. Each of these nerves relays sensation (including pain) from a particular region of skin to the brain.A dermatome also refers to the part of an embryonic somite.Along the thorax and abdomen the dermatomes are like a stack of discs forming a human, each supplied by a different spinal nerve. Along the arms and the legs, the pattern is different: the dermatomes run longitudinally along the limbs. Although the general pattern is similar in all people, the precise areas of innervation are as unique to an individual as fingerprints.A similar area innervated by peripheral nerves is called a peripheral nerve field. Electrodiagnosis (EDX) is a method of medical diagnosis that obtains information about diseases by passively recording the electrical activity of body parts (that is, their natural electrophysiology) or by measuring their response to external electrical stimuli (evoked potentials). The most widely used methods of recording spontaneous electrical activity are various forms of electrodiagnostic testing (electrography) such as electrocardiography (ECG), electroencephalography (EEG), and electromyography (EMG). Electrodiagnostic medicine (also EDX) is a medical subspecialty of neurology, clinical neurophysiology, cardiology, and physical medicine and rehabilitation. 
Electrodiagnostic physicians apply electrophysiologic techniques, including needle electromyography and nerve conduction studies, to diagnose, evaluate, and treat people with impairments of the neurologic, neuromuscular, and/or muscular systems. The provision of a quality electrodiagnostic medical evaluation requires extensive scientific knowledge that includes the anatomy and physiology of the peripheral nerves and muscles, the physics and biology of the electrical signals generated by muscle and nerve, the instrumentation used to process these signals, and techniques for clinical evaluation of diseases of the peripheral nerves and sensory pathways. Erythropoietin in neuroprotection is the use of the glycoprotein erythropoietin (Epo) for neuroprotection. Epo controls erythropoiesis, or red blood cell production. Erythropoietin and its receptor were thought to be present in the central nervous system according to experiments with antibodies that were subsequently shown to be nonspecific. While epoetin alpha is capable of crossing the blood–brain barrier via active transport, the amounts appearing in the CNS are very low. The possibility that Epo might have effects on neural tissues resulted in experiments to explore whether Epo might be tissue protective. The reported presence of Epo in the spinal fluid of infants and the expression of Epo-R in the spinal cord suggested a potential role for Epo within the CNS; Epo therefore represented a potential therapy, for example to protect photoreceptors damaged by hypoxia. In some animal studies erythropoietin has been shown to protect nerve cells from hypoxia-induced glutamate toxicity. Epo has also been reported to enhance nerve recovery after spinal trauma. Celik and associates investigated motor neuron apoptosis in rabbits with a transient global spinal ischemia model. The functional neurological status of animals given RhEpo was better after recovery from anesthesia, and kept improving over a two-day period. The animals given saline demonstrated a poor functional neurological status and showed no significant improvements. These results suggested that RhEpo has both an acute and a delayed beneficial action in ischemic spinal cord injury. In contrast to these results, numerous studies suggested that Epo had no neuroprotective benefit in animal models, and EpoR was not detected in brain tissues using anti-EpoR antibodies that were shown to be sensitive and specific. Eye–hand coordination (also known as hand–eye coordination) is the coordinated control of eye movement with hand movement, and the processing of visual input to guide reaching and grasping, along with the use of proprioception of the hands to guide the eyes. Eye–hand coordination has been studied in activities as diverse as the movement of solid objects such as wooden blocks, archery, sporting performance, music reading, computer gaming, copy-typing, and even tea-making. It is part of the mechanisms of performing everyday tasks; in its absence most people would be unable to carry out even the simplest of actions, such as picking up a book from a table or playing a video game. Although it is widely known as hand–eye coordination, medical sources without exception, and most psychological sources, refer to eye–hand coordination. Familial encephalopathy with neuroserpin inclusion bodies (FENIB) is a progressive disorder of the nervous system that is characterized by a loss of intellectual functioning (dementia) and seizures.
At first, affected individuals may have difficulty sustaining attention and concentrating. Their judgment, insight, and memory become impaired as the condition progresses. Over time, they lose the ability to perform the activities of daily living, and most people with this condition eventually require comprehensive care. The signs and symptoms of familial encephalopathy with neuroserpin inclusion bodies vary in their severity and age of onset. In severe cases, the condition causes seizures and episodes of sudden, involuntary muscle jerking or twitching (myoclonus) in addition to dementia. These signs can appear as early as a person's teens. Less severe cases are characterized by a progressive decline in intellectual functioning beginning in a person's forties or fifties. Mutations in the SERPINI1 gene cause familial encephalopathy with neuroserpin inclusion bodies. The SERPINI1 gene provides instructions for making a protein called neuroserpin. This protein is found in nerve cells, where it plays a role in the development and function of the nervous system. Neuroserpin helps control the growth of nerve cells and their connections with one another, which suggests that this protein may be important for learning and memory. Mutations in the gene result in the production of an abnormally shaped, unstable version of neuroserpin. Abnormal neuroserpin proteins can attach to one another and form clumps (called neuroserpin inclusion bodies or Collins bodies) within nerve cells. These clumps disrupt the cells' normal functioning and ultimately lead to cell death. Progressive dementia results from this gradual loss of nerve cells in certain parts of the brain. Researchers believe that a buildup of related, potentially toxic substances in nerve cells may also contribute to the signs and symptoms of this condition. This condition is inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to cause the disorder. In many cases, an affected person has a parent with the condition. The Ferrier Lecture is a Royal Society lectureship given every three years "on a subject related to the advancement of natural knowledge on the structure and function of the nervous system". It was created in 1928 to honour the memory of Sir David Ferrier, a neurologist who was the first British scientist to electrically stimulate the brain for the purpose of scientific study. In its 90-year history, the Lecture has been given 30 times. It has never been given more than once by the same person. The first female to be awarded the honour was Prof. Christine Holt in 2017. The first lecture was given in 1929 by Charles Scott Sherrington, and was titled "Some functional problems attaching to convergence". The most recent lecture was given in 2017 by Prof. Christine Holt, titled "understanding of the key molecular mechanisms involved in nerve growth, guidance and targeting which has revolutionised our knowledge of growing axon tips". In 1971, the lecture was given by two individuals (David Hunter Hubel and Torsten Nils Wiesel) on the same topic, with the title "The function and architecture of the visual cortex". Functional neurological disorder (FND) is a condition in which patients experience neurological symptoms such as weakness, movement disorders, sensory symptoms and blackouts. The brain of a patient with functional neurological symptom disorder is structurally normal, but functions incorrectly.
According to consensus from the literature and from physicians and psychologists practicing in the field, "functional symptoms, also called 'medically unexplained,' 'psychogenic,' or 'hysterical,' are symptoms that are clinically recognisable as not being caused by a definable organic disease". Subsets of functional neurological disorder include functional neurological symptom disorder, conversion disorder, and psychogenic movement disorder/non-epileptic seizures. Functional neurological disorders are common in neurological services, accounting for up to one third of outpatient neurology clinic attendances, and associated with as much physical disability and distress as other neurological disorders. The diagnosis is made based on positive signs and symptoms in the history and examination during consultation of a neurologist (see below). Physiotherapy is particularly helpful for patients with motor symptoms (weakness, gait disorders, movement disorders) and tailored cognitive behavioural therapy has the best evidence in patients with dissociative (non-epileptic) attacks. A grid cell is a type of neuron in the brains of many species that allows them to understand their position in space.Grid cells were discovered in 2005 by Edvard Moser, May-Britt Moser and their students Torkel Hafting, Marianne Fyhn and Sturla Molden at the Centre for the Biology of Memory (CBM) in Norway. They were awarded the 2014 Nobel Prize in Physiology or Medicine together with John O'Keefe for their discoveries of cells that constitute a positioning system in the brain. The arrangement of spatial firing fields all at equal distances from their neighbors led to a hypothesis that these cells encode a cognitive representation of Euclidean space. The discovery also suggested a mechanism for dynamic computation of self-position based on continuously updated information about position and direction.In a typical experimental study, an electrode capable of recording the activity of an individual neuron is implanted in the cerebral cortex of a rat, in a section called the dorsomedial entorhinal cortex, and recordings are made as the rat moves around freely in an open arena. For a grid cell, if a dot is placed at the location of the rat's head every time the neuron emits an action potential, then as illustrated in the adjoining figure, these dots build up over time to form a set of small clusters, and the clusters form the vertices of a grid of equilateral triangles. This regular triangle-pattern is what distinguishes grid cells from other types of cells that show spatial firing. By contrast, if a place cell from the rat hippocampus is examined in the same way (i.e., by placing a dot at the location of the rat's head whenever the cell emits an action potential), then the dots build up to form small clusters, but frequently there is only one cluster (one "place field") in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement. Synaptic plasticity refers to a chemical synapse's ability to undergo changes in strength. Synaptic plasticity is typically input-specific (i.e. homosynaptic plasticity), meaning that the activity in a particular neuron alters the efficacy of a synaptic connection between that neuron and its target. However, in the case of heterosynaptic plasticity, the activity of a particular neuron leads to input unspecific changes in the strength of synaptic connections from other unactivated neurons. 
A number of distinct forms of heterosynaptic plasticity have been found in a variety of brain regions and organisms. These different forms of heterosynaptic plasticity contribute to a variety of neural processes, including associative learning, the development of neural circuits, and homeostasis of synaptic input. The International Cooperative Ataxia Rating Scale (ICARS) is an outcome measure that was created in 1997 by the Committee of the World Federation of Neurology with the goal of standardizing the quantification of impairment due to cerebellar ataxia. The scale is scored out of 100, with 19 items and 4 subscales of postural and gait disturbances, limb ataxia, dysarthria, and oculomotor disorders. Higher scores indicate higher levels of impairment. The ICARS has been validated for use in patients with focal cerebellar lesions and with hereditary spinocerebellar and Friedreich's ataxia. More recently, two shorter ataxia scales based upon the ICARS have been created and validated, the Scale for the Assessment and Rating of Ataxia (SARA) and the Brief Ataxia Rating Scale (BARS). The SARA is a shorter, 8-item, 40-point scale which has been validated in ataxia patients. The BARS was developed in 2009 in an attempt both to reduce the redundancies of the ICARS and to shorten and simplify the administration of ataxia outcome measures. Kernohan's notch is a cerebral peduncle indentation associated with some forms of transtentorial herniation (uncal herniation). It is a secondary condition caused by a primary injury on the opposite hemisphere of the brain. Kernohan's notch is an ipsilateral condition, in that a left-sided primary lesion (in which Kernohan's notch would be on the right side) evokes motor impairment in the left side of the body, and a right-sided primary injury evokes motor impairment in the right side of the body. The seriousness of Kernohan's notch varies depending on the primary problem causing it, which may range from benign brain tumors to advanced subdural hematoma. Kindling due to substance withdrawal refers to the neurological condition which results from repeated withdrawal episodes from sedative–hypnotic drugs such as alcohol and benzodiazepines. Each withdrawal leads to more severe withdrawal symptoms than the previous withdrawal syndrome. Individuals who have had more withdrawal episodes are at an increased risk of very severe withdrawal symptoms, up to and including seizures and death. Withdrawal from GABAergic-acting sedative–hypnotic drugs causes acute GABA underactivity as well as glutamate overactivity, which can lead to sensitization and hyper-excitability of the central nervous system, excito-neurotoxicity, and increasingly profound neuroadaptations. The McDonald criteria are diagnostic criteria for multiple sclerosis (MS). These criteria are named after neurologist W. Ian McDonald, who directed an international panel in association with the National Multiple Sclerosis Society (NMSS) of America and recommended revised diagnostic criteria for MS in April 2001. These new criteria were intended to replace the Poser criteria and the older Schumacher criteria.
They have undergone revisions in 2005 and 2010. They maintain the Poser requirement to demonstrate "dissemination of lesions in space and time" (DIS and DIT), but they discourage the previously used Poser terms such as "clinically definite" and "probable MS", and propose as diagnostic either "MS", "possible MS", or "not MS". The McDonald criteria maintained a scheme for diagnosing MS based solely on clinical grounds, but also proposed for the first time that when clinical evidence is lacking, magnetic resonance imaging (MRI) findings can serve as surrogates for dissemination in space (DIS) and/or time (DIT) to diagnose MS. The criteria try to prove the existence of demyelinating lesions, by image or by their effects, showing that they occur in different areas of the nervous system (DIS) and that they accumulate over time (DIT). The McDonald criteria facilitate the diagnosis of MS in patients who present with their first demyelinating attack and significantly increase the sensitivity for diagnosing MS without compromising the specificity. The McDonald criteria for the diagnosis of multiple sclerosis were revised first in 2005 to clarify exactly what is meant by an "attack", "dissemination" and a "positive MRI", etc. Later they were revised again in 2010. The McDonald criteria are the standard clinical case definition for MS, and the 2010 version is regarded as the gold standard test for MS diagnosis. Midline shift is a shift of the brain past its center line. The sign may be evident on neuroimaging such as CT scanning. The sign is considered ominous because it is commonly associated with a distortion of the brain stem that can cause serious dysfunction, evidenced by abnormal posturing and failure of the pupils to constrict in response to light. Midline shift is often associated with high intracranial pressure (ICP), which can be deadly. In fact, midline shift is a measure of ICP; presence of the former is an indication of the latter. Presence of midline shift is an indication for neurosurgeons to take measures to monitor and control ICP. Immediate surgery may be indicated when there is a midline shift of over 5 mm. The sign can be caused by conditions including traumatic brain injury, stroke, hematoma, or a birth deformity that leads to raised intracranial pressure. The Modified Ashworth scale (MAS) measures resistance during passive soft-tissue stretching and is used as a simple measure of spasticity. Scoring (taken from Bohannon and Smith, 1987):
- 0: No increase in muscle tone
- 1: Slight increase in muscle tone, manifested by a catch and release or by minimal resistance at the end of the range of motion (ROM) when the affected part(s) is moved in flexion or extension
- 1+: Slight increase in muscle tone, manifested by a catch, followed by minimal resistance throughout the remainder (less than half) of the ROM
- 2: More marked increase in muscle tone through most of the ROM, but affected part(s) easily moved
- 3: Considerable increase in muscle tone, passive movement difficult
- 4: Affected part(s) rigid in flexion or extension
Earth is the third planet from the Sun and the only object in the Universe known to harbor life. According to radiometric dating and other sources of evidence, Earth formed over 4 billion years ago. Earth's gravity interacts with other objects in space, especially the Sun and the Moon, Earth's only natural satellite. Earth revolves around the Sun in 365.26 days, a period known as an Earth year.
During this time, Earth rotates about its axis about 366.26 times. Earth's axis of rotation is tilted, producing seasonal variations on the planet's surface. The gravitational interaction between the Earth and Moon causes ocean tides, stabilizes the Earth's orientation on its axis, and gradually slows its rotation. Earth is the densest planet in the Solar System and the largest of the four terrestrial planets. Earth's lithosphere is divided into several rigid tectonic plates that migrate across the surface over periods of many millions of years. About 71% of Earth's surface is covered with water, mostly by oceans. The remaining 29% is land consisting of continents and islands that together have many lakes, rivers and other sources of water that contribute to the hydrosphere. The majority of Earth's polar regions are covered in ice, including the Antarctic ice sheet and the sea ice of the Arctic ice pack. Earth's interior remains active with a solid iron inner core, a liquid outer core that generates the Earth's magnetic field, and a convecting mantle that drives plate tectonics. Within the first billion years of Earth's history, life appeared in the oceans and began to affect the Earth's atmosphere and surface, leading to the proliferation of aerobic and anaerobic organisms. Some geological evidence indicates that life may have arisen as much as 4.1 billion years ago. Since then, the combination of Earth's distance from the Sun, physical properties, and geological history has allowed life to evolve and thrive. In the history of the Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinction events. Over 99% of all species that ever lived on Earth are extinct. Estimates of the number of species on Earth today vary widely; most species have not been described. Over 7.4 billion humans live on Earth and depend on its biosphere and natural resources for their survival. Humans have developed diverse societies and cultures; politically, the world has about 200 sovereign states. Anywhere on Earth (AoE) is a calendar designation which indicates that a period expires when the date passes everywhere on Earth. The last place on Earth where any date exists is on Howland and Baker islands, in the IDLW time zone (the west side of the International Date Line), so these islands are the last spot on the globe where any given day still exists. Therefore, a day ends AoE when it ends on Howland Island. The convention originated in IEEE 802.16 balloting procedures. Many IEEE 802 ballot deadlines are now established as the end of a day designated "AoE", for "Anywhere on Earth". This means that the deadline has not passed if, anywhere on Earth, the deadline date has not yet passed. Note that the day's end AoE occurs at noon Coordinated Universal Time (UTC) of the following day, Howland and Baker islands being halfway around the world from the prime meridian that is the base reference longitude for UTC. Thus, in standard notation, AoE corresponds to UTC−12:00; daylight saving time is not applied, nor applicable. Asteroid impact avoidance comprises a number of methods by which near-Earth objects (NEO) could be diverted, preventing destructive impact events.
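Building on the AoE convention described above, the minimal sketch below converts an end-of-day AoE deadline into the UTC instant at which it actually expires, roughly noon UTC of the following day; the deadline date is an arbitrary example.

    from datetime import datetime, timedelta, timezone

    AOE = timezone(timedelta(hours=-12))  # "Anywhere on Earth" = UTC-12:00, no DST

    def aoe_deadline_in_utc(year, month, day):
        """Return the UTC moment at which an end-of-day AoE deadline expires."""
        end_of_day_aoe = datetime(year, month, day, 23, 59, 59, tzinfo=AOE)
        return end_of_day_aoe.astimezone(timezone.utc)

    # Hypothetical deadline of 2024-03-15 AoE:
    print(aoe_deadline_in_utc(2024, 3, 15))  # 2024-03-16 11:59:59+00:00, i.e. ~noon UTC the next day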
A sufficiently large impact by an asteroid or other NEOs would cause, depending on its impact location, massive tsunamis, multiple firestorms and an impact winter caused by the sunlight-blocking effect of placing large quantities of pulverized rock dust, and other debris, into the stratosphere.A collision between the Earth and an approximately 10-kilometre-wide object 66 million years ago is thought to have produced the Chicxulub Crater and the Cretaceous–Paleogene extinction event, widely held responsible for the extinction of most dinosaurs.While the chances of a major collision are not great in the near term, there is a high probability that one will happen eventually unless defensive actions are taken. Recent astronomical events—such as the Shoemaker-Levy 9 impacts on Jupiter and the 2013 Chelyabinsk meteor along with the growing number of objects on the Sentry Risk Table—have drawn renewed attention to such threats. NASA warns that the Earth is unprepared for such an event. The length of the day, which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Exact measurements of time by atomic clocks and satellite laser ranging have revealed that the length of day (LOD) is subject to a number of different changes. These subtle variations have periods that range from a few weeks to a few years. They are attributed to interactions between the dynamic atmosphere and Earth itself. The International Earth Rotation and Reference Systems Service monitors the changes. Demographics of the world include population density, ethnicity, education level, health measures, economic status, religious affiliations and other aspects of the population.The overall total population of the world is approximately 7.45 billion, as of July 2016.Its overall population density is 50 people per km² (129.28 per sq. mile), excluding Antarctica. Nearly two-thirds of the population lives in Asia and is predominantly urban and suburban, with more than 2.5 billion in the countries of China and India combined. The World's fairly low literacy rate (83.7%) is attributable to impoverished regions. Extremely low literacy rates are concentrated in three regions, the Arab states, South and West Asia, and Sub-Saharan Africa.The world's largest ethnic group is Han Chinese with Mandarin being the world's most spoken language in terms of native speakers.Human migration has been shifting toward cities and urban centers, with the urban population jumping from 29% in 1950, to 50.5% in 2005. Working backwards from the United Nations prediction that the world will be 51.3 percent urban by 2010, Dr. Ron Wimberley, Dr. Libby Morris and Dr. Gregory Fulkerson estimated May 23, 2007 to be the first time the urban population outnumbered the rural population in history. China and India are the most populous countries, as the birth rate has consistently dropped in developed countries and until recently remained high in developing countries. Tokyo is the largest urban conglomeration in the world.The total fertility rate of the World is estimated as 2.52 children per woman, which is above the replacement fertility rate of approximately 2.1. However, world population growth is unevenly distributed, going from .91 in Macau, to 7.68 in Niger. The United Nations estimated an annual population increase of 1.14% for the year of 2000.There are approximately 3.38 billion females in the World. 
The number of males is about 3.41 billion. People under 14 years of age made up over a quarter of the world population (26.3%), and people age 65 and over made up less than one-tenth (7.9%) in 2011. The world population growth rate is approximately 1.09%. The world population more than tripled during the 20th century, from about 1.65 billion in 1900 to 5.97 billion in 1999. It reached the 2 billion mark in 1927, the 3 billion mark in 1960, 4 billion in 1974, and 5 billion in 1987. Currently, population growth is fastest among low-wealth, Third World countries. The UN projects a world population of 9.15 billion in 2050, which is a 32.69% increase from 2010 (6.89 billion). Earth religion is a term used mostly in the context of neopaganism. Earth-centered religion or nature worship is a system of religion based on the veneration of natural phenomena. It covers any religion that worships the earth, nature, or fertility gods and goddesses, such as the various forms of goddess worship or matriarchal religion. Some find a connection between earth-worship and the Gaia hypothesis. Earth religions are also formulated to allow one to make use of knowledge about preserving the earth. Earth's energy budget accounts for the balance between the energy Earth receives from the Sun and the energy Earth radiates back into outer space after having been distributed throughout the five components of Earth's climate system and having thus powered the so-called "Earth's heat engine". This system is made up of Earth's water, ice, atmosphere, rocky crust, and all living things. Quantifying changes in these amounts is required to accurately model the Earth's climate. Received radiation is unevenly distributed over the planet, because the Sun heats equatorial regions more than polar regions. The atmosphere and ocean work non-stop to even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds, and ocean circulation. Earth is very close to being (but not perfectly) in radiative equilibrium, the situation where the incoming solar energy is balanced by an equal flow of heat to space; under that condition, global temperatures will be relatively stable. Globally, over the course of the year, the Earth system (land surfaces, oceans, and atmosphere) absorbs and then radiates back to space an average of about 240 watts of solar power per square meter. Anything that increases or decreases the amount of incoming or outgoing energy will change global temperatures in response. However, Earth's energy balance and heat fluxes depend on many factors, such as atmospheric composition (mainly aerosols and greenhouse gases), the albedo (reflectivity) of surface properties, cloud cover, and vegetation and land-use patterns. Changes in surface temperature due to Earth's energy budget do not occur instantaneously, due to the inertia of the oceans and the cryosphere. The net heat flux is buffered primarily by becoming part of the ocean's heat content, until a new equilibrium state is established between radiative forcings and the climate response. The flow of heat from Earth's interior to the surface is estimated at 47 terawatts (TW) and comes from two main sources in roughly equal amounts: the radiogenic heat produced by the radioactive decay of isotopes in the mantle and crust, and the primordial heat left over from the formation of the Earth. Earth's internal heat powers most geological processes and drives plate tectonics.
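A back-of-the-envelope check of the figures quoted above (roughly 240 W/m² of absorbed solar power on average, and a 47 TW internal heat flow that is a tiny fraction of the solar input) is sketched below. The solar constant and albedo are standard round values assumed for illustration, not numbers taken from the text.

    from math import pi

    SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's distance from the Sun (assumed round value)
    ALBEDO = 0.29             # fraction of sunlight reflected back to space (assumed)
    EARTH_RADIUS = 6.371e6    # m

    # Sunlight is intercepted over a disc (pi R^2) but spread over a sphere (4 pi R^2),
    # so the globally averaged absorbed flux is S * (1 - albedo) / 4.
    absorbed_flux = SOLAR_CONSTANT * (1 - ALBEDO) / 4
    print(f"Average absorbed solar flux ≈ {absorbed_flux:.0f} W/m^2")   # ≈ 242, close to the ~240 quoted above

    # Total solar power intercepted by Earth, for comparison with the internal heat flow.
    incoming_total_tw = SOLAR_CONSTANT * pi * EARTH_RADIUS**2 / 1e12
    internal_heat_tw = 47.0
    print(f"Incoming solar ≈ {incoming_total_tw:,.0f} TW")              # ≈ 173,000 TW
    print(f"Internal heat / incoming solar ≈ {internal_heat_tw / incoming_total_tw:.2%}")  # ≈ 0.03%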
Despite its geological significance, this heat energy coming from Earth's interior is only about 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. The insolation that eventually, after reflection, reaches the surface penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. This renders solar radiation irrelevant for processes internal to the Earth. Geothermal energy is heat energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. The geothermal energy of the Earth's crust originates from the original formation of the planet and from radioactive decay of materials (in currently uncertain but possibly roughly equal proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot. Earth's internal heat is thermal energy generated from radioactive decay and continual heat loss from Earth's formation. Temperatures at the core–mantle boundary may reach over 4000 °C (7,200 °F). The high temperature and pressure in Earth's interior cause some rock to melt and the solid mantle to behave plastically, resulting in portions of the mantle convecting upward, since they are lighter than the surrounding rock. Rock and water are heated in the crust, sometimes to as much as 370 °C (700 °F). From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times, but it is now better known for electricity generation. Worldwide, 11,700 megawatts (MW) of geothermal power was online in 2013. An additional 28 gigawatts of direct geothermal heating capacity was installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications as of 2010. Geothermal power is cost-effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels. The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction may be profitably exploited. Drilling and exploration for deep resources are very expensive. Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, plate boundary movement and interest rates. Pilot programs like EWEB's customer opt-in Green Power Program show that customers are willing to pay a little more for a renewable energy source like geothermal. As a result of government-assisted research and industry experience, the cost of generating geothermal power has decreased by 25% over the past two decades. In 2001, geothermal energy cost between two and ten US cents per kWh. An overwhelming majority of fiction is set on or features the Earth.
However, authors of speculative fiction novels and writers and directors of science fiction film deal with Earth quite differently from authors of conventional fiction. Unbound from the same ties that bind authors of traditional fiction to the Earth, they can either completely ignore the Earth or use it as but one of many settings in a more complicated universe, exploring a number of common themes through examining outsiders perceptions of and interactions with Earth. The expression figure of the Earth has various meanings in geodesy according to the way it is used and the precision with which the Earth's size and shape is to be defined. While the sphere is a close approximation of the true figure of the Earth and satisfactory for many purposes, geodesists have developed several models that more closely approximate the shape of the Earth so that coordinate systems can serve the precise needs of navigation, surveying, cadastre, land use, and various other concerns. The biological and geological future of Earth can be extrapolated based upon the estimated effects of several long-term influences. These include the chemistry at Earth's surface, the rate of cooling of the planet's interior, the gravitational interactions with other objects in the Solar System, and a steady increase in the Sun's luminosity. An uncertain factor in this extrapolation is the ongoing influence of technology introduced by humans, such as climate engineering, which could cause significant changes to the planet. The current Holocene extinction is being caused by technology and the effects may last for up to five million years. In turn, technology may result in the extinction of humanity, leaving the planet to gradually return to a slower evolutionary pace resulting solely from long-term natural processes.Over time intervals of hundreds of millions of years, random celestial events pose a global risk to the biosphere, which can result in mass extinctions. These include impacts by comets or asteroids with diameters of 5–10 km (3.1–6.2 mi) or more, and the possibility of a massive stellar explosion, called a supernova, within a 100-light-year radius of the Sun, called a Near-Earth supernova. Other large-scale geological events are more predictable. If the long-term effects of global warming are disregarded, Milankovitch theory predicts that the planet will continue to undergo glacial periods at least until the Quaternary glaciation comes to an end. These periods are caused by variations in eccentricity, axial tilt, and precession of the Earth's orbit. As part of the ongoing supercontinent cycle, plate tectonics will probably result in a supercontinent in 250–350 million years. Some time in the next 1.5–4.5 billion years, the axial tilt of the Earth may begin to undergo chaotic variations, with changes in the axial tilt of up to 90°.During the next four billion years, the luminosity of the Sun will steadily increase, resulting in a rise in the solar radiation reaching the Earth. This will result in a higher rate of weathering of silicate minerals, which will cause a decrease in the level of carbon dioxide in the atmosphere. In about 600 million years from now, the level of CO2 will fall below the level needed to sustain C3 carbon fixation photosynthesis used by trees. Some plants use the C4 carbon fixation method, allowing them to persist at CO2 concentrations as low as 10 parts per million. However, the long-term trend is for plant life to die off altogether. 
The extinction of plants will be the demise of almost all animal life, since plants are the base of the food chain on Earth.In about one billion years, the solar luminosity will be 10% higher than at present. This will cause the atmosphere to become a "moist greenhouse", resulting in a runaway evaporation of the oceans. As a likely consequence, plate tectonics will come to an end, and with them the entire carbon cycle. Following this event, in about 2−3 billion years, the planet's magnetic dynamo may cease, causing the magnetosphere to decay and leading to an accelerated loss of volatiles from the outer atmosphere. Four billion years from now, the increase in the Earth's surface temperature will cause a runaway greenhouse effect, heating the surface enough to melt it. By that point, all life on the Earth will be extinct. The most probable fate of the planet is absorption by the Sun in about 7.5 billion years, after the star has entered the red giant phase and expanded beyond the planet's current orbit. The Gaia hypothesis ( GYE-ə, GAY-ə), also known as the Gaia theory or the Gaia principle, proposes that organisms interact with their inorganic surroundings on Earth to form a synergistic self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet. Topics of interest include how the biosphere and the evolution of life forms affect the stability of global temperature, ocean salinity, oxygen in the atmosphere, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth.The hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. The hypothesis was initially criticized for being teleological, and contradicting principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology, with Lovelock referring to the "geophysiology" of the Earth. Even so, the Gaia hypothesis continues to attract criticism, and today some scientists consider it to be only weakly supported by, or at odds with, the available evidence. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis. The geological history of Earth follows the major events in Earth's past based on the geologic time scale, a system of chronological measurement based on the study of the planet's rock layers (stratigraphy). Earth formed about 4.54 billion years ago by accretion from the solar nebula, a disk-shaped mass of dust and gas left over from the formation of the Sun, which also created the rest of the Solar System.Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a planetoid with the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans.As the surface continually reshaped itself over hundreds of millions of years, continents formed and broke apart. They migrated across the surface, occasionally combining to form a supercontinent. Roughly 750 million years ago, the earliest-known supercontinent Rodinia, began to break apart. 
The continents later recombined to form Pannotia, 600 to 540 million years ago, then finally Pangaea, which broke apart 200 million years ago.The present pattern of ice ages began about 40 million years ago, then intensified at the end of the Pliocene. The polar regions have since undergone repeated cycles of glaciation and thaw, repeating every 40,000–100,000 years. The last glacial period of the current ice age ended about 10,000 years ago. A geomagnetic reversal is a change in a planet's magnetic field such that the positions of magnetic north and magnetic south are interchanged, while geographic north and geographic south remain the same. The Earth's field has alternated between periods of normal polarity, in which the predominant direction of the field was the same as the present direction, and reverse polarity, in which it was the opposite. These periods are called chrons.The time spans of chrons are randomly distributed with most being between 0.1 and 1 million years with an average of 450,000 years. Most reversals are estimated to take between 1,000 and 10,000 years. The latest one, the Brunhes–Matuyama reversal, occurred 780,000 years ago, and may have happened very quickly, within a human lifetime.A brief complete reversal, known as the Laschamp event, occurred only 41,000 years ago during the last glacial period. That reversal lasted only about 440 years with the actual change of polarity lasting around 250 years. During this change the strength of the magnetic field weakened to 5% of its present strength. Brief disruptions that do not result in reversal are called geomagnetic excursions. The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near the Earth's surface, gravitational acceleration is approximately 9.8 m/s2, which means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.8 metres per second every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).The precise strength of Earth's gravity varies depending on location. The nominal "average" value at the Earth's surface, known as standard gravity is, by definition, 9.80665 m/s2. This quantity is denoted variously as gn, ge (though this sometimes means the normal equatorial value on Earth, 9.78033 m/s2), g0, gee, or simply g (which is also used for the variable local value). The weight of an object on the Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma (force = mass × acceleration). Gravitational acceleration contributes to the total acceleration, but other factors, such as the rotation of the Earth, also contribute, and, therefore, affect the weight of the object. Throughout the Phanerozoic history of the Earth, the planet's climate has been fluctuating between two dominant climate states: the greenhouse earth and the icehouse earth. These two climate states last for millions of years and should not be confused with glacial and interglacial periods, which occur only during an icehouse period and tend to last less than 1 million years. 
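The weight relation described in the gravity passage above (F = m × a, with a = g near the surface) can be illustrated with a minimal numerical sketch; the 70 kg mass below is an arbitrary assumed value, not a figure from the text.

G_STANDARD = 9.80665  # m/s^2, standard gravity (by definition)
G_EQUATOR = 9.78033   # m/s^2, normal equatorial value quoted above

mass = 70.0           # kg, arbitrary example mass (not a figure from the text)

# Weight from Newton's second law, F = m * a, taking a = g
weight_standard = mass * G_STANDARD   # about 686.5 N
weight_equator = mass * G_EQUATOR     # about 684.6 N
print(f"Weight at standard gravity:    {weight_standard:.1f} N")
print(f"Weight at the equatorial value: {weight_equator:.1f} N")

# Speed of an object falling freely from rest, ignoring air resistance: v = g * t
for t in (1, 2, 3):
    print(f"after {t} s of free fall: v = {G_STANDARD * t:.1f} m/s")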
There are five known glaciations in Earth's climate history; the main factors involved in changes of the paleoclimate are believed to be the concentration of atmospheric carbon dioxide, changes in the Earth's orbit, and oceanic and orogenic changes due to tectonic plate dynamics. Greenhouse and icehouse periods have profoundly shaped the evolution of life on Earth. The history of Earth concerns the development of planet Earth from its formation to the present day. Nearly all branches of natural science have contributed to the understanding of the main events of Earth's past. The age of the Earth is approximately one-third of the age of the universe. An immense amount of geological change has occurred in that timespan, accompanied by the emergence of life and its subsequent evolution.Earth formed around 4.54 billion years ago by accretion from the solar nebula. Volcanic outgassing probably created the primordial atmosphere and then the ocean, but the early atmosphere contained almost no oxygen and so would not have supported known forms of life. Much of the Earth was molten because of frequent collisions with other bodies which led to extreme volcanism. A giant impact collision with a planet-sized body named Theia while Earth was in its earliest stage, also known as Early Earth, is thought to have been responsible for forming the Moon. Over time, the Earth cooled, causing the formation of a solid crust, and allowing liquid water to exist on the surface.The geological time scale (GTS) depicts the larger spans of time, from the beginning of the Earth to the present, and it chronicles some definitive events of Earth history. The Hadean eon represents time before the reliable (fossil) record of life beginning on Earth; it began with the formation of the planet and ended at 4.0 billion years ago as defined by international convention. The Archean and Proterozoic eons follow; they produced the abiogenesis of life on Earth and then the evolution of early life. The succeeding eon is the Phanerozoic, which is represented by its three component eras: the Palaeozoic; the Mesozoic, which spanned the rise, reign, and climactic extinction of the non-avian dinosaurs; and the Cenozoic, which presented the subsequent development of dominant mammals on Earth.Hominins, the earliest direct ancestors of the human clade, rose sometime during the latter part of the Miocene epoch; the precise time marking the first hominins is broadly debated over a current range of 13 to 4 million years ago. The succeeding Quaternary period is the time of recognizable humans, i.e., the genus Homo, but that period's two million-year-plus term of the recent times is too small to be visible at the scale of the GTS graphic. (Notes re the graphic: Ga means "billion years"; Ma, "million years".)The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. There are microbial mat fossils such as stromatolites found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. 
According to one of the researchers, "If life arose relatively quickly on Earth … then it could be common in the universe."Photosynthetic organisms appeared between 3.2 and 2.4 billion years ago and began enriching the atmosphere with oxygen. Life remained mostly small and microscopic until about 580 million years ago, when complex multicellular life arose, developed over time, and culminated in the Cambrian Explosion about 541 million years ago. This event drove a rapid diversification of life forms on Earth that produced most of the major phyla known today, and it marked the end of the Proterozoic Eon and the beginning of the Cambrian Period of the Paleozoic Era. More than 99 percent of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million are documented, but over 86 percent have not been described. Scientists recently reported that 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described.The Earth's crust has constantly changed since its formation. Likewise, life has constantly changed since its first appearance. Species continue to evolve, taking on new forms, splitting into daughter species or going extinct in the process of adapting or dying in response to ever-changing physical environments. The process of plate tectonics continues to shape the Earth's continents and oceans and the life they harbor. Human activity is now a dominant force affecting global change, adversely affecting the biosphere, the Earth's surface, hydrosphere, and atmosphere, with the loss of wild lands, over-exploitation of the oceans, production of greenhouse gases, degradation of the ozone layer, and general degradation of soil, air, and water quality. The Jaramillo reversal was a reversal and excursion of the Earth's magnetic field that occurred approximately one million years ago. In the geological time scale it was a "short-term" positive reversal in the then-dominant Matuyama reversed magnetic chronozone; its beginning is widely dated to 990,000 years before the present (BP), and its end to 950,000 BP (though an alternative date of 1.07 million years ago to 990,000 is also found in the scientific literature).The causes and mechanisms of short-term reversals and excursions like the Jaramillo, as well as the major field reversals like the Brunhes–Matuyama reversal, are subjects of study and dispute among researchers. One theory associates the Jaramillo with the Bosumtwi impact event, as evidenced by a tektite strewnfield in the Ivory Coast, though this hypothesis has been claimed as "highly speculative" and "refuted". Artificial radiation belts are radiation belts that have been created by high altitude nuclear explosions.The table above only lists those high-altitude nuclear explosions for which a reference exists in the open (unclassified) English-language scientific literature to persistent artificial radiation belts resulting from the explosion.The Starfish Prime radiation belt had, by far, the greatest intensity and duration of any of the artificial radiation belts.The Starfish Prime radiation belt damaged the United Kingdom Satellite Ariel 1 and the United States satellites, Traac, Transit 4B, Injun I and Telstar I. It also damaged the Soviet satellite Cosmos V. 
All of these satellites failed completely within several months of the Starfish detonation.Telstar I lasted the longest of the satellites damaged by the Starfish Prime radiation, with its complete failure occurring on February 21, 1963.In Los Alamos Scientific Laboratory report LA-6405, Herman Hoerlin gave the following explanation of the history of the original Argus experiment and of how the nuclear detonations lead to the development of artificial radiation belts.Before the discovery of the natural Van Allen belts in 1958, N. C. Christofilos had suggested in October 1957 that many observable geophysical effects could be produced by a nuclear explosion at high altitude in the upper atmosphere. This suggestion was reduced to practice with the sponsorship of the Advanced Research Project Agency (ARPA) of the Department of Defense and under the overall direction of Herbert York, who was then Chief Scientist of ARPA. It required only four months from the time it was decided to proceed with the tests until the first bomb was exploded. The code name of the project was Argus. Three events took place in the South Atlantic. ... Following these events, artificial belts of trapped radiation were observed.A general description of trapped radiation is as follows. Charged particles move in spirals around magnetic-field lines. The pitch angle (the angle between the direction of the motion of the particle and direction of the field line) has a low value at the equator and increases while the particle moves down a field line in the direction where the magnetic field strength increases. When the pitch angle becomes 90 degrees, the particle must move in the other direction, up the field lines, until the process repeats itself at the other end. The particle is continuously reflected at the two mirror points — it is trapped in the field. Because of asymmetries in the field, the particles also drift around the earth, electrons towards the east. Thus, they form a shell around the earth similar in shape to the surface formed by a field line rotated around the magnetic dipole axis.In 2010, the United States Defense Threat Reduction Agency issued a report that had been written in support of the United States Commission to Assess the Threat to the United States from Electromagnetic Pulse Attack. The report, entitled "Collateral Damage to Satellites from an EMP Attack," discusses in great detail the historical events that caused artificial radiation belts and their effects on many satellites that were then in orbit. The same report also projects the effects of one or more present-day high altitude nuclear explosions upon the formation of artificial radiation belts and the probable resulting effects on satellites that are currently in orbit. Knowledge of Earth's location in the Universe has been shaped by 400 years of telescopic observations, and has expanded radically in the last century. Initially, Earth was believed to be the center of the Universe, which consisted only of those planets visible with the naked eye and an outlying sphere of fixed stars. After the acceptance of the heliocentric model in the 17th century, observations by William Herschel and others showed that the Sun lay within a vast, disc-shaped galaxy of stars. By the 20th century, observations of spiral nebulae revealed that our galaxy was one of billions in an expanding universe, grouped into clusters and superclusters. 
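The bounce motion of trapped particles described in the passage above can be made quantitative using the first adiabatic invariant, a standard result of magnetospheric physics rather than something stated in the quoted report: sin²(α)/B is conserved along a field line, so a particle with equatorial pitch angle α_eq mirrors where the field strength reaches B_eq / sin²(α_eq). A minimal Python sketch, with an assumed, purely illustrative equatorial field value:

import math

B_EQUATOR = 100e-9  # tesla, illustrative equatorial field strength (~100 nT), assumed

def mirror_field(b_equator, pitch_angle_eq_deg):
    """Field strength at which a trapped particle with the given equatorial
    pitch angle is reflected: B_mirror = B_eq / sin^2(alpha_eq)."""
    alpha = math.radians(pitch_angle_eq_deg)
    return b_equator / math.sin(alpha) ** 2

for alpha_deg in (30, 45, 60, 89):
    b_m = mirror_field(B_EQUATOR, alpha_deg) * 1e9
    print(f"equatorial pitch angle {alpha_deg:2d} deg -> mirrors where B = {b_m:.0f} nT")
# Particles with small equatorial pitch angles travel far down the field line
# before mirroring (large B_mirror); a 90-degree pitch angle mirrors at the equator.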
By the end of the 20th century, the overall structure of the visible universe was becoming clearer, with superclusters forming into a vast web of filaments and voids. Superclusters, filaments and voids are the largest coherent structures in the Universe that we can observe. At still larger scales (over 1000 megaparsecs) the Universe becomes homogeneous meaning that all its parts have on average the same density, composition and structure.Since there is believed to be no "center" or "edge" of the Universe, there is no particular reference point with which to plot the overall location of the Earth in the universe. Because the observable universe is defined as that region of the Universe visible to terrestrial observers, Earth is, by definition, the center of Earth's observable universe. Reference can be made to the Earth's position with respect to specific structures, which exist at various scales. It is still undetermined whether the Universe is infinite. There have been numerous hypotheses that our universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has ever been observed, and some have argued that the hypothesis is not falsifiable. Magmatic water or juvenile water is water that exists within, and in equilibrium with, a magma or water-rich volatile fluids that are derived from a magma. This magmatic water is released to the atmosphere during a volcanic eruption. Magmatic water may also be released as hydrothermal fluids during the late stages of magmatic crystallization or solidification within the Earth's crust. The crystallization of hydroxyl bearing amphibole and mica minerals acts to contain part of the magmatic water within a solidified igneous rock. Ultimate sources of this magmatic water includes water and hydrous minerals in rocks melted during subduction as well as primordial water brought up from the deep mantle. The mediocrity principle is the philosophical notion that "if an item is drawn at random from one of several sets or categories, it's likelier to come from the most numerous category than from any one of the less numerous categories". The principle has been taken to suggest that there is nothing very unusual about the evolution of the Solar System, Earth's history, the evolution of biological complexity, human evolution, or any one nation. It is a heuristic in the vein of the Copernican principle, and is sometimes used as a philosophical statement about the place of humanity. The idea is to assume mediocrity, rather than starting with the assumption that a phenomenon is special, privileged, exceptional, or even superior. Earth's orbit is the trajectory along which Earth travels around the Sun. The average distance between the Earth and the Sun is 149.60 million km (92.96 million mi), and one complete orbit takes 365.256 days (1 sidereal year), during which time Earth has traveled 940 million km (584 million mi). Earth's orbit has an eccentricity of 0.0167.As seen from Earth, the planet's orbital prograde motion makes the Sun appear to move with respect to other stars at a rate of about 1° (or a Sun or Moon diameter every 12 hours) eastward per solar day. Earth's orbital speed averages about 30 km/s (108,000 km/h; 67,000 mph), which is fast enough to cover the planet's diameter in 7 minutes and the distance to the Moon in 4 hours.From a vantage point above the north pole of either the Sun or Earth, Earth would appear to revolve in a counterclockwise direction around the Sun. 
From the same vantage point, both the Earth and the Sun would appear to rotate also in a counterclockwise direction about their respective axes. Planetary management is intentional global-scale management of Earth's biological, chemical and physical processes and cycles (water, carbon, nitrogen, sulfur, phosphorus, and others). Planetary management also includes managing humanity’s influence on planetary-scale processes. Effective planetary management aims to prevent destabilisation of Earth's climate, protect biodiversity and maintain or improve human well-being. More specifically, it aims to benefit society and the global economy, and safeguard the ecosystem services upon which humanity depends – global climate, freshwater supply, food, energy, clean air, fertile soil, pollinators, and so on.Because of the sheer complexity and enormous scope of the task, it remains to be seen whether planetary management is a feasible paradigm for maintaining global sustainability. The concept currently has defenders and critics on both sides: environmentalist David W. Orr questions whether such a task can be accomplished with human help and technology or without first examining the underlying human causes, while geographer Vaclav Smil acknowledges that "the idea of planetary management may seem preposterous to many, but at this time in history there is no rational alternative". Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called nutation. In physics, there are two types of precession: torque-free and torque-induced.In astronomy, precession refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes. (See section Astronomy below.) Earth radius is the approximate distance from Earth's center to its surface, about 6,371 km (3,959 mi). This distance is used as a unit of length, especially in astronomy and geophysics, where it is usually denoted by R⊕. Strictly speaking, the term "radius" is a property of a true sphere. Since Earth is only approximately spherical, no single value serves as its definitive radius. Meaningful values range from 6,353 to 6,384 kilometres (3,948 to 3,967 mi).A distance from the center of Earth to some point on its surface might be referred to as Earth’s radius at that point. More commonly, Earth radius means a computed average of distances to the surface or to some idealized surface. The idealized surface is usually mean sea levels globally or else an ellipsoid that approximates sea level. If elevations were to be included, average radius would increase by about 230 m over sea level, or less than one part in 25,000.The first published reference to the Earth's size appeared around 350 BC, when Aristotle reported in his book On the Heavens that mathematicians had guessed the circumference of the Earth to be 400,000 stadia. Scholars have interpreted Aristotle's figure to be anywhere from highly accurate to almost double the true value. 
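The round-number claims quoted above (that Earth's orbital speed of about 30 km/s covers the planet's own diameter in roughly 7 minutes and the distance to the Moon in about 4 hours) can be checked with a few lines of arithmetic using the 6,371 km mean radius; the Earth-Moon distance used below is a standard approximate value assumed for the check, not a number taken from the text.

ORBITAL_SPEED = 30.0       # km/s, average orbital speed quoted above
EARTH_RADIUS = 6371.0      # km, mean Earth radius quoted above
MOON_DISTANCE = 384400.0   # km, average Earth-Moon distance (assumed standard value)
SIDEREAL_YEAR = 365.256 * 86400.0  # seconds in one sidereal year

t_diameter = 2 * EARTH_RADIUS / ORBITAL_SPEED  # s to travel Earth's own diameter
t_moon = MOON_DISTANCE / ORBITAL_SPEED         # s to travel the Earth-Moon distance
orbit_length = ORBITAL_SPEED * SIDEREAL_YEAR   # km traveled in one orbit

print(f"Own diameter: {t_diameter / 60:.1f} minutes")    # about 7 minutes
print(f"To the Moon:  {t_moon / 3600:.1f} hours")        # about 3.6 hours
print(f"Distance per orbit: {orbit_length / 1e6:.0f} million km")  # about 947 million km, close to the 940 million quoted above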
The first known scientific measurement and calculation of the radius of the Earth was performed by Eratosthenes in about 240 BC. Estimates of the accuracy of Eratosthenes's measurement range from 0.5% to 17%. For both Aristotle and Eratosthenes, uncertainty in the accuracy of their estimates is due to modern uncertainty over which stadion length they meant. In planetary astronomy and astrobiology, the Rare Earth Hypothesis argues that the origin of life and the evolution of biological complexity such as sexually reproducing, multicellular organisms on Earth (and, subsequently, human intelligence) required an improbable combination of astrophysical and geological events and circumstances. According to the hypothesis, complex extraterrestrial life is a very improbable phenomenon and likely to be extremely rare. The term "Rare Earth" originates from Rare Earth: Why Complex Life Is Uncommon in the Universe (2000), a book by Peter Ward, a geologist and paleontologist, and Donald E. Brownlee, an astronomer and astrobiologist, both faculty members at the University of Washington. An alternative viewpoint was argued in the 1970s and 1980s by Carl Sagan and Frank Drake, among others. It holds that Earth is a typical rocky planet in a typical planetary system, located in a non-exceptional region of a common barred-spiral galaxy. Given the principle of mediocrity (in the same vein as the Copernican principle), it is probable that the universe teems with complex life. Ward and Brownlee argue to the contrary: that planets, planetary systems, and galactic regions that are as friendly to complex life as are the Earth, the Solar System, and our region of the Milky Way are very rare. Earth orientation parameters (EOP) are a collection of parameters that describe irregularities in the rotation of the Earth. The Earth's rotation is not uniform: any motion in or on the Earth causes a slight slowdown or speedup of the rotation, or a change of the rotation axis. Most such motions can be ignored, but movements of very large masses, such as ocean currents or tides, can produce discernible changes and introduce errors into very precise astronomical observations. A single parameter can be used to describe one phenomenon; the collection of Earth orientation parameters is fitted to describe the rotation irregularities all together. Technically, they provide the rotation transforming the International Terrestrial Reference System (ITRS) to the International Celestial Reference System (ICRS), or vice versa, as a function of time. Earth's rotation is the rotation of Planet Earth around its own axis. Earth rotates eastward, in prograde motion. As viewed from the north pole star Polaris, Earth turns counterclockwise. The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica. Earth rotates once in about 24 hours with respect to the Sun and once every 23 hours, 56 minutes and 4 seconds with respect to the stars. Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that a modern day is about 1.7 milliseconds longer than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds.
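The two rotation periods quoted above, about 24 hours relative to the Sun and 23 hours 56 minutes 4 seconds relative to the stars, are related by the fact that over one year Earth turns once more relative to the stars than relative to the Sun. A minimal sketch of that relation, using the 365.256-day year quoted earlier:

SOLAR_DAY = 86400.0      # seconds in a mean solar day
DAYS_PER_YEAR = 365.256  # solar days in the year quoted above

# Over one year Earth turns once more relative to the stars than relative to the
# Sun, so: sidereal_day = solar_day * N / (N + 1), with N solar days per year.
sidereal_day = SOLAR_DAY * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)

hours, remainder = divmod(sidereal_day, 3600)
minutes, seconds = divmod(remainder, 60)
print(f"Sidereal day: {int(hours)} h {int(minutes)} min {seconds:.0f} s")
# prints 23 h 56 min 4 s, matching the figure quoted above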
Analysis of historical astronomical records shows a slowing trend of 2.3 milliseconds per century since the 8th century BCE. Spaceship Earth or Spacecraft Earth is a world view term usually expressing concern over the use of limited resources available on Earth and encouraging everyone on it to act as a harmonious crew working toward the greater good.The earliest known use is a passage in Henry George's best known work, Progress and Poverty (1879). From book IV, chapter 2:It is a well-provisioned ship, this on which we sail through space. If the bread and beef above decks seem to grow scarce, we but open a hatch and there is a new supply, of which before we never dreamed. And very great command over the services of others comes to those who as the hatches are opened are permitted to say, "This is mine!"George Orwell later paraphrases Henry George in The Road to Wigan Pier:The world is a raft sailing through space with, potentially, plenty of provisions for everybody; the idea that we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions seems so blatantly obvious that one would say that no one could possibly fail to accept it unless he had some corrupt motive for clinging to the present system.In 1965 Adlai Stevenson made a famous speech to the UN in which he said:We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil; all committed for our safety to its security and peace; preserved from annihilation only by the care, the work, and, I will say, the love we give our fragile craft. We cannot maintain it half fortunate, half miserable, half confident, half despairing, half slave—to the ancient enemies of man—half free in a liberation of resources undreamed of until this day. No craft, no crew can travel safely with such vast contradictions. On their resolution depends the survival of us all.The following year, Spaceship Earth became the title of a book by a friend of Stevenson's, the internationally influential economist Barbara Ward.Also in 1966, Kenneth E. Boulding, who was influenced by reading Henry George, used the phrase in the title of an essay, The Economics of the Coming Spaceship Earth. Boulding described the past open economy of apparently illimitable resources, which he said he was tempted to call the "cowboy economy", and continued: "The closed economy of the future might similarly be called the 'spaceman' economy, in which the earth has become a single spaceship, without unlimited reservoirs of anything, either for extraction or for pollution, and in which, therefore, man must find his place in a cyclical ecological system". (David Korten would take up the "cowboys in a spaceship" theme in his 1995 book When Corporations Rule the World.)The phrase was also popularized by Buckminster Fuller, who published a book in 1968 under the title of Operating Manual for Spaceship Earth. This quotation, referring to fossil fuels, reflects his approach:…we can make all of humanity successful through science's world-engulfing industrial evolution provided that we are not so foolish as to continue to exhaust in a split second of astronomical history the orderly energy savings of billions of years' energy conservation aboard our Spaceship Earth. 
These energy savings have been put into our Spaceship's life-regeneration-guaranteeing bank account for use only in self-starter functions. United Nations Secretary-General U Thant spoke of Spaceship Earth on Earth Day, March 21, 1971, at the ceremony of the ringing of the Japanese Peace Bell: "May there only be peaceful and cheerful Earth Days to come for our beautiful Spaceship Earth as it continues to spin and circle in frigid space with its warm and fragile cargo of animate life." Spaceship Earth is also the name given to the 54.864 m (180 ft) geodesic sphere that greets visitors at the entrance of Walt Disney World's Epcot theme park. Housed within the sphere is a dark ride that serves to explore the history of communications and promote Epcot's founding principles, "[a] belief and pride in man's ability to shape a world that offers hope to people everywhere." A previous incarnation of the ride, narrated by actor Jeremy Irons and revised in 2008, was explicit in its message: Like a grand and miraculous spaceship, our planet has sailed through the universe of time, and for a brief moment, we have been among its many passengers…. We now have the ability and the responsibility to build new bridges of acceptance and co-operation between us, to create a better world for ourselves and our children as we continue our amazing journey aboard Spaceship Earth. David Deutsch has pointed out that the picture of Earth as a friendly "spaceship" habitat is difficult to defend even in a metaphorical sense. The Earth environment is harsh, and survival is a constant struggle for life, including the extinction of whole species. Humans would not be able to live in most of the areas where they live now without the knowledge necessary to build life-support systems such as houses, heating and water supply. The term "Spaceship Earth" is frequently used on the labels of Emanuel Bronner's products to refer to the Earth. The earliest reliably documented mention of the spherical Earth concept dates from around the 6th century BC, when it appeared in ancient Greek philosophy, but it remained a matter of speculation until the 3rd century BC, when Hellenistic astronomy established the spherical shape of the Earth as a physical given. The paradigm was gradually adopted throughout the Old World during Late Antiquity and the Middle Ages. A practical demonstration of Earth's sphericity was achieved by Ferdinand Magellan and Juan Sebastián Elcano's expedition's circumnavigation (1519−1522). The concept of a spherical Earth displaced earlier beliefs in a flat Earth: in early Mesopotamian mythology, the world was portrayed as a flat disk floating in the ocean with a hemispherical sky-dome above, and this forms the premise for early world maps like those of Anaximander and Hecataeus of Miletus. Other speculations on the shape of Earth include a seven-layered ziggurat or cosmic mountain, alluded to in the Avesta and ancient Persian writings (see seven climes). The realization that the figure of the Earth is more accurately described as an ellipsoid dates to the 17th century, as described by Isaac Newton in Principia. In the early 19th century, the flattening of the earth ellipsoid was determined to be of the order of 1/300 (Delambre, Everest). The modern value as determined by the US DoD World Geodetic System since the 1960s is close to 1/298.25. The subsolar point on a planet is where its sun is perceived to be directly overhead (at the zenith); that is, where the sun's rays hit the planet exactly perpendicular to its surface.
It can also mean the point closest to the sun on an object in space, even though the sun might not be visible. For planets with an orientation and rotation similar to the Earth's, the subsolar point will move westward, circling the globe once a day, moving approximately along a circle of latitude. However, it will also move north and south between the tropics over the course of a year, so it traces out a spiral, like a helix. The December solstice occurs when the subsolar point is on the Tropic of Capricorn, and the June solstice occurs at the instant when the subsolar point is on the Tropic of Cancer. The March and September equinoxes occur when the subsolar point crosses the equator. When the point passes through Hawaii, the only U.S. state in which this happens, it is known as Lahaina Noon. A substorm, sometimes referred to as a magnetospheric substorm or an auroral substorm, is a brief disturbance in the Earth's magnetosphere that causes energy to be released from the "tail" of the magnetosphere and injected into the high-latitude ionosphere. Visually, a substorm is seen as a sudden brightening and increased movement of auroral arcs. Substorms were first described in qualitative terms by Kristian Birkeland, who called them polar elementary storms. Sydney Chapman introduced the term substorm around 1960, and it is now the standard term. The morphology of the aurora during a substorm was first described by Syun-Ichi Akasofu in 1964, using data collected during the International Geophysical Year. Substorms are distinct from geomagnetic storms in that the latter take place over a period of several days, are observable from anywhere on Earth, inject a large number of ions into the outer radiation belt, and occur once or twice a month during the maximum of the solar cycle and a few times a year during solar minimum. Substorms, on the other hand, take place over a period of a few hours, are observable primarily at the polar regions, do not inject many particles into the radiation belt, and are relatively frequent, often occurring only a few hours apart from each other. Substorms can be more intense and occur more frequently during a geomagnetic storm, when one substorm may begin before the previous one has completed. The source of the magnetic disturbances observed at the Earth's surface during geomagnetic storms is the ring current, whereas the sources of magnetic disturbances observed on the ground during substorms are electric currents in the ionosphere at high latitudes. Substorms can cause magnetic field disturbances in the auroral zones up to a magnitude of 1000 nT, roughly 2% of the total magnetic field strength in that region. The disturbance is much greater in space, as some geosynchronous satellites have registered the magnetic field dropping to half of its normal strength during a substorm. The most visible indication of a substorm is an increase in the intensity and size of polar auroras. Substorms can be divided into three phases: the growth phase, the expansion phase, and the recovery phase. In 2012, the THEMIS satellite mission observed the dynamics of rapidly developing substorms, confirming the existence of giant magnetic ropes and witnessing small explosions in the outskirts of Earth's magnetic field. A variety of symbols or iconographic conventions are used to represent Earth, either in the sense of the planet Earth or of the inhabited world.
Representations of the globe of Earth, either with an indication of the shape of the continents or with a representation of meridians and parallels, remain a common pictographic convention to express the notion of "worldwide, global". The modern astronomical symbol for Earth as a planet uses either a stylized globus cruciger (♁) or a circle with a cross representing the equator and one meridian (🜨). Terrestrial Time (TT) is a modern astronomical time standard defined by the International Astronomical Union, primarily for time measurements of astronomical observations made from the surface of Earth. For example, the Astronomical Almanac uses TT for its tables of positions (ephemerides) of the Sun, Moon and planets as seen from Earth. In this role, TT continues Terrestrial Dynamical Time (TDT or TD), which in turn succeeded ephemeris time (ET). TT shares the original purpose for which ET was designed, to be free of the irregularities in the rotation of Earth. The unit of TT is the SI second, the definition of which is currently based on the caesium atomic clock, but TT is not itself defined by atomic clocks. It is a theoretical ideal, and real clocks can only approximate it. TT is distinct from the time scale often used as a basis for civil purposes, Coordinated Universal Time (UTC). TT indirectly underlies UTC, via International Atomic Time (TAI). Because of the historical difference between TAI and ET when TT was introduced, TT is approximately 32.184 s ahead of TAI. Water is distributed across the Earth. Most water in the Earth's atmosphere and crust comes from the world ocean's saline seawater, while freshwater accounts for only 2.5% of the total. Because the oceans that cover roughly 70% of the area of the Earth reflect blue light, the Earth appears blue from space, and is often referred to as the blue planet and the Pale Blue Dot. An estimated 1.5 to 11 times the amount of water in the oceans may be found hundreds of miles deep within the Earth's interior, although not in liquid form. The oceanic crust is young, thin and dense, with none of the rocks within it older than the breakup of Pangaea. Because water is much denser than any gas, it flows into the "depressions" formed as a result of the high density of oceanic crust. (On a planet like Venus, with no water, such depressions appear to form a vast plain above which plateaux rise.) Since the low-density rocks of the continental crust contain large quantities of easily eroded salts of the alkali and alkaline earth metals, salt has, over billions of years, accumulated in the oceans as a result of evaporation returning the fresh water to land as rain and snow. As a result, the vast bulk of the water on Earth is regarded as saline or salt water, with an average salinity of 35‰ (or 3.5%, roughly equivalent to 34 grams of salts in 1 kg of seawater), though this varies slightly according to the amount of runoff received from surrounding land. In all, water from oceans and marginal seas, saline groundwater and water from saline closed lakes amount to over 97% of the water on Earth, though no closed lake stores a globally significant amount of water. Saline groundwater is seldom considered except when evaluating water quality in arid regions. The remainder of the Earth's water constitutes the planet's fresh water resource. Typically, fresh water is defined as water with a salinity of less than 1 percent that of the oceans, i.e. below around 0.35‰.
Water with a salinity between this level and 1‰ is typically referred to as marginal water, because it is marginal for many uses by humans and animals. The ratio of salt water to fresh water on Earth is around 40 to 1. The planet's fresh water is also very unevenly distributed. Although in warm periods such as the Mesozoic and Paleogene, when there were no glaciers anywhere on the planet, all fresh water was found in rivers and streams, today most fresh water exists in the form of ice, snow, groundwater and soil moisture, with only 0.3% in liquid form on the surface. Of the liquid surface fresh water, 87% is contained in lakes, 11% in swamps, and only 2% in rivers. Small quantities of water also exist in the atmosphere and in living beings. Of these sources, only river water is generally valuable. Most lakes are in very inhospitable regions, such as the glacial lakes of Canada, Lake Baikal in Russia, Lake Khövsgöl in Mongolia, and the African Great Lakes. The North American Great Lakes, which contain 21% of the world's surface fresh water by volume, are the exception. They are located in a hospitable region, which is heavily populated. The Great Lakes Basin is home to 33 million people. The Canadian cities of Toronto, Hamilton, Ontario, St. Catharines, Niagara, Oshawa, Windsor, and Barrie, and the United States cities of Duluth, Milwaukee, Chicago, Gary, Detroit, Cleveland, Buffalo, and Rochester are all located on the shores of the Great Lakes. Although the total volume of groundwater is known to be much greater than that of river runoff, a large proportion of this groundwater is saline and should therefore be classified with the saline water above. There is also much fossil groundwater in arid regions that has not been renewed for thousands of years; this must not be regarded as renewable water. However, fresh groundwater is of great value, especially in arid countries such as India. Its distribution is broadly similar to that of surface river water, but it is easier to store in hot and dry climates because groundwater storage is much more shielded from evaporation than dams are. In countries such as Yemen, groundwater from erratic rainfall during the rainy season is the major source of irrigation water. Because groundwater recharge is much more difficult to measure accurately than surface runoff, groundwater is not generally used in areas where even fairly limited amounts of surface water are available. Even today, estimates of total groundwater recharge vary greatly for the same region depending on the source used, and cases where fossil groundwater is exploited beyond the recharge rate (including the Ogallala Aquifer) are very frequent and were almost never seriously considered when they were first developed. Windows on Earth is a museum exhibit, website, and exploration tool, developed by TERC, Inc. (Technical Education Research Centers, an educational non-profit organization) and the Association of Space Explorers, that enables the public to explore an interactive, virtual view of Earth from space. In addition, the tool has been selected by NASA to help astronauts identify targets for photography from the International Space Station (ISS). The program simulates the view of Earth as seen from a window aboard the ISS, in high-resolution, photographically accurate colors and 3D animations. The views include cloud cover, day and night cycles, nighttime lights, and other features that help make the exhibit realistic and interactive.
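The water-distribution figures quoted above can be tied together with a little arithmetic: a 35‰ salinity means about 35 g of salt per kilogram of seawater, the fresh-water threshold of about 0.35‰ is 1% of that, and a 2.5% fresh-water share implies a salt-to-fresh ratio of roughly 40 to 1. A minimal sketch of these back-of-the-envelope checks:

OCEAN_SALINITY = 35.0   # grams of salt per kilogram of seawater (35 per mille)
FRESH_FRACTION = 0.025  # fresh water as a share of all water on Earth (~2.5%)

fresh_threshold = 0.01 * OCEAN_SALINITY                  # 1% of ocean salinity
salt_to_fresh = (1.0 - FRESH_FRACTION) / FRESH_FRACTION

# Shares of liquid surface fresh water quoted in the text
surface_fresh = {"lakes": 0.87, "swamps": 0.11, "rivers": 0.02}

print(f"Fresh-water threshold: {fresh_threshold:.2f} per mille")        # about 0.35 per mille
print(f"Salt water to fresh water: roughly {salt_to_fresh:.0f} to 1")   # about 39:1, i.e. the ~40:1 quoted
for source, share in surface_fresh.items():
    print(f"{source}: {share:.0%} of liquid surface fresh water")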
The world is the planet Earth and all life upon it, including human civilization. In a philosophical context, the world is the whole of the physical Universe, or an ontological world. In a theological context, the world is the material or the profane sphere, as opposed to the celestial, spiritual, transcendent or sacred. The "end of the world" refers to scenarios of the final end of human history, often in religious contexts. The history of the world is commonly understood as spanning the major geopolitical developments of about five millennia, from the first civilizations to the present. In terms such as world religion, world language, world government, and world war, world suggests international or intercontinental scope without necessarily implying participation of the entire world. World population is the sum of all human populations at any time; similarly, world economy is the sum of the economies of all societies or countries, especially in the context of globalization. Terms like world championship, gross world product, and world flags imply the sum or combination of all current-day sovereign states. Basketball is a limited-contact sport played on a rectangular court. While most often played as a team sport with five players on each side, three-on-three, two-on-two, and one-on-one competitions are also common. The objective is to shoot a ball through a hoop 18 inches (46 cm) in diameter and 10 feet (3.048 m) high that is mounted to a backboard at each end of the court. The game was invented in 1891 by Dr. James Naismith. A team can score a field goal by shooting the ball through the basket being defended by the opposing team during regular play. A field goal scores three points for the shooting team if the player shoots from behind the three-point line, and two points if shot from in front of the line. A team can also score via free throws, which are worth one point, after the other team is assessed with certain fouls. The team with the most points at the end of the game wins, but additional time (overtime) is mandated when the score is tied at the end of regulation. The ball can be advanced on the court by passing it to a teammate, or by bouncing it while walking or running (dribbling). It is a violation to lift or drag one's pivot foot without dribbling the ball, to carry it, or to hold the ball with both hands and then resume dribbling. The game has many individual techniques for displaying skill: ball-handling, shooting, passing, dribbling, dunking, shot-blocking, and rebounding. Basketball teams generally have player positions: the tallest and strongest members of a team play center or power forward; slightly shorter and more agile players play small forward; and the shortest players, or those who possess the best ball-handling skills, play point guard or shooting guard. The point guard directs the on-court action of the team, implementing the coach's game plan and managing the execution of offensive and defensive plays (player positioning). Basketball is one of the world's most popular and widely viewed sports. The National Basketball Association (NBA) is the most popular league and is widely considered to be the highest level of professional basketball in the world, and NBA players are the world's best-paid athletes by average annual salary per player. Outside North America, the top clubs from national leagues qualify for continental championships such as the EuroLeague and FIBA Americas League.
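The scoring rules described above (three points from behind the three-point line, two points otherwise, one point per made free throw) reduce to simple arithmetic; the sketch below uses an invented box score purely for illustration.

def team_score(threes_made, twos_made, free_throws_made):
    """Total points: 3 per made shot from behind the arc, 2 per other field
    goal, 1 per made free throw."""
    return 3 * threes_made + 2 * twos_made + free_throws_made

# Invented example box score (not taken from the text)
print(team_score(threes_made=12, twos_made=25, free_throws_made=18))  # 104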
The FIBA Basketball World Cup and Men's Olympic Basketball Tournament are the major international events of the sport and attract top national teams from around the world. Each continent hosts regional competitions for national teams, like EuroBasket and FIBA AmeriCup.The FIBA Women's Basketball World Cup and Women's Olympic Basketball Tournament feature top national teams from continental championships. The main North American league is the WNBA (NCAA Women's Division I Basketball Championship is also popular), whereas strongest European clubs participate in the EuroLeague Women. The following outline is provided as an overview of and topical guide to basketball:Basketball – ball game and team sport in which two teams of five players try to score points by throwing or "shooting" a ball through the top of a basketball hoop while following a set of rules. Since being developed by James Naismith as a non-contact game that almost anyone can play, basketball has undergone many different rule variations, eventually evolving into the NBA-style game known today. Basketball is one of the most popular and widely viewed sports in the world. The Death Lineup is a series of small ball lineups from the Golden State Warriors of the National Basketball Association (NBA).Developed by head coach Steve Kerr and Warriors special assistant Nick U'Ren, the Death Lineup is strategically advantageous because it allows the Warriors to create mismatches on offense, spread the floor with shooting and playmaking, and take advantage of the team's defensive versatility in order to defend against larger opponents.The Death Lineup features a three-point shooting backcourt in two-time NBA MVP Stephen Curry and Klay Thompson (who are nicknamed the Splash Brothers), versatile defender Andre Iguodala on the wing alongside scoring wings Harrison Barnes and Kevin Durant, and 2016–2017 Defensive Player of the Year Draymond Green at center. Draymond Green's defensive versatility has been described as the "key" that allows the lineup to be so effective; although Green's natural position is power forward, he is able to play as an undersized center in lieu of a traditional center who might be slower or lack the playmaking and shooting abilities of Green.The lineup has been described as "the most feared five-man lineup in the NBA" and has played a key role in the team's historic success. The Death Lineup is also considered to be indicative of a larger overall trend in NBA basketball towards "positionless" basketball, where traditional position assignments and roles have less importance. Basketball shoes are footwear designed specifically for playing basketball. Special shoe designs for basketball have existed since the 1920s. 
This list includes major brands of basketball shoe, listed by manufacturer and year of introduction.
Adidas: Jabbar - worn by Kareem Abdul-Jabbar (1971); Top 10 (1979); Ewing Rivalry - worn by Patrick Ewing (1986); Real Deal II - worn by Antoine Walker (1986); Ewing Attitude - worn by Patrick Ewing (1987); Ewing Conductor - worn by Patrick Ewing (1987); Mutombo - worn by Dikembe Mutombo (1992); EQT Elevation - worn by Kobe Bryant (1996); EQT Top Ten - worn by Kobe Bryant (1996); KB8 (Kobe Bryant signature shoe) (1997); Real Deal - worn by Antoine Walker (1997); KB82 - worn by Kobe Bryant (1998); Bromium - worn by Chris Webber (1999); KB8 III - worn by Kobe Bryant (1999); TMAC - worn by Tracy McGrady (2002); TMAC 2 - worn by Tracy McGrady (2003); TMAC III - worn by Tracy McGrady (2004); C-Billups - worn by Chauncey Billups (2006); Kevin Garnett III - worn by Kevin Garnett (2006); TMAC VI - worn by Tracy McGrady (2006); Skyhoot Plus Low - worn by Kareem Abdul-Jabbar (2007); adiZero Rose - worn by Derrick Rose (2009); TS Commander Lite "Skeleton" - worn by Tim Duncan (2009); adiZero Rose 1.5 - worn by Derrick Rose (2010); Superbeast - worn by Dwight Howard (2010).
AND1.
Athletic Propulsion Labs (APL): Load 'N Launch (2009).
Converse: Chuck Taylor All-Stars (1918).
Ektio: Wraptor (2010); Post Up (2010).
Nike: Blazers (1973); Air Jordan (1985); Air Max (1987); Hyper series (2008).
Reebok: Pump (1989).
Under Armour: Micro G (2010); Black Ice; Fly; Blur; Lite.
Posterized is North American slang derived from an action in the game of basketball, in which the offensive player "dunks" over a defending player in a play that is spectacular and athletic enough to warrant reproduction in a printed poster. The term originated with Julius Erving, whose high-flying style of play inspired it. One of the most famous examples of a player being 'posterized' occurred during the 2000 Summer Olympics, when the 6-foot-6 Vince Carter, playing for Team USA, dunked over 7-foot-2 Frédéric Weis of France. An example of the term used in a sentence: "CJ posterizes fools at Thursday night basketball." Posterized is also used infrequently to describe similar events in other sports and has made its way into business writing. Intercultural theater, also known as cross-cultural theatre, may transcend time, while mixing and matching cultures or subcultures. Mixing and matching is an unavoidable process in making inner connections and presenting inter-culturalities. The majority of works in intercultural theatre are concerned with thinking about and working with the themes, stories, and pre-performative or performative concepts of Asian classical theatre or traditional performing arts forms and practices, mixed and matched with foreign concepts or ideas. After the well-known success of Peter Brook's production of the Mahabharata, the trend has evolved tremendously around the globe, and many governments and cultural institutions have taken a direct interest in pushing the boundaries of inter-cultural sense and sensitivities by investing financially in new theatrical productions, university research, conferences and fellowships. In anthropology, an acephalous society (from the Greek ἀκέφαλος "headless") is a society which lacks political leaders or hierarchies. Such groups are also known as egalitarian or non-stratified societies. Typically these societies are small-scale, organized into bands or tribes that make decisions through consensus decision making rather than appointing permanent chiefs or kings. 
Most foraging or hunter-gatherer societies are acephalous.In scientific literature covering native African societies and the effect of European colonialism on them the term is often used to describe groups of people living in a settlement with "no government in the sense of a group able to exercise effective control over both the people and their territory". In this respect the term is also often used as synonymous to "stateless Society". Such societies are described as consensus-democratic in opposition to the majority-democratic systems of the West.The Igbo Nation in West Africa is alleged to be an acephalous or egalitarian society. Actor–network theory (ANT) is a theoretical and methodological approach to social theory where everything in the social and natural worlds exists in constantly shifting networks of relationship. It posits that nothing exists outside those relationships. All the factors involved in a social situation are on the same level, and thus there are no external social forces beyond what and how the network participants interact at present. Thus, objects, ideas, processes, and any other relevant factors are seen as just as important in creating social situations as humans. ANT holds that social forces do not exist in themselves, and therefore cannot be used to explain social phenomena. Instead, strictly empirical analysis should be undertaken to "describe" rather than "explain" social activity. Only after this can one introduce the concept of social forces, and only as an abstract theoretical concept, not something which genuinely exists in the world. The fundamental aim of ANT is to explore how networks are built or assembled and maintained to achieve a specific objective. Although it is best known for its controversial insistence on the capacity of nonhumans to act or participate in systems or networks or both, ANT is also associated with forceful critiques of conventional and critical sociology. Developed by science and technology studies (STS) scholars Michel Callon and Bruno Latour, the sociologist John Law, and others, it can more technically be described as a "material-semiotic" method. This means that it maps relations that are simultaneously material (between things) and semiotic (between concepts). It assumes that many relations are both material and semiotic.Broadly speaking, ANT is a constructivist approach in that it avoids essentialist explanations of events or innovations (i.e. ANT explains a successful theory by understanding the combinations and interactions of elements that make it successful, rather than saying it is true and the others are false). Likewise, it is not a cohesive theory in itself. Rather, ANT functions as a strategy that assists people in being sensitive to terms and the often unexplored assumptions underlying them. It is distinguished from many other STS and sociological network theories for its distinct material-semiotic approach. Agriculture is the cultivation and breeding of animals, plants and fungi for food, fiber, biofuel, medicinal plants and other products used to sustain and enhance human life. Agriculture was the key development in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that nurtured the development of civilization. The study of agriculture is known as agricultural science. The history of agriculture dates back thousands of years, and its development has been driven and defined by greatly different climates, cultures, and technologies. 
Industrial agriculture based on large-scale monoculture farming has become the dominant agricultural method. Modern agronomy, plant breeding, agrochemicals such as pesticides and fertilizers, and technological developments have in many cases sharply increased yields from cultivation, but at the same time have caused widespread ecological damage and negative human health effects. Selective breeding and modern practices in animal husbandry have similarly increased the output of meat, but have raised concerns about animal welfare, environmental damage (such as the massive drain on resources such as water and the feed fed to the animals, global warming, rainforest destruction, and leftover waste products that are littered), and the health effects of the antibiotics, growth hormones, artificial additives and other chemicals commonly used in industrial meat production. Genetically modified organisms are an increasing component of agriculture, although they are banned in several countries. Agricultural food production and water management are increasingly becoming global issues that are fostering debate on a number of fronts. Significant degradation of land and water resources, including the depletion of aquifers, has been observed in recent decades, and the effects of global warming on agriculture and of agriculture on global warming are still not fully understood. Entomophagy has been proposed as a way of addressing some of these problems, and may gain popularity in Western societies. The major agricultural products can be broadly grouped into foods, fibers, fuels, and raw materials. Specific foods include cereals (grains), vegetables, fruits, oils, meats and spices. Fibers include cotton, wool, hemp, silk and flax. Raw materials include lumber and bamboo. Other useful materials are also produced by plants, such as resins, dyes, drugs, perfumes, biofuels and ornamental products such as cut flowers and nursery plants. Over one third of the world's workers are employed in agriculture, second only to the service sector, although the percentage of agricultural workers in developed countries has decreased significantly over the past several centuries. The alliance theory, also known as the general theory of exchanges, is a structuralist method of studying kinship relations. It finds its origins in Claude Lévi-Strauss's Elementary Structures of Kinship (1949) and is in opposition to the functionalist theory of Radcliffe-Brown. Alliance theory oriented most French anthropological work until the 1980s; its influences were felt in various fields, including psychoanalysis, philosophy and political philosophy. The hypothesis of a "marriage-alliance" emerged in this frame, pointing towards the necessary interdependence of various families and lineages. Marriages themselves are thus seen as a form of communication, as anthropologists such as Lévi-Strauss, Louis Dumont or Rodney Needham have described. Alliance theory hence tries to understand the basic questions about inter-individual relations, or what constitutes society. Alliance theory is based on the incest taboo: according to it, only this universal prohibition of incest pushes human groups towards exogamy. Thus, inside a given society, certain categories of kin are forbidden to inter-marry. The incest taboo is thus a negative prescription; without it, nothing would push men to go searching for women outside their inner kinship circle, or vice versa. This theory echoes Freud's Totem and Taboo (1913). 
But the incest taboo of alliance theory, in which one's daughter or sister is offered to someone outside a family circle, starts a circle of exchange of women: in return, the giver is entitled to a woman from the other's intimate kinship group. Thus the negative prescriptions of the prohibition have positive counterparts. The idea of the alliance theory is thus of a reciprocal or a generalized exchange which founds affinity. This global phenomenon takes the form of a "circulation of women" which links together the various social groups in one whole: society. American anthropology has culture as its central and unifying concept. This most commonly refers to the universal human capacity to classify and encode human experiences symbolically, and to communicate symbolically encoded experiences socially. American anthropology is organized into four fields, each of which plays an important role in research on culture: biological anthropology, linguistic anthropology, cultural anthropology, and archaeology. Research in these fields has influenced anthropologists working in other countries to different degrees. Anthrobotics is the science of developing and studying robots that are either entirely or in some way human-like. The term anthrobotics was originally coined by Mark Rosheim in a paper entitled "Design of An Omnidirectional Arm" presented at the IEEE International Conference on Robotics and Automation, May 13–18, 1990, pp. 2162–2167. Rosheim says he derived the term from "...Anthropomorphic and Robotics to distinguish the new generation of dexterous robots from its simple industrial robot forebears." The word gained wider recognition as a result of its use in the title of Rosheim's subsequent book Robot Evolution: The Development of Anthrobotics, which focussed on facsimiles of human physical and psychological skills and attributes. However, a wider definition of the term anthrobotics has been proposed, in which the meaning is derived from anthropology rather than anthropomorphic. This usage includes robots that respond to input in a human-like fashion, rather than simply mimicking human actions, thus theoretically being able to respond more flexibly or to adapt to unforeseen circumstances. This expanded definition also encompasses robots that are situated in social environments with the ability to respond to those environments appropriately, such as insect robots, robotic pets, and the like. Anthrobotics is now taught at some universities, encouraging students not only to design and build robots for environments beyond current industrial applications, but also to speculate on the future of robotics that are embedded in the world at large, as mobile phones and computers are today. In 2016, philosopher Luis de Miranda created the Anthrobotics Cluster at the University of Edinburgh, "a platform of cross-disciplinary research that seeks to investigate some of the biggest questions that will need to be answered" on the relationship between humans, robots and intelligent systems, and "a think tank on the social spread of robotics, and also how automation is part of the definition of what humans have always been", to explore the symbiotic relationship between humans and automated protocols. Anthropocentrism (from Ancient Greek: ἄνθρωπος, ánthrōpos, "human being"; and κέντρον, kéntron, "center") is the belief that considers human beings to be the most significant entity of the universe and interprets or regards the world in terms of human values and experiences. 
The term can be used interchangeably with humanocentrism, and some refer to the concept as human supremacy or human exceptionalism. Anthropocentrism is considered to be profoundly embedded in many modern human cultures and conscious acts. It is a major concept in the field of environmental ethics and environmental philosophy, where it is often considered to be the root cause of problems created by human action within the ecosphere.However, many proponents of anthropocentrism state that this is not necessarily the case: they argue that a sound long-term view acknowledges that a healthy, sustainable environment is necessary for humans and that the real issue is shallow anthropocentrism. Anthropological theories of value attempt to expand on the traditional theories of value used by economists or ethicists. They are often broader in scope than the theories of value of Adam Smith, David Ricardo, John Stuart Mill, Karl Marx, etc. usually including sociological, political, institutional, and historical perspectives (transdisciplinarity). Some have influenced feminist economics.The basic premise is that economic activities can only be fully understood in the context of the society that creates them. The concept of "value" is a social construct, and as such is defined by the culture using the concept. Yet we can gain some insights into modern patterns of exchange, value, and wealth by examining previous societies. An anthropological approach to economic processes allows us to critically examine the cultural biases inherent in the principles of modern economics. Anthropological linguistics is a related field that looks at the terms we use to describe economic relations and the ecologies they are set within. Many anthropological economists (or economic anthropologists) are reacting against what they see as the portrayal of modern society as an economic machine that merely produces and consumes.Marcel Mauss and Bronisław Malinowski for example wrote about objects that circulate in society without being consumed. Georges Bataille wrote about objects that are destroyed, but not consumed. Bruce Owens talks about objects of value that are neither circulating nor consumed (e.g. gold reserves, warehoused paintings, family heirlooms). An anthropologist is a person engaged in the practice of anthropology. Anthropology is the study of various aspects of humans within past and present societies. Social anthropology, cultural anthropology, and philosophical anthropology study the norms and values of societies. Linguistic anthropology studies how language affects social life, while economic anthropology studies human economic behavior. Biological (physical), forensic, and medical anthropology study the biological development of humans, the application of biological anthropology in a legal setting, and the study of diseases and their impacts on humans over time, respectively. The anthropology of development is a term applied to a body of anthropological work which views development from a critical perspective. The kind of issues addressed, and implications for the approach typically adopted can be gleaned from a list questions posed by Gow (1996). These questions involve anthropologists asking why, if a key development goal is to alleviate poverty, is poverty increasing? Why is there such a gap between plans and outcomes? Why are those working in development so willing to disregard history and the lessons it might offer? Why is development so externally driven rather than having an internal basis? 
In short why does so much planned development fail?This anthropology of development has been distinguished from development anthropology. Development anthropology refers to the application of anthropological perspectives to the multidisciplinary branch of development studies. It takes international development and international aid as primary objects. In this branch of anthropology, the term development refers to the social action made by different agents (institutions, business, enterprise, states, independent volunteers) who are trying to modify the economic, technical, political or/and social life of a given place in the world, especially in impoverished, formerly colonized regions.Development anthropologists share a commitment to simultaneously critique and contribute to projects and institutions that create and administer Western projects that seek to improve the economic well-being of the most marginalized, and to eliminate poverty. While some theorists distinguish between the 'anthropology of development' (in which development is the object of study) and development anthropology (as an applied practice), this distinction is increasingly thought of as obsolete. Anthropology of food is a sub-discipline of anthropology that connects an ethnographic and historical perspective with contemporary social issues in food production and consumption systems.Although early anthropological accounts often dealt with cooking and eating as part of ritual or daily life, food was rarely regarded as the central point of academic focus. This changed in the later half of the 20th century, when foundational work by Mary Douglas, Marvin Harris, Arjun Appadurai, Jack Goody, and Sidney Mintz cemented the study of food as a key insight into modern social life. Mintz is known as the "Father of food anthropology" for his work Sweetness and Power (1985), which linked British demand for sugar with the creation of empire and exploitative industrial labor conditions.Research has traced the material and symbolic importance of food, as well as how they intersect. Examples of ongoing themes are food as a form of differentiation, commensality, and food's role in industrialization and globalizing labor and commodity chains.Several related and interdisciplinary academic programs exist in the US and UK (listed under Food studies institutions). The anthropology of institutions is a sub-field in social anthropology dedicated to the study of institutions in different cultural contexts.The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit business, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government.The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals’ day to day lives, such as kinship, religion, and economics. 
Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems. The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed. In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did. Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging. In anthropology, anthropopoiesis is the self-building process of social man and of a whole culture, particularly with regard to modifications of the socialized body. The concept has found application mainly in contemporary French and Italian literature. According to the theoretical background supporting the idea, man is an unfinished being; or rather, his behaviour is not strongly predetermined by genetic heritage. Human beings become fully finished only by means of culture acquisition. Anthropopoiesis is both anthropogenesis (man "reborn" as a social creature) and the manufacturing of "mankind patterns and fictions". Therefore, social and cultural practices build up the man by means of ritual and institutional constraints. An example could be circumcision, a practice widely existing in many rites of passage amongst Islamic and Jewish believers and also amongst traditional cultures and communities. 
Similarly, Christians ascribe a clear meaning to the sacred garment and to the tonsure; they are convinced that some sacramental rites mark indelible dispositions. All of this affects the body and, through it, the perception of one's own identity and social status. Anthrozoology (also known as human–non-human-animal studies, or HAS) is the subset of ethnobiology that deals with interactions between humans and other animals. It is an interdisciplinary field that overlaps with other disciplines including anthropology, ethnology, medicine, psychology, veterinary medicine and zoology. A major focus of anthrozoologic research is the quantifying of the positive effects of human-animal relationships on either party and the study of their interactions. It includes scholars from fields such as anthropology, sociology, biology, history and philosophy. Anthrozoology scholars, such as Pauleen Bennett, recognize the lack of scholarly attention given to non-human animals in the past, and to the relationships between human and non-human animals, especially in the light of the magnitude of animal representations, symbols, stories and their actual physical presence in human societies. Rather than a unified approach, the field currently consists of several methods adapted from the several participating disciplines to encompass human-nonhuman animal relationships and occasional efforts to develop sui generis methods. Applied anthropology refers to the application of the method and theory of anthropology to the analysis and solution of practical problems. In Applied Anthropology: Domains of Application, Kedia and Van Willigen define the process as a "complex of related, research-based, instrumental methods which produce change or stability in specific cultural systems through the provision of data, initiation of direct action, and/or the formulation of policy". More simply, applied anthropology is the praxis-based side of anthropological research; it includes researcher involvement and activism within the participating community. Archaeology, or archeology, is the study of human activity through the recovery and analysis of material culture. The archaeological record consists of artifacts, architecture, biofacts or ecofacts, and cultural landscapes. Archaeology can be considered both a social science and a branch of the humanities. In North America, archaeology is considered a sub-field of anthropology, while in Europe archaeology is often viewed as either a discipline in its own right or a sub-field of other disciplines. Archaeologists study human prehistory and history, from the development of the first stone tools at Lomekwi in East Africa 3.3 million years ago up until recent decades. Archaeology as a field is distinct from the discipline of palaeontology, the study of fossil remains. Archaeology is particularly important for learning about prehistoric societies, for whom there may be no written records to study. Prehistory includes over 99% of the human past, from the Paleolithic until the advent of literacy in societies across the world. Archaeology has various goals, which range from understanding culture history to reconstructing past lifeways to documenting and explaining changes in human societies through time. The discipline involves surveying, excavation and eventually analysis of data collected to learn more about the past. In broad scope, archaeology relies on cross-disciplinary research. 
It draws upon anthropology, history, art history, classics, ethnology, geography, geology, literary history, linguistics, semiology, textual criticism, physics, information sciences, chemistry, statistics, paleoecology, paleography, paleontology, paleozoology, and paleobotany.Archaeology developed out of antiquarianism in Europe during the 19th century, and has since become a discipline practiced across the world. Archaeology has been used by nation-states to create particular visions of the past. Since its early development, various specific sub-disciplines of archaeology have developed, including maritime archaeology, feminist archaeology and archaeoastronomy, and numerous different scientific techniques have been developed to aid archaeological investigation. Nonetheless, today, archaeologists face many problems, such as dealing with pseudoarchaeology, the looting of artifacts, a lack of public interest, and opposition to the excavation of human remains. The Areni-1 shoe is a 5,500-year-old leather shoe that was found in 2008 in excellent condition in the Areni-1 cave located in the Vayots Dzor province of Armenia. It is a one-piece leather-hide shoe, the oldest piece of leather footwear in the world known to contemporary researchers. The discovery was made by an international team led by Boris Gasparyan, an archaeologist from the Institute of Archaeology and Ethnography of the National Academy of Sciences of Armenia (co-directors of the project are Ron Pinhasi from University College Cork in Ireland, and Gregory Areshian from UCLA). The Areni-1 winery is a 6100-year-old winery that was discovered in 2007 in the Areni-1 cave complex in the village of Areni in the Vayots Dzor province of the Republic of Armenia by a team of Armenian and Irish archaeologists. The excavations were carried out by Boris Gasparyan of the Institute of Archaeology and Ethnography of the National Academy of Sciences of Armenia and Ron Pinhasi from University College Cork (Ireland), and were sponsored by the Gfoeller Foundation (USA) and University College Cork. In 2008 the University of California, Los Angeles (UCLA) also joined the project. Since then the excavations have been sponsored by UCLA and the National Geographic Society as well. The excavations of the winery were completed in 2010.The winery consists of fermentation vats, a wine press, storage jars, pottery sherds, and is believed to be at least a thousand years older than the winery unearthed in Judea and Samaria in 1963, which is the second oldest currently known.The Areni-1 shoe was found in the same cave in 2008. Australian archaeology is a large sub-field in the discipline of archaeology. Archaeology in Australia takes three main forms, Aboriginal archaeology (the archaeology of Aboriginal and Torres Strait Islander people in Australia before and after European settlement), historical archaeology (the archaeology of Australia after European settlement) and maritime archaeology. Bridging these sub-disciplines is the important concept of cultural heritage management which encompasses Aboriginal and Torres Strait Islander sites, historical sites and maritime sites. The Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) is an independent Australian Government statutory authority. It is a collecting, publishing and research institute and is considered to be Australia's premier resource for information about the cultures and societies of Aboriginal and Torres Strait Islander peoples. 
The Institute is a leader in ethical research and the handling of culturally sensitive material and holds in its collections many unique and irreplaceable items of cultural, historical and spiritual significance. The collection at AIATSIS has been built through over 50 years of research and engagement with Aboriginal and Torres Strait Islander communities and is now a source of language and culture revitalisation, native title research and family and community history. AIATSIS is located on Acton Peninsula in Canberra, Australian Capital Territory. The ayllu is the traditional form of a community in the Andes, especially among Quechuas and Aymaras. They are an indigenous local government model across the Andes region of South America, particularly in Bolivia and Peru. Ayllus functioned prior to Inca conquest, during the Inca and Spanish colonial period, and continue to exist to the present day. How the ancient form and current organization correspond is unclear, since Spanish chronicles do not give a precise definition of the term.Ayllus had defined territories and were essentially extended family or kin groups, but they could include non-related members, giving individual families more variation and security of the land that they farmed. The male head of an ayllu is called a mallku which means, literally, condor, but is a title which can be roughly translated as "prince". They would often have their own wak'a, or minor god, usually embodied in a physical object such as a mountain or rock. "Ayullus were named for a particular person or place."Ayllu were self-sustaining units and would educate their own offspring and farm or trade for all the food they ate, except in cases of disaster such as El Niño years when they relied on the Inca storehouse system. Their primary function was to solve subsistence issues, and issues of how to get along in family, and larger, units.Each ayllu owned a parcel of land, and the members had reciprocal obligations to each other.In marriages, the woman would generally join the class and ayllu of her partner as would her children, but would inherit her land from her parents and retain her membership in her birth ayllu. This is how most movements of people between ayllu occurred. But a person could also join an ayllu by assuming the responsibility of membership. This included mink'a, communal work for common purposes, ayni, or work in kind for other members of the ayllu, and mit'a, a form of taxation levied by the Inca government.“Ayllu solidarity is a combination of kinship and territorial ties, as well as symbolism. (Albo 1972; Duviols 1974; Tshopik 1951; and Urioste 1975). These studies, however, do not explain how the ayllu is a corporate whole, which includes social principles, verticality, and metaphor... Ayllu also refers to people who live in the same territory (llahta) and who feed the earth shrines of that territory.”Ayllu is a word in both the Quechua and Aymara languages referring to a network of families in a given area, often with a putative or fictive common ancestor. Ayllus are distinguished by comparative self-sufficiency, commonly held territory, and relations of reciprocity. Members engage in shared collective labor (Quechua: minga) and in reciprocal exchanges of assistance (Quechua: ayni).In Bolivia, representatives from the ayllus are sent to the National Council of Ayllus and Markas of Qullasuyu (Conamaq). This body chooses an Apu Mallku as its head. 
The basic needs approach is one of the major approaches to the measurement of absolute poverty in developing countries. It attempts to define the absolute minimum resources necessary for long-term physical well-being, usually in terms of consumption goods. The poverty line is then defined as the amount of income required to satisfy those needs. The 'basic needs' approach was introduced by the International Labour Organization's World Employment Conference in 1976. "Perhaps the high point of the WEP was the World Employment Conference of 1976, which proposed the satisfaction of basic human needs as the overriding objective of national and international development policy. The basic needs approach to development was endorsed by governments and workers' and employers' organizations from all over the world. It influenced the programmes and policies of major multilateral and bilateral development agencies, and was the precursor to the human development approach." A traditional list of immediate "basic needs" is food (including water), shelter and clothing. Many modern lists emphasize the minimum level of consumption of 'basic needs' of not just food, water, clothing and shelter, but also sanitation, education, and healthcare. Different agencies use different lists. The basic needs approach has been described as consumption-oriented, giving the impression "that poverty elimination is all too easy." Amartya Sen focused on 'capabilities' rather than consumption. In the development discourse, the basic needs model focuses on the measurement of what is believed to be an eradicable level of poverty. Development programs following the basic needs approach do not invest in economically productive activities that will help a society carry its own weight in the future; rather, they focus on allowing the society to consume just enough to rise above the poverty line and meet its basic needs. These programs focus more on subsistence than fairness. Nevertheless, in terms of "measurement", the basic needs or absolute approach is important. The 1995 World Summit for Social Development in Copenhagen had, as one of its principal declarations, that all nations of the world should develop measures of both absolute and relative poverty and should gear national policies to "eradicate absolute poverty by a target date specified by each country in its national context." Behavioral modernity is a suite of behavioral and cognitive traits that distinguishes current Homo sapiens from other anatomically modern humans, hominins, and primates. Although often debated, most scholars agree that modern human behavior can be characterized by abstract thinking, planning depth, symbolic behavior (e.g., art, ornamentation, music), exploitation of large game, and blade technology, among others. Underlying these behaviors and technological innovations are cognitive and cultural foundations that have been documented experimentally and ethnographically. Some of these human universal patterns are cumulative cultural adaptation, social norms, language, and extensive help and cooperation beyond close kin. It has been argued that the development of these modern behavioral traits, in combination with the climatic conditions of the Last Glacial Maximum, was largely responsible for the human replacement of Neanderthals and the peopling of the rest of the world. Arising from differences in the archaeological record, a debate continues as to whether anatomically modern humans were behaviorally modern as well. 
There are many theories on the evolution of behavioral modernity. These generally fall into two camps: gradualist and cognitive approaches. The Later Upper Paleolithic Model refers to the idea that modern human behavior arose through cognitive, genetic changes abruptly around 40,000–50,000 years ago. Other models focus on how modern human behavior may have arisen through gradual steps; the archaeological signatures of such behavior only appearing through demographic or subsistence-based changes. Ota Benga (c. 1883 – March 20, 1916) was a Congolese man, a Mbuti pygmy known for being featured in an anthropology exhibit at the Louisiana Purchase Exposition in St. Louis, Missouri in 1904, and in a human zoo exhibit in 1906 at the Bronx Zoo. Benga had been purchased from African slave traders by the explorer Samuel Phillips Verner, a businessman hunting African people for the Exposition. He traveled with Verner to the United States. At the Bronx Zoo, Benga had free run of the grounds before and after he was exhibited in the zoo's Monkey House. Except for a brief visit with Verner to Africa after the close of the St. Louis Fair, Benga lived in the United States, mostly in Virginia, for the rest of his life.Displays of non-white humans as examples of "earlier stages" of human evolution were common in the early 20th century, when racial theories were frequently intertwined with concepts from evolutionary biology. African-American newspapers around the nation published editorials strongly opposing Benga's treatment. Dr. R. S. MacArthur, the spokesperson for a delegation of black churches, petitioned New York City Mayor George B. McClellan, Jr. for his release from the Bronx Zoo.The mayor released Benga to the custody of Reverend James M. Gordon, who supervised the Howard Colored Orphan Asylum in Brooklyn and made him a ward. That same year Gordon arranged for Benga to be cared for in Virginia, where he paid for him to acquire American clothes and to have his teeth capped, so the young man could be more readily accepted in local society. Benga was tutored in English and began to work. Several years later, the outbreak of World War I stopped ship passenger travel and prevented his returning to Africa. This, as well as the poor treatment he was subjected to for most of his life, caused Benga to fall into a depression. He committed suicide in 1916 at the age of 32. This bibliography of anthropology lists some notable publications in the field of anthropology, including its various subfields. It is not comprehensive and continues to be developed. It also includes a number of works that are not by anthropologists but are relevant to the field, such as literary theory, sociology, psychology, and philosophical anthropology.Anthropology is the study of humanity. Described as "the most humanistic of sciences and the most scientific of the humanities", it is considered to bridge the natural sciences, social sciences and humanities, and draws upon a wide range of related fields. In North America, anthropology is traditionally divided into four major subdisciplines: biological anthropology, sociocultural anthropology, linguistic anthropology and archaeology. Other academic traditions use less broad definitions, where one or more of these fields are considered separate, but related, disciplines. A big man is a highly influential individual in a tribe, especially in Melanesia and Polynesia. 
Such a person may not have formal tribal or other authority (through, for instance, material possessions or inheritance of rights), but can maintain recognition through skilled persuasion and wisdom. The big man has a large group of followers, both from his clan and from other clans. He provides his followers with protection and economic assistance, in return receiving support which he uses to increase his status. Biocultural diversity is defined by Luisa Maffi as "the diversity of life in all its manifestations: biological, cultural, and linguistic — which are interrelated (and possibly coevolved) within a complex socio-ecological adaptive system." "The diversity of life is made up not only of the diversity of plants and animal species, habitats and ecosystems found on the planet, but also of the diversity of human cultures and languages." Certain geographic areas have been positively correlated with high levels of biocultural diversity, including those of low latitudes, higher rainfalls, higher temperatures, coastlines, and high altitudes. A negative correlation is found with areas of high latitudes, plains, and drier climates. Positive correlations can also be found between biological diversity and linguistic diversity, illustrated in the overlap between the distribution of plant-diverse and language-diverse zones. Social factors, such as modes of subsistence, have also been found to affect biocultural diversity. Biological functionalism is an anthropological paradigm asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways, the paradigm derives from the longer-established structural functionalism, yet the two diverge from one another significantly. While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. Accordingly, biological functionalism maintains that individual survival and health are the driving motivations of action, and that the importance of social rigidity is negligible. Body culture studies describe and compare bodily practice in the larger context of culture and society, i.e. in the tradition of anthropology, history and sociology. As body culture studies analyse culture and society in terms of human bodily practices, they are sometimes viewed as a form of materialist phenomenology. The significance of the body and of body culture (in German Körperkultur, in Danish kropskultur) has been recognized since the early twentieth century by several historians and sociologists. During the 1980s, a particular school of Body Culture Studies spread, in connection with – and critically related to – sports studies. Body Culture Studies were especially established at Danish universities and academies and cooperated with Nordic, European and East Asian research networks. Body culture studies include studies of dance, play and games, outdoor activities, festivities and other forms of movement culture. 
The field of body culture studies extends towards studies of medical cultures, of working habits, of gender and sexual cultures, of fashion and body decoration, of popular festivity and, more generally, towards popular culture studies. Body Culture Studies have proven useful by bringing the study of sport into broader historical and sociological discussion – from the level of subjectivity to civil society, state and market. Bomboniere (Italian bomboniera, from the French bonbonnière, a box containing "bonbons" (candies)), also known as "favors", are gifts given by hosts to their guests on special occasions such as bar and bat mitzvahs, weddings, baptism, First Communion or Confirmation. They usually include Jordan almonds, known in Italian as confetti. Five sugared almonds symbolize health, wealth, happiness, fertility and long life. Confetti can be made in many forms using several different ingredients. Sugared almonds are put inside a bag made of tulle or satin and tied with ribbons in assorted colors. The colour of the sugared almonds is white for a wedding, First Communion or Confirmation; pink or light blue for the birthday or baptism of a baby girl or baby boy, respectively; red for a graduation; and silver or gold for 25th or 50th anniversaries. Often they are adorned with dried natural flowers or artificial flowers made of silk or paper. The bag is often presented inside a small vessel made of silver, crystal or porcelain. In Australia the word "bomboniere" is applied to any small gift or keepsake given by the hosts to guests at any function held to celebrate weddings, first holy communions and the like. Such gifts may take the form of a wine bottle stopper, glass vase or picture frame as well as the more traditional sugared almonds in decorative bags. A newer type of bomboniera is the favor cake, or "torta bomboniera" as it is called in Italy. It is made using small cardboard boxes that form one or more tiers of a "cake". Inside each box are the sugared almonds and a card printed with the details of the ceremony (names, date, etc.). Several types of fine objects, made of many materials, are glued onto each box. An evolution of the torta bomboniera is the "Pasticceria Artigianale in Porcellana" (porcelain artisan pastries). This type of Italian favor cake is made using porcelain boxes shaped and hand-decorated to look like real edible "bignè", or cream puffs. Inside each box are five sugared almonds and a card printed with the details of the ceremony; for this reason, Italians call this newer idea "Pasticceria Artigianale in Porcellana", that is, porcelain artisan pastries. Bride service has traditionally been portrayed in the anthropological literature as the service rendered by the bridegroom to a bride's family as a bride price or part of one (see dowry). Bride service and bride wealth models frame anthropological discussions of kinship in many regions of the world. Patterns of matrilocal post-marital residence, as well as the practice of temporary or prolonged bride service, have been widely reported for indigenous peoples of the Amazon basin. Among these people, bride service is frequently performed in conjunction with an interval of uxorilocal residence. The length of uxorilocal residence and the duration of bride service are contingent upon negotiations between the concerned parties, the outcome of which has been characterized as an enduring commitment or permanent debt. 
The power wielded by those who "give" wives over those who "take" them is also said to be a significant part of the political relationships in societies where bride service obligations are prevalent. Rather than seeing affinity in terms of a "compensation" model whereby individuals are exchanged as objects, Dean's (1995) research on Amazon bride service among the Urarina demonstrates how differentially situated subjects negotiate the politics of marriage. An example of bride service occurs in the Hebrew Bible, Genesis 29:16-29, when Jacob labored for Laban for fourteen years to marry Rachel. Originally the deal was seven years, but Laban tricked Jacob by giving him Leah on their wedding day, so Jacob had to work another seven years to obtain the girl he had originally fallen in love with, Rachel. In the late 1950s a number of European countries (most notably West Germany and France) decided on a migration policy known as the buffer theory. Owing to rapid economic recovery in the post-World War II period (aided by the American Marshall Plan), there were many more job vacancies than people who were available, or becoming available, in the workforce to fill them. To resolve this situation, they decided to "import" workers from the southern Mediterranean basin (including North Africa) in a temporary capacity to fill this labour shortfall. These workers were invitees of the governments and came to Europe initially on the understanding that they could at any point in the future be repatriated if and when economic circumstances changed. These Gastarbeiter, as they became known in Germany, were mainly young unskilled males who very often left their families behind in their country of origin and migrated alone as 'economic migrants'. They worked predominantly in certain areas of the economy where working conditions were poorer than those of indigenous Germans and where the rates of pay were considerably lower. Ultimately they came to predominate in low-paid, service-related employment. The situation remained unchanged until the 1970s economic recession. Jobs were being lost in manufacturing and industry in particular, but not necessarily in the occupational types in which the migrants worked. In 1974 the then West German government imposed a ban restricting any future economic migrants and offered many others the possibility of returning to their country of origin. Few migrants took up the offer; most stayed in their jobs or began to receive unemployment assistance from the state. This led to increased tensions and feelings of resentment among many German people. Calceology (from Latin calcei "shoes" and -λογία, -logiā, "-logy") is the study of footwear, especially historical footwear whether as archaeology, shoe fashion history, or otherwise. It is not yet formally recognized as a field of research. Calceology comprises the examination, registration, research and conservation of leather shoe fragments. A wider definition includes the general study of ancient footwear, its social and cultural history, technical aspects of pre-industrial shoemaking and associated leather trades, as well as reconstruction of archaeological footwear. Cantometrics ("song measurements") is a method developed by Alan Lomax and a team of researchers for relating elements of the world's traditional vocal music (or folk songs) to features of social organization as defined via George Murdock's Human Relations Area Files, resulting in a taxonomy of expressive human communication style. 
Lomax defined Cantometrics as the study of singing as normative expressive behavior and maintained that Cantometrics reveals folk performance style to be a "systems-maintaining framework" which models key patterns of co-action in everyday life. His work on Cantometrics gave rise to further comparative studies of aspects of human communication in relation to culture, including: Choreometrics, Parlametrics, Phonotactics (an analysis of vowel frequency in speech), and Minutage (a study of breath management). Instead of the traditional Western musicological descriptive criteria of pitch, rhythm, and harmony, Cantometrics employs 37 style factors developed by Lomax and his team in consultation with specialists in linguistics, otolaryngology and voice therapy. The vocal style factors were designed to be easily rated by observers on a five-point scale according to their presence or absence. They include, for example: group cohesion in singing; orchestral organization; tense or relaxed vocal quality; breathiness; short or long phrases; rasp (vocal grating, such as that associated, for example, with the singing of Louis Armstrong); presence and percentage of vocables versus meaningful words; and melisma (ornamentation), to name a few. In the early stages of his work on the Cantometrics coding system, Lomax wrote of the relationship of musical style to culture: "Its fundamental diagnostic traits appear to be vocal quality (color, timbre, normal pitch, attack, type of melodic ornamentation, etc.) and the degree in which song is normally monodic or polyphonic. The determinative socio-psychological factors seem to be . . . the type of social organization, the pattern of erotic life, and the treatment of children.... I myself believe that the voice quality is the root [diagnostic] element. From this socio-psychological complex there seem to arise a complex of habitual musical practices which we call musical style." A catchphrase (alternatively spelled catch phrase) is a phrase or expression recognized by its repeated utterance. Such phrases often originate in popular culture and in the arts, and typically spread through word of mouth and a variety of mass media (such as films, internet, literature and publishing, television and radio). Some become the de facto or literal "trademark" or "signature" of the person or character with whom they originated, and can be instrumental in the typecasting (beneficially or otherwise) of a particular actor. Central place foraging (CPF) theory is an evolutionary ecology model for analyzing how an organism can maximize foraging rates while traveling through a patch (a discrete resource concentration), but maintains the key distinction of a forager traveling from a home base to a distant foraging location rather than simply passing through an area or travelling at random. CPF was initially developed to explain how red-winged blackbirds might maximize energy returns when traveling to and from a nest. The model has been further refined and used by anthropologists studying human behavioral ecology and archaeology. Technology ("science of craft", from Greek τέχνη, techne, "art, skill, cunning of hand"; and -λογία, -logia) is the collection of techniques, skills, methods, and processes used in the production of goods or services or in the accomplishment of objectives, such as scientific investigation. 
Technology can be the knowledge of techniques, processes, and the like, or it can be embedded in machines to allow for operation without detailed knowledge of their workings.The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times, including the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale. The steady progress of military technology has brought weapons of ever-increasing destructive power, from clubs to nuclear weapons.Technology has many effects. It has helped develop more advanced economies (including today's global economy) and has allowed the rise of a leisure class. Many technological processes produce unwanted by-products known as pollution and deplete natural resources to the detriment of Earth's environment. Innovations have always influenced the values of a society and raised new questions of the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people; proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition. A behavior-shaping constraint, also sometimes referred to as a forcing function or poka-yoke, is a technique used in error-tolerant design to prevent the user from making common errors or mistakes. One example is the reverse lockout on the transmission of a moving automobile.The microwave provides another example of a forcing function. In all modern microwaves, it is impossible to start the microwave while the door is still open. Likewise, the microwave will shut off automatically if the door is opened by the user. By forcing the user to close the microwave door while it is in use, it becomes impossible for the user to err by leaving the door open. Forcing functions are very effective in safety critical situations such as this, but can cause confusion in more complex systems that do not inform the user of the error that has been made.When automobiles first started shipping with on-board GPS systems, it was not uncommon to use a forcing function which prevented the user from interacting with the GPS (such as entering in a destination) while the car was in motion. This ensures that the driver's attention is not distracted by the GPS. However, many drivers found this feature irksome, and the forcing function has largely been abandoned. This reinforces the idea that forcing functions are not always the best approach to shaping behavior.These forcing functions are being used in the service industry as well. Call centers concerned with credit card fraud and friendly fraud are using agent-assisted automation to prevent the agent from seeing or hearing the credit card information so that it cannot be stolen. 
The customer enters the information on their phone keypad; the tones are masked from the agent and are not visible in the customer relationship management software. Canopy is a digital health company, established in 2010 and headquartered in New York, NY. Its technologies focus on bridging the linguistic and cultural barrier between healthcare providers and their limited-English-proficiency or non-English-speaking patients. The language barrier undermines the quality of care for more than 27 million patients and creates enormous workflow and financial constraints for the health delivery organizations that serve them. With support from the National Institutes of Health (NIH), Canopy has developed a suite of products to tackle these challenges. Bill Tan is the founder of Canopy Innovations. Some adopters of its software include Duke University, Jefferson Health, Elmhurst Hospital, UCLA David Geffen School of Medicine, and the University of Arizona College of Medicine. Jerrit Tan is the CEO of Canopy Innovations. Code First: Girls is a not-for-profit social enterprise that works exclusively with women in Britain to develop coding skills. The organisation promotes gender diversity and female participation in the technology sector by offering free and paid training and courses for students and professional women. It also supports businesses in training staff and developing talent management policies. As of June 2016, Code First: Girls is reported to have provided in excess of £1.5 million worth of free coding courses to more than 1,500 women since 2013. Critical making refers to the hands-on productive activities that link digital technologies to society. The approach was conceived to bridge the gap between creative physical and conceptual exploration. The purpose of critical making resides in the learning extracted from the process of making rather than the experience derived from the finished output. The term "critical making" was popularized by Matt Ratto, an Associate Professor at the University of Toronto. Ratto describes one of the main goals of critical making as a way "to use material forms of engagement with technologies to supplement and extend critical reflection and, in doing so, to reconnect our lived experiences with technologies to social and conceptual critique." "Critical making", as defined by practitioners like Matt Ratto and Stephen Hockema, "is an elision of two typically disconnected modes of engagement in the world — "critical thinking," often considered as abstract, explicit, linguistically based, internal and cognitively individualistic; and "making," typically understood as tacit, embodied, external, and community-oriented." The term cultural lag refers to the notion that culture takes time to catch up with technological innovations, and that social problems and conflicts are caused by this lag. Cultural lag does not apply to this idea only; it also relates to theory and explanation, helping to identify and explain social problems and to predict future problems. As explained by James W. Woodward, when the material conditions change, changes are occasioned in the adaptive culture, but these changes in the adaptive culture do not synchronize exactly with the change in the material culture; this delay is the cultural lag. The term was coined by sociologist William F. Ogburn in his 1922 work Social Change with Respect to Culture and Original Nature.
His theory of cultural lag suggests that a period of maladjustment occurs when the non-material culture is struggling to adapt to new material conditions. This resonates with ideas of technological determinism, in that it can presuppose that technology has independent effects on society at large. However, it does not necessarily assign causality to technology; rather, cultural lag focuses examination on the period of adjustment to new technologies. According to Ogburn, cultural lag is a common societal phenomenon due to the tendency of material culture to evolve and change rapidly and voluminously while non-material culture tends to resist change and remain fixed for a far longer period of time. Due to the opposing nature of these two aspects of culture, adaptation of new technology becomes rather difficult. This distinction between material and non-material culture is also a contribution of Ogburn's 1922 work on social change. Cultural lag creates problems for a society in a multitude of ways. The issue of cultural lag tends to permeate any discussion in which the implementation of some new technology is a topic. For example, the advent of stem cell research has given rise to many new, potentially beneficial medical technologies; however, these new technologies have also raised serious ethical questions about the use of stem cells in medicine. Cultural lag is seen as a critical ethical issue because failure to develop broad social consensus on appropriate applications of modern technology may lead to breakdowns in social solidarity and the rise of social conflict. A Darwin machine (a 1987 coinage by William H. Calvin, by analogy to a Turing machine) is a machine that, like a Turing machine, involves an iteration process that yields a high-quality result; but whereas a Turing machine uses logic, the Darwin machine uses rounds of variation, selection, and inheritance. In its original connotation, a Darwin machine is any process that bootstraps quality by utilizing all six essential features of a Darwinian process: a pattern is copied with variations, populations of one variant pattern compete with another population, and their relative success is biased by a multifaceted environment (natural selection) so that winners predominate in producing the further variants of the next generation (Darwin's inheritance principle). More loosely, a Darwin machine is a process that utilizes some subset of the Darwinian essentials, typically natural selection, to create a non-reproducing pattern, as in neural Darwinism. Many aspects of neural development utilize overgrowth followed by pruning to a pattern, but the resulting pattern does not itself create further copies. Darwin machine has also been used multiple times to name computer programs after Charles Darwin. Design technology, or D.T., is the study, design, development, application, implementation, support and management of computer and non-computer based technologies for the express purpose of communicating product design intent and constructability. Design technology can be applied to the problems encountered in construction, operation and maintenance of a product. At times there is cross-over between D.T. and Information Technology: whereas I.T. is primarily focused on overall network infrastructure, hardware and software requirements, and implementation, D.T. is specifically focused on supporting, maintaining and training design and engineering applications and tools and on working closely with I.T.
to provide the necessary infrastructure for the most effective use of these applications and tools. Within the building design, construction and maintenance industry (also known as AEC/O/FM), the product is the building, and the role of D.T. is the effective application of technologies within all phases and aspects of the building process. D.T. processes have adopted Building Information Modeling (BIM) to speed construction, design and facilities management using technology. Thus, although D.T. encompasses BIM and Integrated Project Delivery (I.P.D.), it is more overarching in its directive and scope, and likewise looks for ways to leverage and more effectively utilize C.A.D. and Virtual Design & Construction (V.D.C.), as well as historical and legacy data and systems. D.T. is also applicable to industrial and product design and the manufacturing and fabrication processes therein. There are formal courses of study in some countries, known as design and technology, that focus on particular areas. In this case the above definition still remains valid if, for instance, one takes the subject textiles technology and replaces "product" in the above definition with "textile". Digital addict is used to refer to a person who compulsively uses digital technology, which would manifest as another form of addiction if that technology were not as easily accessible to them. Colloquially, it can be used to describe a person whose interaction with technology is verging on excessive, threatening to absorb their attention above all else and consequently having a negative impact on the well-being of the user. The primary theory is that digital technology users develop digital addiction through habitual use of, and reward from, computer applications. This reward triggers the reward center in the brain, which releases more dopamine, opioids, and other neurochemicals; over time this can produce a stimulation tolerance, or a need to increase stimulation to achieve a "high" and prevent withdrawal. Used as a conversational phrase, digital addict describes an increasingly common dependence on devices in the digital age. The phrase is used to highlight the possible danger in being overexposed to technology in an age where the scope for using digital technologies in everyday life is ever-increasing and the danger of becoming dependent upon them is a distinct possibility. Digital phobic is an informal phrase used to describe a reluctance to become fully immersed in the digital age out of fear of how it might negatively change or alter everyday life. The fast-paced development of the digital world in the twenty-first century has contributed to the digital divide becoming a very real problem for a segment of the population for whom a lack of education about, interest in, or access to digital devices has left them excluded from the technological world and fearful of its growing omnipresence. Digital phobic is part of a growing dictionary of digital vocabulary exploring the social impact of the technological age. The phrase considers the fears associated with technological evolution and change, and acknowledges the possibility of exclusion as a result of a rising reliance on technology in day-to-day life. One of the defining features of development today is the relationship between education and technology, stimulated by the spectacular growth in internet connectivity and mobile penetration. We live in a connected world. An estimated 40% of the world's population now uses the internet, and this number is growing at a remarkable rate.
While there are significant variations in internet connectivity among countries and regions, the number of households with such links in the global South has now overtaken those in the global North. Moreover, over 70% of mobile telephone subscriptions worldwide are now in the global South. Five billion people are expected to go from no connectivity to full connectivity within the next twenty years. However, there are still significant gaps among countries and regions, for example between urban and rural areas. Limited broadband speed and lack of connectivity hamper access to knowledge, participation in society and economic development. The internet has transformed how people access information and knowledge, how they interact, and the direction of public management and business. Digital connectivity holds promise for gains in health, education, communication, leisure and well-being. Artificial intelligence advances, 3D printers, holographic recreation, instant transcription, voice-recognition and gesture-recognition software are only some examples of what is being tested. Digital technologies are reshaping human activity from daily life to international relations, from work to leisure, redefining multiple aspects of our private and public life. Such technologies have expanded opportunities for freedom of expression and for social, civic and political mobilization, but they also raise important concerns. The availability of personal information in the cyber world, for example, brings up significant issues of privacy and security. New spaces for communication and socialization are transforming what constitutes the idea of 'social', and they require enforceable legal and other safeguards to prevent their overuse, abuse and misuse. Examples of such misuse of the internet, mobile technology and social media range from cyber-bullying to criminal activity, even to terrorism. In this new cyber world, educators need to better prepare new generations of 'digital natives' to deal with the ethical and social dimensions of not only existing digital technologies but also those yet to be invented. Enterprise coexistence refers to the means for users on different remote systems to communicate with each other seamlessly. This new technology is only just beginning to appear commercially. As a result, it is identified differently by different companies: Amazon WorkMail calls it "interoperability", Dell calls it coexistence, Cloudiway calls it both coexistence and enterprise coexistence, and BitTitan calls it enterprise coexistence, while Binary Tree has opted to market each aspect of enterprise coexistence as an individual product. Enterprise coexistence solutions usually include calendar free/busy scheduling as well as global address list management on each remote system. Additional features also exist, such as mail routing. The remote systems during enterprise coexistence do not need to be the same type, allowing interoperability between considerably different remote systems, such as G Suite and Exchange, whether cloud-based or on-premises. Each remote system communicates through a dedicated coexistence server, which acts as an interpreter between the two, passing information between systems in a format they recognize and accept. Each system then passes the details back to the source of the request. Enterprise coexistence is primarily used by businesses during mergers, acquisitions and specific collaborations. Some businesses opt to use coexistence long-term.
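As an illustration of the interpreter role described above, the following is a minimal, hypothetical sketch (not based on any vendor's actual product or API) of a coexistence connector translating a calendar free/busy answer between two differently formatted systems; the system names, field names and formats are invented for the example.

from datetime import datetime

# Hypothetical payload formats for two remote calendar systems:
# "SystemA" reports busy blocks as ISO-8601 string pairs under "busy_blocks";
# "SystemB" expects epoch-second pairs under "unavailable".

def system_a_free_busy(user: str) -> dict:
    # Stand-in for a real query to the first remote system.
    return {
        "user": user,
        "busy_blocks": [("2017-03-01T09:00:00", "2017-03-01T10:30:00")],
    }

def translate_a_to_b(payload: dict) -> dict:
    """The coexistence server's job: re-express SystemA's answer in SystemB's format."""
    def to_epoch(stamp: str) -> int:
        return int(datetime.fromisoformat(stamp).timestamp())

    return {
        "account": payload["user"],
        "unavailable": [
            {"start": to_epoch(start), "end": to_epoch(end)}
            for start, end in payload["busy_blocks"]
        ],
    }

if __name__ == "__main__":
    # SystemB asks the coexistence server whether alice@example.com is free;
    # the server queries SystemA and hands back an answer SystemB understands.
    answer_for_b = translate_a_to_b(system_a_free_busy("alice@example.com"))
    print(answer_for_b)

In a real deployment the same translation role would also cover the reverse direction and other data, such as global address list entries, as the entry above notes.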
It is also gaining popularity as a solution when moving business software tools from on-premises software to the cloud (such as G Suite and Office 365), a shift expected to increase in 2017 and beyond. Harley Frankfurt (born in 1969) is a pioneer and visionary in the energy industry who has made significant advances in gas compression efficiency and renewable energy. He has a passion for I/O communications, now known as the Internet of Things (IoT), and holds a patent in advanced utility control systems. Harley led advances in offshore gas compression with GE, currently referred to as Floating Liquefied Natural Gas (FLNG), after the acquisition of Nuovo Pignone in Florence, Italy. In 2015, Harley Frankfurt was instrumental in developing a safe and energy-efficient design proposal for TransCanada's "Energy East" pipeline project, currently being reviewed by the National Energy Board (NEB). His work included advanced pipeline control systems designed and developed in Houston, Texas, for both pipelines, including the Keystone/KXL pipeline currently pending approvals from the Senate and House of Representatives in the United States, work that, if approved, will advance America's energy security. In 2013 he was the GE/Downer Consortium executive and acting construction director for the Boco Rock wind farm, valued at $361M and owned by EGCO, an international role managing financial and equipment supply stakeholders in Thailand, the USA, China, Germany and Australia. He worked with local politicians, councils and communities in the towns of Cooma, NSW, and Nimmitabel, NSW, and surrounding regions to ensure a long-term sustainable renewable energy industry in the area. Following the success of Snowtown II, Harley Frankfurt, together with Meridian Energy, led the startup and execution of the Mt Mercer wind farm in Victoria. In January 2011, he held the role of Executive Renewables Marketing for Siemens Australia in the APAC region. He represented a multibillion-dollar portfolio in the wind, PV solar, solar thermal CSP, biopower, energy storage and hybrid industry energy market. Projects secured include the Snowtown 2 wind farm, valued at $700M, and the Mill Creek wind farm, valued at $169M. In 2009, together with First Solar, he delivered the world's largest PV solar farm, located in Sarnia, Ontario. The Sarnia solar farm, 80 MW (97 MWp) and consisting of 1,300,000 panels, was completed in a record seven months, developing some firsts in applied manufacturing in an outdoor environment and delivering the project in the third quarter, ahead of shareholder expectations. He was development director of Power Systems Inc, an energy industry solutions provider and EPC based in Calgary, Alberta, Canada; he was with the company from 2003 until he managed its acquisition in 2009. He was a 2009 Project Management Institute nominee in two categories, the Distinguished Contribution by an Individual award and Project Management Excellence - Individual - Engineering and Construction (award finalist). He represented the $1.3B StatOil SAGD (Steam Assisted Gravity Drainage) Project in Leismer, Alberta. He has been acknowledged by Lawrence MacAulay, former Solicitor General of Canada, Pat Binns, former premier of Prince Edward Island, and Ralph Klein, former premier of Alberta, for his many contributions in promoting projects and business. He is a longstanding member of the Association of Science and Engineering Technology Professionals of Alberta and the IEEE Professional Association.
French Tech is an accreditation awarded to French cities recognized for their startup ecosystem. It is also a name used by technologically innovative French businesses throughout the world. Convinced of the need to promote the emergence of successful start-ups in France to generate economic value and jobs, the French Government created the French Tech Initiative at the end of 2013. Its philosophy: build on the initiatives of French Tech members themselves, highlight what already exists, and create a snowball effect. It is a shared ambition, propelled by the State but carried and built with all the actors of the French tech company and start-up scene. The French Tech initiative also has a cross-cutting objective: to enhance the coherence of public actions in favor of startups. It does not create a new organization or a new public tool, but is carried by a small team, Mission French Tech, which works closely with the French Ministry of Economy and Finance, the Ministry of Foreign Affairs and the General Commissariat for Investment. Its partners, the pillars of the initiative, are national operators who, under the common banner "French Tech", coordinate their actions in favor of startups: Caisse des Dépôts, Bpifrance and Business France. Funding from the French Tech Initiative for accelerators (€200 million) and international attractiveness (€15 million) is part of the Investments for the Future program. In this context, the operator is Caisse des Dépôts, which relies on Bpifrance for investment in accelerators and on Business France for international investments. French Tech aims to provide a strong common visual identity to French startups as well as to promote entrepreneurial exchanges between them. A government spin-off is a civilian good that is the collateral result of military or governmental research. One prominent example of a type of government spin-off is technology that has been commercialized through NASA funding, research, licensing, facilities, or assistance. NASA spin-off technologies have been produced for over forty years. The Internet is a specific example of a government spin-off resulting from DARPA funding, but in the years since, the Moore's-law-driven commercial sector, working on yearly timeframes, has jumped ahead of the defense sector's decade-long timeframes, to the extent that new "spin-on" commercial off-the-shelf products are now applied to defense platforms. HoloBuilder Inc., founded in 2016, helps builders and engineers create immersive progress views of construction sites. The company is a German-American construction technology startup, developed both in San Francisco, California, and Aachen, Germany. It offers tools to create and share 360° views of construction sites and buildings. These digitized sites help users manage their real-world counterparts from anywhere. In 2016, HoloBuilder was used in 190 countries around the world, where more than 15,000 projects were created and viewed almost 800,000 times. The cloud-based software at www.HoloBuilder.com provides cloud and mobile software for virtual-reality capture of construction sites. HoloBuilder for construction covers the whole development process, from the planning phase through maintenance management. Information and media literacy (IML) is the combined study of information literacy and media literacy. Prior to the 1990s, the primary focus of information literacy was research skills.
Media literacy, a field that emerged around the 1970s, traditionally focuses on the analysis and the delivery of information through various forms of media. Nowadays, the study of information literacy has been extended to include the study of media literacy in many countries, such as the UK, Australia and New Zealand. The term Information and Media Literacy is used by UNESCO to differentiate the combined study from the existing study of information literacy. It is also referred to as Information and Communication Technologies (ICT) in the United States. Educators, such as Gregory Ulmer, have also defined the field as electracy. IML is a combination of information literacy and media literacy. The purpose of being information and media literate is to engage in a digital society; this requires the ability to use, understand, inquire, create, communicate and think critically. It is important to have the capacity to effectively access, organize, analyze, evaluate, and create messages in a variety of forms. The transformative nature of IML includes creative works and creating new knowledge; to publish and collaborate responsibly requires ethical, cultural and social understanding. Intelligent environments (IE) are spaces with embedded systems and information and communication technologies creating interactive spaces that bring computation into the physical world and enhance occupants' experiences. "Intelligent environments are spaces in which computation is seamlessly used to enhance ordinary activity. One of the driving forces behind the emerging interest in highly interactive environments is to make computers not only genuine user-friendly but also essentially invisible to the user." IEs describe physical environments in which information and communication technologies and sensor systems disappear as they become embedded into physical objects, infrastructures, and the surroundings in which we live, travel, and work. The goal here is to allow computers to take part in activities in which they were never previously involved, and to allow people to interact with computers via gesture, voice, movement, and context. The annual IEEE conferences on intelligent environments present current trends and applications. In archeology, lithic technology refers to a broad array of techniques and styles used to produce usable tools from various types of stone. The earliest stone tools were recovered from modern Ethiopia and were dated to between two million and three million years old. The archaeological record of lithic technology is divided into three major time periods: the Paleolithic (Old Stone Age), Mesolithic (Middle Stone Age), and Neolithic (New Stone Age). Not all cultures in all parts of the world exhibit the same pattern of lithic technological development, and stone tool technology continues to be used to this day, but these three time periods represent the span of the archaeological record when lithic technology was paramount. By analysing modern stone tool usage within an ethnoarchaeological context, insight may be gained into the breadth of factors influencing lithic technologies in general (see stone tool). For example, for the Gamo of Southern Ethiopia, political, environmental, and social factors influence the patterns of technology variation in different subgroups of the Gamo culture; through understanding the relationship between these different factors in a modern context, archaeologists can better understand the ways that these factors could have shaped the technological variation that is present in the archaeological record.
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety. Mindfulness and technology is a movement in research and design that encourages the user to become aware of the present moment, rather than losing oneself in a technological device. This field encompasses multidisciplinary participation between design, psychology, computer science, and religion. Mindfulness stems from Buddhist meditation practices and refers to the awareness that arises through paying attention, on purpose, in the present moment, and non-judgmentally. In the field of Human-Computer Interaction, research is being done on techno-spirituality (the study of how technology can facilitate feelings of awe, wonder, transcendence, and mindfulness) and on slow design, which facilitates self-reflection. The excessive use of personal devices, such as smartphones and laptops, can lead to the deterioration of mental and physical health. This area focuses on redesigning and creating technology to improve the wellbeing of its users. Music instrument technology refers to the construction of instruments and the way they have changed over time. Such change has produced modern instruments that are considerably different from their historical antecedents. An example is the way in which many instruments commonly associated with a modern symphony orchestra are markedly different from the same instruments for which European composers were composing music centuries ago. Such changes include the addition of piston valves to brass instruments, the design of more complex fingering systems for woodwind instruments such as the flute, and the standardization of the family of orchestral string instruments. Many advancements were made in music instrument technology during the Middle Ages and the 19th century. The introduction of copper smelting allowed trumpets, organ pipes, and slides to be constructed from sheet metal with a smooth texture and consistent thickness, allowing for a greater range of tones and sounds. Improvements in molding and casting during the 19th century allowed for technological advances in pianos. While originally constructed with wooden frames, which limited the amount of sound that could be produced, pianos began to be constructed with one-piece iron frames. This gave the instrument a more amplified volume and allowed musicians to use less force when playing. Improvements in drum tuning were also established at this time. The "Dresden" model of tuning, involving steel technology and employing a foot pedal with a ratchet to attach the device to the timpani, was invented by Carl Pittrich. This technology allowed timpani to be tuned much faster by the musician.
The Dresden tuners could also be added onto existing timpani, allowing symphonies to continue using their existing instruments while still employing this new technology. Lastly, the 19th century also led to the development of valves which, when added to the construction of trumpets and horns, allowed the instruments to express a broader range of the harmonic series of notes being produced. Some of this technology represents patentable advancements in the musical instrument industry. See Musical Instrument Patent of Week. The Office National d'Etudes et de Recherches Aérospatiales (ONERA) is the French national aerospace research centre. It is a public establishment with industrial and commercial operations, and carries out application-oriented research to support enhanced innovation and competitiveness in the aerospace and defense sectors. ONERA was created in 1946 as the "Office National d'Études et de Recherches Aéronautiques". Since 1963, its official name has been "Office National d'Études et de Recherches Aérospatiales". However, in January 2007, ONERA was dubbed "The French Aerospace Lab" to improve its international visibility. OpenFlint is an open technology used for displaying ("casting") content from one computerized device on the display of another. Usually this would be from a smaller personal device (like a smartphone) to a device with a larger screen suitable for viewing by multiple spectators (like a TV). Development of OpenFlint was initiated in 2014 by the Matchstick project, a crowd-funded effort to create a miniature piece of hardware suitable for running an OpenFlint server casting to a screen through an HDMI connection. This is similar in concept to Google's Chromecast device, which uses Google Cast. The Matchstick TV devices are powered by Firefox OS, but as an open technology OpenFlint itself is not tied to any specific operating system or hardware. As of July 2015, no consumer-grade OpenFlint-enabled products had shipped, but Matchstick developer devices had been shipping since late 2014; the first round of devices for backers of the Matchstick Kickstarter project was expected to ship in February 2015, but was delayed until August 2015. A demonstration OpenFlint server can be set up on an ordinary laptop or desktop computer running Linux by following instructions. The Matchstick TV dongle project was canceled due to issues implementing DRM in Firefox OS. Orphaned technology is a descriptive term for computer products, programs, and platforms that have been abandoned by their original developers. Orphaned technology refers to software, such as abandonware and antique software, but also to computer hardware and practices.
In computer software standards and documentation, deprecation is the gradual phasing-out of a software or programming language feature, while orphaning usually connotes a sudden discontinuation, usually for business-related reasons, of a product with an active user base. For users of technologies that have been withdrawn from the market, there is a choice between maintaining their software support environments in some form of emulation or switching to other supported products, possibly losing capabilities unique to their original solution. Some well-known examples of orphaned technology include: the Coleco ADAM (8-bit home computer); the TI-99/4A (16-bit home computer); the Mattel Aquarius; the Apple Lisa (16/32-bit graphical computer); the Newton PDA (Apple Newton, a tablet computer); the DEC Alpha (64-bit microprocessor); HyperCard (hypermedia); ICAD (knowledge-based engineering, KBE); Javelin Software (modeling and data analysis); LISP machines (LISP-oriented computers); the classic Mac OS (m68k and PowerPC operating system); Microsoft Bob (graphical helper); OpenDoc (compound documents for Mac OS and OS/2); and Prograph (visual programming system). Symbolics Inc's operating systems, Genera and OpenGenera, were twice orphaned, as they were ported from LISP machines to computers using the Alpha 64-bit CPU. User groups often exist for specific orphaned technologies, such as The Hong Kong Newton User Group, the Symbolics Lisp [Machines] Users' Group (now known as the Association of Lisp Users), and Newton Reference. Playbrush is an oral care invention for children that is designed to encourage kids to brush their teeth regularly and properly. The Playbrush device – developed by an Austro-British company of the same name – is a dongle that fits onto the end of any manual toothbrush and connects to a smartphone or tablet via Bluetooth Low Energy. The movement of the brush is picked up by motion sensors in the dongle, which then feed it to the phone to control different games that encourage brushers to spend a full two minutes cleaning their teeth. Privacy-Enhancing Technologies (PET) is the standardized term for specific methods that act in accordance with the laws of data protection; PETs allow online users to protect the privacy of their personally identifiable information (PII) provided to and handled by services or applications. Privacy-enhancing technologies can also be defined as follows: "Privacy-Enhancing Technologies is a system of ICT measures protecting informational privacy by eliminating or minimising personal data, thereby preventing unnecessary or unwanted processing of personal data, without the loss of the functionality of the information system" (van Blarkom, Borking & Olk 2003). A minimal illustrative sketch of such data minimisation is given below. A product teardown, or simply teardown, is the act of disassembling a product in order to identify its component parts, chip and system functionality, and component costing information. For products having 'secret' technology, such as the Mikoyan-Gurevich MiG-25, the process may be secret. For others, including consumer electronics, the results are typically disseminated through photographs and component lists so that others can make use of the information without having to disassemble the product themselves.
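The following is a minimal, hypothetical sketch of the data-minimisation idea in the PET definition above: identifying fields are stripped or replaced with a salted pseudonym before a record is passed on for processing, so the processing still works while unnecessary personal data is never handed over. The field names and the salting scheme are invented for illustration and are not taken from any particular PET product.

import hashlib

def pseudonymise(value: str, salt: str) -> str:
    """Replace an identifying value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def minimise_record(record: dict, salt: str) -> dict:
    """Keep only what the downstream service needs; pseudonymise the identifier."""
    return {
        "user_token": pseudonymise(record["email"], salt),  # stable pseudonym
        "year_of_birth": record["birth_date"][:4],          # coarsened, not the full date
        "preferences": record["preferences"],               # needed for the service
        # name, email and full birth date are deliberately not passed on
    }

if __name__ == "__main__":
    raw = {
        "name": "Alice Example",
        "email": "alice@example.com",
        "birth_date": "1984-07-02",
        "preferences": {"newsletter": False},
    }
    print(minimise_record(raw, salt="local-secret-salt"))

The point of the sketch is only to show the principle stated in the quoted definition: personal data is eliminated or minimised while the information system keeps its functionality.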
This information is important to designers of semiconductors, displays and batteries, to packaging companies, integrated design firms and semiconductor fabs, and to the systems they operate within. This information can be of interest to hobbyists, but can also be used commercially by the technical community to find out, for example, what semiconductor components are being utilized in consumer electronic products, such as the Wii video game console or Apple's iPhone. Such knowledge can aid understanding of how the product works, including innovative design features, and can facilitate estimating the bill of materials (BOM). The financial community therefore has an interest in teardowns, as knowing how a company's products are built can help guide a stock valuation. Manufacturers are often not allowed to announce what components are present in a product due to non-disclosure agreements (NDAs). Teardowns also play a part in providing evidence of use in court and litigation proceedings where a company's parts may have been used without its permission or counterfeited, or in showing where intellectual property or patents might be infringed by another firm's part or system. Identifying semiconductor components in systems has become more difficult over the past years. The most notable change started with Apple's 8GB iPod nano, in which components were repackaged with Apple branding. This makes it more difficult to identify the actual device manufacturer and function of a component without performing a 'decap' – removing the outer packaging to analyze the die within. Typically there are markings on the die inside the package that can lead experienced engineers to see who actually created the device and what functionality it performs in the system. Teardowns have also been performed in front of a live studio audience at the Embedded Systems Conference (ESC). The first live teardown was performed on a Toyota Prius at the Embedded Systems Conference in San Jose in April 2006. Since that time, additional live teardowns have been performed, the most recent being the Sony OLED TV, Gibson Self-Tuning Guitar, SuitSat space suit, and Sony Rolly MP3 player. Major companies that publicize their teardowns include Portelligent and Semiconductor Insights, both of which write featured articles in EETimes and TechOnline on their findings. The two companies were merged to form TechInsights, headquartered in Canada. ABI Research also provides teardowns for all major mobile devices and components in its Device Portal. Other websites offer user-contributed teardown information at no cost. Renaissance technology is the set of European artifacts and inventions which span the Renaissance period, roughly the 14th century through the 16th century. The era is marked by profound technical advancements such as the printing press, linear perspective in drawing, patent law, double-shell domes and bastion fortresses. Sketchbooks from artisans of the period (Taccola and Leonardo da Vinci, for example) give a deep insight into the mechanical technology then known and applied. Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement. Robotic non-destructive testing (NDT) is a method of inspection used to assess the structural integrity of petroleum, natural gas, and water installations.
Crawler-based robotic tools are commonly used for in-line inspection (ILI) applications in pipelines that cannot be inspected using traditional intelligent pigging tools (so-called unpiggable pipelines). Robotic NDT tools can also be used for mandatory inspections in inhospitable areas (e.g., tank interiors, subsea petroleum installations) to minimize danger to human inspectors, as these tools are operated remotely by a trained technician or NDT analyst. These systems transmit data and commands via either a wire (typically called an umbilical cable or tether) or wirelessly (in the case of battery-powered tetherless crawlers). STEM (science, technology, engineering and mathematics; previously METS) is a collective term for these four academic disciplines. The term is typically used when addressing education policy and curriculum choices in schools, with the aim of improving competitiveness in science and technology development. It has implications for workforce development, national security concerns and immigration policy. Education systems and schools play a central role in determining girls' and boys' interest in STEM subjects and in providing equal opportunities to access and benefit from quality STEM education. The acronym arose in common use shortly after an interagency meeting on science education held at the US National Science Foundation, chaired by the then NSF director Rita Colwell. A director from the Office of Science division of Workforce Development for Teachers and Scientists, Peter Faletra, suggested the change from the older acronym METS to STEM. Colwell, expressing some dislike for the older acronym, responded by suggesting that NSF institute the change. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was funded in 1998. Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, beliefs, and habits. Educational methods include storytelling, discussion, teaching, training, and directed research. Education frequently takes place under the guidance of educators, but learners may also educate themselves. Education can take place in formal or informal settings, and any experience that has a formative effect on the way one thinks, feels, or acts may be considered educational. The methodology of teaching is called pedagogy. Education is commonly divided formally into such stages as preschool or kindergarten, primary school, secondary school and then college, university, or apprenticeship. A right to education has been recognized by some governments and the United Nations. In most regions, education is compulsory up to a certain age. The Essex Court Chambers-Singapore Academy of Law Moot is an international moot competition for young practicing lawyers organised by Essex Court Chambers and the Singapore Academy of Law. Whereas most moot competitions are for law students only, the ECC-SAL Moot is open only to lawyers who have been qualified to practise for no more than three years. The first edition of the moot was held in 2012, and the competition has been open to lawyers from Australia, Brunei, Hong Kong, India, Malaysia, New Zealand, Pakistan, Singapore, and South Korea. The moot is mainly judged by leading lawyers (including Queen's Counsel and Senior Counsel), academics, and judges in the region, and typically involves a commercial dispute between parties before the Singapore International Commercial Court.
101 is a topic for beginners in any area. It covers the basic principles and concepts that are expected in a particular field. In American university course numbering systems, the number 101 is often used for an introductory course at a beginner's level in a department's subject area. This common numbering system was designed to make transfer between colleges easier. In theory, any numbered course in one academic institution should bring a student to the same standard as a similarly numbered course at other institutions. Academic achievement or (academic) performance is the extent to which a student, teacher or institution has achieved their short- or long-term educational goals. Cumulative GPA and completion of educational degrees, such as a high school diploma or a bachelor's degree, represent academic achievement. Academic achievement is commonly measured through examinations or continuous assessments, but there is no general agreement on how it is best evaluated or which aspects are most important: procedural knowledge such as skills, or declarative knowledge such as facts. Furthermore, there are inconclusive results over which individual factors successfully predict academic performance; elements such as test anxiety, environment, motivation, and emotions require consideration when developing models of school achievement. In California, the achievement of schools is measured by the Academic Performance Index. The importance of affect in education has become a topic of increasing interest in the fields of psychology and education. It is a commonly held opinion that curriculum and emotional literacy should be interwoven. Examples of such curricula include using English language study to increase emotional vocabulary, and writing about the self and history to discuss emotion in major events such as genocide. This type of curriculum is also known as therapeutic education. According to Ecclestone and Hayes, therapeutic education focuses on the emotional over the intellectual. An alternative break is a trip where a group of college students (usually 10–12 per trip) engage in volunteer service, typically for a week. Alternative break trips originated with college students in the early 1980s as a counter to "traditional" spring break trips. These trips are usually led by two "site leaders", students who have already participated in an alternative break and have gone through extensive leadership training. Alternative breaks may occur during students' fall, winter, weekend, or summer school breaks. Each trip has a focus on a particular social issue, such as poverty, education reform, refugee resettlement, the environment, healthcare reform, mental health, immigration, animal care, and more. Students learn about the social issues and then perform week-long projects with local non-profit organizations. Thus, students have the opportunity to connect and collaborate with different community partners. Alternative breaks are also drug- and alcohol-free experiences, with a heavy emphasis on group and individual reflection. On site, students provide necessary services and explore the culture and the history of the area. Students who participate in this program cultivate social responsibility, leadership, and life-long learning, thereby fostering a generation of leaders committed to positive social change. Alternative breaks challenge students to think critically about and react to problems faced by members of the communities they are involved with.
Being immersed in diverse environments enables participants to experience, discuss, and understand social issues in a significant way. The intensity of the experience increases the likelihood that participants will transfer their on-site experience back to their own communities even after the alternative break ends. The aim of the experience is to contribute volunteer hours to communities in need and to positively influence the life of the alternative breaker. Breakers are emboldened to take educated steps toward valuing and prioritizing their own communities in life choices such as recycling, donating resources, voting, and so on. Many breakers have returned to their college campuses to create a campus organization related to the social issue, deepen their understanding of and commitment to an academic path, execute a fundraiser for the non-profit organization they worked with, organize a letter-writing campaign to members of Congress, volunteer in their local community, or commit to an internship or career within the non-profit sector. Anarchism has had a special interest in the issue of education from the works of William Godwin and Max Stirner onwards. A wide diversity of issues related to education have gained the attention of anarchist theorists and activists. They have included the role of education in social control and socialization, the rights and liberties of youth and children within educational contexts, the inequalities encouraged by current educational systems, the influence of state and religious ideologies in the education of people, the division between social and manual work and its relationship with education, sex education and art education. Various alternatives to contemporary mainstream educational systems and their problems have been proposed by anarchists, ranging from alternative education systems and environments and self-education to advocacy of youth and children's rights and freethought activism. Appreciative Inquiry (AI) is an approach based on the belief that improvement is more engaging when the focus is on strengths rather than weaknesses. People tend to respond well to positive statements but react against negative statements that concern them. Children are especially sensitive about their self-worth and thrive on what makes them feel good, accepted, included and recognized. AI is a powerful tool that can be used in the field of education to enable children to discover what is good about them and to dream of what they can do with this realization. Children today are very sensitive and make decisions in haste, which sometimes costs them their lives; in such situations, AI plays a very vital role. Appreciative Inquiry is the cooperative search for the best in people, their organizations, and the world around them. It involves systematic discovery of what gives a system 'life' when it is most effective and capable in economic, ecological, and human terms. AI involves the art and practice of asking questions that strengthen a system's capacity to heighten positive potential. It mobilizes inquiry through crafting an "unconditional positive question", often involving hundreds or sometimes thousands of people. Applied in the education sector, AI is a cooperative search for the best in children, their school, their teachers, their classmates and their parents, and this discovery influences and helps shape their image of the future.
It all begins with a story which the appreciative inquirer tells about him/herself, and this story is only about where the child has experienced the best of what he or she could do, e.g. in reading, writing, or passing tests and exams. With this flow of energy from past experience, the child is poised for a similar experience in the future and so nurtures all that gives energy and brings the joy of performance, acceptance and readiness to move ahead. AI starts with a statement of purpose or object of inquiry, which then takes the inquirer through five steps known as the 5Ds of AI. Appreciative Inquiry in the education sector can amplify the motivation of students and help them become most alive and effective. AI brings about social change in the pupil, as the emphasis is on what is good and on the belief that people nurture what they appreciate rather than what they are not happy about. The system of education can be based on the five principles of AI, which will enable the child to discover, through her/his own story, what is good about him/her and to dream of how he/she can capitalize on this story of goodness to do more of the things that he/she appreciates about himself/herself, about his/her environment and about his/her world. A quick look at the principles will enable an understanding of why AI is suitable for our education system: i. The Constructionist Principle argues that the language and metaphors we use don't just describe reality (the world); they actually create 'our' reality (the world). This means that great care should be taken in the choice of words we use, as they will influence the kind of future we create. The language of the teacher influences what the child considers as his/her reality, and this influences his/her self-perception and hence his/her self-worth, which is very important for what he/she becomes in future. ii. The Principle of Simultaneity holds that change begins from the moment we ask a question about a thing. The heart of AI is an unconditional positive question, for example: what was the best thing that happened to you in the last week? iii. The Poetic Principle: as the topic of inquiry is what is good about the individual or the environment, this helps open a new chapter in the life of the child. Stories reveal qualities which had not been previously realized and appreciated. iv. The Anticipatory Principle: we grow into the images we create; hence, when the child is made to see himself as good, his imagination about his future will always be good enough, and like a magnet this imagined future will always pull the child towards this goal. v. The Positive Principle: feelings of hope, inspiration, caring, sense of purpose, joy and creating something meaningful or being part of something good are among what we define as positive. It is therefore important that the questions asked of the child are affirmative and positive. AI allows a student to be potentially free from any kind of bondage or control. AI gives students an opportunity to showcase their innovative side rather than just rote memorization. This in turn makes them autonomous learners. The students are able to understand their strengths every time their potential is amplified. Use of this approach in the educational sector would make a world of difference, as there would be more room for amplifying the existing positive energy.
Even the basic assumptions of AI, which include the assumption that 'in every human situation, there is something that works', are a clear indication that no child is incapable of producing a result that would even surprise the child him/herself. All that the child needs are questions that would enable him/her to tap into the core of his/her being. A system of education which relies on average test or examination grades labels children who do not meet the marks as 'failed'. AI in education enables the child to identify the subjects where he/she is very satisfied with the performance, and, through story, the child discovers what he/she did differently and how to tap this aspect for more satisfaction. This is why AI is also referred to as 'locating the energy for change'. It is a search for what is good through stories and for what needs to be done through dreams. AI brings a dream to reality because motivation for the future depends on images of past success. One of the strands of educational reform movements in the last two decades has been the call for greater collaborative efforts, both among educators and with parents, students and the surrounding community. Educational researcher Hargreaves (1994) referred to collaboration as an 'articulating and integrating principle' (p. 245) for school improvement, providing a way for teachers to learn from each other, gain moral support, coordinate action, and reflect on their classroom practices, their values, and the meaning of their work. These concerns point to the need for a change process that has a positive focus, is essentially self-organizing, encourages deep reflection, and avoids the pitfalls of manipulation by school administrators. This analysis points to a consideration of appreciative inquiry, a strengths-based process that builds on 'the best of what is' in an organization. Arts-based environmental education (AEE) brings art education and environmental education together in one undertaking. The approach has two essential characteristics. The first is that it refers to a specific kind of environmental education that starts off from an artistic approach. Different from other types of outdoor or environmental education which offer room for aesthetic experiences, AEE turns the tables in a fundamental way. Art is not an added quality, the icing on the cake; it is rather the point of departure in the effort to find ways in which people can connect to their environment. A second fundamental characteristic is that AEE is one of the first contemporary approaches to bringing together artistic practice and environmental education in which practitioners have also made an attempt to formulate an epistemology. In education, authentic learning is an instructional approach that allows students to explore, discuss, and meaningfully construct concepts and relationships in contexts that involve real-world problems and projects that are relevant to the learner. It refers to a "wide variety of educational and instructional techniques focused on connecting what students are taught in school to real-world issues, problems, and applications.
The basic idea is that students are more likely to be interested in what they are learning, more motivated to learn new concepts and skills, and better prepared to succeed in college, careers, and adulthood if what they are learning mirrors real-life contexts, equips them with practical and useful skills, and addresses topics that are relevant and applicable to their lives outside of school." Authentic instruction takes a much different form from traditional teaching methods. In the traditional classroom, students take a passive role in the learning process. Knowledge is considered to be a collection of facts and procedures that are transmitted from the teacher to the student. In this view, the goal of education is to possess a large collection of these facts and procedures. Authentic learning, on the other hand, takes a constructivist approach, in which learning is an active process. Teachers provide opportunities for students to construct their own knowledge through engaging in self-directed inquiry, problem solving, critical thinking, and reflection in real-world contexts. This knowledge construction is heavily influenced by the student's prior knowledge and experiences, as well as by the characteristics that shape the learning environment, such as values, expectations, rewards, and sanctions. Education is more student-centered. Students no longer simply memorize facts in abstract and artificial situations, but instead experience and apply information in ways that are grounded in reality. Bayesian Knowledge Tracing (BKT) is an algorithm used in many intelligent tutoring systems to model each learner's mastery of the knowledge being tutored. It models student knowledge as a latent variable in a Hidden Markov Model, updated by observing the correctness of each interaction in which the student applies the skill in question. BKT assumes that student knowledge is represented as a set of binary variables, one per skill, where the skill is either mastered by the student or not. Observations in BKT are also binary: a student gets a problem or step either right or wrong. Intelligent tutoring systems often use BKT for mastery learning and problem sequencing. In its most common implementation, BKT has only skill-specific parameters (a minimal numerical sketch of the update rule is given below). Bildung (German: [ˈbɪldʊŋ], "education, formation, etc.") refers to the German tradition of self-cultivation (as related to the German for: creation, image, shape), wherein philosophy and education are linked in a manner that refers to a process of both personal and cultural maturation. This maturation is described as a harmonization of the individual's mind and heart and as a unification of selfhood and identity within the broader society, as evidenced in the literary tradition of the Bildungsroman. In this sense, the process of harmonization of mind, heart, selfhood and identity is achieved through personal transformation, which presents a challenge to the individual's accepted beliefs. In Hegel's writings, the challenge of personal growth often involves an agonizing alienation from one's "natural consciousness" that leads to a reunification and development of the self. Similarly, although social unity requires well-formed institutions, it also requires a diversity of individuals with the freedom (in the positive sense of the term) to develop a wide variety of talents and abilities, and this requires personal agency.
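The following is the minimal numerical sketch of the Bayesian Knowledge Tracing update referred to above, using the standard four skill parameters usually written P(L0) (initial mastery), P(T) (learning/transition), P(S) (slip) and P(G) (guess); the particular parameter values and function names are illustrative and are not taken from any specific tutoring system.

def bkt_update(p_mastery: float, correct: bool,
               p_transit: float, p_slip: float, p_guess: float) -> float:
    """One BKT step: condition mastery on the observed answer, then apply learning."""
    if correct:
        # P(L | correct) via Bayes' rule
        num = p_mastery * (1.0 - p_slip)
        den = num + (1.0 - p_mastery) * p_guess
    else:
        # P(L | incorrect)
        num = p_mastery * p_slip
        den = num + (1.0 - p_mastery) * (1.0 - p_guess)
    p_given_obs = num / den
    # Opportunity to learn after the observation: P(L_t+1) = P(L|obs) + (1 - P(L|obs)) * P(T)
    return p_given_obs + (1.0 - p_given_obs) * p_transit


if __name__ == "__main__":
    # Illustrative skill parameters: prior mastery 0.2, learning rate 0.15,
    # slip 0.1, guess 0.25, with a correct, correct, incorrect answer sequence.
    p = 0.2
    for answer in (True, True, False):
        p = bkt_update(p, answer, p_transit=0.15, p_slip=0.1, p_guess=0.25)
        print(f"observed {'correct' if answer else 'incorrect'}: P(mastery) = {p:.3f}")
    # Predicted probability of answering the next item correctly:
    print(f"P(next correct) = {p * (1 - 0.1) + (1 - p) * 0.25:.3f}")

The final prediction line follows directly from the slip and guess definitions: P(correct) = P(L)·(1 − P(S)) + (1 − P(L))·P(G).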
However, rather than an end state, both individual and social unification are processes driven by unrelenting negations. In this sense, education involves the shaping of the human being with regard to his/her own humanity as well as his/her innate intellectual skills. So the term refers to a process of becoming that can be related to the notion of becoming within existentialism. The term Bildung also corresponds to the Humboldtian model of higher education from the work of Prussian philosopher and educational administrator Wilhelm von Humboldt (1767–1835). Thus, in this context, the concept of education becomes a lifelong process of human development, rather than mere training in gaining certain external knowledge or skills. Such training in skills is known by the German words Erziehung and Ausbildung. Bildung, in contrast, is seen as a process wherein an individual's spiritual and cultural sensibilities as well as life, personal and social skills are in a process of continual expansion and growth. Bildung is seen as a way to become more free due to higher self-reflection. Von Humboldt himself wrote the following about Bildung in his essay "Theory of Human Education [Bildung]" of 1793/1794: "'Education [Bildung], truth and virtue' must be disseminated to such an extent that the 'concept of mankind' takes on a great and dignified form in each individual (GS, I, p. 284). However, this shall be achieved personally by each individual, who must 'absorb the great mass of material offered to him by the world around him and by his inner existence, using all the possibilities of his receptiveness; he must then reshape that material with all the energies of his own activity and appropriate it to himself so as to create an interaction between his own personality and nature in a most general, active and harmonious form'". Most explicitly in Hegel’s writings, the Bildung tradition rejects the pre-Kantian metaphysics of being for a post-Kantian metaphysics of experience that rejects universal narratives. Much of Hegel's writing was about the nature of education (both Bildung and Erziehung), reflecting his own role as a teacher and administrator in German secondary schools as well as appearing in his more general writings. A more contemporary view was developed by Tony Waters: "Bildung, I discovered in my 2 years in Germany, is an organizing cultural principle for German higher education that trumps both careerism and disciplinary silos. It is generally translated as “education”, but in fact it means more—dictionary definitions often refer to “self-cultivation”, “philosophy”, “personal and cultural maturation” and even “existentialism”. Bildung is the cry of the land of poets and thinkers against the demands of credentialism, professionalism, careerism and the financial temptations dangled to graduating students." In this way, fulfillment is achieved through practical activity that promotes the development of one’s own individual talents and abilities, which in turn lead to the development of one’s society. Bildung therefore does not simply accept the socio-political status quo, but rather includes the ability to engage in a critique of one’s society, and to ultimately challenge the society to actualize its own highest ideals. School teachers are commonly the subject of bullying, but they are also sometimes the originators of bullying within a school environment. When an adult bullies a child, it is referred to as psychological, emotional, or verbal abuse.
According to the American Psychological Association, it is as harmful to children as sexual or physical abuse. "Children who are emotionally abused and neglected face similar and sometimes worse mental health problems as children who are physically or sexually abused, yet psychological abuse is rarely addressed in prevention programs or in treating victims, according to a new study published by the American Psychological Association." A capstone course, also known as a capstone unit, serves as the culminating and usually integrative experience of an educational program. A capstone course, module, project, subject, or unit in the higher education context may also be referred to as a capstone experience, senior seminar (in the U.S.), or final year project or dissertation (more common in the U.K.). The term derives from the final decorative coping or "cap-stone" used to complete a building or monument. In higher education, the term has been in common use in the USA since the mid-twentieth century, although there is evidence that it was in use as early as the late 1800s. It has gradually been gaining currency in other countries, particularly where attention has focused on student outcomes and employability in undergraduate studies. National grant projects in Australia and the U.K. have further raised the profile of the capstone experience. A cheat sheet (also cheatsheet) or crib sheet is a concise set of notes used for quick reference. Cheat sheets are so named because they may be used by students without the instructor's knowledge to cheat on a test. However, at higher levels of education where rote memorization is not as important as in basic education, students may be permitted to consult their own notes during the exam (which is not considered cheating). The act of preparing a cheat sheet is also an educational exercise, thus students are sometimes only allowed to use cheat sheets they have written themselves. In such usage, a cheat sheet is a physical piece of paper, often filled with equations and/or facts in compressed writing. Modern students often print cheat sheets in extremely small font, fitting an entire page of notes in the palm of their hands during the exam. The term can also apply to the fully worked solution for exams or worksheets normally handed out to university staff to ease marking. Child integration is the inclusion of children in a variety of mature daily activities of families and communities. This contrasts with, for example, age segregation: separating children into age-defined activities and institutions (e.g., some models of organized schooling). Integrating children in the range of mature family and community activities gives equal value and responsibility to children as contributors and collaborators, and can be a way to help them learn. Children's integration provides a learning environment because children are able to observe and pitch in as they feel they can. In the United States, child integration into "adult" life is not as common as it used to be. However, in other cultures social norms continue to incorporate children into the mature, productive activities of the family and community. In all cultures, child integration is present in one way or another. For example, nearly all children's first language learning seems to be supported through integration with a mature linguistic community. Children usually are not taught in a classroom how to speak, but instead learn by observing the language and pitching in when they can.
Children’s rights education (or children’s human rights education) is the teaching and practice of children’s rights in schools and other educational institutions, as informed by and consistent with the United Nations Convention on the Rights of the Child. When fully implemented, a children's rights education program consists of both a curriculum to teach children their human rights, and a framework for operating the school in a manner that respects children's rights. Articles 29 and 42 of the Convention on the Rights of the Child require children to be educated about their rights. In addition to meeting the legal obligations of the Convention to spread awareness of children’s rights to children and to adults, teaching children about their rights has the benefits of improving their awareness of rights in general, making them more respectful of other people's rights, and empowering them to take action in support of other people's rights. Early programs to teach children about their rights, in Belgium, Canada, England and New Zealand, have provided evidence of this. Children's rights were, however, taught and practiced in schools as an ethos of 'liberating the child' well before the UN Convention was written, and this practice helped to inform the values and philosophy of the Convention, the IBE and UNESCO; these earlier practices and this history are rarely acknowledged or built upon by the UN, which is cited as one reason why children's rights have not become a foundation of schools despite a century of struggle. A class in education has a variety of related meanings. It can be the group of students which attends a specific course or lesson at a university, school, or other educational institution; see Form (education). It can refer to a course itself, for example, a class in Shakespearean drama. It can be the group of students at the same level in an institution: the freshman class; or the group of students which graduates from the institution at the same time: the Class of 2005. The term can be used in a slightly more general context, such as "the graduating class." It can also refer to the classroom, the building or venue where such a lesson is conducted. In some countries' educational systems (such as Taiwan's), it can refer to a subdivision of the students in an academic department, consisting of a cohort of students of the same academic level. For example, a department's sophomores may be divided into three classes. In countries such as the Republic of Ireland, India, Germany, Russia, and in the past, Sweden, the word can mean a grade: 1st class is ages 4–5, 2nd class is ages 6–7, 3rd class is ages 8–9, 4th class is ages 9–10, 5th class is ages 10–11, 6th class is ages 11–12, 9th class is ages 14–15, 10th class is ages 15–16 and 12th class is ages 17–18. In learning, co-construction is a distinctive approach, where the emphasis is on collaborative or partnership working. Creative Partnerships refers to 'co-construction of learning' as the partnership between teaching staff, pupils and creative professionals to develop and deliver creative learning in schools. 'Co-construction of learning' deepens relationships and understanding between all learning partners and can lead to school improvement.
Co-construction of learning is referred to in primary and secondary schools and other learning settings in the UK, and generally refers to collaboration in learning beyond the delivery of learning or projects, for example in curriculum co-construction. Wikipedia could also be considered a form of 'co-construction of learning'. Cognitive apprenticeship is a theory of the process by which a master of a skill teaches that skill to an apprentice. Constructivist approaches to human learning have led to the development of a theory of cognitive apprenticeship. This theory holds that masters of a skill often fail to take into account the implicit processes involved in carrying out complex skills when they are teaching novices. To combat these tendencies, cognitive apprenticeships "…are designed, among other things, to bring these tacit processes into the open, where students can observe, enact, and practice them with help from the teacher…". This model is supported by Albert Bandura's (1997) theory of modeling, which posits that in order for modeling to be successful, the learner must be attentive, must have access to and retain the information presented, must be motivated to learn, and must be able to accurately reproduce the desired skill. While education institutions across the P-20W (early learning through postsecondary and workforce) environment use many different data standards to meet information needs, there are certain data that all of them need to be able to understand, compare, and exchange in an accurate, timely, and consistent manner. For these, a shared vocabulary for education data is needed—that is, common education data standards. The Common Education Data Standards (CEDS) project is a United States national collaborative effort to develop voluntary, common data standards for a key set of education data elements to streamline the exchange, comparison, and understanding of data within and across P-20W institutions and sectors. Common Education Data Standards is abbreviated as CEDS, usually pronounced as "C-E-D-S", not "Keds" or "Saids". Competency-based learning or competency-based education and training is an approach to teaching and learning more often used in learning concrete skills than abstract learning. It differs from other approaches in that the unit of learning is extremely fine-grained. Rather than a course or a module, every individual skill or learning outcome, known as a competency, is one single unit. Learners work on one competency at a time, which is likely a small component of a larger learning goal. The student is evaluated on the individual competency, and only once they have mastered it do they move on to others. After that, higher or more complex competencies are learned to a degree of mastery, in isolation from other topics. Another common component of competency-based learning is the ability to skip learning modules entirely if the learner can demonstrate they already have mastery. That can be done either through prior learning assessment or formative testing. For example, people learning to drive a manual transmission might first have to demonstrate their mastery of "rules of the road", safety, defensive driving, parallel parking, etc. Then they may focus on two independent competencies: "using the clutch, brake with right foot" and "shifting up and down through the gears".
Once the learners have demonstrated they are comfortable with those two skills, the next, overarching skill might be "finding first: from full stop to a slow roll", followed by "sudden stops", "shifting up" and "down shifting". Because this is kinetic learning, the instructor would likely demonstrate the individual skill a few times, then the student would perform guided practice followed by independent practice until they can demonstrate their mastery. Competency-based learning is learner-focused and works naturally with independent study and with the instructor in the role of facilitator. Learners often find different individual skills more difficult than others. This learning method allows a student to learn those individual skills they find challenging at their own pace, practising and refining as much as they like. Then they can move rapidly through other skills in which they are more adept. While most other learning methods use summative testing, competency-based learning requires mastery of every individual learning outcome, making it very well suited to learning credentials in which safety is an issue. With summative testing, a student who scores 80% in an evaluation may have an 80% mastery of all learning outcomes or may have no mastery whatsoever of 20% of the learning outcomes. Further, this student may be permitted to move on to higher learning while still missing some abilities that are crucial to that higher learning. For example, a student who knows most traffic laws and has mostly mastered controlling a vehicle could be treated equally to a student who has a very high mastery of vehicle control but no understanding of traffic laws, yet only one of those students should be permitted to drive (the sketch below illustrates this contrast). What it means to have mastered a competency depends on the learning domain (subject matter). In subject matter that could affect safety, it would be usual to expect complete learning that can be repeated every time. In abstract learning, such as algebra, the learner may only have to demonstrate that they can identify an appropriate formula, for example, 4 out of 5 times, since using that skill in the next competency, resolving a formula, will usually give the learner an opportunity to discover and correct their mistakes. It is important to understand that this learning methodology is common in much kinetic and/or skills-based learning, but it is also sometimes applied to abstract and/or academic learning for students who find themselves out of step with their grade, course or program of study. Increasingly, educational institutions are evaluating ways to include competency-based learning methodologies in many different types of programs in order to make learning success a constant while student pace can vary. Competency-based learning is an educational technique that can be applied in many fields and learning environments. It is an area of pedagogical research and cannot be adequately understood through one single learning domain, such as the one that follows in this article. The rest of this article focuses on one application of competency-based learning in corporate environments and is heavily weighted toward a Human Resources perspective. Once organizations have used a competency dictionary to define the competency requirements for groups, areas, or the whole organization, it becomes possible to develop learning strategies targeted to close major gaps in organizational competencies and to focus learning plans on the business goals and strategic direction for the organization.
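As a concrete illustration of the contrast drawn above between a summative pass mark and per-competency mastery, the sketch below uses invented competency names, scores, and thresholds; they are assumptions for the example, not values from any actual assessment scheme.

```python
# Illustrative only: competency names, scores and thresholds are invented.
competencies = {
    "traffic_laws": 0.95,
    "vehicle_control": 0.40,    # not yet mastered
    "parallel_parking": 0.90,
}

PASS_MARK = 0.75           # hypothetical summative pass mark
MASTERY_THRESHOLD = 0.90   # hypothetical per-competency requirement

# Summative view: one averaged score can hide a missing crucial ability.
average = sum(competencies.values()) / len(competencies)
passes_summative = average >= PASS_MARK

# Competency-based view: every individual outcome must be mastered.
passes_competency_based = all(
    score >= MASTERY_THRESHOLD for score in competencies.values()
)

print(f"average = {average:.2f}, summative pass = {passes_summative}")
print(f"competency-based pass = {passes_competency_based}")
```

With these invented numbers the averaged score just reaches the pass mark even though vehicle control is far from mastered, whereas the competency-based check refuses to let the learner progress until that individual outcome is mastered.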
Consciousness raising (also called awareness raising) is a form of activism, popularized by United States feminists in the late 1960s. It often takes the form of a group of people attempting to focus the attention of a wider group of people on some cause or condition. Common issues include diseases (e.g. breast cancer, AIDS), conflicts (e.g. the Darfur genocide, global warming), movements (e.g. Greenpeace, PETA, Earth Hour), and political parties or politicians. Since informing the populace of a public concern is often regarded as the first step to changing how the institutions handle it, raising awareness is often the first activity in which any advocacy group engages. However, in practice, raising awareness is often combined with other activities, such as fundraising, membership drives, or advocacy, in order to harness and/or sustain the motivation of new supporters, which may be at its highest just after they have learned and digested the new information. The term awareness raising is used in the Yogyakarta Principles against discriminatory attitudes and LGBT stereotypes, as well as in the Convention on the Rights of Persons with Disabilities to combat stereotypes, prejudices, and harmful practices toward people with disabilities. Cooling out is an informal set of practices used by colleges, especially two-year, junior, and community colleges, to handle students whose lack of academic ability or other resources prevents them from achieving the educational goals they have developed for themselves, such as attaining a bachelor's degree. The purpose of cooling out is to encourage the students to adjust their expectations or redefine failure. The practices contrast with "warming up", in which students who aspire to easier educational goals are encouraged to reach for more ambitious degrees. CU Coventry is a subsidiary of the public research university Coventry University. It has been in operation since its launch in 2012, and was formerly known as Coventry University College. Its campus is located in the city of Coventry, England. It is part of CU, a network of three higher education campuses, comprising CU Coventry, CU London and CU Scarborough, all part of the Coventry University Group. These campuses offer an alternative, flexible approach to higher education to complement the traditional university experience of Coventry University and Coventry University London. Deaf education is the education of students with any manner of hearing impairment which addresses their differences and individual needs. This process involves individually planned, systematically monitored teaching methods, adaptive materials, accessible settings and other interventions designed to help students achieve a higher level of self-sufficiency and success in the school and community than they would achieve with a typical classroom education. A number of countries focus on training teachers to teach deaf students with a variety of approaches and have organizations to aid deaf students. In U.S. education, deeper learning is a set of student educational outcomes including acquisition of robust core academic content, higher-order thinking skills, and learning dispositions. Deeper learning is based on the premise that the nature of work, civic life, and everyday life is changing and therefore increasingly requires that formal education provide young people with mastery of skills like analytic reasoning, complex problem solving, and teamwork. Deeper learning is associated with a growing movement in U.S.
education that places special emphasis on the ability to apply knowledge to real-world circumstances and to solve novel problems. "Deeper learning" was first introduced by the William and Flora Hewlett Foundation in 2010 and specified a set of educational outcomes: mastery of rigorous academic content; development of critical thinking and problem-solving skills; the ability to work collaboratively; effective oral and written communication; learning how to learn; and developing and maintaining an academic mindset. A number of U.S. schools and school districts serving a broad socio-economic spectrum apply deeper learning as an integral component of their instructional approach. The Delors Report was a report created by the Delors Commission in 1996. It proposed an integrated vision of education based on two key concepts, ‘learning throughout life’ and the four pillars of learning: to know, to do, to be, and to live together. It was not in itself a blueprint for educational reform, but rather a basis for reflection and debate about what choices should be made in formulating policies. The report argued that choices about education were determined by choices about what kind of society we wished to live in. Beyond education’s immediate functionality, it considered the formation of the whole person to be an essential part of education’s purpose. The Delors Report was aligned closely with the moral and intellectual principles that underpin UNESCO, and therefore its analysis and recommendations were more humanistic and less instrumental and market-driven than other education reform studies of the time. The Delors Report identified a number of tensions generated by technological, economic and social change. They included tensions between the global and the local; the universal and the particular; tradition and modernity; the spiritual and the material; long-term and short-term considerations; the need for competition and the ideal of equality of opportunity; and the expansion of knowledge and our capacity to assimilate it. These seven tensions remain useful perspectives from which to view the current dynamics of social transformation. Some are taking on new meaning, with fresh tensions emerging. These include patterns of economic growth characterized by rising vulnerability, growing inequality, increased ecological stress, and rising intolerance and violence. Finally, while there has been progress in human rights, implementation of norms often remains a challenge. Digital Intelligence (DQ) is the sum of social, emotional, and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life. In the same way that intelligence quotient (IQ) and emotional intelligence (EQ) measure general and emotional intelligence, the Digital Intelligence Quotient (DQ) is constructed from 8 key components: digital identity, digital rights, digital literacy, digital use, digital communication, digital safety, digital emotional intelligence, and digital security. DQ was originally initiated by Dr. Yuhyun Park, an NTU-based researcher and a mother of two children. Besides the DQ education framework and assessment, Dr. Park is also the founder of the DQ Institute, a think tank that works to improve digital education, culture and innovation, and developed DQ World.net, an online digital citizenship education program that enables children to self-learn DQ. As of July 2017, the company reported that the DQ World programme had been tried and tested by 500,000 students and was poised to launch in 17 countries.
Digital scholarship is the use of digital evidence, methods of inquiry, research, publication and preservation to achieve scholarly and research goals. Digital scholarship can encompass both scholarly communication using digital media and research on digital media. An important aspect of digital scholarship is the effort to establish digital media and social media as credible, professional and legitimate means of research and communication. Digital scholarship has a close association with digital humanities, though the relationship between these terms is unclear. Digital scholarship may also include born-digital means of scholarly communication that are more traditional, like online journals and databases, e-mail correspondence and the digital or digitized collections of research and academic libraries. Since digital scholarship is concerned with the production and distribution of digital media, discussions about copyright, fair use and digital rights management (DRM) frequently accompany academic analysis of the topic. Combined with open access, digital scholarship is offered as a more affordable and open model for scholarly communication. Diversity training can be defined as any program designed to facilitate positive intergroup interaction, reduce prejudice and discrimination, and generally teach individuals who are different from others how to work together effectively. "From the broad corporate perspective, diversity training is defined as raising personal awareness about individual differences in the workplace and how those differences inhibit or enhance the way people work together and get work done. In the narrowest sense, it is education about compliance – Affirmative Action (AA), Equal Employment Opportunity (EEO), and sexual harassment." Diversity training is instruction aimed at helping participants to gain cultural awareness in order to benefit the organization or company. Diversity training is a reality facing many human resource management teams – one of the pressing reasons is the growing ethnic and racial diversity in the workplace. While major corporations believe that diversity training and active diversity hiring will assist them in remaining competitive in a global economy, other large organizations (universities and colleges) have been slow to embrace diversity training. Trainers use diversity training as a means to meet many objectives, such as attracting and retaining customers and productive workers; maintaining high employee morale; and/or fostering understanding and harmony between workers. However, a systematic analysis has shown that diversity training is usually counterproductive. An edcamp is a participant-driven conference, commonly referred to as an "unconference". Edcamps are designed to provide participant-driven professional development for K-12 educators. Edcamps are modeled after BarCamps, free participant-driven conferences with a primary focus on technology and computers. Educational technology is a common topic area for edcamps, as are pedagogy, practical examples in instructional use of modern tools, and solving the problems technology can introduce into the classroom environment. Edcamps are generally free or very low-cost, built around ad hoc community participation. Sessions are not planned until the day of the event, when participants can volunteer to facilitate a conversation on a topic of their choice or simply choose an idea they are interested in learning more about.
Edcamps operate "without keynote speakers or vendor booths, encourage participants to find or lead a conversation that meet their needs and interests."The first edcamp was held in May 2010 in Philadelphia. Since that time, there have been over 1,000 edcamp events held throughout the world. The Edcamp Foundation was formed in December 2011 to help teachers and other stakeholders who organize edcamps. The vision of the Edcamp Foundation is to "promote organic, participant-driven professional development for K-12 educators worldwide." The Edcamp Foundation is still located in Conshohocken, PA. The Foundation has implemented a variety of programs to help participants and organizers get the most out of edcamps like Impact Grants, Edcamp-In-A-Box, and the Urban Initiative.The first edcamps that were held in languages other than English were edcamp Stockholm on October 31, 2011 (in Swedish) and edcamp Montreal on November 1, 2011 (in French). There has also been International edcamps in Spain, China, Indonesia, Canada, and more. One of the defining features of development today is the relationship between education and technology, stimulated by the spectacular growth in internet connectivity and mobile penetration. We live in a connected world. An estimated 40% of the world’s population now uses the internet and this number is growing at a remarkable rate. While there are significant variations in internet connectivity among countries and regions, the number of households with such links in the global South has now overtaken those in the global North. Moreover, over 70% of mobile telephone subscriptions worldwide are now in the global South. Five billion people are expected to go from no to full connectivity within the next twenty years. However, there are still significant gaps among countries and regions, for example between urban and rural areas. Limited broadband speed and lack of connectivity hamper access to knowledge, participation in society and economic development.The internet has transformed how people access information and knowledge, how they interact, and the direction of public management and business. Digital connectivity holds promise for gains in health, education, communication, leisure and well-being. Artificial intelligence advances, 3D printers, holographic recreation, instant transcription, voice-recognition and gesture-recognition software are only some examples of what is being tested. Digital technologies are reshaping human activity from daily life to international relations, from work to leisure, redefining multiple aspects of our private and public life.Such technologies have expanded opportunities for freedom of expression and for social, civic and political mobilization, but they also raise important concerns. The availability of personal information in the cyber world, for example, brings up significant issues of privacy and security. New spaces for communication and socialization are transforming what constitutes the idea of ‘social’ and they require enforceable legal and other safeguards to prevent their overuse, abuse and misuse. Examples of such misuse of the internet, mobile technology and social media range from cyber-bullying to criminal activity, even to terrorism. In this new cyber world, educators need to better prepare new generations of ‘digital natives’ to deal with the ethical and social dimensions of not only existing digital technologies but also those yet to be invented. 
Education sector responses to LGBT violence address the ways in which education systems work to create safe learning environments for LGBT students. Overall, education sector responses tend to focus on homophobia and violence linked to sexual orientation and gender identity/expression, and less on transphobia. Most responses focus in some way on diverse expressions of gender and support students to understand that gender may be expressed in a different way from binary models (of masculine and feminine). Responses vary greatly in their scope (from a single class to the national level); duration (from one-off events to several years); and level of support that they enjoy (from individual teachers to the highest levels of government). A comprehensive education sector response to homophobic and transphobic violence encompasses all of the following elements: effective policies, relevant curricula and learning materials, training and support for staff, support for students and families, information and strategic partnerships, and monitoring and evaluation. Very few countries have education sector policies that address homophobic and transphobic violence or include sexual orientation and gender identity/expression in curricula or learning materials. In most countries, staff lack training and support to address sexual orientation and gender identity/expression and to prevent and respond to homophobic and transphobic violence. Although many countries provide support for students who experience violence, services are often ill-equipped to deal with homophobic and transphobic violence. Few countries collect data on the nature, prevalence or impact of homophobic and transphobic violence, which contributes to low awareness of the problem and a lack of evidence for planning effective responses. In general terms, the range of responses to homophobic and transphobic violence in educational settings appears to correlate with a country's: socio-cultural context (in terms of the society's beliefs and attitudes towards sexual and gender diversity, as well as to human rights and gender equality); and legal context (in terms of the rights of LGBTI individuals and the situation of human rights in general). An educational psychologist is a psychologist whose differentiating functions may include diagnostic and psycho-educational assessment, psychological counseling in educational communities (students, teachers, parents and academic authorities), community-type psycho-educational intervention, and mediation, coordination, and referral to other professionals, at all levels of the educational system. Many countries use this term to signify those who provide services to students, their teachers, and families, while other countries use this term to signify academic training in the discipline of educational psychology, with no intention of preparing persons to provide services. The EDUindex is a correlation coefficient representing the relevancy of curriculum to post-educational objectives, particularly employability. An EDUindex gap analysis identifies relevant curriculum that is missing relative to employment opportunity within a representative area. Representative areas may include geographic regions, states, cities, school districts or specific schools.
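As the passage that follows explains, the EDUindex is computed with the Pearson product-moment correlation coefficient. For paired observations x_i and y_i (hypothetically, a measure of curricular offering and of corresponding skill demand for each subject area within the banded geographic region; this pairing is an assumption for the sketch, not stated by the source), the standard formula is:

\[
r \;=\; \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
             {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}
\]

Pearson's r lies between −1 and +1; the EDUindex then reports a value between 0 and 1.0, weighted by data volume, as described in the following passage.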
Analysis is regularly conducted using zip code sets. In 1918, John Franklin Bobbitt said that curriculum, as an idea, has its roots in the Latin word for horse race-course, explaining the curriculum as the course of deeds and experiences through which children become the adults they should be, for success in adult society. EDUindex, Inc. developed the EDUindex to identify and promote relevance in education. The EDUindex is a correlation of curricular subjects taught in a particular school to skills as suggested by a pre-defined or custom-selected target marketplace. Published class offerings represent the skills taught. The Classification of Secondary School Courses (CSSC) provides a general inventory of courses taught nationwide at the secondary school level (grades 9 through 12). Further detail is provided by the High School Transcript Studies provided by the National Center for Education Statistics. Public, charter, and private school listings are accessed per geographical area to create a comprehensive data set of all schools and businesses within the analytical focus. Curriculum per school, district, etc. is published individually and is publicly available. Standard databases like the North American Industry Classification System (NAICS) provide defined business focus. Business focus can be further refined into specific occupations and skill sets using the Standard Occupational Classification System (SOC). Together these datasets provide information representing the skills offered and the occupational opportunities available within the designated target area. The EDUindex, as a value, is expressed as a number from 0 to 1.0, with 1.0 representing a perfect match of curricular offering to target need. The value is determined using the Pearson product-moment correlation coefficient (sometimes referred to as the PMCC, and typically denoted by r) as a measure of the correlation (linear dependence) between two variables X and Y, giving a value between +1 and −1 inclusive. It is widely used in the sciences as a measure of the strength of linear dependence between two variables. It was developed by Karl Pearson from a similar but slightly different idea introduced by Francis Galton in the 1880s. The coefficient is sometimes called simply "Pearson's r." The EDUindex calculates Pearson’s r for educational relevance by comparing the content of course offerings with the need for related skill sets within the same banded geographic area. Correlative results are weighted based on data volume for scalar, comparative and presentation purposes. Engaged scholarship is the integration of education with community development. Ethical participatory research in education is introduced to high school and undergraduate curricula to serve the mutual benefit of students, faculty, and the communities that surround and support academic institutions. Engaged scholarship is a type of education "that can be directly applied to social problems and issues faced by individuals, local communities, organizations, practitioners, and policymakers." Engaged scholarship originates from the perceived disconnect between academic research and practical research and knowledge that can be meaningfully used to solve problems in communities. Chemistry is the scientific discipline involved with compounds composed of atoms, i.e. elements, and molecules, i.e. combinations of atoms: their composition, structure, properties, behavior and the changes they undergo during a reaction with other compounds.
Chemistry addresses topics such as how atoms and molecules interact via chemical bonds to form new chemical compounds. There are four types of chemical bonds: covalent bonds, in which atoms share one or more electrons; ionic bonds, in which one atom donates one or more electrons to another to produce ions (cations and anions); hydrogen bonds; and Van der Waals force bonds. Chemistry occupies an intermediate position in a hierarchy of the sciences by reductive level between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. Examples include plant chemistry (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (astrophysics), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, chemists are seen as applying scientific method to their work. Chemistry is considered to have become an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. Chemistry was preceded by its protoscience, alchemy, which is an intuitive but non-scientific approach to understanding the elements, compounds, and their interactions. It was unsuccessful in explaining the nature of matter and its transformations, but by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation between alchemy and chemistry was made by Robert Boyle in 1661: the application of the scientific method in chemistry was the crucial difference. An acid–base reaction is a chemical reaction that occurs between an acid and a base. Several theoretical frameworks provide alternative conceptions of the reaction mechanisms and their application in solving related problems; these are called the acid–base theories, for example, Brønsted–Lowry acid–base theory. Their importance becomes apparent in analyzing acid–base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent. The first of these concepts was provided by the French chemist Antoine Lavoisier, around 1776.
Actinide chemistry (or actinoid chemistry) is one of the main branches of nuclear chemistry that investigates the processes and molecular systems of the actinides. Actinide chemistry derives its name from the group 3 element actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. The actinide series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium. Allotropy or allotropism (from Greek ἄλλος (allos), meaning "other", and τρόπος (tropos), meaning "manner, form") is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of these elements. Allotropes are different structural modifications of an element; the atoms of the element are bonded together in a different manner. For example, the allotropes of carbon include diamond (the carbon atoms are bonded together in a tetrahedral lattice arrangement), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations). The term allotropy is used for elements only, not for compounds. The more general term, used for any crystalline material, is polymorphism. Allotropy refers only to different forms of an element within the same phase (i.e. different solid, liquid or gas forms); the changes between these states of matter are not, in themselves, considered examples of allotropy. For some elements, allotropes have different molecular formulae which can persist in different phases – for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Conversely, some elements do not maintain distinct allotropes in different phases – for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state. An alloy is a mixture of metals or a mixture of a metal and another element. Alloys are defined by a metallic bonding character. An alloy may be a solid solution of metal elements (a single phase) or a mixture of metallic phases (two or more solutions). Intermetallic compounds are alloys with a defined stoichiometry and crystal structure. Zintl phases are also sometimes considered alloys depending on bond types (see also: Van Arkel-Ketelaar triangle for information on classifying bonding in binary compounds). Alloys are used in a wide variety of applications. In some cases, a combination of metals may reduce the overall cost of the material while preserving important properties. In other cases, the combination of metals imparts synergistic properties to the constituent metal elements such as corrosion resistance or mechanical strength. Examples of alloys are steel, solder, brass, pewter, duralumin, bronze and amalgams. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction (see atomic ratio) for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy.
They can be further classified as homogeneous (consisting of a single phase), heterogeneous (consisting of two or more phases), or intermetallic. Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary approach of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology and other disciplines. Research is increasingly connected with other arenas of study such as climatology. The composition and chemistry of the Earth's atmosphere are of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere changes as a result of natural processes such as volcanic emissions, lightning and bombardment by solar particles from the corona. It has also been changed by human activity, and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, ozone depletion, photochemical smog, greenhouse gases and global warming. Atmospheric chemists seek to understand the causes of these problems, and by obtaining a theoretical understanding of them, allow possible solutions to be tested and the effects of changes in government policy evaluated. An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 picometers (a ten-billionth of a meter, in the short scale). Atoms are small enough that attempting to predict their behavior using classical physics – as if they were billiard balls, for example – gives noticeably incorrect predictions due to quantum effects. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons. Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the number of protons and electrons is equal, the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion. The electrons of an atom are attracted to the protons in the atomic nucleus by the electromagnetic force.
The protons and neutrons in the nucleus are attracted to each other by a different force, the nuclear force, which is usually stronger than the electromagnetic force repelling the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus, leaving behind a different element: nuclear decay resulting in nuclear transmutation. The number of protons in the nucleus defines to what chemical element the atom belongs: for example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature and is the subject of the discipline of chemistry. Bioconcentration is the accumulation of a chemical in or on an organism when the source of the chemical is solely water. Bioconcentration is a term that was created for use in the field of aquatic toxicology. Bioconcentration can also be defined as the process by which a chemical concentration in an aquatic organism exceeds that in water as a result of exposure to a waterborne chemical. There are several ways in which to measure and assess bioaccumulation and bioconcentration. These include: octanol-water partition coefficients (KOW), bioconcentration factors (BCF), bioaccumulation factors (BAF) and biota-sediment accumulation factors (BSAF). Each of these can be calculated using either empirical data or measurements as well as from mathematical models. One of these mathematical models is a fugacity-based BCF model developed by Don Mackay. The bioconcentration factor can also be expressed as the ratio of the concentration of a chemical in an organism to the concentration of the chemical in the surrounding environment. The BCF is a measure of the extent of chemical sharing between an organism and the surrounding environment. In surface water, the BCF is the ratio of a chemical's concentration in an organism to the chemical's aqueous concentration. BCF is often expressed in units of liter per kilogram (ratio of mg of chemical per kg of organism to mg of chemical per liter of water); this ratio is restated symbolically below. BCF can simply be an observed ratio, or it can be the prediction of a partitioning model. A partitioning model is based on assumptions that chemicals partition between water and aquatic organisms, as well as the idea that chemical equilibrium exists between the organism and the aquatic environment in which it is found. The Bohn–Schmidt reaction, a named reaction in chemistry, introduces a hydroxy group at an anthraquinone system. The anthraquinone must already have at least one hydroxy group. The reaction was first described in 1889 by René Bohn (1862–1922) and in 1891 by Robert Emanuel Schmidt (1864–1938), two German industrial chemists. René Bohn is one of the few industrial chemists after whom a reaction is named. In 1901, he made indanthrone from 2-aminoanthraquinone and thus laid the basis for a new group of dyes. Calmodulin (CaM) is a complex signaling protein that transduces transient calcium ion signals. CaM's binding of calcium ions causes conformational changes, which interact with downstream proteins.
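Restating the surface-water bioconcentration factor defined above in symbols (the subscripts C_B for the concentration in the organism and C_W for the aqueous concentration are notational choices for this sketch, not taken from the source):

\[
\mathrm{BCF} = \frac{C_{B}}{C_{W}}
\qquad
\left[\frac{\text{mg of chemical / kg of organism}}{\text{mg of chemical / L of water}} = \mathrm{L\,kg^{-1}}\right]
\]

Under this convention, values above roughly 1 L/kg correspond to the situation described above in which the chemical's concentration in the organism exceeds that in the surrounding water.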
Current research indicates that selective protein binding occurs through the mechanism of mutually induced conformational fit, which would explain how calcium dynamics in CaM would modulate its interaction. Current research on CaM signaling and CaM–BT (binding target) interaction includes experimental kinetic rate observations and coarse-grain/all-atom molecular dynamics simulations. Because protein signaling and protein–protein interaction is a new field of research, many observed interactions cannot be explained through experiment alone. The unification of simulation and experimental results is necessary to expand the predictive power of the theoretical approach and create general laws that explain the mechanics of signaling and protein–protein interactions. The computational approach for modeling macromolecules is very resource-intensive. The Hamiltonian equation in molecular dynamics software relates each atom to all other atoms in the system through kinetic, electrostatic, van der Waals, dihedral, bond, and other energy terms. For example, the RRK polypeptide (CaMKII residues 293–313) contains 21 residues and 318 atoms. For a single time step, the molecular dynamics software must perform energy calculations between every pair of atoms in the polypeptide, which is ~100,000 calculations (a rough count appears below). Since the time step must be in the sub-picosecond range (to ensure stability), several million time steps must be performed to obtain meaningful data. To remedy the large number of calculations involved in all-atom simulations, the coarse-grain simulation technique can be used. Current work from the biophysics group at the University of Houston uses open-source coarse-grain and all-atom models of CaM and wild-type/mutated binding targets of CaMKII in their research. Chemistry is often called the central science because of its role in connecting the physical sciences, which include chemistry, with the life sciences and applied sciences such as medicine and engineering. The nature of this relationship is one of the main topics in the philosophy of chemistry and in scientometrics. The phrase was popularized by its use in a textbook by Theodore L. Brown and H. Eugene LeMay, titled Chemistry: The Central Science, which was first published in 1977, with a thirteenth edition published in 2014. The central role of chemistry can be seen in the systematic and hierarchical classification of the sciences by Auguste Comte, in which each discipline provides a more general framework for the area it precedes (mathematics → astronomy → physics → chemistry → physiology and medicine → social sciences). Balaban and Klein have more recently proposed a diagram showing the partial ordering of sciences in which chemistry may be argued to be "the central science", since it provides a significant degree of branching. In forming these connections, the lower field cannot be fully reduced to the higher ones. It is recognized that the lower fields possess emergent ideas and concepts that do not exist in the higher fields of science. Thus chemistry is built on an understanding of the laws of physics that govern particles such as atoms, protons, neutrons and electrons, as well as of thermodynamics, although it has been shown that it has not been "fully 'reduced' to quantum mechanics".
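A rough count consistent with the ~100,000 figure quoted above for the 318-atom RRK polypeptide, under the assumption that one interaction term is evaluated per ordered pair of distinct atoms:

\[
318 \times 317 = 100{,}806 \approx 1.0 \times 10^{5},
\qquad
\binom{318}{2} = 50{,}403 \ \text{unique (unordered) pairs.}
\]

Repeating on the order of 10^5 pairwise evaluations for several million sub-picosecond time steps is what makes the all-atom approach so resource-intensive and motivates the coarse-grain alternative mentioned above.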
Concepts such as the periodicity of the elements and chemical bonds in chemistry are emergent in that they are more than the underlying forces defined by physics. In the same way, biology cannot be fully reduced to chemistry, despite the fact that the machinery responsible for life is composed of molecules. For instance, the machinery of evolution may be described in terms of chemistry by the understanding that it is a mutation in the order of genetic base pairs in the DNA of an organism. However, chemistry cannot fully describe the process, since it does not contain concepts such as natural selection that are responsible for driving evolution. Chemistry is fundamental to biology since it provides a methodology for studying and understanding the molecules that compose cells. Connections made by chemistry are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. Chemistry and physics are both needed in the areas of physical chemistry, nuclear chemistry, and theoretical chemistry. Chemistry and biology intersect in the areas of biochemistry, medicinal chemistry, molecular biology, chemical biology, molecular genetics, and immunochemistry. Chemistry and the earth sciences intersect in areas like geochemistry and hydrology. The Charged Aerosol Detector (CAD) is a universal detector used in conjunction with high-performance liquid chromatography (HPLC) and ultra-high-performance liquid chromatography (UHPLC) to measure the amount of chemicals in a sample by creating charged aerosol particles which are detected using an electrometer. It is commonly used for the analysis of compounds that cannot be detected using traditional UV/Vis approaches due to their lack of a chromophore. The CAD can measure all non-volatile and many semi-volatile analytes including, but not limited to, antibiotics, excipients, ions, lipids, natural products, biofuels, sugars and surfactants. The CAD, like other aerosol detectors (e.g., evaporative light scattering detectors (ELSD) and condensation nucleation light scattering detectors (CNLSD)), falls under the category of destructive general-purpose detectors (see Chromatography Detectors). Chemical biology is a scientific discipline spanning the fields of chemistry, biology, and physics. It involves the application of chemical techniques, tools, and analysis, and often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. Chemical biologists attempt to use chemical principles to modulate systems to either investigate the underlying biology or create new function. Research done by chemical biologists is often more closely related to that of cell biology than to biochemistry. Biochemists study the chemistry of biomolecules and the regulation of biochemical pathways within cells and tissues, e.g. cAMP or cGMP, while chemical biologists deal with novel chemical compounds applied to biology.
A compound is a chemical substance composed of many identical molecules (or molecular entities) composed of atoms from more than one element held together by chemical bonds. There are four types of compounds, depending on how the constituent atoms are held together: molecules held together by covalent bonds; ionic compounds held together by ionic bonds; intermetallic compounds held together by metallic bonds; and certain complexes held together by coordinate covalent bonds. Many chemical compounds have a unique numerical identifier assigned by the Chemical Abstracts Service (CAS): its CAS number. A chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using the standard abbreviations for the chemical elements, and subscripts to indicate the number of atoms involved. For example, water is composed of two hydrogen atoms bonded to one oxygen atom: the chemical formula is H2O. A compound can be converted to a different chemical composition by interaction with a second chemical compound via a chemical reaction. In this process, bonds between atoms are broken in both of the interacting compounds, and then bonds are reformed so that new associations are made between atoms. Schematically, this reaction could be described as AB + CD → AD + CB, where A, B, C, and D are each unique atoms; and AB, AD, CD, and CB are each unique compounds. A chemical element bonded to an identical chemical element is not a chemical compound, since only one element, not two different elements, is involved. Examples are the diatomic molecule hydrogen (H2) and the polyatomic molecule sulfur (S8). A chemical element is a species of atoms having the same number of protons in their atomic nuclei (that is, the same atomic number, or Z). A total of 118 elements have been identified, of which the first 94 occur naturally on Earth, with the remaining 24 being synthetic elements. There are 80 elements that have at least one stable isotope and 38 that have exclusively radionuclides, which decay over time into other elements. Iron is the most abundant element (by mass) making up Earth, while oxygen is the most common element in the Earth's crust. Chemical elements constitute all of the ordinary matter of the universe. However, astronomical observations suggest that ordinary observable matter makes up only about 15% of the matter in the universe: the remainder is dark matter, the composition of which is unknown, but it is not composed of chemical elements. The two lightest elements, hydrogen and helium, were mostly formed in the Big Bang and are the most common elements in the universe. The next three elements (lithium, beryllium and boron) were formed mostly by cosmic ray spallation, and are thus rarer than heavier elements. Formation of elements with 6 to 26 protons occurred and continues to occur in main sequence stars via stellar nucleosynthesis. The high abundance of oxygen, silicon, and iron on Earth reflects their common production in such stars. Elements with more than 26 protons are formed by supernova nucleosynthesis in supernovae, which, when they explode, blast these elements as supernova remnants far into space, where they may become incorporated into planets when they are formed. The term "element" is used for atoms with a given number of protons (regardless of whether or not they are ionized or chemically bonded, e.g. hydrogen in water) as well as for a pure chemical substance consisting of a single element (e.g. hydrogen gas).
For the second meaning, the terms "elementary substance" and "simple substance" have been suggested, but they have not gained much acceptance in English chemical literature, whereas in some other languages their equivalent is widely used (e.g. French corps simple, Russian простое вещество). A single element can form multiple substances differing in their structure; they are called allotropes of the element.When different elements are chemically combined, with the atoms held together by chemical bonds, they form chemical compounds. Only a minority of elements are found uncombined as relatively pure minerals. Among the more common of such native elements are copper, silver, gold, carbon (as coal, graphite, or diamonds), and sulfur. All but a few of the most inert elements, such as noble gases and noble metals, are usually found on Earth in chemically combined form, as chemical compounds. While about 32 of the chemical elements occur on Earth in native uncombined forms, most of these occur as mixtures. For example, atmospheric air is primarily a mixture of nitrogen, oxygen, and argon, and native solid elements occur in alloys, such as that of iron and nickel.The history of the discovery and use of the elements began with primitive human societies that found native elements like carbon, sulfur, copper and gold. Later civilizations extracted elemental copper, tin, lead and iron from their ores by smelting, using charcoal. Alchemists and chemists subsequently identified many more; almost all of the naturally occurring elements were known by 1900.The properties of the chemical elements are summarized in the periodic table, which organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. Save for unstable radioactive elements with short half-lives, all of the elements are available industrially, most of them in low degrees of impurities. Chemical free or chemical-free is a term used in marketing to imply that a product is safe, healthy or environmentally friendly because it only contains natural ingredients. From a chemist's perspective, the term is a misnomer, as all substances and objects are composed entirely of chemicals and energy. The term chemical is roughly a synonym for matter, and all substances, such as water and air, are chemicals.This use of the term chemical free in advertising to indicate that a product is free of synthetic chemicals, and the tolerance of its use in this fashion by the United Kingdom's Advertising Standards Authority has been the subject of criticism.A study of understandings of the term chemical among American undergraduates by chemist Gayle Nicoll in 1997 noted that "People may hold both a scientific and layman's definition of a chemical without linking the two together in any way. They may or may not consciously distinguish that the term 'chemical' has different connotations depending on the situation." Chemical physics is a subdiscipline of chemistry and physics that investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics; it is the branch of physics that studies chemical processes from the point of view of physics. While at the interface of physics and chemistry, chemical physics is distinct from physical chemistry in that it focuses more on the characteristic elements and theories of physics. Meanwhile, physical chemistry studies the physical nature of chemistry. 
Nonetheless, the distinction between the two fields is vague, and workers often practice in both fields during the course of their research.The United States Department of Education defines chemical physics as "A program that focuses on the scientific study of structural phenomena combining the disciplines of physical chemistry and atomic/molecular physics. Includes instruction in heterogeneous structures, alignment and surface phenomena, quantum theory, mathematical physics, statistical and classical mechanics, chemical kinetics, and laser physics." A chemical reaction is a process that leads to the transformation of one set of chemical substances to another. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.Reactions may proceed in the forward or reverse direction until they go to completion or reach equilibrium. Reactions that proceed in the forward direction to approach equilibrium are often described as spontaneous, requiring no input of free energy to go forward. Non-spontaneous reactions require input of free energy to go forward (examples include charging a battery by applying an external electrical power source, or photosynthesis driven by absorption of electromagnetic radiation in the form of sunlight).Different chemical reactions are used in combinations during chemical synthesis in order to obtain a desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperatures and concentrations present within a cell.The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays, and reactions between elementary particles as described by quantum field theory. Chemical similarity (or molecular similarity) refers to the similarity of chemical elements, molecules or chemical compounds with respect to either structural or functional qualities, i.e. 
the effect that the chemical compound has on reaction partners in inorganic or biological settings. Biological effects and thus also similarity of effects are usually quantified using the biological activity of a compound. In general terms, function can be related to the chemical activity of compounds (among others).The notion of chemical similarity (or molecular similarity) is one of the most important concepts in chemoinformatics. It plays an important role in modern approaches to predicting the properties of chemical compounds, designing chemicals with a predefined set of properties and, especially, in conducting drug design studies by screening large databases containing structures of available (or potentially available) chemicals. These studies are based on the similar property principle of Johnson and Maggiora, which states: similar compounds have similar properties. A chemical species is a chemical substance or ensemble composed of chemically identical molecular entities that can explore the same set of molecular energy levels on a characteristic or delineated time scale. The term is applied equally to a set of chemically identical atomic or molecular structural units in a solid array.In supramolecular chemistry, chemical species are those supramolecular structures whose interactions and associations are brought about via intermolecular bonding and debonding actions, and function to form the basis of this branch of chemistry. A chemical structure determination includes a chemist's specifying the molecular geometry and, when feasible and necessary, the electronic structure of the target molecule or other solid. Molecular geometry refers to the spatial arrangement of atoms in a molecule and the chemical bonds that hold the atoms together, and can be represented using structural formulae and by molecular models; complete electronic structure descriptions include specifying the occupation of a molecule's molecular orbitals. Structure determination can be applied to a range of targets from very simple molecules (e.g., diatomic oxygen or nitrogen), to very complex ones (e.g., such as of protein or DNA).Theories of chemical structure were first developed by August Kekule, Archibald Scott Couper, and Aleksandr Butlerov, among others, from about 1858. These theories were first to state that chemical compounds are not a random cluster of atoms and functional groups, but rather had a definite order defined by the valency of the atoms composing the molecule, giving the molecules a three dimensional structure that could be determined or solved.In determining structures of chemical compounds, one generally aims to obtain, minimally, the pattern and multiplicity of bonding between all atoms in the molecule; when possible, one seeks the three dimensional spatial coordinates of the atoms in the molecule (or other solid). The methods by which one can elucidate the structure of a molecule include spectroscopies such as nuclear magnetic resonance (proton and carbon-13 NMR), various methods of mass spectrometry (to give overall molecular mass, as well as fragment masses), and x-ray crystallography when applicable. The last technique can produce three-dimensional models at atomic-scale resolution, as long as crystals are available. When a molecule has an unpaired electron spin in a functional group of its structure, ENDOR and electron-spin resonance spectroscopes may also be performed. 
Techniques such as absorption spectroscopy and the vibrational spectroscopies, infrared and Raman, provide, respectively, important supporting information about the numbers and adjacencies of multiple bonds, and about the types of functional groups (whose internal bonding gives vibrational signatures); further inferential studies that give insight into the contributing electronic structure of molecules include cyclic voltammetry and X-ray photoelectron spectroscopy. These latter techniques become all the more important when the molecules contain metal atoms, and when the crystals required by crystallography or the specific atom types that are required by NMR are unavailable to exploit in the structure determination. Finally, more specialized methods such as electron microscopy are also applicable in some cases. Chemical synthesis is a purposeful execution of chemical reactions to obtain a product, or several products. This happens by physical and chemical manipulations usually involving one or more reactions. In modern laboratory usage, this tends to imply that the process is reproducible, reliable, and established to work in multiple laboratories.A chemical synthesis begins by selection of compounds that are known as reagents or reactants. Various reaction types can be applied to these to synthesize the product, or an intermediate product. This requires mixing the compounds in a reaction vessel such as a chemical reactor or a simple round-bottom flask. Many reactions require some form of work-up procedure before the final product is isolated.The amount of product in a chemical synthesis is the reaction yield. Typically, chemical yields are expressed as a weight in grams (in a laboratory setting) or as a percentage of the total theoretical quantity of product that could be produced. A side reaction is an unwanted chemical reaction taking place that diminishes the yield of the desired product.The word synthesis in the present day meaning was first used by the chemist Hermann Kolbe. Chemical technologists and technicians (abbr. chem techs) are workers who provide technical support or services in chemical-related fields. They may work under direct supervision or may work independently, depending on their specific position and duties. Their work environments differ widely and include, but are not limited to, laboratories and industrial settings. As such, it is nearly impossible to generalize the duties of chem techs as their individual jobs vary greatly. Biochemical techs often do similar work in biochemistry. Chemistry education (or chemical education) is the study of the teaching and learning of chemistry in all schools, colleges and universities. Topics in chemistry education might include understanding how students learn chemistry, how best to teach chemistry, and how to improve learning outcomes by changing teaching methods and appropriate training of chemistry instructors, within many modes, including classroom lecture, demonstrations, and laboratory activities. There is a constant need to update the skills of teachers engaged in teaching chemistry, and so chemistry education speaks to this need. Chemophobia (or chemphobia or chemonoia) is an aversion to or prejudice against chemicals or chemistry. The phenomenon has been ascribed both to a reasonable concern over the potential adverse effects of synthetic chemicals, and to an irrational fear of these substances because of misconceptions about their potential for harm. 
People marketing products react to widespread chemophobia with products marketed with an appeal to nature. Most of the chemophobia is perpetrated by the organic industry or groups such as March Against Monsanto, Organic Consumer's Association or Greenpeace. Minhaeng Cho (born 26 February 1965) is a South Korean scientist in researching physical chemistry, spectroscopy, and microscopy. He joined the faculty of the Department of Chemistry, College of Science, in Korea University in 1996. His research group actively studies nonlinear optical and vibrational spectroscopy, molecular dynamics simulations of chemical and biological systems in condensed phases, quantum dynamics of chemical reactions, linear and nonlinear chiroptical spectroscopy of biomolecules, quantum spectroscopy and imaging with high-precision laser technology, interferometric measurements of scattering fields for single particle tracking, chemically sensitive spectroscopy and imaging, surface-specific spectroscopy, and ultrafast vibrational microspectroscopy. He directed the National Creative Research Initiative Center for Coherent Multidimensional Spectroscopy (2000–2009). In December 2014, he was appointed as the Director of the Center for Molecular Spectroscopy and Dynamics in the Institute for Basic Science (IBS), located in Korea University, Seoul, South Korea. In organic chemistry, the Cieplak effect is a predictive model that explains why nucleophiles preferentially add to one face of a carbonyl over another. Proposed by Andrzej Stanislaw Cieplak in 1980, it explains anomalous results that other models of the time, such as the Cram and Felkin–Anh models, can't justify. In the Cieplak model, electrons from a neighboring bond delocalize into the forming carbon–nucleophile (C–Nuc) bond, lowering the energy of the transition state and accelerating the rate of reaction. Whichever bond can best donate its electrons into the C–Nuc bond determines which face of the carbonyl the nucleophile will add to. The nucleophile may be a number of reagents, most commonly organometallic or reducing agents. The Cieplak effect is subtle, and often competes with sterics, solvent effects, counterion complexation of the carbonyl oxygen, and other effects to determine product distribution. Clandestine chemistry is chemistry carried out in secret, and particularly in illegal drug laboratories. Larger labs are usually run by gangs or organized crime intending to produce for distribution on the black market. Smaller labs can be run by individual chemists working clandestinely in order to synthesize smaller amounts of controlled substances or simply out of a hobbyist interest in chemistry, often because of the difficulty in ascertaining the purity of other, illegally synthesized drugs obtained on the black market. The term clandestine lab is generally used in any situation involving the production of illicit compounds, regardless of whether the facilities being used qualify as a true laboratory. Clay chemistry is an applied subdiscipline of chemistry which studies the chemical structures, properties and reactions of or involving clays and clay minerals. 
It is a multidisciplinary field, involving concepts and knowledge from inorganic and structural chemistry, physical chemistry, materials chemistry, analytical chemistry, organic chemistry, mineralogy, geology and others. The study of the chemistry (and physics) of clays and clay minerals is of great academic and industrial relevance as they are among the most widely used industrial minerals, being employed as raw materials (ceramics, pottery, etc.), adsorbents, catalysts, additives, mineral charges, medicines, building materials and others. The unique properties of clay minerals, including nanometric-scale layered construction, the presence of fixed and exchangeable charges, the possibility of adsorbing and hosting (intercalating) molecules, the ability to form stable colloidal dispersions, and the possibility of tailored surface and interlayer chemical modification, among others, make the study of clay chemistry a very important and extremely varied field of research. Many distinct fields and knowledge areas are impacted by the physico-chemical behavior of clay minerals, from environmental sciences to chemical process engineering, from pottery to nuclear waste management. Their cation exchange capacity (CEC) is of great importance in the balance of the most common cations in soil (Na+, K+, NH4+, Ca2+, Mg2+) and in pH control, with a direct impact on soil fertility. It also plays an important role in the fate of most Ca2+ arriving from land (river water) into the seas. The ability to change and control the CEC of clay minerals offers a valuable tool in the development of selective adsorbents, with applications as varied as chemical sensors or pollution-cleaning substances for contaminated water, for example. The understanding of the reactions of clay minerals with water (intercalation, adsorption, colloidal dispersion, etc.) is indispensable for the ceramic industry (plasticity and flow control of ceramic raw mixtures, for example). Those interactions also influence a great number of mechanical properties of soils, and are carefully studied by building and construction engineering specialists. The interactions of clay minerals with organic substances in the soil also play a vital role in the fixation of nutrients and fertility, as well as in the fixation or leaching of pesticides and other contaminants. Some clay minerals (such as kaolinite) are used as carrier materials for fungicides and insecticides. The weathering of many rock types produces clay minerals as one of its final products. The understanding of these geochemical processes is also important for the understanding of the geological evolution of landscapes and the macroscopic properties of rocks and sediments. The presence of clay minerals on Mars, detected by the Mars Reconnaissance Orbiter in 2009, was further strong evidence of the existence of water on the planet in previous geological eras. The possibility of dispersing nanometric-scale clay mineral particles into a polymer matrix, forming an inorganic–organic nanocomposite, has prompted a large resurgence in the study of these minerals since the late 1990s. In addition, the study of clay chemistry is of great relevance to the chemical industry, as many clay minerals are used as catalysts, catalyst precursors or catalyst substrates in a number of chemical processes, such as automotive catalysis and oil cracking. The colloidal probe technique is commonly used to measure interaction forces acting between colloidal particles and/or planar surfaces in air or in solution.
This technique relies on the use of an atomic force microscope (AFM). However, instead of a cantilever with a sharp AFM tip, one uses the colloidal probe. The colloidal probe consists of a colloidal particle of a few micrometers in diameter that is attached to an AFM cantilever. The colloidal probe technique can be used in the sphere-plane or sphere-sphere geometries (see figure). One typically achieves a force resolution between 1 and 100 pN and a distance resolution between 0.5 and 2 nm. The colloidal probe technique was developed in 1991 independently by Ducker and Butt. Since its development, this tool has gained wide popularity in numerous research laboratories, and numerous reviews are available in the scientific literature. Alternative techniques to measure forces between surfaces include the surface forces apparatus, total internal reflection microscopy, and optical tweezers techniques together with video microscopy. Combinatorial chemistry comprises chemical synthetic methods that make it possible to prepare a large number (tens to thousands or even millions) of compounds in a single process. These compound libraries can be made as mixtures, sets of individual compounds or chemical structures generated by computer software. Combinatorial chemistry can be used for the synthesis of small molecules and for peptides. Strategies that allow identification of useful components of the libraries are also part of combinatorial chemistry. The methods used in combinatorial chemistry are applied outside chemistry, too. Core–shell semiconducting nanocrystals (CSSNCs) are a class of materials which have properties intermediate between those of small, individual molecules and those of bulk, crystalline semiconductors. They are unique because of their easily modular properties, which are a result of their size. These nanocrystals are composed of a quantum dot semiconducting core material and a shell of a distinct semiconducting material. The core and the shell are typically composed of type II–VI, IV–VI, and III–V semiconductors, with configurations such as CdS/ZnS, CdSe/ZnS, CdSe/CdS, and InAs/CdSe (typical notation is core/shell). Organically passivated quantum dots have low fluorescence quantum yield due to surface-related trap states. CSSNCs address this problem because the shell increases the quantum yield by passivating the surface trap states. In addition, the shell provides protection against environmental changes and photo-oxidative degradation, and provides another route for modularity. Precise control of the size, shape, and composition of both the core and the shell enables the emission wavelength to be tuned over a wider range of wavelengths than with either individual semiconductor. These materials have found applications in biological systems and optics. In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. In a crossover experiment, two similar but distinguishable reactants simultaneously undergo a reaction as part of the same reaction mixture. The products formed will either correspond directly to one of the two reactants (non-crossover products) or will include components of both reactants (crossover products).
The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other. The results of crossover experiments are often straightforward to analyze, making them one of the most useful and most frequently applied methods of mechanistic study. In organic chemistry, crossover experiments are most often used to distinguish between intramolecular and intermolecular reactions. Inorganic and organometallic chemists rely heavily on crossover experiments, and in particular isotopic labeling experiments, for support or contradiction of proposed mechanisms. When the mechanism being investigated is more complicated than an intra- or intermolecular substitution or rearrangement, crossover experiment design can itself become a challenging question. A well-designed crossover experiment can lead to conclusions about a mechanism that would otherwise be impossible to make. Many mechanistic studies include both crossover experiments and measurements of rate and kinetic isotope effects. Crystal chemistry is the study of the principles of chemistry behind crystals and their use in describing structure–property relations in solids. The principles that govern the assembly of crystal and glass structures are described, models of many of the technologically important crystal structures (zinc blende, alumina, quartz, perovskite) are studied, and the effect of crystal structure on the various fundamental mechanisms responsible for many physical properties is discussed. The objectives of the field include: identifying important raw materials and minerals, as well as their names and chemical formulae; describing the crystal structure of important materials and determining their atomic details; learning the systematics of crystal and glass chemistry; understanding how physical and chemical properties are related to crystal structure and microstructure; and studying the engineering significance of these ideas and how they relate to foreign products: past, present, and future. Topics studied include: chemical bonding and electronegativity; fundamentals of crystallography, such as crystal systems, Miller indices, symmetry elements, bond lengths and radii, and theoretical density (a worked example follows below); crystal and glass structure prediction, including Pauling's and Zachariasen's rules; phase diagrams and crystal chemistry (including solid solutions); imperfections (including defect chemistry and line defects); phase transitions; structure–property relations, including Neumann's law, melting point, mechanical properties (hardness, slip, cleavage, elastic moduli), wetting, thermal properties (thermal expansion, specific heat, thermal conductivity), diffusion, ionic conductivity, refractive index, absorption, color, dielectrics and ferroelectrics, and magnetism; and the crystal structures of representative metals, semiconductors, polymers, and ceramics. Crystallography is a natural science with the scope of investigating matter in the crystalline state. In the modern era, this mainly implies determining the arrangement of atoms in crystals (see crystal structure). The word "crystallography" derives from the Greek words crystallon "cold drop, frozen drop", with its meaning extending to all solids with some degree of transparency, and graphein "to write". In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming that 2014 would be the International Year of Crystallography.
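As a worked example of the "theoretical density" entry in the crystal chemistry topics above: the density of a crystal can be estimated from its unit-cell contents as rho = Z * M / (N_A * V_cell). The short Python sketch below applies this to rock-salt NaCl (Z = 4 formula units per cubic cell, lattice parameter of about 5.64 angstroms, molar mass of about 58.44 g/mol); these particular values are supplied here purely for illustration and are not taken from the text.

    # Theoretical density of rock-salt NaCl from its unit-cell contents.
    N_A = 6.022e23        # Avogadro's number, 1/mol
    Z = 4                 # formula units per unit cell (rock-salt structure)
    M = 58.44             # molar mass of NaCl, g/mol
    a_cm = 5.64e-8        # cubic lattice parameter, cm
    V_cell = a_cm ** 3    # unit-cell volume, cm^3

    rho = Z * M / (N_A * V_cell)                              # theoretical density, g/cm^3
    print(f"theoretical density of NaCl = {rho:.2f} g/cm^3")  # about 2.16 g/cm^3

The result, about 2.16 g/cm^3, is close to the measured density of halite, which is the kind of structure–property check this topic list has in mind.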
X-ray crystallography is used to determine the structure of large biomolecules such as proteins. Before the development of X-ray diffraction crystallography (see below), the study of crystals was based on physical measurements of their geometry. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. This physical measurement is carried out using a goniometer. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. Crystallographic methods now depend on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. This is facilitated by the wave properties of the particles. Crystallographers often explicitly state the type of beam used, as in the terms X-ray crystallography, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample. Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition, the magnetic moment of neutrons is non-zero. They are therefore also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels. However, the material can sometimes be treated to substitute deuterium for hydrogen. Because of these different forms of interaction, the three types of radiation are suitable for different crystallographic studies. DePriester charts provide an efficient method to find the vapor-liquid equilibrium ratios for different substances at different conditions of pressure and temperature. The original chart was put forth by C.L. DePriester in an article in Chemical Engineering Progress in 1953. These nomograms have two vertical coordinates, one for pressure and another for temperature. "K" values, representing the tendency of a given chemical species to partition itself preferentially between the liquid and vapor phases, are plotted in between. Many DePriester charts have been printed for simple hydrocarbons. Double layer forces occur between charged objects across liquids, typically water. This force acts over distances that are comparable to the Debye length, which is on the order of one to a few tens of nanometers (a rough numerical estimate is sketched below). The strength of these forces increases with the magnitude of the surface charge density (or the electrical surface potential). For two similarly charged objects, this force is repulsive and decays exponentially at larger distances (see figure). For unequally charged objects, and eventually at shorter distances, these forces may also be attractive. The theory due to Derjaguin, Landau, Verwey, and Overbeek (DLVO) combines such double layer forces together with van der Waals forces in order to estimate the actual interaction potential between colloidal particles. An electrical double layer develops near charged surfaces (or other charged objects) in aqueous solutions.
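To give a rough numerical sense of the Debye length mentioned above, the sketch below evaluates the standard expression kappa^-1 = sqrt(eps_r * eps_0 * k_B * T / (2 * N_A * e^2 * I)) for a symmetric 1:1 electrolyte such as NaCl in water at 25 °C; the salt concentrations used are arbitrary examples, not values taken from the text.

    import math

    eps0 = 8.854e-12      # vacuum permittivity, F/m
    eps_r = 78.5          # relative permittivity of water at 25 degrees C
    kB = 1.381e-23        # Boltzmann constant, J/K
    T = 298.15            # temperature, K
    e = 1.602e-19         # elementary charge, C
    NA = 6.022e23         # Avogadro's number, 1/mol

    def debye_length_nm(c_molar):
        """Debye length in nm for a 1:1 electrolyte of molar concentration c_molar."""
        ionic_strength = c_molar * 1000.0    # convert mol/L to mol/m^3 (I = c for a 1:1 salt)
        kappa_inv = math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * ionic_strength))
        return kappa_inv * 1e9

    for c in (0.1, 0.01, 0.001):             # 100 mM, 10 mM and 1 mM salt
        print(f"{c * 1000:.0f} mM: Debye length = {debye_length_nm(c):.1f} nm")

The output, roughly 1 nm, 3 nm and 10 nm, shows why the range is quoted as one to a few tens of nanometers, and why the screened, exponentially decaying repulsion becomes shorter-ranged as salt is added.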
Within this double layer, the first layer corresponds to the charged surface. These charges may originate from tightly adsorbed ions, dissociated surface groups, or substituted ions within the crystal lattice. The second layer corresponds to the diffuse layer, which contains the neutralizing charge consisting of accumulated counterions and depleted coions. The resulting potential profile between these two objects leads to differences in the ionic concentrations within the gap between these objects with respect to the bulk solution. These differences generate an osmotic pressure, which generates a force between these objects.These forces are easily experienced when hands are washed with soap. Adsorbing soap molecules make the skin negatively charged, and the slippery feeling is caused by the strongly repulsive double layer forces. These forces are further relevant in many colloidal or biological systems, and may be responsible for their stability, formation of colloidal crystals, or their rheological properties. A dry cell is a type of battery, commonly used for portable electrical devices. It was developed in 1886 by the German scientist Carl Gassner, after development of wet zinc-carbon batteries by Georges Leclanché in 1866.A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. Wet cells have continued to be used for high-drain applications, such as starting internal combustion engines, because inhibiting the electrolyte flow tends to reduce the current capability.A common dry cell is the zinc-carbon cell, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline cell (since both use the same zinc–manganese dioxide combination).A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride. History (from Greek ἱστορία, historia, meaning "inquiry, knowledge acquired by investigation") is the study of the past as it is described in written documents. Events occurring before written record are considered prehistory. It is an umbrella term that relates to past events as well as the memory, discovery, collection, organization, presentation, and interpretation of information about these events. Scholars who write about history are called historians.History can also refer to the academic discipline which uses a narrative to examine and analyse a sequence of past events, and objectively determine the patterns of cause and effect that determine them. 
Historians sometimes debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present.Stories common to a particular culture, but not supported by external sources (such as the tales surrounding King Arthur), are usually classified as cultural heritage or legends, because they do not show the "disinterested investigation" required of the discipline of history. Herodotus, a 5th-century BC Greek historian is considered within the Western tradition to be the "father of history", and, along with his contemporary Thucydides, helped form the foundations for the modern study of human history. Their works continue to be read today, and the gap between the culture-focused Herodotus and the military-focused Thucydides remains a point of contention or approach in modern historical writing. In Asia, a state chronicle, the Spring and Autumn Annals was known to be compiled from as early as 722 BC although only 2nd-century BC texts survived.Ancient influences have helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today. The modern study of history is wide-ranging, and includes the study of specific regions and the study of certain topical or thematical elements of historical investigation. Often history is taught as part of primary and secondary education, and the academic study of history is a major discipline in university studies. Castell Caer Seion is an Iron Age hillfort situated at the top of Conwy Mountain, in Conwy County, North Wales. It is unusual for the fact that the main fort contains a smaller, more heavily defended fort; complete with its own distinct defences and entrance, with no obvious means of access between the two. The construction date of the original fort is still unknown, but recent excavations have revealed evidence of occupation as early as the 6th Century BC; whilst the smaller fort can be dated with reasonable certainty to around the 4th Century BC. Whilst the forts were constructed in different periods, archaeologists have uncovered evidence of concurrent occupation, seemingly up until around the 2nd Century BC. The larger fort was host to around 50 roundhouses during its lifetime, whereas examinations of the smaller fort have turned up no more than six. The site was traditionally associated with Maelgwyn Gwynedd (c. 480 – c. 547 AD), but there is no evidence pointing to a 6th Century occupation. The fort and wider area beyond its boundaries have been said to retain significant archaeological potential, and are protected by law as a scheduled ancient monument. Elegant decay is the cultural agreement that some places, and structures, become gradually more elegant, notable, or beautiful as they decay, or fall into ruin, due to their historical, architectural, or cultural significance. Although such reverence is subjective, it is true that certain cities, regions or even countries are more susceptible to the general concept due to past opulence, or their lengthy or enduring history or culture.Contrary to the general interpretation that places or structures are more noteworthy in their newness, or older structures hold more value and interest after being historically restored to their original state, the concept of elegant decay is that in the slow degradation of the structures an inherent elegance, or beauty, emerges due to past historic importance. 
In recent times, this concept has been exploited by those seeking to stimulate tourism. In general, expansionism consists of policies of governments and states that involve territorial, military or economic expansion. While some have linked the term to promoting economic growth (in contrast to no growth or sustainable policies), more commonly expansionism refers to the doctrine of a state expanding its territorial base or economic influence. This occurs usually, though not necessarily, by means of military aggression. Compare empire-building, colonialism, and mensurable.Anarchism, reunification or pan-nationalism are sometimes used to justify and legitimize expansionism, but only when the explicit goal is to reconquer territories that have been lost, or to take over ancestral lands. A simple territorial dispute, such as a border dispute, is not usually referred to as expansionism. Dennis Griffiths (8 December 1933 – 24 December 2015) was a British journalist and historian, regarded as the founding father of newspaper history from the earliest days of Fleet Street. His Encyclopedia of the British Press 1422–1992 has become a standard work of reference for the whole industry. Born in Swansea, the son of a compositor, he trained as a printer himself, rose to become the production chief of the London Evening Standard for 18 years and wrote six books, including a definitive history of that newspaper from its launch in 1827, much praised in the foreword by its former owner the late Vere Harmsworth.From 1999 to 2002 Griffiths was an energetic chairman of the London Press Club. In March 2002, he helped organise the 300th anniversary celebration for the first regular daily newspaper to be printed in the United Kingdom. The Prince of Wales unveiled a brass plaque at a service in St Bride’s, the journalists’ church, on the date The Daily Courant was first published in Fleet Street. In 2006 the British Library published his book Fleet Street – Five Hundred Years of the Press to coincide with an exhibition of newspaper front pages which he co-curated. He also helped prepare an oral archive of newspaper history, and that year was himself interviewed by National Life Stories (C638/06) for the 'Oral History of the British Press' collection held by the library. In 2013 he founded the Coranto Press which published scholarly works on the media.Griffiths often retold the story of how in 1969 the Evening Standard pre-printed front pages showing a facsimile colour picture of Neil Armstrong being the first man to step onto the moon – 24 hours ahead of actually landing. A historical figure is a famous person in history, such as Catherine the Great, Gandhi, Martin Luther King, Abraham Lincoln, Washington, Napoleon, Mandela or Jim Morrison.The significance of such figures in human progress has been debated. Some think they play a crucial role, while others say they have little impact on the broad currents of thought and social change. The concept is generally used in the sense that the person really existed in the past, as opposed to being legendary. However, the legends that can grow up around historical figures may be hard to distinguish from fact. Sources are often incomplete and may be inaccurate, particularly those from early periods of history. Without a body of personal documents, the more subtle aspects of personality of a historical figure can only be deduced. 
With historical figures who were also religious figures attempts to separate fact from belief may be controversial.In education, presenting information as if it were being told by a historical figure may give it greater impact. Since classical times, students have been asked to put themselves in the place of a historical figure as a way of bringing history to life. Historical figures are often represented in fiction, where fact and fancy are combined. In earlier traditions, before the rise of a critical historical tradition, authors took less care to be as accurate when describing what they knew of historical figures and their actions, interpolating imaginary elements intended to serve a moral purpose to events: such is the Monk of St. Gall's anecdotal account of Charlemagne, De Carolo Magno. More recently there has been a tendency once again for authors to freely depart from the "facts" when they conflict with their creative goals. National memory is a form of collective memory defined by shared experiences and culture. It is an integral part to national identity.It represents one specific form of cultural memory which makes an essential contribution to national group cohesion. Historically national communities have drawn upon commemorative ceremonies and monuments, myths and rituals, glorified individuals, objects, and events in their own history to produce a common narrative.According to Lorraine Ryan, national memory is based on the public's reception of national historic narratives and the ability of people to affirm the legitimacy of these narratives. State collapse, breakdown, or downfall is the complete failure of a mode of governance within a state. Sometimes this brings about a failed state, as in Somalia ; more often, there is an immediate process of transition to a new administration, and basic services such as tax collection, defence, police, civil service and courts are either maintained throughout, or else quickly restored, as in Chad - as Martin Wight points out, 'states are immortal'. For example, Imperial Russia collapsed in 1917 and was replaced by the USSR, which in turn collapsed and fragmented in 1991. Nazi Germany, 'crushed' in 1945, was, as West Germany, 'rapidly rearmed against Russia'. Not all attempts at regime change bring about state collapse. The Babington plot to assassinate Queen Elizabeth I of England, the Decembrist revolt in Russia, and the Bay of Pigs invasion of Cuba were ineffective. Law is a system of rules that are created and enforced through social or governmental institutions to regulate behavior. Law as a system helps regulate and ensure that a community show respect, and equality amongst themselves. State-enforced laws can be made by a collective legislature or by a single legislator, resulting in statutes, by the executive through decrees and regulations, or established by judges through precedent, normally in common law jurisdictions. Private individuals can create legally binding contracts, including arbitration agreements that may elect to accept alternative arbitration to the normal court process. The formation of laws themselves may be influenced by a constitution, written or tacit, and the rights encoded therein. 
The law shapes politics, economics, history and society in various ways and serves as a mediator of relations between people. A general distinction can be made between (a) civil law jurisdictions, in which a legislature or other central body codifies and consolidates their laws, and (b) common law systems, where judge-made precedent is accepted as binding law. Historically, religious law played a significant role even in settling secular matters, and it is still used in some religious communities. Islamic Sharia law is the world's most widely used religious law, and is used as the primary legal system in some countries, such as Iran and Saudi Arabia. The adjudication of the law is generally divided into two main areas. Criminal law deals with conduct that is considered harmful to social order and in which the guilty party may be imprisoned or fined. Civil law (not to be confused with civil law jurisdictions above) deals with the resolution of lawsuits (disputes) between individuals or organizations. Law provides a source of scholarly inquiry into legal history, philosophy, economic analysis and sociology. Law also raises important and complex issues concerning equality, fairness, and justice. This collection of lists of law topics collects the names of topics related to law. Everything related to law, even quite remotely, should be included on the alphabetical list and on the appropriate topic lists, and all links on topical lists should also appear in the main alphabetical listing. Legal anthropology, also known as the anthropology of laws, is a sub-discipline of anthropology which specializes in "the cross-cultural study of social ordering". The questions that legal anthropologists seek to answer concern how law is present in cultures, how it manifests, and how anthropologists may contribute to understandings of law. Earlier legal anthropological research focused more narrowly on conflict management, crime, sanctions, or formal regulation. Bronisław Malinowski's 1926 work, Crime and Custom in Savage Society, explored law, order, crime, and punishment among the Trobriand Islanders. The English lawyer Sir Henry Maine is often credited with founding the study of legal anthropology through his book Ancient Law (1861), and although his evolutionary stance has been widely discredited within the discipline, the questions he raised have shaped the subsequent discourse of the study. This ethno-centric evolutionary perspective was pre-eminent in early anthropological discourse on law, evident through applied terms such as 'pre-law' or 'proto-law', and was applied by so-called armchair anthropologists. However, a turning point was presented in the 1926 publication of Crime and Custom in Savage Society by Malinowski, based upon his time with the Trobriand Islanders. Through emphasizing the order present in acephalous societies, Malinowski proposed the cross-cultural examination of law through its established functions rather than as a discrete entity.
This has led to multiple researchers and ethnographies examining such aspects as order, dispute, conflict management, crime, sanctions, or formal regulation, in addition (and often antagonistically) to law-centred studies, with small-societal studies leading to insightful self-reflections and better understanding of the founding concept of law.Legal anthropology remains a lively discipline with modern and recent applications including issues such as human rights, legal pluralism, Islamophobia and political uprisings. Legal archaeology is an area of legal scholarship "involving detailed historical reconstruction and analysis of important cases." While most legal scholars confine their research to published opinions of court cases, legal archaeologists examine the historical and social context in which a court case was decided. These facts may show what social and cultural forces were at work in a particular case. Professors can use legal archaeology to "sensitize students as to how inequality, specifically with regard to race, gender and class affects what occurs throughout the cases they study." A legal archaeologist might also research biographical material on the judges, attorneys, and parties to a court case. Such information might show whether a judge held particular biases in a case, or if one party had superior legal representation that caused the party to prevail in a case. Asymmetric negotiation is influence that occurs between counterparts of significantly different sizes as measured by the parties’ relative resources and clout in a particular context. The context for these negotiations or conflicts can range from mergers & acquisitions and international trade deals, to hostage-takings and initiating change at a local school board.A larger party in one context can be a smaller party in another. For instance, a US corporation may be a much larger buyer in an asymmetric negotiation with a North American supplier, while reduced to being a relatively small player overseas in negotiations with the European Union where it has fewer resources and less clout.Just as in asymmetric warfare, research has shown that smaller players can prevail in getting what they want from much larger players by applying distinct approaches, strategies and tactics that increase their odds of success.This specific form of negotiation contrasts with symmetrical or standard negotiations where the parties are more similar in size. Body image law is the developing area of law that, according to Dr Marilyn Bromberg of the University of Western Australia Law School and Cindy Halliwell, a law student at Deakin University, "encompasses the bills, laws and government actions (such as establishing parliamentary inquiries and creating policies) that may help to improve the body image of the general public, and particularly of young people". Among the reasons for implementing law in this area is to prevent the images of unhealthily thin women causing poor body image which can, along with other factors, lead to an eating disorder.The Israeli government passed a body image law in 2012 which became operational the following year. The law requires models to have a minimum body mass index to work and if an image was photoshopped to make the model appear thinner, it must have a warning. The warning must state that the image was modified and it must take up at least seven percent of the image. Breaches can result in a civil lawsuit. The French Government passed a similar law in 2015 which came into effect in 2017. 
This law requires that models provide their employers with a "medical certificate, valid for up to two years, confirming their general physical well-being and the fact that they are not excessively underweight." The BMI of models older than 16 will also be taken into consideration, when determining their overall health. In contrast to the Israeli law, breaching it attracts criminal sanctions. Additionally, any photo that has been digitally altered must be labeled as such; failure to label these photos will result in a "fine of 37,500 euros, or more than $41,000," and hiring a model without the verified medical certificate and requirements "carries a fine of €75,000 and six months in jail." However, the law that dictates that digitally altered images must be labeled "applies only to advertising, not to editorial images in magazines or newspapers." The Greater London Authority banned advertisements that promote unhealthy body image on Transport for London public transport in 2016. Similarly, Trondheim banned advertisements that promote unhealthy body image in public places. The Government of Australia's position in this area is that it is up to industry to solve the problem of poor body image. The previous Labor Government was responsible for creating a non-binding Voluntary Industry Code of Conduct on Body Image. Bullying in the legal profession is believed to be more common than in some other professions. It is believed that its adversarial, hierarchical tradition contributes towards this. Women, trainees and solicitors who have been qualified for five years or less are more impacted, as are ethnic minority lawyers and lesbian, gay and bisexual lawyers.Half of women lawyers and one in three men who took part in a study by the Law Council of Australia (LCA) reported they had been bullied or intimidated in the workplace. The Law Council of Australia has found that women face significant levels of discrimination, with one of the study's key figures telling Lawyers Weekly the profession is a "men's only club".According to former High Court judge Michael Kirby, the rudeness of judges trickles down to senior lawyers who then vent their frustrations on more junior staff, thus creating cycle of a bullying and stress that is rife within the legal profession. For the legal system of ecclesiastical canons, see Canon law and Canon law (Catholic Church).In Catholic canon law, a canon is a certain rule or norm of conduct or belief prescribed by the Catholic Church. The word "canon" comes from the Greek kanon, which in its original usage denoted a straight rod that was later the instrument used by architects and artificers as a measuring stick for making straight lines. Kanon eventually came to mean a rule or norm, so that when the first ecumentical council—Nicaea I—was held in 325, kanon started to obtain the restricted juridical denotation of a law promulgated by a synod or ecumenical council, as well as that of an individual bishop. Capital punishment, also known as the death penalty, is a government sanctioned practice whereby a person is put to death by the state as a punishment for a crime. The sentence that someone be punished in such a manner is referred to as a death sentence, whereas the act of carrying out the sentence is known as an execution. Crimes that are punishable by death are known as capital crimes or capital offences, and they commonly include offences such as murder, treason, espionage, war crimes, crimes against humanity and genocide. Etymologically, the term capital (lit. 
"of the head", derived via the Latin capitalis from caput, "head") in this context alluded to execution by beheading.Fifty-six countries retain capital punishment, 103 countries have completely abolished it de jure for all crimes, six have abolished it for ordinary crimes (while maintaining it for special circumstances such as war crimes), and 30 are abolitionist in practice.Capital punishment is a matter of active controversy in various countries and states, and positions can vary within a single political ideology or cultural region. In the European Union, Article 2 of the Charter of Fundamental Rights of the European Union prohibits the use of capital punishment. Also, the Council of Europe, which has 47 member states, prohibits the use of the death penalty by its members.The United Nations General Assembly has adopted, in 2007, 2008, 2010, 2012 and 2014, non-binding resolutions calling for a global moratorium on executions, with a view to eventual abolition. Although most nations have abolished capital punishment, over 60% of the world's population live in countries where death penalty is legal punishment, such as China, Japan, South Korea, India, Pakistan, Bangladesh, Sri Lanka, United States and Indonesia. In keeping with the Paris Principles definition of a child soldier, the Roméo Dallaire Child Soldiers Initiative defines a child pirate' as any person below 18 years of age who is or who has been recruited or used by a pirate gang in any capacity, including children - boys and/or girls - used as gunmen in boarding parties, hostage guards, negotiators, ship captains, messengers, spies or for sexual purposes, whether at sea or on land. It does not only refer to a child who is taking or has taken a direct part in kinetic criminal operations.Children may volunteer to participate in piratical activities (usually on account of socioeconomic desperation, familial suggestion or peer influence) or they may be forcibly abducted by piratical gangs. Community sentence or alternative sentencing or non-custodial sentence is a collective name in criminal justice for all the different ways in which courts can punish a defendant who has been convicted of committing an offence, other than through a custodial sentence (serving a jail or prison term) or capital punishment (death).Traditionally, the theory of retributive justice is based on the ideas of retaliation (punishment), which is valuable in itself, and also provides deterrent. Before the police, sentences of execution or imprisonment were thought pretty efficient at this, while at the same time removing the threat criminals pose to the public (protection). Alternative sentences add to these goals, trying to reform the offender (rehabilitation), and put right what he did (reparation).Traditionally, victims of a crime only played a small part in the criminal justice process, as this breaching the rules of the society. The restorative approach to justice approach often makes it a part of a sentence for the offender to apologize, compensate the damage they have caused or repair it with their own labour.The shift towards alternative sentencing means that some offenders avoid imprisonment with its many unwanted consequences. This is beneficial for the society, as it may prevent them from getting into the so-called the revolving door syndrome, the inability of a person to go back to normal life after leaving a prison, becoming a career criminal. 
Furthermore, there are hopes that this could alleviate prison overcrowding and reduce the cost of punishment. Instead of depriving those who commit less dangerous offences (such as summary offences) of their freedom, the courts put some limitations on them and give them some duties. The list of components that make up a community sentence naturally differs between countries, and they are combined individually by the court. Non-custodial sentences can include: unpaid work (sometimes called community payback or community service); house arrest; a curfew; a suspended sentence (meaning that breaking the law during the sentence may lead to imprisonment); wearing an electronic tag; mandatory treatments and programmes (drug or alcohol treatment, psychological help, back-to-work programmes); an apology to the victim; specific court orders and injunctions (for example, not to drink alcohol, not to visit certain pubs or meet certain people); regular reporting to someone (an offender manager or probation officer); and judicial corporal punishment. Comparative law is the study of differences and similarities between the law of different countries. More specifically, it involves the study of the different legal "systems" (or "families") in existence in the world, including the common law, the civil law, socialist law, canon law, Jewish law, Islamic law, Hindu law, and Chinese law. It includes the description and analysis of foreign legal systems, even where no explicit comparison is undertaken. The importance of comparative law has increased enormously in the present age of internationalism, economic globalization and democratization. Constitutional law is the body of law which defines the role, powers, and structure of different entities within a state, namely, the executive, the parliament or legislature, and the judiciary, as well as the basic rights of citizens and, in federal countries such as India and Canada, the relationship between the central government and state, provincial, or territorial governments. Not all nation states have codified constitutions, though all such states have a jus commune, or law of the land, that may consist of a variety of imperative and consensual rules. These may include customary law, conventions, statutory law, judge-made law, or international rules and norms. Constitutional law deals with the fundamental principles by which the government exercises its authority. In some instances, these principles grant specific powers to the government, such as the power to tax and spend for the welfare of the population. At other times, constitutional principles act to place limits on what the government can do, such as prohibiting the arrest of an individual without sufficient cause. In most nations, such as the United States, India, and Singapore, constitutional law is based on the text of a document ratified at the time the nation came into being. Other constitutions, notably that of the United Kingdom, rely heavily on unwritten rules known as constitutional conventions; their status within constitutional law varies, and the terms of conventions are in some cases strongly contested. Although the general English usage of the adjective constructive is "helping to develop or improve something; helpful to someone, instead of upsetting and negative," as in the phrase "constructive criticism," in legal writing constructive has a different meaning. In its usage in law, constructive means what the law considers something to be, irrespective of the intentions of the relevant actor and irrespective of actual facts.
It has also been defined in these terms: "That which exists, not in fact, but as a result of the operation of law. That which takes on a character as a consequence of the way it is treated by a rule or policy of law, as opposed to its actual character." For example: "Constructive notice" refers to a judicial presumption that a person knows of some fact, because certain acts such as registration with a public agency have occurred, even though the person is actually ignorant of the fact. "Constructive knowledge" is knowledge that courts impute to a person because such knowledge is obtainable by the exercise of reasonable care. "Constructive eviction" occurs when a landlord does not actually evict but does something that renders the premises unlivable. This might occur, for example, where a tenant vacates an apartment because a landlord turns off the heat or water. The tenant, however, must abandon possession in order to claim that there was a constructive eviction. "Constructive fraud," unlike actual fraud which requires an intentional false statement, requires only a negligent false statement that causes damage to the plaintiff (or in some states an innocent but injurious false statement). The term course of dealing is defined in the Uniform Commercial Code as follows: A "course of dealing" is a sequence of conduct concerning previous transactions between the parties to a particular transaction that is fairly to be regarded as establishing a common basis of understanding for interpreting their expressions and other conduct. UCC § 1-303(b). "Course of dealing," as defined in subsection (b), is restricted, literally, to a sequence of conduct between the parties previous to the agreement. A sequence of conduct after or under the agreement, however, is a "course of performance." Even though, according to the parol evidence rule, words and terms in a writing intended to be the final expression of the agreement of the parties may not be contradicted by extrinsic evidence of a prior or contemporaneous agreement, extrinsic evidence in the form of course of dealing nonetheless may be used to explain or supplement the writing. An integration clause in a contract, stating that the parties intend the writing to be a complete and exclusive statement of the terms of the agreement, does not suffice to negate the importance of course of dealing, "because these are such an integral part of the contract that they are not normally disclaimed by general language in the merger clause." Under the common law, extrinsic evidence such as course of dealing could be considered only if the written contract was ambiguous. By contrast, "Under the UCC, the lack of facial ambiguity in the contract language is basically irrelevant to whether extrinsic evidence ought to be considered by the court as an initial matter." Evidence of course of dealing will be disallowed, however, if it is "carefully negated" in the parties' contract by "specific and unequivocal" language. Although the term is usually used in US contract law, where the parties' course of dealing helps the court to understand the intention of the contracting parties, it is also used elsewhere in the law. In US patent law the term is used to help interpret the meaning of words used in patent claims by examining the prosecution history of a patent to determine what meaning the applicant and patent examiner understood claim words to have.
It has been observed in the Federal Circuit:The prosecution history often proves useful in determining a patent's scope, for it reveals the course of dealing with the Patent Office, which may show a particular meaning attached to the terms, or a position taken by the applicant to ensure that the patent would issue. The term course of performance is defined in the Uniform Commercial Code as follows:(a) A "course of performance" is a sequence of conduct between the parties to a particular transaction that exists if:(1) the agreement of the parties with respect to the transaction involves repeated occasions for performance by a party; and(2) the other party, with knowledge of the nature of the performance and opportunity for objection to it, accepts the performance or acquiesces in it without objection.UCC § 1-303(a). "Course of dealing," as defined in [UCC § 1-303] subsection (b), is restricted, literally, to a sequence of conduct between the parties previous to the agreement. A sequence of conduct after or under the agreement, however, is a "course of performance."Where a contract involves repeated occasions for performance and opportunity for objection "any course of performance accepted or acquiesced in without objection shall be relevant to determine the meaning of the agreement." "[S]uch course of performance shall be relevant to show a waiver or modification of any term inconsistent with such course of performance." This UCC section recognizes that the "parties themselves know best what they have meant by their words of agreement and their action under that agreement is the best indication of what that meaning was."It is well established that a written contract may be modified by the parties' post-agreement "course of performance." A waiver that changes the express terms of a contract can be established by evidence of a course of performance. This holds true even for contracts that are fully integrated. The policy behind this "broad doctrine of waiver" in contract law is to "prevent the waiving party from 'lull[ing] another into a false assurance that strict compliance with a contractual duty will not be required and then sue for noncompliance.' "It is not necessary that the contract be ambiguous before course of performance will be considered.A course of dealing is shown by repeated instances of the relevant conduct, not single occasions or actions. Freemen-on-the-land (also freemen-of-the-land, the freemen movement or simply freemen) are a loose group of individuals who believe that they are bound by statute laws only if they consent to those laws. They believe that they can therefore declare themselves independent of the government and the rule of law, holding that the only "true" law is their own interpretation of "common law". This belief has been described as a conspiracy theory. Freemen are active in English-speaking countries: the United Kingdom, Ireland, Canada, the United States, Australia, and New Zealand.In the Canadian court case Meads v. Meads, Alberta Court of Queen's Bench Associate Chief Justice John D. Rooke used the phrase "Organised Pseudolegal Commercial Arguments" (OPCA) to describe the techniques and arguments used by freemen in court describing them as frivolous and vexatious. 
There is no recorded instance of freeman tactics being upheld in a court of law; in refuting one by one each of the arguments used by Meads, Rooke concluded that "a decade of reported cases, many of which he refers to in his ruling, have failed to prove a single concept advanced by OPCA litigants."The Federal Bureau of Investigation (FBI) classifies freemen as sovereign citizen extremists and domestic terrorists. Gender empowerment is the empowerment of people of any gender. While conventionally being reduced to its aspect of empowerment of women, the concept stresses the distinction between biological sex and gender as a role, also referring to other marginalized genders in a particular political or social context.Gender empowerment has become a significant topic of discussion in regard to development and economics. Entire nations, businesses, communities, and groups can benefit from the implementation of programs and policies that adopt the notion of women empowerment. Empowerment is one of the main procedural concerns when addressing human rights and development. The Human Development and Capabilities Approach, The Millennium Development Goals, and other credible approaches/goals point to empowerment and participation as a necessary step if a country is to overcome the obstacles associated with poverty and development. Human rights are moral principles or norms that describe certain standards of human behaviour, and are regularly protected as legal rights in municipal and international law. They are commonly understood as inalienable fundamental rights "to which a person is inherently entitled simply because she or he is a human being", and which are "inherent in all human beings" regardless of their nation, location, language, religion, ethnic origin or any other status. They are applicable everywhere and at every time in the sense of being universal, and they are egalitarian in the sense of being the same for everyone. They are regarded as requiring empathy and the rule of law and imposing an obligation on persons to respect the human rights of others, and it is generally considered that they should not be taken away except as a result of due process based on specific circumstances; for example, human rights may include freedom from unlawful imprisonment, torture and execution.The doctrine of human rights has been highly influential within international law, global and regional institutions. Actions by states and non-governmental organisations form a basis of public policy worldwide. The idea of human rights suggests that "if the public discourse of peacetime global society can be said to have a common moral language, it is that of human rights". The strong claims made by the doctrine of human rights continue to provoke considerable scepticism and debates about the content, nature and justifications of human rights to this day. 
The precise meaning of the term right is controversial and is the subject of continued philosophical debate; while there is consensus that human rights encompasses a wide variety of rights such as the right to a fair trial, protection against enslavement, prohibition of genocide, free speech, or a right to education, there is disagreement about which of these particular rights should be included within the general framework of human rights; some thinkers suggest that human rights should be a minimum requirement to avoid the worst-case abuses, while others see it as a higher standard.Many of the basic ideas that animated the human rights movement developed in the aftermath of the Second World War and the events of the Holocaust, culminating in the adoption of the Universal Declaration of Human Rights in Paris by the United Nations General Assembly in 1948. Ancient peoples did not have the same modern-day conception of universal human rights. The true forerunner of human rights discourse was the concept of natural rights which appeared as part of the medieval natural law tradition that became prominent during the European Enlightenment with such philosophers as John Locke, Francis Hutcheson and Jean-Jacques Burlamaqui, and which featured prominently in the political discourse of the American Revolution and the French Revolution. From this foundation, the modern human rights arguments emerged over the latter half of the 20th century, possibly as a reaction to slavery, torture, genocide and war crimes, as a realisation of inherent human vulnerability and as being a precondition for the possibility of a just society.Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world ...All human beings are born free and equal in dignity and rights. International law is the set of rules generally regarded and accepted as binding in relations between states and between nations. It serves as a framework for the practice of stable and organized international relations. International law differs from state-based legal systems in that it is primarily applicable to countries rather than to private citizens. National law may become international law when treaties delegate national jurisdiction to supranational tribunals such as the European Court of Human Rights or the International Criminal Court. Treaties such as the Geneva Conventions may require national law to conform to respective parts.Much of international law is consent-based governance. This means that a state member is not obliged to abide by this type of international law, unless it has expressly consented to a particular course of conduct. This is an issue of state sovereignty. However, other aspects of international law are not consent-based but still are obligatory upon state and non-state actors such as customary international law and peremptory norms (jus cogens). Lawbots are a broad class of customer-facing legal AI applications that are used to automate specific legal tasks, such as document automation and legal research. Lawbots use various artificial intelligence techniques or other intelligent systems to limit humans' direct ongoing involvement in certain steps of a legal matter. The user interfaces on lawbots vary from smart searches and step-by-step forms to chatbots. Consumer and enterprise-facing lawbot solutions often do not require direct supervision from a legal professional. 
Depending on the task, some client-facing solutions used at law firms operate under attorney supervision. Lawmaking is the process of crafting legislation. In its purest sense, it is the basis of governance. Lawmaking in modern democracies is the work of legislatures, which exist at the local, regional, and national levels and make such laws as are appropriate to their level, binding on those under their jurisdictions. These bodies are influenced by lobbyists, pressure groups and sometimes partisan considerations, but ultimately by the voters who elected them and to whom they are responsible, if the system is working as intended. Even the expenditure of governmental funds is an aspect of lawmaking, as in most jurisdictions the budget is a matter of law. In dictatorships and absolute monarchies the leader can make law essentially by the stroke of a pen, one of the main objections to such an arrangement. However, a seemingly analogous situation can occur even in a democracy, where the executive can issue executive orders which have the force of law. In some instances, even regulations issued by executive departments can have the force of law. Libertarians, in particular, are known for denouncing such actions as being anti-democratic, but they have become such a salient feature of modern governance that it is hard to picture a system in which they no longer exist, because it is hard to picture the time involved in every regulation being debated prior to becoming law. That, say libertarians, is precisely the point: if such executive orders and regulations do not stand up to legislative scrutiny, they should never be implemented. In response to this, limits on regulatory authority have been made legislatively, and libertarians still contend for, if not the abolition of executive orders altogether, then their automatic sunset after a fixed period if not legislatively reviewed and confirmed; this policy has been adopted in some jurisdictions. According to the National Association of Legal Fee Analysis (NALFA), legal auditing is a litigation management practice and risk management tool, used by insurance and other consumers of legal services, to determine if hourly billing errors, abuses, and inefficiencies exist by carefully examining and identifying unreasonable attorney fees and expenses. Because the majority of corporate law firms charge clients on an hourly basis, and base attorney promotion and compensation almost entirely on the number of hours billed rather than the results achieved for clients, lawyers and law firms have a strong incentive to bill as many hours as possible, and little incentive to work efficiently or to bill fewer hours. According to the California State Bar, most lawyers who block-bill their time inflate each client bill by 10-30 percent, and at the average national billing rate of $661 per hour (as of April 2012) that means that most big-firm lawyers overcharge clients anywhere from $150,000 to $400,000 each year. Legal consciousness is the collection of ideas, views, feelings and traditions about law, whether accurately understood or only imagined to be understood, that is absorbed through legal socialization and reflected as legal culture in a given individual, group, or society at large. Legal consciousness evaluates the existing law and also bears in mind an image of the desired or ideal law. Consciousness is neither an individual trait nor solely ideational; legal consciousness is a type of social practice reflecting and forming social structures.
The study of legal consciousness documents the forms of participation and interpretation through which actors construct, sustain, reproduce, or amend the circulating contested or hegemonic structures of meaning concerning law. Legal consciousness is the way in which law is experienced and interpreted by specific individuals as they engage, avoid, resist or just assume the law and legal meanings. Legal consciousness is a state of being, and legal socialisation is the process that leads to it; legal awareness and legal mobilisation are means of achieving the same. Legal cost finance (or legal cost credit) is an alternative funding solution to traditional legal financing (or litigation funding in the United Kingdom). Unlike litigation funding, which is limited to contentious legal cases and claims a stake in the proceeds of a case outcome, legal cost finance is a funding alternative that offers consumers payment plans (on the basis of credit facilities) to cover legal costs of both contentious and non-contentious matters. Effectively, legal cost finance is a specialised financing solution that spreads the cost of legal services over an extended period and enables consumers to repay their legal bills via instalments, resulting in greater convenience, affordability and access to justice for clients. For lawyers, legal cost finance provides the assurance of on-time bill payment and minimisation of disputes over legal costs. A third-party arranger (specialised broker) installs the payment plan facility and settles legal costs on the client's behalf using the credit funds (usually held in escrow accounts). If a dispute over a legal bill arises, the broker can 'quarantine' the dispute to prevent stalling of the legal matter. That is, the disputed bill is placed in abeyance until a later date (usually after completion of the legal case) whilst the lawyer continues to be paid from the credit facility. The disputed bill is later revisited in the context of the case outcome – normally the client will forfeit the dispute in the event of a successful case result, and vice versa (i.e. the lawyer will forfeit the dispute in the event of an unsuccessful case outcome). Legal cost finance can function as a cost-neutral credit facility. That is, the charges associated with legal cost finance (arrangement fees and interest) can be absorbed by the lawyer (directly or by discounts on legal fees), enabling the client to effectively pay legal bills without incurring additional costs. Lawyers are usually willing to cover the charges associated with legal cost finance on the client's behalf in exchange for the payment assurances they receive from the broker over their bills. For consumers of legal services, legal cost finance is an alternative to the payment of significant up-front costs to lawyers on the basis of a retainer agreement. Effectively, therefore, LCF makes legal services more affordable. Consumers can apply for LCF solely on the basis of their credit history because the specialised loans are provided on an unsecured basis. However, in certain large legal matters security over the loan will be required. LCF primarily relies on a report from the acting lawyer stating the merits of the case, which is assessed by the broker. This is designed to protect consumers from potentially irresponsible advice and to prevent the court system from being cluttered with unmeritorious cases. LCF was first launched in the United Kingdom by an independent law broker, Dr Yuri Rapoport, in October 2013.
Legal mobilisation is a tool available to paralegal and advocacy groups to achieve legal empowerment by supporting a marginalised stakeholder's issues in negotiations with the other concerned agencies and stakeholders, through the strategic combined use of legal processes along with advocacy, media engagement and social mobilisation. According to Frances Kahn Zemans (1983), legal mobilisation is "a desire or want, which is translated into a demand as an assertion of one's rights". According to Lisa Vanhala (November 2011), legal mobilisation in its narrowest sense may refer to high-profile litigation efforts for (or, arguably, against) social change; more broadly, the term legal mobilisation has been used to describe any type of process by which individual or collective actors invoke legal norms, discourse, or symbols to influence policy or behavior. This typically means that there are policies or regulations to mobilize around and a mechanism by which to do so. Legislative activity does create an opportunity for legal mobilization. The courts become particularly relevant when petitioners have grounds to file suit. Legal opportunity structure or legal opportunity is a concept found in the study of law and social movements. It was first used in order to distinguish it from political opportunity structure or political opportunity, on the basis that law and the courts deserved to be studied in their own right rather than being lumped together with political institutions. Legal opportunities are made up of: access to the courts, which may be affected in particular by the law on standing or locus standi, and costs rules; 'legal stock', or the set of available precedents on which to hang a case; and judicial receptiveness. Some of these are more obviously structural than others - hence the term legal opportunity is sometimes preferred over legal opportunity structure. Legal opportunity has been used as an independent variable to help to explain strategy choice by social movement organisations (SMOs) - e.g. why SMOs adopt litigation rather than protest or political lobbying as a strategy. Other variables or explanatory frameworks it is commonly found alongside include framing, resource mobilization and grievance. It can also be employed as a dependent variable. Legal opportunity theory has been applied to a wide range of policy areas which have seen legal mobilization by social movements, including the environmental, animal rights, women's, LGBT, labor, civil rights, human rights, and disability movements. The legal profession is a profession whose members study, develop and apply law. Usually, there is a requirement for someone choosing a career in law to first obtain a law degree or some other form of legal education. It is difficult to generalize about the structure of the profession, because there are two major legal systems, and even within them there are different arrangements across jurisdictions, and because terminology varies greatly. While in civil law countries there are usually distinct, clearly defined career paths in law, such as that of a judge, in common law jurisdictions there tends to be one legal profession, and it is not uncommon, for instance, for several years of private legal practice to be a requirement for becoming a judge.
Legal recognition of some status or fact in a jurisdiction is formal acknowledgement of it as being true, valid, legal, or worthy of consideration, and may involve approval or the granting of rights. For example, a nation or territory may require a person to hold a professional qualification to practice some occupation, such as medicine. While any establishment may grant a qualification, only recognised qualifications from recognised establishments entitle the holder to practice the restricted occupation. Qualifications from another jurisdiction may or may not be recognised. In this way the state controls and regulates access; for example, physicians of unknown competence may not practice, and it may be desired to protect the employment of local people. Another example is that any person can undergo a form of marriage with anyone or anything, and claim to be married. However, a marriage which is recognised affords the participant certain rights and obligations, e.g., possible reduction in tax payable, obligation not to abandon the spouse, etc. A person who claims to be married to, say, a horse, has no rights and no obligations, and is subject to legal sanctions for any attempt to practice what would be conjugal rights if a marriage was recognised. In the early twenty-first century there was much controversy about recognising marriages between couples of the same sex. Article 16 of the International Covenant on Civil and Political Rights requires that everyone shall have the right to recognition everywhere as a person before the law. Legal recognition varies between jurisdictions. A person may be recognised as a physician, and as having been married and divorced, in one jurisdiction; on moving to another jurisdiction some or all of these issues may not be recognised. The new jurisdiction, while not recognising the medical qualification as such, may allow it to be used to give the right to take a short qualifying course leading to a recognised qualification, or may disregard it entirely. Diplomatic recognition is a similar concept whereby one state acknowledges the existence of another as an entity. Legal syllogism is a legal concept concerning the law and its application. In a legal syllogism there are two premises, major and minor, and one conclusion. The major premise is constituted by a legal norm (or rule). This norm (rule) may be derived from a canonical text (the text of statutes, constitutions, regulations, ordinances, etc.) or from a judicial precedent. In the latter case, it can be called the ratio decidendi or ruling. The facts of the case at hand (also called pending, instant, sub judice, at bar or under argument) serve as the minor premise. The conclusion is formed by the legal consequence for the case at hand. If the norm that forms the major premise is valid (binding in a given legal system) and the facts of the case at hand are proven or posited as true, the conclusion of the legal syllogism, which flows from subsuming these facts under this norm (rule), is taken as correct as well. In that sense, a legal syllogism can be deemed just as infallible as an 'ordinary' ('logical') syllogism. Legal tests are various kinds of commonly applied methods of evaluation used to resolve matters of jurisprudence.
In the context of a trial, a hearing, discovery, or other kinds of legal proceedings, the resolution of certain questions of fact or law may hinge on the application of one or more legal tests. Legal tests are often formulated from the logical analysis of a judicial decision or a court order where it appears that a finder of fact or the court made a particular decision after contemplating a well-defined set of circumstances. It is assumed that evaluating any given set of circumstances under a legal test will lead to an unambiguous and repeatable result. Legalism (or nomism), in Christian theology, is the act of putting the Law of Moses above the gospel by establishing requirements for salvation beyond obedience, repentance and faith in Jesus Christ and reducing the broad, inclusive, and general precepts of the Bible to narrow and rigid moral codes. It is an over-emphasis of discipline of conduct, or legal ideas, usually implying an allegation of misguided rigour, pride, superficiality, the neglect of mercy, and ignorance of the grace of God, or emphasizing the letter of law at the expense of the spirit. Legalism is alleged against any view that obedience to law, not faith in God's grace, is the pre-eminent principle of redemption. On the viewpoint that redemption is not earned by works, but that obedient faith is required to enter and remain in the redeemed state, see covenantal nomism. Legalism, in the Western sense, is an approach to the analysis of legal questions characterized by abstract logical reasoning focusing on the applicable legal text, such as a constitution, legislation, or case law, rather than on the social, economic, or political context. Legalism has occurred in both civil and common law traditions. In its narrower versions, legalism may endorse the notion that the pre-existing body of authoritative legal materials already contains a uniquely pre-determined right answer to any legal problem that may arise. Legalism typically also claims that the task of the judge is to ascertain the answer to a legal question by an essentially mechanical process. Legislation (or "statutory law") is law which has been promulgated (or "enacted") by a legislature or other governing body, or the process of making it. Before an item of legislation becomes law it may be known as a bill, and while it remains under consideration it may be broadly referred to as "legislation" to distinguish it from other business. Legislation can have many purposes: to regulate, to authorize, to outlaw, to provide (funds), to sanction, to grant, to declare or to restrict. It may be contrasted with a non-legislative act, which is adopted by an executive or administrative body under the authority of a legislative act or for implementing a legislative act. Under the Westminster system, an item of primary legislation is known as an Act of Parliament after enactment. Legislation is usually proposed by a member of the legislature (e.g. a member of Congress or Parliament), or by the executive, whereupon it is debated by members of the legislature and is often amended before passage. Most large legislatures enact only a small fraction of the bills proposed in a given session. Whether a given bill will be proposed is generally a matter of the legislative priorities of the government. Legislation is regarded as one of the three main functions of government, which are often distinguished under the doctrine of the separation of powers.
Those who have the formal power to create legislation are known as legislators; a judicial branch of government will have the formal power to interpret legislation (see statutory interpretation); the executive branch of government can act only within the powers and limits set by the law. A memory law (French: loi mémorielle, German: Erinnerungsgesetz) is a law that enshrines state-approved interpretations of crucial historical events and promotes certain narratives about the past, often at the expense of competing historical interpretations. Memory laws are important elements of state-sanctioned politics of memory. Memory laws are found in both hard law and soft law. Hard law includes, for example, criminal bans on the denial or gross trivialization of the Holocaust and other genocides; soft law includes nudges incentivizing other actors to act in a certain way, for example the European Parliament resolution commemorating the Armenian genocide. Another conceptual binary is the division between positive and negative memory laws. Positive memory laws include, for example, national parliamentary resolutions recognizing the Armenian genocide and the EU Parliament's resolutions establishing a European "duty to remember"; negative memory laws include, for example, laws criminalizing genocide denial or gross trivialization and bans on the propagation of totalitarian ideologies. Mercy (Middle English, from Anglo-French merci, from Medieval Latin merced-, merces, from Latin, "price paid, wages", from merc-, merx "merchandise") is a broad term that refers to benevolence, forgiveness, and kindness in a variety of ethical, religious, social, and legal contexts. The concept of a "Merciful God" appears in various religions, including Christianity, Judaism and Islam. Performing acts of mercy as a component of religious beliefs is also emphasized through actions such as the giving of alms, care for the sick and Works of Mercy. In the social and legal context, mercy may refer to compassionate behavior either on the part of those in power (e.g. mercy shown by a judge toward a convict) or on the part of a humanitarian third party, e.g., a mission of mercy aiming to treat war victims. An oral law is a code of conduct in use in a given culture, religion or community, by which a body of rules of human behaviour is transmitted by oral tradition and effectively respected, or the single rule that is orally transmitted. Many cultures have an oral law, while most contemporary legal systems have a formal written organisation. The oral tradition (from the Latin tradere = to transmit) is the typical instrument of transmission of the oral codes or, in a more general sense, is the complex of what a culture transmits of itself among the generations, "from father to son". This kind of transmission can be due to lack of other means, such as in illiterate or criminal societies, or can be expressly required by the same law. There has been a continuous debate over oral versus written transmission, with the focus on the perceived higher reliability of written evidence, primarily based on the "linear world of academia" where only written-down records are accepted. However, "standard" theories of orality and literacy have been proposed. An ordinary law is a normal law, generally distinguished from a constitutional law, organic law, or other similar law. Typically, ordinary laws are subordinate to constitutional and organic laws, and are more easily changed than constitutional or organic laws, though that should not be assumed to be the case in all jurisdictions.
(For example, the Constitutional Court of Spain has ruled that Spain's Organic Laws are not hierarchically superior to ordinary laws, but simply apply to different matters.) Ordinary laws often govern areas beyond the scope of constitutional or organic laws.Normally, in a democracy, an ordinary law must first obtain a simple majority of a congress, parliament, or other legislature, and then be signed into law by the representative of executive power. The process leading to a legislative vote may vary vastly from one jurisdiction to another: the process may be initiated by either house of a bicameral legislature or from the sole house of a unicameral legislature; from the head of government or head of state; or by popular initiative. Different jurisdictions may allow ordinary laws to be proposed by one or all of these means, and may have restrictions on which body may take the initiative for certain types of laws (for example, in some bicameral systems, tax-related laws must begin in the lower chamber of the legislature). In some jurisdictions, the legislature has a means to override an executive veto by a supermajority, or the voting populace have the means to override a law by a referendum.Under federal systems, ordinary laws may be created at the level of a sovereign state but also by its constituent components: for example, by states of the United States or autonomous communities of Spain. The Federal Government of the United States (U.S. Federal Government) is the national government of the United States, a republic in North America, composed of 50 states, one district, Washington, D.C. (the nation's capital), and several territories. The federal government is composed of three distinct branches: legislative, executive, and judicial, whose powers are vested by the U.S. Constitution in the Congress, the President, and the federal courts, respectively. The powers and duties of these branches are further defined by acts of Congress, including the creation of executive departments and courts inferior to the Supreme Court. AmeriCorps is a voluntary civil society program supported by the U.S. federal government, foundations, corporations, and other donors engaging adults in public service work with a goal of "helping others and meeting critical needs in the community." Members commit to full-time or part-time positions offered by a network of nonprofit community organizations and public agencies, to fulfill assignments in the fields of education, public safety, health care, and environmental protection. The program is often seen as a domestic Peace Corps. It employs more than 75,000 Americans in intensive service each year.AmeriCorps is an initiative of the Corporation for National and Community Service (CNCS), which also oversees the Senior Corps and the formerly-funded Learn and Serve America. It was created under President Bill Clinton by the National and Community Service Trust Act of 1993, incorporating VISTA (Volunteers in Service to America) and the National Civilian Community Corps (NCCC). A third division, AmeriCorps State and National, provides grants to hundreds of local community organizations throughout the United States.The program first became operational in 1994 and has expanded over time, with over 80,000 members participating annually as of 2012. Members may be provided low financial compensation in the form of cost-of-living allowances, student loan deferment, Public Service Loan Forgiveness, and the Americorps Education Award. 
Less tangible benefits include professional skill development and work experience. An internal study found that participation in AmeriCorps strengthened civic attitudes and sentiment, making members more likely to choose careers in public service. The United States Armed Forces are the military forces of the United States of America. They consist of the Army, Marine Corps, Navy, Air Force and Coast Guard. The President of the United States is the commander-in-chief of the U.S. Armed Forces and forms military policy with the U.S. Department of Defense (DoD) and U.S. Department of Homeland Security (DHS), both federal executive departments, acting as the principal organs by which military policy is carried out. All five armed services are among the seven uniformed services of the United States. From the time of their inception, the U.S. Armed Forces have played a decisive role in the history of the United States. A sense of national unity and identity was forged as a result of victory in the First Barbary War and the Second Barbary War. Even so, the founders of the United States were suspicious of a permanent military force. The armed forces played a critical role in the American Civil War, continuing to serve as the armed forces of the United States, although a number of their officers resigned to join the military of the Confederate States. The National Security Act of 1947, adopted following World War II and during the Cold War's onset, created the modern U.S. military framework. The Act merged the previously Cabinet-level Department of War and the Department of the Navy into the National Military Establishment (renamed the Department of Defense in 1949), headed by the Secretary of Defense, and created the Department of the Air Force and the National Security Council. The U.S. Armed Forces are one of the largest militaries in terms of the number of personnel. They draw their personnel from a large pool of paid volunteers. Although conscription has been used in the past at various times of both war and peace, it has not been used since 1972, but the Selective Service System retains the power to conscript males and requires that all male citizens and residents of the U.S. between the ages of 18 and 25 register with the service. As of 2016, the U.S. spends about US$580 billion annually to fund its military forces and Overseas Contingency Operations. Put together, U.S. spending accounts for roughly 40 percent of the world's military expenditures. The U.S. Armed Forces have significant capabilities in both defense and power projection due to their large budget, which results in advanced and powerful equipment and a widespread deployment of force around the world, including about 800 military bases outside the United States. In addition, the U.S. Air Force is the largest air force in the world, and the U.S. Navy and the U.S. Marine Corps combined form the world's second-largest air arm. The U.S. Navy is the largest navy by tonnage. Cognitive Madisonianism is the idea that divided government is better than one in which a single party controls both the executive and legislative branches. A relatively large percentage of the U.S. populace (over 20%) purposely votes a split ticket because of this belief, according to "Split-Ticket Voting: The Effects of Cognitive Madisonianism" by Lewis-Beck and Nadeau. In the USA, Cognitive Madisonianism is in keeping with Article One of the United States Constitution and the principle of separation of powers under the Constitution.
It arises from a belief that James Madison, and the other Founding Fathers of the United States, intended power within the institutions of government (the executive, the legislature and the judiciary) to be separated and to act as checks and balances against each other. Voters might vote in this way because they do not want any of the above institutions to exercise too much power individually, as this might lead to tyranny. Voting due to Cognitive Madisonianism has the potential to create weak government and negatively impact the administration of the country, because it encourages split-ticket voting, which in turn can create legislative gridlock. The Command, Control and Interoperability Division is a bureau of the United States Department of Homeland Security's Science and Technology Directorate, run by Dr. David Boyd. This division is responsible for creating informative resources (including standards, frameworks, tools, and technologies) that strengthen communications interoperability, improve Internet security and integrity, and accelerate the development of automated capabilities to help identify potential threats to the U.S. The purpose of this division is to enable seamless and secure interactions among homeland security stakeholders. This means enhancing the ability of owners to communicate, share, visualize, analyze and protect information through this practitioner-driven approach. The Command, Control and Interoperability Division's vision is for stakeholders to have comprehensive, real-time, and relevant information to create and maintain a secure and safe nation. Customers include local, tribal, state, federal, international, and private emergency response agencies; agencies that plan for, detect, and respond to hazards; and private-sector partners that own, operate, and maintain the nation's cyber infrastructure. Continuity of Operations (COOP) is a United States federal government initiative, required by U.S. Presidential Policy Directive 40 (PPD-40), to ensure that agencies are able to continue performance of essential functions under a broad range of circumstances. PPD-40 specifies certain requirements for continuity plan development, including the requirement that all federal executive branch departments and agencies develop an integrated, overlapping continuity capability that supports the eight National Essential Functions (NEFs) described in PPD-40. The Federal Continuity Directive 1 (FCD 1) is a 2017 directive, released by the Department of Homeland Security (DHS), that provides doctrine and guidance to all federal organizations for developing continuity program plans and capabilities. FCD 1 also serves as guidance to state, local, and tribal governments. The Federal Continuity Directive 2 (FCD 2) of July 2013 is a directive to assist federal Executive Branch organizations in identifying their Mission Essential Functions (MEFs) and candidate Primary Mission Essential Functions (PMEFs). The DHS, together with the Federal Emergency Management Agency (FEMA) and in coordination with other non-federal partners, developed the Continuity Guidance Circular 1 (CGC 1) and CGC 2 in July 2013. The preamble of the CGC 1 states that its function is to provide "direction to the non-Federal Governments (NFGs) for developing continuity plans and programs. Continuity planning facilitates the performance of essential functions during all-hazards emergencies or other situations that may disrupt normal operations.
By continuing the performance of essential functions through a catastrophic emergency, the State, territorial, tribal, and local governments, and the private sector support the ability of the Federal Government to perform National Essential Functions (NEFs)."CGC 1 parallels the information in FCD 1 closely, but is geared to states, territories, tribal and local governments, and private-sector organizations.The purpose of Continuity Guidance Circular 2 (CGC 2) is to provide "non-Federal Governments (NFGs) with guidance on how to implement CGC 1, Annex D: ESSENTIAL FUNCTIONS. It provides them with guidance, a methodology, and checklists to identify, assess, and validate their essential functions. This CGC includes guidance for conducting a continuity Business Process Analysis (BPA), Business Impact Analysis (BIA), and a risk assessment that will identify essential function relationships, interdependencies, time sensitivities, threats and vulnerabilities, and mitigation strategies." FEMA provides guidance to the private sector for business continuity planning purposes. FEMA realizes that when business is disrupted, it can cost money, so a continuity plan is essential to help identify critical functions and develop preventative measures to continue functions should disruption occur. A Contracting Officer (CO or KO) is a person who can bind the Federal Government of the United States to a contract that is greater than the Micro-Purchase threshold. This is limited to the scope of authority delegated to the Contracting Officer by the head of the agency.A Contracting Officer enters into, administers, or terminates contracts and makes related determinations and findings, and is appointed by a (SF) 1402, Certificate of Appointment. Subsection 414(4) of Title 41, United States Code, requires agency heads to establish and maintain a procurement career management program and a system for the selection, appointment, and termination of appointment of contracting officers. Agency heads or their designees may select and appoint contracting officers and terminate their appointments. These selections and appointments shall be consistent with the Office of Management and Budget/Office of Federal Procurement Policy’s (OMB/OFPP) standards for skill-based training in performing acquisition, contracting and procurement duties as published in OFPP Policy Letter No. 05-01, Developing and Managing the Acquisition Workforce, April 15, 2005. A Contracting Officer's Technical Representative (COTR) is a business communications liaison between the United States government and a private contractor. He or she ensures that their goals are mutually beneficial. The COTR is normally a federal or state employee who is responsible for recommending actions and expenditures for both standard delivery orders and task orders, and those that fall outside of the normal business practices of its supporting contractors and sub-contractors. Most COTRs have experience in the technical area (e.g., electronics, chemistry, public health, etc.) that is critical to the success of translating government requirements into technical requirements that can be included in government acquisition documents for potential contractor to bid and execute that work. A COTR must be designated by a Contracting Officer (CO). The CO has the actual authority to enter into, administer, and/or terminate contracts and make related determinations and findings. Other terms for COTR include Contracting Officer's Representative (COR) and Project Officer (PO). 
The terminology may be agency specific. The Cyber Threat Intelligence Integration Center (CTIIC) is a new United States federal government agency that will be a fusion center between existing agencies and the private sector for real-time use against cyber attacks. CTIIC was created due to blocked efforts in Congress that were stymied over liability and privacy concerns of citizens.CTIIC was formally announced by Lisa Monaco February 10, 2015 at the Wilson Center. The agency will be within the Office of the Director of National Intelligence. In the United States, divided government describes a situation in which one party controls the executive branch while another party controls one or both houses of the legislative branch.Divided government is seen by different groups as a benefit or as an undesirable product of the model of governance used in the U.S. political system. Under said model, known as the separation of powers, the state is divided into different branches. Each branch has separate and independent powers and areas of responsibility so that the powers of one branch are not in conflict with the powers associated with the others. The model can be contrasted with the fusion of powers in a parliamentary system where the executive and legislature (and sometimes parts of the judiciary) are unified. Those in favor of divided government believe that such separations encourage more policing of those in power by the opposition, as well as limiting spending and the expansion of undesirable laws. Opponents, however, argue that divided governments become lethargic, leading to many gridlocks. In the late 1980s, Terry M. Moe, a professor of political science at Stanford University, examined the issue. He concluded that divided governments lead to compromise which can be seen as beneficial. But he also noticed that divided governments subvert performance and politicize the decisions of executive agencies.Early in the 20th century, divided government was rare, but since the 1970s it has become increasingly common. The Domestic Nuclear Detection Office (DNDO) is a jointly staffed office established April 15, 2005 by the United States to improve the nation’s capability to detect and report unauthorized attempts to import, possess, store, develop, or transport nuclear or radiological material for use against the nation, and to further enhance this capability over time.DNDO coordinates United States federal efforts to detect and protect against nuclear and radiological terrorism against the United States. DNDO, utilizing its interagency staff, is responsible for the development of the global nuclear detection architecture, the underlying strategy that guides the U.S. government’s nuclear detection efforts. DNDO conducts its own research, development, test, and evaluation of nuclear and radiological detection technologies, and is responsible for acquiring the technology systems necessary to implement the domestic portions of the global nuclear detection architecture. DNDO also provides standardized threat assessments, technical support, training, and response protocols for federal and non-federal partners. 
Enduring Constitutional Government, or ECG, means a cooperative effort among the executive, legislative, and judicial branches of the Federal Government, coordinated by the President, as a matter of comity with respect to the legislative and judicial branches and with proper respect for the constitutional separation of powers among the branches, to preserve the constitutional framework under which the Nation is governed and the capability of all three branches of government to execute constitutional responsibilities and provide for orderly succession, appropriate transition of leadership, and interoperability and support of the National Essential Functions during a catastrophic emergency. Executive Schedule (5 U.S.C. §§ 5311–5318) is the system of salaries given to the incumbents of the highest-ranked appointed positions in the executive branch of the U.S. government. The President of the United States, an elected official, appoints incumbents to these positions, most of them with the advice and consent of the Senate. They include members of the President's Cabinet as well as other subcabinet policy makers. There are five pay rates within the Executive Schedule, usually denoted with a Roman numeral with I being the highest level and V the lowest. Congress lists the positions eligible for the Executive Schedule and the corresponding level. Congress also gives the president the ability to grant Executive Schedule IV and V status to no more than 34 employees not listed in the law.The Executive Schedule is linked to the rates of pay for the General Schedule, Senior Executive Service, Senior Level, Senior Foreign Service, and other Federal civilian pay systems, as well as the pay of uniformed military personnel, because various federal laws establishing those pay systems normally tie the maximum amount payable to various levels of the Executive Schedule. The fast track authority for brokering trade agreements is the authority of the President of the United States to negotiate international agreements that Congress can approve or deny but cannot amend or filibuster. Renamed the trade promotion authority (TPA) in 2002, fast track negotiating authority is an impermanent power granted by Congress to the President. Fast track authority remained in effect from 1975 to 1994, pursuant to the Trade Act of 1974, and from 2002 to 2007 by the Trade Act of 2002. Although it technically expired in July 2007, it remained in effect for agreements that were already under negotiation until their passage in 2011. The following year, the Obama administration sought renewal of TPA, and in June 2015, it passed Congress and was signed into law by the President. Known as the Trade Preferences Extension Act of 2015, the legislation conferred on the Obama administration "enhanced power to negotiate major trade agreements with Asia and Europe." In the United States, a federal holiday is an authorized holiday which has been recognized by the US government. Every year on a US federal holiday, non-essential federal government offices are closed, and every federal employee is paid for the holiday. Private-sector employees required to work on a legal holiday may receive holiday pay in addition to their ordinary wages.Federal holidays are designated by the United States Congress in Title V of the United States Code (5 U.S.C. § 6103). Congress has authority to create holidays only for federal institutions (including federally owned properties) and employees, and for the District of Columbia. 
However, as a general rule, other institutions, including banks, post offices, and schools, may be closed on those days. In various parts of the country, state and city holidays may be observed in addition to the federal holidays. Federal lands are lands in the United States for which ownership is claimed by the U.S. federal government, pursuant to Article Four, section 3, clause 2 of the United States Constitution. The United States Supreme Court has repeatedly held that this section empowers Congress to retain federal lands, to regulate federal lands such as by limiting cattle grazing, and to sell such lands. As of March 2012, out of the 2.27 billion acres (918.6 million hectares) in the country, about 28% of the total was owned by the Federal government according to the Interior Department. The United States Supreme Court has upheld the broad powers of the federal government to deal with federal lands, for example having unanimously held in Kleppe v. New Mexico that "the complete power that Congress has over federal lands under this clause necessarily includes the power to regulate and protect wildlife living there, state law notwithstanding." The Federal Risk and Authorization Management Program (FedRAMP) is an assessment and authorization process which U.S. federal agencies have been directed by the Office of Management and Budget to use to ensure security is in place when accessing cloud computing products and services. The OMB identified cybersecurity as one of 14 Cross Agency Priority (CAP) Goals established in accordance with the Government Performance and Results Modernization Act of 2010. The second Chief Information Officer of the United States, Steven VanRoekel, issued a memorandum to federal agency Chief Information Officers on December 8, 2011 defining how federal agencies should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security controls specifically selected to provide protection in cloud environments. A subset has been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The FedRAMP program has also established a Joint Authorization Board (JAB) consisting of Chief Information Officers from DoD, DHS, and GSA. Before the introduction of FedRAMP, individual federal agencies managed their own assessment methodologies following guidance loosely set by the Federal Information Security Management Act of 2002. The Galena Experiment is described as a period between 1807 and 1846 in the U.S. in which the government granted mining permits to work a given area, and required workers to bring their ore to one of the officially licensed smelters, from whom the government collected 10% royalty. Initially, Federal revenues were enhanced; however, the system fell apart in the 1830s because of noncompliance by the miners, who evaded the licensed smelters; by the smelters, who did not pay the royalties; and by the federal agents, who fraudulently sold mineral land at minimum prices as farmland. The United States government's Information Sharing and Customer Outreach office or ISCO was one of five directorates within the office of the Chief Information Officer (CIO) under the Office of the Director of National Intelligence (ODNI). ISCO changed its name and function to Information Technology Policy, Plans, and Requirements (ITPR) in July 2007. Established by at least February 2006, ISCO is led by the Deputy Associate Director of National Intelligence for Information Sharing and Customer Outreach, who is currently Mr.
Richard A. Russell. ISCO's information sharing and customer outreach responsibilities extend beyond the United States Intelligence Community and cross the entire U.S. government. Following is a list of persons who have served in all three branches of the United States federal government. Membership in this list is limited to persons who have: served in the executive branch, as President of the United States, Vice President, a Cabinet officer, or in another executive branch office requiring confirmation by the United States Senate; and served as a member of either the United States Senate or of the House of Representatives; and served as a United States federal judge on a court established under Article Three of the United States Constitution. Under the Appointments Clause of the United States Constitution and law of the United States, certain federal positions appointed by the president of the United States require confirmation (advice and consent) of the United States Senate. These "PAS" (Presidential Appointment needing Senate confirmation) positions, as well as other types of federal government positions, are published in the United States Government Policy and Supporting Positions (Plum Book), which is released after each United States presidential election. A 2012 Congressional Research Service study estimated that approximately 1,200-1,400 positions require Senate confirmation. The Middle Class Working Families Task Force (MCWFTF) is a United States Federal Government initiative, established in 2009 via presidential memorandum. It was one of the earliest innovations of the Obama-Biden administration. Jared Bernstein was appointed the Executive Director, responsible for direct management of the project; while Vice-President Joseph Biden was appointed Chairman, with final oversight and responsibility for the work. The purpose of the task force is to empower the American middle class and to explore the vision and possibilities of green jobs. The Middle Class Working Families Task Force studies and recommends far-reaching and imaginative solutions to problems working families face. The Multistate Anti-Terrorism Information Exchange Program, also known by the acronym MATRIX, was a U.S. federally funded data mining system originally developed for the Florida Department of Law Enforcement described as a tool to identify terrorist subjects. The system was reported to analyze government and commercial databases to find associations between suspects or to discover the locations of, or completely new, "suspects". The database and technologies used in the system were housed by Seisint, a Florida-based company since acquired by Lexis Nexis. The Matrix program was shut down in June 2005 after federal funding was cut in the wake of public concerns over privacy and state surveillance. The National Aeronautics and Space Administration (NASA) is an independent agency of the executive branch of the United States federal government responsible for the civilian space program, as well as aeronautics and aerospace research. President Dwight D. Eisenhower established NASA in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA's predecessor, the National Advisory Committee for Aeronautics (NACA).
The new agency became operational on October 1, 1958. Since that time, most US space exploration efforts have been led by NASA, including the Apollo Moon landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle, the Space Launch System and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic spacecraft missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories and associated programs. NASA shares data with various national and international organizations, such as data from the Greenhouse Gases Observing Satellite. The United States National Film Preservation Board (NFPB) is the board that selects films for preservation in the Library of Congress' National Film Registry. It was established by the National Film Preservation Act of 1988. The National Film Registry is meant to preserve up to 25 "culturally, historically or aesthetically significant films" each year; to be eligible, films must be at least 10 years old. Members of the Board also advise the Librarian of Congress on ongoing development and implementation of the national film preservation plan. The NFPB is a federal agency located within the Library of Congress. The NFPB was established by the National Film Preservation Act of 1988, and reauthorized in 1992, 1996 and 2005. The 1996 reauthorization also created the non-profit National Film Preservation Foundation, which is loosely affiliated with the National Film Preservation Board, but the private-sector Foundation (NFPF) and federal Board (NFPB) are separate, legally distinct entities. The National Search and Rescue Plan or National SAR Plan is a policy document of the US government that establishes the responsibilities for search and rescue in the domestic United States, as well as areas where the US has international commitments. The Plan makes the US Coast Guard responsible for maritime search and rescue, while inland areas are overseen by the Air Force. Both have Rescue Coordination Centers to coordinate this effort, and also cooperatively operate Joint Rescue Coordination Centers where appropriate. These centers receive Cospas-Sarsat distress alerts sent by the United States Mission Control Center in Suitland, Maryland and are responsible for coordinating the rescue response to the distress. Each service takes a slightly different approach to search and rescue operations. The National Security and Homeland Security Presidential Directive (National Security Presidential Directive NSPD 51/Homeland Security Presidential Directive HSPD-20, sometimes called simply "Executive Directive 51" for short), created and signed by President of the United States George W. Bush on May 4, 2007, is a Presidential Directive which claims power to execute procedures for continuity of the federal government in the event of a "catastrophic emergency".
Such an emergency is construed as "any incident, regardless of location, that results in extraordinary levels of mass casualties, damage, or disruption severely affecting the U.S. population, infrastructure, environment, economy, or government functions." The unclassified portion of the directive was posted on the White House website on May 9, 2007, without any further announcement or press briefings, although Special Assistant to George W. Bush Gordon Johndroe answered several questions on the matter when asked about it by members of the press in early June 2007. In the United States, the Office of Inspector General (OIG) is a generic term for the oversight division of a federal or state agency aimed at preventing inefficient or illegal operations within its parent agency. Such offices are attached to many federal executive departments, independent federal agencies, as well as state and local governments. Each office includes an Inspector General (or I.G.) and employees charged with identifying, auditing, and investigating fraud, waste, abuse, embezzlement and mismanagement of any kind within the executive department. The Presidential Early Career Award for Scientists and Engineers (PECASE) is the highest honor bestowed by the United States government on outstanding scientists and engineers in the early stages of their independent research careers. The White House, following recommendations from participating agencies, confers the awards annually. To be eligible for a Presidential Award, an individual must be a U.S. citizen, national or permanent resident. Some of the winning scientists and engineers receive up to a five-year research grant. The Presidential Young Investigator Award (PYI) was awarded by the National Science Foundation of the United States Federal Government. The program operated from 1984 to 1991, and was replaced by the NSF Young Investigator (NYI) Awards and Presidential Faculty Fellows Program (PFF). The award gave a minimum of $25,000 a year for five years from the NSF, with the possibility of up to $100,000 annually if the PYI obtained matching funds from industry. The program was criticized in 1990 as not being the best use of NSF funds in an era of tight budgets. The United States Preventive Services Task Force (USPSTF) is "an independent panel of experts in primary care and prevention that systematically reviews the evidence of effectiveness and develops recommendations for clinical preventive services". The task force, a panel of primary care physicians and epidemiologists, is funded, staffed, and appointed by the U.S. Department of Health and Human Services' Agency for Healthcare Research and Quality. The USPSTF does not consider cost-effectiveness. Recommendations are based solely upon evidence of medical benefit to the patient, regardless of cost. Public policy consists of a system of laws, regulatory measures, courses of action, and funding priorities by a government or its representatives. Public policy decisions are often decided by a group of individuals with different beliefs and interests. The policies of the United States of America comprise all actions taken by its federal government. The executive branch is the primary entity through which policies are enacted; however, the policies are derived from a collection of laws, executive decisions, and legal precedents. The SunShot Initiative is a federal government program run by the US Department of Energy's Solar Energy Technologies Office.
It bills itself as a national effort to support solar energy adoption in order to make solar energy affordable for all Americans. The initiative is a collaboration of private companies, universities, state and local governments, and nonprofits, as well as national laboratories. The program began in 2011 with the initial goal of making solar energy competitive with traditional forms of electricity by 2020. The federal government invested $282 million in FY 2015 to fund the SunShot Initiative. According to the SunShot Q4 2016/Q1 2017 Solar Industry Update report, the United States installed 14.8 GW of PV in 2016, an increase of 97% from 2015, representing approximately $30 billion in deployed capital, along with another $2.2 billion in U.S.-manufactured PV products. By 2016, the program had achieved 90% of the progress towards the 2020 goal. In September 2017, it was announced that it had already reached its 2020 goal, and was now refocusing on grid reliability issues. The Acura RL is a full-size luxury sedan that was manufactured by the Acura division of Honda for the 1996–2012 model years over two generations. The RL was the flagship of the marque, having succeeded the Acura Legend, and was replaced in 2013 by the Acura RLX. All models of the Legend, RL and RLX lines have been adapted from the Japanese domestic market Honda Legend. The model name "RL" is an abbreviation for "Refined Luxury." The first-generation Acura RL was a rebadged version of the third-generation Honda Legend, and was first introduced to the North American market in 1996, to replace the second-generation Acura Legend. The second-generation Acura RL was a rebadged version of the fourth-generation Honda Legend, introduced to the North American market in September 2004, as a 2005 model. This iteration of the RL received an extensive mid-generational facelift for the 2009 model year, and a further update for 2011. The third-generation debuted for the 2014 model year as the Acura RLX. The Ambassador was the top-of-the-line automobile produced by the American Motors Corporation (AMC) from 1958 until 1974. The vehicle was known as the AMC Ambassador, Ambassador V-8 by Rambler, and Rambler Ambassador at various times during its tenure in production. Previously, the name Ambassador had applied to Nash's "senior" full-size cars. The Ambassador nameplate was used continuously from 1927 until 1974 (the name being a top-level trim line between 1927 and 1931); at the time it was discontinued, Ambassador was the longest continuously used nameplate in automotive history. Most Ambassador models were built in Kenosha, Wisconsin. They were also built at AMC's Brampton Assembly in Brampton, Ontario from 1963 to 1966. Australian Motor Industries (AMI) assembled Ambassadors from knock-down kits with right-hand drive. The U.S. fifth generation Ambassadors were produced by Industrias Kaiser Argentina (IKA) in Córdoba, Argentina from 1965 to 1972, as well as assembled by ECASA in Costa Rica from 1965 to 1970. Planta REO assembled first-generation Ambassadors in Mexico at its Monterrey, Nuevo León plant. Fifth and seventh generation Ambassadors were modified into custom stretch limousines in Argentina and the U.S. The AMC Matador is a mid-size car built and marketed by American Motors Corporation (AMC) from 1971 to 1978. The Matador came in two generations: 1971 to 1973, and a major redesign from 1974 to 1978.
The second-generation four-door sedan and station wagon models were classified as full-size cars and did not share the distinctive styling featured by the Matador coupe that was introduced in 1974. Factory-backed AMC Matador hardtops and coupes competed in NASCAR stock car racing with drivers that included Mark Donohue and Bobby Allison winning several races. The new Matador coupe was featured in The Man with the Golden Gun, a James Bond film released in 1974. The Matador was a popular vehicle for the police, as it outperformed most other police cars. It was also featured in many television shows and movies during the 1970s. The Matador became AMC's largest automobile following the discontinuation of its flagship, the AMC Ambassador, built on the same platform. Premium trim level "Oleg Cassini" and "Barcelona" versions of the Matador coupe were positioned in the personal luxury car market segment. Matadors were also marketed under the Rambler marque in foreign markets, as well as assembled under license agreements with AMC that included Vehículos Automotores Mexicanos (VAM), as well as built in right-hand-drive versions by Australian Motor Industries (AMI). The Aston Martin Rapide is a 4-door, high-performance sports saloon, which British luxury marque Aston Martin introduced in early 2010. It was first presented as a concept car at Detroit's North American International Auto Show in 2006 and the production version of the Rapide was shown at the 2009 Frankfurt Motor Show. The Rapide name is a reference to the Lagonda Rapide, a four-door saloon produced by Lagonda, now part of Aston Martin. The new Rapide is the company's first 4-door fastback saloon since the Aston Martin Lagonda was discontinued in 1989. It is based on the Aston Martin DB9 and shares the same VH platform. The first cars were rolled out in May 2010, initially built at a dedicated plant at the Magna Steyr facility in Graz, Austria. The factory initially planned to build 2,000 per year, but production was relocated to England in 2012 after sales did not meet targets. The Audi A8 is a four-door, full-size, luxury sedan manufactured and marketed by the German automaker Audi since 1994. Succeeding the Audi V8, and now in its third generation, the A8 has been offered with both front- or permanent all-wheel drive—and in short- and long-wheelbase variants. The first two generations employed the Volkswagen Group D platform, with the current generation deriving from the MLB platform. After the original model's 1994 release, Audi released the second generation in late 2002, and the third and current iteration in late 2009. Notable for being the first mass-market car with an aluminium chassis, all A8 models have used this construction method co-developed with Alcoa and marketed as the Audi Space Frame. A mechanically-upgraded, high-performance version of the A8 debuted in 1996 as the Audi S8. Produced exclusively at Audi's Neckarsulm plant, unlike the donor A8 model, the S8 has been available only in short-wheelbase form and is fitted standard with Audi's quattro all-wheel drive system. The Audi V8 (Typ 4C) is a four-door, full-size luxury sedan, built by Audi in Germany from 1988 to 1993, as the company's flagship model. It was the first car from Audi to use a V8 engine, and also the first Audi to combine a quattro system with an automatic transmission. Early cars used 3.6-litre V8s, while later cars featured a 4.2-litre version of the engine.
The Audi V8 was replaced by the Audi A8 in 1994, although the A8 was not sold in North America until 1996. The competition model of the Audi V8 won back-to-back Deutsche Tourenwagen Meisterschaft driver's titles in 1990 and 1991, with the championship winners being Hans-Joachim Stuck and Frank Biela respectively. Audi was the first company to win back-to-back DTM titles. The Bendix SWC is a one-of-a-kind, hand built prototype concept car built in 1934. It is a four-door, five-passenger sedan that was designed by Alfred Ney of the Bendix Corporation in South Bend, Indiana. Although considered a proof-of-concept vehicle rather than a true prototype for future production, the Bendix SWC is regarded as ahead of its time because of its innovative features, incorporating front-wheel drive, four-wheel hydraulic brakes with open drums for better cooling, and four-wheel independent suspension that used A-arms mounted in rubber blocks in place of conventional springs. The styling was similar to other examples of automotive streamlining such as the contemporary DeSoto Airflow and Chrysler Airflow. The Bentley Arnage was a large luxury car produced by Bentley Motors in Crewe, England from 1998 to 2009. The Arnage, and its Rolls-Royce-branded sibling, the Silver Seraph, were introduced in the spring of 1998, and were the first entirely new designs for the two marques since 1980. Another break from the past was to be found under the bonnet, for decades home to the same 6.75-litre V8 engine, a powerplant which could trace its roots back to the 1950s. The new Arnage was to be powered by a BMW V8 engine, with a Cosworth-engineered twin-turbo installation, and the Seraph was to employ a BMW V12 engine. The Arnage is over 5.4 metres (212 in) long, 1.9 metres (75 in) wide, and has a kerb weight of more than 2.5 metric tonnes. For a brief period it was the most powerful and fastest four-door saloon on the market. In September 2008, Bentley announced that production of the model would cease during 2009. Bentley Brooklands is the name of two distinct models made by Bentley Motors. The first Brooklands was a full-size luxury saloon, launched in 1992 to replace the Bentley Mulsanne and in turn succeeded by the Bentley Arnage in 1998. Bentley resurrected the nameplate in 2007 with the Brooklands Coupé, a 2-door, 4-seater hardtop coupé version of the Bentley Azure made between 2008 and 2011 in limited numbers. These cars were named after the Brooklands banked race track in Surrey, where Bentley obtained some of its greatest triumphs in the 1920s and 1930s. The BMW 7 Series is a full-size luxury sedan produced by the German automaker BMW since 1977. It is the successor to the BMW E3 "New Six" sedan and is currently in its sixth generation. The 7 Series is BMW's flagship car and is only available as a sedan (including long wheelbase and limousine models). It traditionally introduces technologies and exterior design themes before they trickle down to other models in BMW's lineup. The first generation 7 Series was powered by straight-6 petrol engines, and following generations have been powered by inline-4, straight-6, V8 and V12 engines with both natural aspiration and turbocharging. Since 1995, diesel engines have been included in the 7 Series range. Unlike the 3 Series and 5 Series sedans, BMW has not produced an M model for the 7 Series (i.e. an "M7"). However, in 2014 an "M Performance" option became available for the 7 Series.
The BMW E23 is the first generation of the BMW 7 Series luxury cars, and was produced from 1977 to 1987. It was built in a 4-door sedan body style with 6-cylinder engines, replacing the BMW E3 sedans. From 1983 to 1986, a turbocharged 6-cylinder engine was available. In 1986, the E23 was replaced by the E32 7 Series; however, the E23 models (called L7) remained on sale in the United States until 1987. The E23 introduced many electronic features for the first time in a BMW, including an on-board computer, service interval indicator, a "check control panel" (warning lights to indicate system faults to the driver), a dictaphone and complex climate control systems. It was also the first BMW to offer an anti-lock braking system (ABS), a driver's airbag and a new design of front suspension. The BMW E32 is the second generation of the BMW 7 Series luxury cars and was produced from 1986 to 1994. It replaced the E23 and was initially available with a straight-six or V12 engine. In 1992, V8 engines became available. In 1994, the E32 was replaced by the E38. The E32 introduced the following features for the first time in a BMW: Electronic Damper Control, V12 and V8 engines, double glazing, the CAN bus electronic protocol, Xenon headlamps, traction control and dual-zone climate control. The E32 750i was the first car adhering to BMW's self-imposed speed limit of 250 km/h (155 mph). The 'iL' models were the first time that a long-wheelbase option was offered by BMW. The BMW 7 Series (G11) is a full-size luxury car manufactured by German automaker BMW. Succeeding the BMW F01, produced from 2008 to 2015, it is the sixth model generation of the BMW 7 Series. It was revealed on June 10, 2015 at BMW's headquarters in Munich. An official public reveal took place at the 2015 International Motor Show Germany. G11 is the codename for the short-wheelbase model; the extended-wheelbase model is codenamed G12 and designated with an additional L letter. This 7 Series generation is the first car lineup of BMW Group to be based on the modular OKL platform (Oberklasse, German for luxury class), the rear-wheel drive counterpart to BMW's front-wheel drive UKL platform. The OKL platform adopts technology first introduced in BMW i models, namely the introduction of carbon-fiber-reinforced polymer as structural chassis components. As part of BMW's strategy of introducing plug-in hybrid variants for all future car models, both the short- and long-wheelbase models will be available with hybrid powertrains under the designations 740e and 740Le in 2016. Buick Century is the model name used by Buick for a line of upscale performance cars from 1936 to 1942 and 1954 to 1958, and from 1973 to 2005 for a mid-size car. The model name Century came about when Buick was designing its first production automobile capable of reaching a speed of 100 mph. The division needed to come up with a name. One of the Buick executives had returned from a recent trip to Britain, and told the other executives that the British referred to going 100 mph as "doing the century". The executives liked the Century name and it stuck. The Century was sold as the Buick Regal in Japan, as Toyota owns the right to the name Century. The Buick LeSabre is a full-size car made by General Motors from 1959 to 2005. Prior to 1959, this position had been retained by the full-size Buick Special model (1936–58); in 1959 the LeSabre replaced the Special, a nameplate that was reintroduced in 1961 for Buick's line of compact cars.
The name originated with the 1951 GM Le Sabre show car designed by Harley Earl; that car is often mistakenly attributed to the Buick division, but in fact it was presented as a GM vehicle without reference to a specific GM division. Buick closely related its 1956-1957 models to the GM LeSabre by replicating the top section of the rear wing in its design. The word LeSabre is French for sabre. The Buick Park Avenue is a full-size luxury car built by Buick. The nameplate was first used in 1975 for an appearance option package on the Electra 225 Limited. It became an Electra trim level in 1978 and its own model starting in 1990 (1991 model year) after the Electra was discontinued. Two generations of the Park Avenue were manufactured in the United States until 2005, while in 2007 the nameplate was revived on a large Buick sedan built by Shanghai GM for the Chinese market based on the Holden Caprice from the WM/WN range. The nameplate is derived from the affluent New York City boulevard, Park Avenue. The Cadillac Ciel is a hybrid electric concept car created by Cadillac and unveiled at the 2011 Pebble Beach Concours d'Elegance. The Cadillac Ciel has a twin-turbocharged 3.6-liter direct injection V6 producing 425 horsepower and a hybrid system using lithium-ion battery technology. The Ciel is a four-seat convertible with a wheelbase of 125 inches. The concept car was developed at GM Design's North Hollywood Design Center. The Ciel comes with rear suicide doors, and the interior features a smooth wooden dashboard with a simple gauge look. The word "Ciel" is French for "sky", which is what the designers had in mind when they made the vehicle. In 2012 and early 2013, Cadillac contemplated developing a production car based on the Ciel. However, in July 2013, they decided not to pursue the venture. At the 2013 Pebble Beach Concours d'Elegance, Cadillac unveiled a new concept, the Cadillac Elmiraj, which is similar in design to the Ciel, except it is a coupe. Both vehicles were designed by Niki Smart. The Cadillac CT6 (short for Cadillac Touring 6) is a full-size luxury car manufactured by Cadillac, first introduced at the 2015 New York International Auto Show and first sold in the US for the 2016 model year. It is the first car under Cadillac president Johan De Nysschen's leadership to adopt the brand's revised naming strategy, as well as the first rear-wheel drive full-size Cadillac sedan since the Fleetwood was discontinued in 1996. It is built on a different platform than the smaller CTS and is engineered as a rear-wheel drive vehicle with optional all-wheel drive. In addition to its primary markets of North America and China, the CT6 is also offered in Europe, Korea, Japan, Israel and the Middle East. The Cadillac Escala is a concept car built by Cadillac for the 2016 Pebble Beach Concours d'Elegance. The last of a trifecta of concept cars initially conceived in 2007, it is preceded by the Ciel and Elmiraj, which had debuted back in 2011 and 2013 respectively. The Escala previews Cadillac's future design language under the leadership of president Johan de Nysschen, being an evolution of the Art and Science design philosophy that has been used on its cars for over a decade. The Escala was first announced via a trailer video on August 15, 2016. Its name, revealed one day before its public debut, derives from the Spanish word for scale. This refers to the Escala utilizing an elongated version of the Cadillac CT6's Omega underpinnings, being approximately six inches longer than the latter.
The car was unveiled at a cocktail party in Carmel-by-the-Sea, California on August 18, 2016, which was attended by Johan de Nysschen, GM Vice President of Global Design Michael Simcoe and Cadillac's executive director of global design Andrew Smith, along with several other executives. Although yet to be officially confirmed for production, it was described by de Nysschen as "a potential addition to our existing product plan" in a press release, its ultimate fate determined by the fertility of the flagship luxury sedan market. However, it previews the design, powertrain and other advanced technological features currently in development that are set to appear on other upcoming Cadillac production cars in the future. The Cadillac DeVille was originally a trim level and later a separate model produced by Cadillac. The first car to bear the name was the 1949 Coupe de Ville, a pillarless two-door hardtop body style with a prestige trim level above that of the Series 62 luxury coupe. The last model to be formally known as a DeVille was the 2005 Cadillac DeVille, a full-size sedan, the largest car in the Cadillac model range at the time. The next year, the DeVille was officially renamed DTS. The Cadillac Series 65, which appeared in 1937 after the Series 60, represented Cadillac's second entry into the mid-priced vehicle market; being built on the C-body instead of the B-body, it was somewhat physically larger. It was slightly higher in status than the LaSalle, also offered by General Motors. In 1937 it was offered in only one body style, a 4-door 5-seat sedan, built by Fisher on the same 131.0 in (3,327 mm) wheelbase as used by the Cadillac Series 70 and the Buick Roadmaster. It offered a longer, heavier car than the Series 60 at a price below that of the Fleetwood-bodied Series 70. Under the hood was the Monobloc V8. The only displacement that was available was the 346 cu in (5.7 L). This engine produced 135 hp (101 kW) at 3400 R.P.M. The car had Bendix dual-servo brakes, "Knee-Action" independent suspension in front and a Stromberg carburetor ('37: AA-25; '38: AAV-25) with an electric choke. In 1938 the Series 65 and the Series 75 shared a new front end style featuring a massive vertical cellular grille, three sets of horizontal bars on the hood sides, alligator hood, and headlights on the filler space between the fenders and the hood. Optional sidemount covers were hinged to the fenders. Quarter windows were of sliding rather than hinged construction. The rear of the body had rounder corners and more smoothly blended lines. Trunks had more of an appearance of being an integral part of the body. Bodies were all steel except for wooden main sills. New chassis details included a column gear shift, horns just behind the grille, battery under the right hand side of the hood, transverse muffler just behind the fuel tank, wheels by a different manufacturer, "Synchro-Flex" flywheel, hypoid rear axle and the deletion of the oil filter. The Cadillac Sixteen is a concept car first developed and presented by Cadillac in 2003. The vehicle is equipped with a Cadillac proprietary-developed aluminum 32-valve V16 engine displacing 13.6 liters (~830 cu. in), which was exclusive to the Sixteen and based on the GM Generation IV LS architecture. It is mated to a four-speed, electronically controlled, automatic transmission driving the rear wheels.
The engine features fuel-saving Active Fuel Management "Displacement on Demand" technology, which could shut down either twelve or eight of the cylinders when the full output was not needed. The V16 was capable of 16.65 mpg under normal conditions. The engine was said to produce a minimum of 1,000 bhp (746 kW; 1,014 PS) and at least 1,000 lb·ft (1,400 N·m) of torque, using no form of forced induction. The car itself weighs about 2,270 kilograms (5,000 lb). The car referenced the Cadillac V-16 of the 1930s. The actual design of the car was a combination of Cadillac's current "Art and Science" design theme and 1967 Cadillac Eldorado cues. Additional original design elements were provided by an in-house design competition led by GM Vice President Bob Lutz. The Sixteen has the Cadillac logo carved out of solid crystal on the steering wheel and a Bulgari clock on the dashboard. Although the Sixteen remained a concept car, its design language was implemented in Cadillac's subsequent vehicles, most noticeably on the 2008 Cadillac CTS. A scaled-down version of the car, referred to as the ULS (Ultra Luxury Sedan) or XLS, with a standard V8 and an optional V12 (the latter was to be called the Cadillac Twelve), had been rumored for production since 2005, but was eventually shelved in favor of the Cadillac XTS. Ever since the Sixteen was first unveiled there have been resurfacing rumors, speculation and high hopes of automotive journalists and aficionados about a possible limited production of an exclusive Cadillac halo model, such as the Sixteen, to be the "ultimate flagship" of the brand and sit atop the upcoming flagship, as previewed by the Ciel concept of late 2011. It appeared on an episode of Ride with Funkmaster Flex at the 2003 New York International Auto Show. Also in 2003, Top Gear reviewed the Cadillac 16 with its presenter James May in Series 2, Episode 10. May praised the Sixteen as "exactly what a Cadillac should be" and said it should be put into production. In the 2006 comedy film Click starring Adam Sandler, Sandler's character is seen driving a Cadillac Sixteen when he visits his family in the year 2017. In the 2011 film Real Steel, starring Hugh Jackman, the child's parents are seen getting in and out of a Cadillac Sixteen at around 18 minutes in, as Jackman's character collects his money and his child. The Cadillac XTS (X-Series Touring Sedan) is a full-size luxury sedan from Cadillac. It is based on an enlarged version of the Epsilon II platform. The XTS replaces both the Cadillac STS and DTS, and is smaller than the DTS but larger than the STS. It began production in May 2012 at the Oshawa Assembly Plant and launched in June. The XTS is available with both front-wheel drive and all-wheel drive. For the Chinese market, the Cadillac XTS is being assembled by Shanghai GM. Production began in February 2013. In addition to the LFX 3.6 V6, Cadillac XTS also comes with an LTG 2.0 turbo engine in the Chinese market. In the Chinese market, the Cadillac XTS with an LFX 3.6 V6 engine is called XTS 36S, and the version with LTG 2.0 turbo engine is called XTS 2.0T. The Cadillac XTS Sedan is currently available in the United States, Canada, Mexico, China, and the Middle East (except Israel) in LHD only. The Chevrolet Bel Air was a full-size car produced by Chevrolet for the 1950–1981 model years.
Initially, only the two-door hardtops in the Chevrolet model range were designated with the Bel Air name from 1950 to 1952, as distinct from the Styleline and Fleetline models for the remainder of the range. With the 1953 model year the Bel Air name was changed from a designation for a unique body shape to a premium level of trim applied across a number of body styles. The Bel Air continued with various other trim level designations until US production ceased in 1975. Production continued in Canada, for its home market only, through the 1981 model year. The Chevrolet Biscayne was a series of full-size cars produced by the American manufacturer Chevrolet between 1958 and 1972. Named after a show car displayed at the 1955 General Motors Motorama, the Biscayne was the least expensive model in the Chevrolet full-size car range (except the 1958-only Chevrolet Delray). The absence of most exterior and fancy interior trimmings remained through the life of the series, as the slightly costlier Chevrolet Bel Air offered more interior and exterior features at a price significantly lower than the mid-line Chevrolet Impala. The Chevrolet Caprice is a full-sized automobile produced by Chevrolet in North America for the 1965 to 1996 model years. Full-size Chevrolet sales peaked in 1965 with over a million sold; the full-size Chevrolet was the most popular American car in the sixties and early seventies, and during its lifetime the line included the Biscayne, Bel Air, and Impala. Introduced in mid-1965 as a luxury trim package for the Impala four-door hardtop, the Caprice was offered by Chevrolet as a full line of models for the 1966 and subsequent model years, including a "formal hardtop" coupe and an Estate station wagon. The 1971 to 1976 models are the largest Chevrolets ever built. The downsized 1977 and restyled 1991 models were awarded Motor Trend Car of the Year. Production ended in 1996. In 2011, the Caprice nameplate returned to North America as a full-size, rear wheel drive police vehicle, a captive import from Australia built by General Motors's subsidiary Holden—the police vehicle is a rebadged version of the Holden WM/WN Caprice. The nameplate has also had a civilian and police presence in the Middle East since 1999, where the imported Holden Statesman/Caprice built by Holden has been marketed as the Chevrolet Caprice in markets such as the UAE and Saudi Arabia. The Chevrolet Impala is a full-size car built by Chevrolet for model years 1958-85, 1994-96, and from 2000 onwards. The Impala was Chevrolet's most expensive passenger model through 1965, and had become the best-selling automobile in the United States. For its debut in 1958, the Impala was distinguished from other models by its symmetrical triple taillights, which returned from 1960-96. The Caprice was introduced as a top-line Impala Sport Sedan for model year 1965, later becoming a separate series positioned above the Impala in 1966, which, in turn, remained above the Bel Air and the Biscayne. The Impala continued as Chevrolet's most popular full-size model through the mid-1980s. Between 1994 and 1996, the Impala was revised as a 5.7-liter V8–powered version of the Caprice Classic sedan. In 2000, the Impala was reintroduced as a mainstream front-wheel drive Hi-Mid sedan. As of February 2014, the 2014 Impala ranked #1 among Affordable Large Cars in U.S. News & World Report's rankings. When the current tenth generation of the Impala was introduced for the 2014 model year, the ninth generation was rebadged as the Impala Limited and sold only to fleet customers through 2016.
As of the 2015 model year, both versions are sold in the United States and Canada, with the current-generation Impala also sold in the Middle East, the People's Republic of China, and South Korea. The fifth-generation Chevrolet Impala is a full-sized automobile that was produced by Chevrolet for the 1971 through 1976 model years and was one of GM's top-selling models throughout the 1970s. Models included a sport coupe using a semi-fastback roofline shared with other B-body GM cars, a custom coupe with the formal roofline from the Caprice, a four-door sedan, a four-door hardtop sport sedan, and a convertible, each of which rode on a new 121.5-inch wheelbase and measured 217 inches overall. Station wagons rode on a longer 125-inch wheelbase. The Chevrolet Nomad was a station wagon model made off and on from 1955 to 1972, and a Chevy Van trim package in the late 1970s and early 1980s, produced by Chevrolet. The Nomad is best remembered in its Tri-Five, two-door 1955–57 form, and was considered a halo model during its three-year production as a two-door station wagon. The name comes from nomad (Greek: νομάς, nomas, plural νομάδες, nomades, meaning someone who is roaming about for pasture, or a pastoral tribe), a member of a community of people who live in different locations, moving from one place to another. The Chrysler 300 is a rear-wheel-drive, front-engine, full-sized luxury car manufactured and marketed by Chrysler as a four-door sedan and station wagon in its first generation (model years 2005-2010) and solely as a four-door sedan in its second and current generation (model years 2011-present). The 300 debuted as a concept at the 2003 New York Auto Show with styling by Ralph Gilles and production starting in April 2004 for the 2005 model year. The second generation 300 was marketed as the Chrysler 300C in the United Kingdom and Ireland and as the Lancia Thema in the remainder of Europe. The Chrysler 300 "letter series" are high-performance personal luxury cars that were built by Chrysler in the U.S. from 1955 to 1965. After the initial year, which was named C-300, the 1956 cars were designated 300B. Successive model years were given the next letter of the alphabet as a suffix (skipping "i"), reaching the 300L by 1965, after which the model was dropped. The 300 "letter series" cars were among the vehicles that focused on performance built by domestic U.S. manufacturers after World War II, and thus can be considered one of the muscle car's ancestors, though full-sized and more expensive. The automaker began using the 300 designations again for performance-luxury sedans, using the 300M nameplate from 1999 to 2004, and expanding the 300 series with a new V8-powered 300C, the top model of a new Chrysler 300 line, a new rear-wheel drive car launched in 2004 for the 2005 model year. Unlike the first "letter series" cars, the successive variants do not feature standard engines producing at least 300 hp (220 kW), except for Chrysler's current top-line 300C models. The Chrysler 300 (Chrysler 300 Non-Letter Series) was a full-size automobile produced by Chrysler from 1962 until 1971. It was the replacement for the 1961 Chrysler Windsor, which itself had filled the place in Chrysler's line previously occupied by the Saratoga just the year before (1960).
At the time, it was considered a luxurious "muscle car", with all the performance of the Dodge and Plymouth products of the time, but with the luxury features expected of the Chrysler name. The 300 was positioned as a replacement of the 300 "letter series", adding 4-door versions and running alongside that model until its discontinuation in 1966. It became the sole 300 model until 1971, when production ended. The 300 name returned to the Chrysler line in 1979 as an option package on the Cordoba coupe. The Chrysler Concorde is a large four-door, full-size, front wheel drive sedan that was produced by Chrysler from 1992 to 2004. It assumed the C-body Chrysler New Yorker's position as the entry-level full-size sedan in Chrysler's lineup. One of Chrysler's three original Chrysler LH platform models derived from the American Motors/Renault-designed Eagle Premier, it used the revolutionary cab-forward design. The Concorde was related to the Dodge Intrepid, Eagle Vision, Chrysler 300M, Chrysler LHS, and the eleventh and final generation Chrysler New Yorker. It was on Car and Driver magazine's Ten Best list for 1993 and 1994. The Chrysler Imperial, introduced in 1926, was Chrysler's top of the line vehicle for much of its history. Models were produced with the Chrysler name until 1954, and again from 1990 to 1993. The company positioned the cars as a prestige marque to rival Cadillac, Lincoln, Duesenberg, Pierce Arrow, Cord and Packard. According to Antique Automobile, "The adjective ‘imperial’ according to Webster’s Dictionary means sovereign, supreme, superior or of unusual size or excellence. The word imperial thus justly befits Chrysler’s highest priced model." The Chrysler New Yorker is an automobile model which was produced by Chrysler from 1940 to 1996, serving for several years as the brand's flagship model. A trim level named the "New York Special" first appeared in 1938 and the "New Yorker" name debuted in 1939. Until its discontinuation in 1996, the New Yorker had made its mark as the longest-running American car nameplate. The New Yorker name helped define the Chrysler brand as a maker of upscale models, priced and equipped above mainstream brands like Ford, Chevrolet/Pontiac, and Dodge/Plymouth, but below full luxury brands like Cadillac, Lincoln and Packard. During the New Yorker's tenure, it competed against upper-level models from Buick, Oldsmobile and Mercury. The 2012 Kermadec Islands eruption was a major undersea volcanic eruption that was produced by the previously little-known Havre Seamount near the L'Esperance and L'Havre Rocks in the Kermadec Islands of New Zealand. The large volume of low-density pumice produced by the eruption accumulated as a large area of floating pumice, a pumice raft, that originally covered a surface of 400 square kilometres (150 square miles), spread to a continuous float of between 19,000 and 26,000 square kilometres (7,500 and 10,000 sq mi), and within three months dispersed to an area more than twice the size of New Zealand. The thickness of the raft may initially have been as high as 3.5 metres (11 feet) and was reduced to around 0.5 metres (1 foot 8 inches) within a month. Three months after the eruption, the mass had dispersed into very dilute rafts and ribbons of floating pumice clasts. Most pumice clasts became waterlogged and sank to the sea floor, while some clumps stranded in the Tonga islands, on the northern shores of New Zealand, and eventually on the eastern coast of Australia one year after the eruption.
Mount Adatara (安達太良山, Adatara-yama) is a stratovolcano in Fukushima Prefecture, Japan. It is located about 15 kilometres southwest of the city of Fukushima and east of Mount Bandai. Its last known eruption was in 1996. An eruption in 1900 killed 72 workers at a sulfur mine located in the summit crater. The mountain is actually multiple volcanoes forming a broad, forested massif. It abuts Mount Azuma, a dormant volcano to the north. The peak is called Minowa-yama. It is the highest peak in the Adatara range, which stretches about 9 km in a north-south direction. The active summit crater is surrounded by hot springs and fumaroles. Sulfur mining was carried out in the 19th century. Poems about Mount Adatara by Kōtarō Takamura from his book "Chieko-sho" helped make it famous. Ah Peku Patera is a patera, or a complex crater with scalloped edges, on Jupiter's moon Io. It is 84 kilometers in diameter and is located at 10.3°N, 107°W. It is named after the Mayan thunder god Ah Peku. Its name was adopted by the International Astronomical Union in 2006. Ah Peku Patera is located on the south end of Monan Mons, north of which is Monan Patera. The eruptive centers Amirani and Maui can be found northwest, as well as Maui Patera. Gish Bar Patera is located toward the northeast. Ah Peku Patera was first detected by the spacecraft Galileo's Solid State Imager and Near-Infrared Mapping Spectrometer. It is considered an active hot spot. Aira Caldera (姶良カルデラ, Aira-Karudera) is a gigantic volcanic caldera in the south of the island of Kyūshū, Japan. The caldera was created by a massive eruption approximately 22,000 years ago. Eruption of voluminous pyroclastic flows accompanied the formation of the 17 × 23 km-wide Aira caldera. Together with a large pumice fall, these amounted to approximately 400 km3 of tephra (VEI 7). The major city of Kagoshima and the 16,000-year-old Sakurajima volcano lie within the caldera. Sakura-jima, one of Japan's most active volcanoes, is a post-caldera cone of the Aira caldera at the northern half of Kagoshima Bay. Mount Akutan, officially Akutan Peak, is a stratovolcano in the Aleutian Islands of Alaska. Akutan Peak, at 4,275 feet (1,303 m), is the highest point on the caldera of the Akutan stratovolcano. Akutan contains a 2 km-wide caldera formed during a major explosive eruption about 1600 years ago. Recent eruptive activity has originated from a large cinder cone on the NE part of the caldera. It has been the source of frequent explosive eruptions with occasional lava effusion that blankets the caldera floor. A lava flow in 1978 traveled through a narrow breach in the north caldera rim to within 2 km of the coast. A small lake occupies part of the caldera floor. Two volcanic centers are located on the NW flank: Lava Peak, which is of Pleistocene age, and a cinder cone lower on the flank which produced a lava flow in 1852 that extended the shoreline of the island and forms Lava Point. An older, mostly buried caldera seems to have formed in Pleistocene or Holocene time, while the current caldera formed in a VEI-5 eruption c. 340 AD. The Alaska Volcano Observatory (AVO) has recorded 33 confirmed eruptions at Akutan, making it the volcano with the most eruptions in Alaska. The volcano erupted most recently in 1992, but there is still fumarolic activity at the base of Lava Point and there are hot springs northeast of the caldera.
In March 1996, an earthquake swarm was followed by deformation of the volcanic edifice, including a lowering of the eastern side and a rise of the western side of the volcano. Alcedo Volcano is one of the six coalescing shield volcanoes that make up Isabela Island in the Galapagos. The remote location of the volcano has meant that even the most recent eruption in 1993 was not recorded until two years later. It is also the only volcano in the Galapagos to have erupted rhyolite and basaltic lava. The volcano has the largest number of wild tortoises of any of the volcanoes in the Galapagos, though their genetic diversity is amongst the lowest of any of the populations in the archipelago. The habitat of the tortoises was threatened when feral goats crossed from southern Isabela Island in the 1970s and then reproduced rapidly. Amirani is an active volcano on Jupiter's moon Io, the innermost of the Galilean Moons. It is located on Io's leading hemisphere at 24.46°N, 114.68°W. The volcano is responsible for the largest active lava flow in the entire Solar System, with recent flows dwarfing even those of other volcanoes on Io. The volcano was first observed in images acquired by the Voyager 1 spacecraft in March 1979. Later that year, the International Astronomical Union named this feature after the Georgian fire god Amirani. The undissected stratovolcano of Amukta makes up most of the nearly circular, 7.7-km-wide Amukta Island (Amuux̂tax̂ in Aleut). It is the westernmost of the Islands of Four Mountains chain. The cone, about 5.8 km in basal diameter and topped by a 0.4 km wide summit crater, appears on synthetic-aperture radar imagery to be built upon a 300+ meter high, east-west trending arcuate ridge. Extensions of that ridge on the southwest and east sides of the island indicate an older caldera approximately 6 km in diameter and open to the sea on the south side. No hot springs or fumaroles have been reported from Amukta. Sekora (1973, p. 29) reports the presence of a cinder cone near the northeastern shore of the island. Antsiferov Island (Russian: Остров Анциферова; also known as Shirinki, Russian: Ширинки; Japanese: 志林規島, Shirinki-tō) is an uninhabited volcanic island located in the northern Kuril Islands chain in the Sea of Okhotsk in the northwest Pacific Ocean. Its former Japanese name is derived from the Ainu language for "place of tall waves". Its nearest neighbor is Paramushir, located 15 km away across the Luzhin Strait. It is currently named for the cossack explorer Danila Antsiferov, who first described it along with other northern Kuril islands in the early eighteenth century. Aogashima (青ヶ島) is a volcanic Japanese island in the Philippine Sea. The island is administered by Tokyo and located approximately 358 kilometres (222 mi) south of Tokyo and 64 kilometres (40 mi) south of Hachijō-jima. It is the southernmost and most isolated inhabited island of the Izu archipelago. The village of Aogashima administers the island under Hachijō Subprefecture of Tokyo Metropolis. The island's area is 8.75 km2 (3.38 sq mi) and, as of 2014, its population is 170. Aogashima is also within the boundaries of the Fuji-Hakone-Izu National Park. Aracar is a large conical stratovolcano in northwestern Argentina, just east of the Chilean border. It has a main summit crater about 1–1.5 kilometres (0.6–0.9 mi) in diameter which sometimes contains crater lakes, and a secondary crater. The volcano has formed, starting during the Pliocene, on top of a lava platform and an older basement.
Constructed on a base with an altitude of 4,100 metres (13,500 ft), it covers a surface area of 192.4 square kilometres (74.3 sq mi) and has a volume of 148 cubic kilometres (36 cu mi). The only observed volcanic activity was a possible steam or ash plume on March 28, 1993, seen from the village of Tolar Grande about 50 km (31 mi) southeast of the volcano, with no evidence of deformation of the volcano from satellite observations. Inca archaeological sites are found on the volcano. Arenal Volcano (Spanish: Volcán Arenal) is an active andesitic stratovolcano in north-western Costa Rica, around 90 km northwest of San José, in the province of Alajuela, canton of San Carlos, and district of La Fortuna. The Arenal volcano measures at least 1,633 metres (5,358 ft) high. It is conically shaped with a crater 140 metres (460 ft) in diameter. Geologically, Arenal is considered a young volcano and is estimated to be less than 7,500 years old. It is also known as "Pan de Azúcar", "Canaste", "Volcan Costa Rica", "Volcan Río Frío" or "Guatusos Peak". The volcano was dormant for hundreds of years, exhibiting a single crater at its summit with minor fumarolic activity, and was covered by dense vegetation. In 1968 it erupted unexpectedly, destroying the small town of Tabacón. The eruption created three more craters on the western flanks, but only one of them still exists today. Since 2010, Arenal has been dormant. Arjuno-Welirang is a stratovolcano in the province of East Java on Java, Indonesia. It is a twin volcano, with the 'twins' being Arjuno and Welirang. There is at least one other stratovolcano in the area, and there are around 10 pyroclastic cones nearby, located along a 6 km line between Arjuno and Welirang. The Arjuno-Welirang volcanic complex itself lies on the two older volcanoes, Mount Ringgit to the east and Mount Linting to the south. The summit lacks vegetation. Fumarolic areas with sulfur deposits are found in several locations on Welirang. The name Arjuno is the Javanese rendition of Arjuna, a hero of the Mahabharata epic, while welirang is the Javanese word for sulfur. An explosive eruption in 1950 had a Volcanic Explosivity Index (VEI) of 2; another eruption, two years later in 1952, had a VEI of 0. A 300-hectare area on the slope of Mount Arjuno near the Surabaya–Malang road is used by Taman Safari II. Atlasov Island, known in Russian as Ostrov Atlasova (Остров Атласова), or in Japanese as Araido (阿頼度島), is the northernmost island and volcano and also the highest volcano of the Kuril Islands, part of the Sakhalin Oblast in Russia. The Russian name is sometimes rendered in English as Atlasova Island. Other names for the island include Uyakhuzhach, Oyakoba and Alaid, the name of the volcano on the island. The island is named after Vladimir Atlasov, a 17th-century Russian explorer who incorporated the nearby Kamchatka Peninsula into Russia. It is essentially the cone of a submarine volcano called Vulkan Alaid protruding above the Sea of Okhotsk to a height of 2,339 metres (7,674 feet). The island has an area of 119 square kilometres (46 square miles), but is currently uninhabited. Numerous pyroclastic cones dot the lower flanks of the basaltic to basaltic-andesite volcano, particularly on the northwest and southeast sides, including an offshore cone formed during the 1933–34 eruption. Its near-perfect shape gave rise to many legends about the volcano among the peoples of the region, such as the Itelmens and Kuril Ainu.
The Russian scientist Stepan Krasheninnikov was told the story that it was once a mountain in Kamchatka, but the neighbouring mountains became jealous of its beauty and exiled it to the sea, leaving behind Kurile Lake in southern Kamchatka. Geographically, this story is not without evidence: after the last Ice Age most of the icecaps melted, raising the world's sea level and possibly submerging a land bridge to the volcano. Following the transfer of the Kuril Islands to Japan under the Treaty of St Petersburg in 1875, Oyakoba, as it is called by the Japanese, became the northernmost island of the empire and the subject of much aesthetic praise, described in haiku, ukiyo-e, and other works. Ito Osamu (1926) described it as more exquisitely shaped than Mount Fuji. Administratively, the island belongs to the Sakhalin Oblast of the Russian Federation. Augustine Volcano is a central lava dome and lava flow complex, surrounded by pyroclastic debris. It forms Augustine Island in southwestern Cook Inlet in the Kenai Peninsula Borough of southcentral coastal Alaska, 280 kilometers (174 mi) southwest of Anchorage. Augustine Island has a land area of 83.872 square kilometers (32.4 sq mi), while West Island, just off Augustine's western shores, has 5.142 km2 (2.0 sq mi). The island is made up mainly of past eruption deposits. Scientists have been able to discern that past dome collapses have resulted in large avalanches. Avachinsky (also known as Avacha, Avacha Volcano or Avachinskaya Sopka) (Russian: Авачинская сопка, Авача) is an active stratovolcano on the Kamchatka Peninsula in the far east of Russia. It lies within sight of the capital of Kamchatka Krai, Petropavlovsk-Kamchatsky. Together with the neighboring Koryaksky volcano, it has been designated a Decade Volcano, worthy of particular study in light of its history of explosive eruptions and proximity to populated areas. Avachinsky's last eruption occurred in 2008; it was tiny compared to the volcano's major Volcanic Explosivity Index 4 eruption in 1945. Axial Seamount (also Coaxial Seamount or Axial Volcano) is a seamount and submarine volcano located on the Juan de Fuca Ridge, approximately 480 km (298 mi) west of Cannon Beach, Oregon. Standing 1,100 m (3,609 ft) high, Axial Seamount is the youngest volcano and current eruptive center of the Cobb–Eickelberg Seamount chain. Located at the center of both a geological hotspot and a mid-ocean ridge, the seamount is geologically complex, and its origins are still poorly understood. Axial Seamount is set on a long, low-lying plateau, with two large rift zones trending 50 km (31 mi) to the northeast and southwest of its center. The volcano features an unusual rectangular caldera, and its flanks are pockmarked by fissures, vents, sheet flows, and pit craters up to 100 m (328 ft) deep; its geology is further complicated by its intersection with several smaller seamounts surrounding it. Axial Seamount was first detected in the 1970s by satellite altimetry, and mapped and explored by Pisces IV, DSV Alvin, and others through the 1980s. A large package of sensors was dropped on the seamount through 1992, and the New Millennium Observatory was established on its flanks in 1996. Axial Seamount received significant scientific attention following the seismic detection of a submarine eruption at the volcano in January 1998, the first time a submarine eruption had been detected and followed in situ.
Subsequent cruises and analysis showed that the volcano had generated lava flows up to 13 m (43 ft) thick, and the total eruptive volume was found to be between 18 and 76 million cubic meters (24–99 million cu yd). Axial Seamount erupted again in April 2011, producing a mile-wide lava flow. There was another eruption in 2015. Cerro Azul (Spanish pronunciation: [ˈsero aˈsul], "blue hill" in Spanish), sometimes referred to as Quizapu, is an active stratovolcano in the Maule Region of central Chile, immediately south of Descabezado Grande. Part of the South Volcanic Zone of the Andes, its summit is 3,788 meters (12,428 ft) above sea level and is capped by a summit crater that is 500 meters (1,600 ft) wide and opens to the north. Beneath the summit, the volcano features numerous scoria cones and flank vents. Cerro Azul is responsible for several of South America's largest recorded eruptions, in 1846 and 1932. In 1846, an effusive eruption formed the vent at the site of present-day Quizapu crater on the northern flank of Cerro Azul and sent lava flowing down the sides of the volcano, creating a lava field 8–9 square kilometres (3–3.5 square miles) in area. Phreatic and Strombolian volcanism between 1907 and 1932 excavated this crater. In 1932, one of the largest explosive eruptions of the 20th century occurred at Quizapu Crater and sent 9.5 cubic kilometers (2.3 cu mi) of ash into the atmosphere. The volcano's most recent eruption was in 1967. The South Volcanic Zone has a long history of eruptions and poses a threat to the surrounding region. Any volcanic hazard, ranging from minor ashfalls to pyroclastic flows, could pose a significant risk to humans and wildlife. Despite its inactivity, Cerro Azul could again produce a major eruption; if this were to happen, relief efforts would probably be organized quickly. Teams such as the Volcano Disaster Assistance Program (VDAP) are prepared to evacuate, assist, and rescue people threatened by volcanic eruptions. Mount Azuma-kofuji (吾妻小富士) is an active stratovolcano in Fukushima Prefecture, Japan. It has a conical crater, and as the name "Kofuji" (small Mount Fuji) suggests, the shape of Mount Azuma resembles that of Mount Fuji. Mount Azuma's appealing symmetrical crater and the nearby fumarolic area with its many onsen have made it a popular tourist destination. The Bandai-Azuma Skyline passes just below the crater, allowing visitors to drive to within walking distance of the crater and of various other hiking trails on the mountain. There is also a visitor center along the roadway near the crater, with a collection of eateries, facilities, a parking lot, and a stop for buses from Fukushima Station. The Azuma volcanic group contains several volcanic lakes, including Goshiki-numa, the 'Five Colored Lakes'. Each spring, as the snow melts away, a patch of melting snow shaped like a white rabbit appears on the side of Mount Azuma; known as the 'seeding rabbit', it signals to the people of Fukushima that the farming season has come. Mount Bagana is an active volcano located in the centre of Bougainville, Papua New Guinea, the largest island of the Solomon group. It is the most active volcano in the country, occupying a remote portion of central Bougainville Island, and is one of Melanesia's youngest and most active volcanoes. Bagana is a massive, symmetrical, roughly 1,750-m-high lava cone largely constructed by an accumulation of viscous andesitic lava flows.
The entire lava cone could have been constructed in about 300 years at its present rate of lava production. Eruptive activity at Bagana is frequent and is characterized by non-explosive effusion of viscous lava that maintains a small lava dome in the summit crater, although explosive activity occasionally producing pyroclastic flows also occurs. Lava flows form dramatic, freshly preserved tongue-shaped lobes up to 50 m thick with prominent levees that descend the volcano's flanks on all sides. Just northeast of Bagana is Billy Mitchell, a volcano with a crater lake. Bagana is one of 17 post-Miocene stratovolcanoes on Bougainville. U.S. General Floyd L. Parks flew over the Solomon Islands on 27 October 1948 and witnessed the eruption of Bagana; his photographs of the erupting volcano were published in Life magazine. Mount Baker (Lummi: Qwú’mə Kwəlshéːn; Nooksack: Kw’eq Smaenit or Kwelshán), also known as Koma Kulshan or simply Kulshan, is an active glaciated andesitic stratovolcano in the Cascade Volcanic Arc and the North Cascades of Washington in the United States. Mount Baker has the second-most thermally active crater in the Cascade Range, after Mount Saint Helens. About 31 miles (50 km) due east of the city of Bellingham, Whatcom County, Mount Baker is the youngest volcano in the Mount Baker volcanic field. While volcanism has persisted here for some 1.5 million years, the current glaciated cone is likely no more than 140,000 years old, and possibly no older than 80,000–90,000 years. Older volcanic edifices have mostly eroded away due to glaciation. After Mount Rainier, Mount Baker is the most heavily glaciated of the Cascade Range volcanoes; the volume of snow and ice on Mount Baker, 0.43 cu mi (1.79 km3), is greater than that of all the other Cascades volcanoes (except Rainier) combined. It is also one of the snowiest places in the world; in 1999, Mount Baker Ski Area, located 14 km (8.7 mi) to the northeast, set the world record for recorded snowfall in a single season: 1,140 in (2,900 cm). At 10,781 ft (3,286 m), it is the third-highest mountain in Washington State and the fifth-highest in the Cascade Range, if Little Tahoma Peak, a subpeak of Mount Rainier, and Shastina, a subpeak of Mount Shasta, are not counted. Located in the Mount Baker Wilderness, it is visible from much of Greater Victoria, Nanaimo, and Greater Vancouver in British Columbia and, to the south, from Seattle (and on clear days Tacoma) in Washington. Indigenous peoples have known the mountain for thousands of years, but the first written record of it is from the Spanish explorer Gonzalo Lopez de Haro, who mapped it in 1790 as Gran Montaña del Carmelo, "Great Mount Carmel". The explorer George Vancouver renamed the mountain for 3rd Lieutenant Joseph Baker of HMS Discovery, who saw it on April 30, 1792. Beerenberg is a 2,277 m (7,470 ft) stratovolcano which forms the northeastern end of the Norwegian island of Jan Mayen. It is the world's northernmost subaerial active volcano. The volcano is topped by a mostly ice-filled crater about 1 km (0.6 mi) wide, with numerous peaks along its rim, including the highest summit, Haakon VII Toppen, on its western side. The upper slopes of the volcano are largely ice-covered, with several major glaciers, including five which reach the sea.
The longest of the glaciers is the Weyprecht Glacier, which flows from the summit crater via a breach through the northwestern portion of the crater rim and extends about 6 km (4 mi) down to the sea. Beerenberg is composed primarily of basaltic lava flows with minor amounts of tephra, and numerous cinder cones have formed along flank fissures. Its most recent eruptions took place in 1985 and 1970, both of which were flank eruptions from fissures on the northeast side of the mountain. Other eruptions with historical records occurred in 1732, 1818, and 1851. Its name means "Bear Mountain" in Dutch, taken from the polar bears seen there by Dutch whalers in the early 17th century. Mount Belinda is a stratovolcano on Montagu Island, in the South Sandwich Islands of the Scotia Sea. Part of the British Overseas Territory of South Georgia and the South Sandwich Islands, Mount Belinda is also the highest peak in the South Sandwich Islands, at 1,370 m (4,490 ft). Belinda was inactive until late 2001, when it erupted. The eruption produced large quantities of basaltic lava, melting the thick cover of ice that had accumulated while the volcano lay dormant, and "producing a marvelous 'natural laboratory' for studying lava-ice interactions relevant to the biology of extreme environments as well as to processes believed to be important on Mars." Activity throughout 2005 marked the highest levels yet; the increase in activity in the fall of 2005 produced an active lava flow 3.5 kilometres (2.2 mi) long, extending from the summit cone of Mount Belinda to the sea. The flow spread northeast from the volcanic vent and was then diverted due north by an arête. By late 2007, eruptive activity had ceased, and in 2010 the only activity was from scattered fumaroles and cooling lava. Bezymianny (Russian: Безымянный, meaning "unnamed") is an active stratovolcano in Kamchatka, Russia. Bezymianny had been considered extinct until 1955. Activity starting in 1955 culminated in a dramatic eruption on 30 March 1956. This eruption, similar to that of Mount St. Helens in 1980, produced a large horseshoe-shaped crater that was formed by collapse of the summit and an associated lateral blast. Subsequent episodic but ongoing lava dome growth, accompanied by intermittent explosive activity and pyroclastic flows, has largely filled the 1956 crater. The most recent eruption of lava flows occurred in February 2013. The modern Bezymianny volcano, much smaller than its massive neighbors Kamen and Kliuchevskoi, was formed about 4,700 years ago over a late-Pleistocene lava-dome complex and an ancestral volcano that was built between about 11,000 and 7,000 years ago. There have been three periods of intensified activity in the past 3,000 years. Big Ben is a volcanic massif that dominates the geography of Heard Island in the southern Indian Ocean. It is a composite cone with a diameter of approximately 25 kilometres. Its highest peak is Mawson Peak, which is 2,745 m above sea level. Much of it is covered by ice, including 14 major glaciers which descend from Big Ben to the sea. Big Ben is the highest mountain in Australian territory, except for those claimed in the Australian Antarctic Territory. A smaller volcanic headland, the Laurens Peninsula, extends approximately 10 km to the northwest; it was created by a separate volcano, Mount Dixon, whose highest point is Anzac Peak, at 715 m.
Bogoslof Island or Agasagook Island (Aleut: Aĝasaaĝux̂) is the summit of a submarine stratovolcano at the southern edge of the Bering Sea, 35 miles (56 km) northwest of Unalaska Island in the Aleutian Islands chain. It has a land area of 319.3 acres (1.292 km2) and is uninhabited. The peak elevation of the island is 490 feet (150 m). It is 1,040 meters (3,410 ft) long and 1,512 m (4,961 ft) wide. The stratovolcano rises about 6,000 ft (1,800 m) from the seabed, but the summit is the only part that projects above sea level. Brennisteinsalda is a volcano in the south of Iceland, about 855 m high, situated near Landmannalaugar and not far from Hekla. Its name means "sulphur wave" in English, referring to the sulphur spots that have coloured its sides. There are other colours too: green from mosses, black and blue from lava and ashes, and red from iron in the earth. It may well be the most colourful mountain in Iceland, and its picture is therefore often found in books and calendars. The mountain is still visibly an active volcano, with hot sulphur springs and vapour on its sides. The Laugavegur hiking trail passes by, and an obsidian lava field lies in front of the mountain. Monte Burney is a volcano in southern Chile, part of its Austral Volcanic Zone, which consists of six volcanoes that were active during the Quaternary. This volcanism is linked to the subduction of the Antarctic Plate beneath the South America Plate and the Scotia Plate. Monte Burney is formed by a caldera with a glaciated stratovolcano on its rim; this stratovolcano in turn has a smaller caldera. An eruption is reported for 1910, with less certain eruptions in 1970 and 1920. Tephra analysis has yielded evidence for many eruptions during the Pleistocene and Holocene, including two large explosive eruptions during the early and mid-Holocene. These eruptions deposited significant tephra layers over Patagonia and Tierra del Fuego.