Handwriting recognition : AI effect Applications of artificial intelligence Electronic signature eScriptorium Handwriting movement analysis Intelligent character recognition Live Ink Character Recognition Solution Neocognitron Optical character recognition Pen computing Sketch recognition Stylus (computing) Tablet PC
|
Handwriting recognition : Annotated bibliography of references to gesture and pen computing Notes on the History of Pen-based Computing – video on YouTube
|
Inverted pendulum : An inverted pendulum is a pendulum that has its center of mass above its pivot point. It is unstable and falls over without additional help. It can be suspended stably in this inverted position by using a control system to monitor the angle of the pole and move the pivot point horizontally back under the center of mass when it starts to fall over, keeping it balanced. The inverted pendulum is a classic problem in dynamics and control theory and is used as a benchmark for testing control strategies. It is often implemented with the pivot point mounted on a cart that can move horizontally under control of an electronic servo system; this is called a cart and pole apparatus. Most applications limit the pendulum to 1 degree of freedom by affixing the pole to an axis of rotation. Whereas a normal pendulum is stable when hanging downward, an inverted pendulum is inherently unstable and must be actively balanced in order to remain upright; this can be done by applying a torque at the pivot point, by moving the pivot point horizontally as part of a feedback system, by changing the rate of rotation of a mass mounted on the pendulum on an axis parallel to the pivot axis (thereby generating a net torque on the pendulum), or by oscillating the pivot point vertically. A simple demonstration of moving the pivot point in a feedback system is achieved by balancing an upturned broomstick on the end of one's finger. A second type of inverted pendulum is a tiltmeter for tall structures, which consists of a wire anchored to the bottom of the foundation and attached to a float in a pool of oil at the top of the structure, with devices for measuring movement of the neutral position of the float away from its original position.
|
Inverted pendulum : A pendulum with its bob hanging directly below the support pivot is at a stable equilibrium point, where it remains motionless because there is no torque on the pendulum. If displaced from this position, it experiences a restoring torque that returns it toward the equilibrium position. A pendulum with its bob in an inverted position, supported on a rigid rod directly above the pivot, 180° from its stable equilibrium position, is at an unstable equilibrium point. At this point again there is no torque on the pendulum, but the slightest displacement away from this position causes a gravitational torque on the pendulum that accelerates it away from equilibrium, causing it to fall over. In order to stabilize a pendulum in this inverted position, a feedback control system can be used, which monitors the pendulum's angle and moves the position of the pivot point sideways when the pendulum starts to fall over, to keep it balanced. The inverted pendulum is a classic problem in dynamics and control theory and is widely used as a benchmark for testing control algorithms (PID controllers, state-space representation, neural networks, fuzzy control, genetic algorithms, etc.). Variations on this problem include multiple links, commanding the motion of the cart while keeping the pendulum balanced, and balancing the cart-pendulum system on a see-saw. The inverted pendulum is related to rocket or missile guidance, where the center of gravity is located behind the center of drag, causing aerodynamic instability. A similar problem is illustrated by simple robotics in the form of a balancing cart. Balancing an upturned broomstick on the end of one's finger is a simple demonstration, and the problem is solved by self-balancing personal transporters such as the Segway PT, the self-balancing hoverboard and the self-balancing unicycle. Another way that an inverted pendulum may be stabilized, without any feedback or control mechanism, is by oscillating the pivot rapidly up and down. This is called Kapitza's pendulum. If the oscillation is sufficiently strong (in terms of its acceleration and amplitude) then the inverted pendulum can recover from perturbations in a strikingly counterintuitive manner. If the driving point moves in simple harmonic motion, the pendulum's motion is described by the Mathieu equation.
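To make the feedback idea concrete, here is a minimal Python simulation sketch of balancing by moving the pivot: it uses the small-angle linearized dynamics θ'' = (g/L)θ − a/L and a hand-tuned proportional-derivative feedback law. The gains and parameters are illustrative assumptions, not values from any particular apparatus.

```python
# Sketch: stabilizing a linearized inverted pendulum by accelerating the pivot.
g, L = 9.81, 1.0          # gravity (m/s^2), pole length (m)
dt = 0.001                # integration step (s)
theta, omega = 0.05, 0.0  # initial tilt (rad) and angular velocity (rad/s)
Kp, Kd = 40.0, 8.0        # assumed PD gains (Kp > g makes this loop stable)

for _ in range(5000):
    a = Kp * theta + Kd * omega      # pivot acceleration: push under the fall
    alpha = (g / L) * theta - a / L  # linearized angular acceleration
    omega += alpha * dt              # forward Euler integration
    theta += omega * dt

print(f"final tilt: {theta:.6f} rad")  # decays toward zero
```

With the feedback disabled (a = 0), the same loop shows the tilt growing exponentially, which is exactly the instability described above.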
|
Inverted pendulum : The equations of motion of inverted pendulums depend on what constraints are placed on the motion of the pendulum. Inverted pendulums can be created in various configurations, resulting in a number of equations of motion describing the behavior of the pendulum.
|
Inverted pendulum : Achieving stability of an inverted pendulum has become a common engineering challenge for researchers. There are different variations of the inverted pendulum on a cart, ranging from a rod on a cart to a multiple segmented inverted pendulum on a cart. Another variation places the inverted pendulum's rod or segmented rod on the end of a rotating assembly. In both (the cart and the rotating system), the inverted pendulum can fall only in a plane. The inverted pendulums in these projects can either be required to maintain balance only after an equilibrium position is achieved, or be required to achieve equilibrium by themselves. Another platform is a two-wheeled balancing inverted pendulum. The two-wheeled platform has the ability to spin on the spot, offering a great deal of maneuverability. Yet another variation balances on a single point. A spinning top, a unicycle, or an inverted pendulum atop a spherical ball all balance on a single point.
|
Inverted pendulum : Arguably the most prevalent example of a stabilized inverted pendulum is a human being. A person standing upright acts as an inverted pendulum with their feet as the pivot, and without constant small muscular adjustments would fall over. The human nervous system contains an unconscious feedback control system, the sense of balance or righting reflex, that uses input from the eyes, proprioceptive input from the muscles and joints, and orientation input from the vestibular system, consisting of the three semicircular canals in the inner ear and two otolith organs, to make continual small adjustments to the skeletal muscles to keep us standing upright. Walking, running, or balancing on one leg puts additional demands on this system. Certain diseases and alcohol or drug intoxication can interfere with this reflex, causing dizziness and disequilibration, an inability to stand upright. A field sobriety test used by police to test drivers for the influence of alcohol or drugs tests this reflex for impairment. Some simple examples include balancing brooms or meter sticks by hand. The inverted pendulum has been employed in various devices, and trying to balance an inverted pendulum presents a unique engineering problem for researchers. The inverted pendulum was a central component in the design of several early seismometers due to its inherent instability, which results in a measurable response to any disturbance. The inverted pendulum model has been used in some recent personal transporters, such as two-wheeled self-balancing scooters and single-wheeled electric unicycles. These devices are kinematically unstable and use an electronic feedback servo system to keep them upright. Swinging a pendulum on a cart into its inverted state is considered a traditional optimal control toy problem/benchmark.
|
Inverted pendulum : Double inverted pendulum Inertia wheel pendulum Furuta pendulum iBOT Humanoid robot Ballbot
|
Inverted pendulum : Liberzon, D. (2003). Switching in Systems and Control. Springer. pp. 89ff.
|
Inverted pendulum : Franklin, Gene F.; et al. (2005). Feedback Control of Dynamic Systems (5th ed.). Prentice Hall. ISBN 0-13-149930-0.
|
Inverted pendulum : YouTube - Inverted Pendulum - Demo #3 YouTube - inverted pendulum YouTube - Double Pendulum on a Cart YouTube - Triple Pendulum on a Cart A dynamical simulation of an inverse pendulum on an oscillatory base Archived 2019-09-13 at the Wayback Machine Inverted Pendulum: Analysis, Design, and Implementation Non-Linear Swing-Up and Stabilizing Control of an Inverted Pendulum System Stabilization fuzzy control of inverted pendulum systems Blog post on inverted pendulum, with Python code Equations of Motion for the Cart and Pole Control Task
|
Speech recognition : Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis. Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent" systems. Systems that use training are called "speaker dependent". Speech recognition applications include voice user interfaces such as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"), domotic appliance control, search key words (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics, speech-to-text processing (e.g., word processors or emails), and aircraft (usually termed direct voice input). Automatic pronunciation assessment is used in education such as for spoken language learning. The term voice recognition or speaker identification refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice or it can be used to authenticate or verify the identity of a speaker as part of a security process. From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances in deep learning and big data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems.
|
Speech recognition : The key areas of growth were: vocabulary size, speaker independence, and processing speed.
|
Speech recognition : Both acoustic modeling and language modeling are important parts of modern statistically based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such as document classification or statistical machine translation.
|
Speech recognition : The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate (WER), whereas speed is measured with the real-time factor. Other measures of accuracy include Single Word Error Rate (SWER) and Command Success Rate (CSR). Speech recognition by machine is a very complex problem, however. Vocalizations vary in terms of accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is also distorted by background noise, echoes, and the electrical characteristics of the recording channel. The accuracy of speech recognition may vary with the following: vocabulary size and confusability; speaker dependence versus independence; isolated, discontinuous, or continuous speech; task and language constraints; read versus spontaneous speech; and adverse conditions.
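As an illustration of the WER metric mentioned above, here is a minimal Python sketch computing it as the word-level Levenshtein (edit) distance divided by the number of reference words; this is the standard definition, though production toolkits add normalization details such as case folding.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(word_error_rate("call home now", "call phone now"))  # 1/3 ≈ 0.333
```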
|
Speech recognition : Cole, Ronald; Mariani, Joseph; Uszkoreit, Hans; Varile, Giovanni Battista; Zaenen, Annie; Zampolli; Zue, Victor, eds. (1997). Survey of the state of the art in human language technology. Cambridge Studies in Natural Language Processing. Vol. XII–XIII. Cambridge University Press. ISBN 978-0-521-59277-2. Junqua, J.-C.; Haton, J.-P. (1995). Robustness in Automatic Speech Recognition: Fundamentals and Applications. Kluwer Academic Publishers. ISBN 978-0-7923-9646-8. Karat, Clare-Marie; Vergo, John; Nahamoo, David (2007). "Conversational Interface Technologies". In Sears, Andrew; Jacko, Julie A. (eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications (Human Factors and Ergonomics). Lawrence Erlbaum Associates Inc. ISBN 978-0-8058-5870-9. Pieraccini, Roberto (2012). The Voice in the Machine. Building Computers That Understand Speech. The MIT Press. ISBN 978-0262016858. Pirani, Giancarlo, ed. (2013). Advanced algorithms and architectures for speech understanding. Springer Science & Business Media. ISBN 978-3-642-84341-9. Signer, Beat; Hoste, Lode (December 2013). "SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry". Proceedings of ICMI 2013. 15th International Conference on Multimodal Interaction. Sydney, Australia. Woelfel, Matthias; McDonough, John (26 May 2009). Distant Speech Recognition. Wiley. ISBN 978-0470517048.
|
Abess : abess (Adaptive Best Subset Selection, also ABESS) is a machine learning method designed to address the problem of best subset selection. It aims to determine which features or variables are crucial for optimal model performance when provided with a dataset and a prediction task. abess was introduced by Zhu in 2020; it selects the appropriate model size adaptively, eliminating the need for selecting regularization parameters. abess is applicable in various statistical and machine learning tasks, including linear regression, the single-index model, and other common predictive models. abess can also be applied in biostatistics.
|
Abess : The basic form of abess is employed to address the optimal subset selection problem in general linear regression. abess is an $\ell_0$ method; it is characterized by polynomial time complexity and by providing unbiased and consistent estimates. In the context of linear regression, assume we have $n$ independent samples $(x_i, y_i)$, $i = 1, \ldots, n$, where $x_i \in \mathbb{R}^{p \times 1}$ and $y_i \in \mathbb{R}$, and define $X = (x_1, \ldots, x_n)^\top$ and $y = (y_1, \ldots, y_n)^\top$. The following equation represents the general linear regression model: $y = X\beta + \varepsilon$. To obtain appropriate parameters $\beta$, one can consider the loss function for linear regression: $L_n^{\mathrm{LR}}(\beta; X, y) = \frac{1}{2n} \|y - X\beta\|_2^2$. In abess, the initial focus is on optimizing the loss function under the $\ell_0$ constraint, that is, the problem $\min_{\beta \in \mathbb{R}^{p \times 1}} L_n^{\mathrm{LR}}(\beta; X, y)$ subject to $\|\beta\|_0 \leq s$, where $s$ represents the desired size of the support set and $\|\beta\|_0 = \sum_{i=1}^{p} I(\beta_i \neq 0)$ is the $\ell_0$ norm of the vector. To address this optimization problem, abess iteratively exchanges an equal number of variables between the active set and the inactive set. In each iteration, the concept of sacrifice is introduced as follows. For $j$ in the active set ($j \in \hat{A}$): $\xi_j = L_n^{\mathrm{LR}}(\hat{\beta}^{\hat{A} \setminus \{j\}}) - L_n^{\mathrm{LR}}(\hat{\beta}^{\hat{A}}) = \frac{X_j^\top X_j}{2n} (\hat{\beta}_j)^2$. For $j$ in the inactive set ($j \notin \hat{A}$): $\xi_j = L_n^{\mathrm{LR}}(\hat{\beta}^{\hat{A}}) - L_n^{\mathrm{LR}}(\hat{\beta}^{\hat{A}} + \hat{t}) = \frac{X_j^\top X_j}{2n} \left( \frac{\hat{d}_j}{X_j^\top X_j / n} \right)^2$. Here are the key elements in the above equations: $\hat{\beta}^{\hat{A}}$ is the estimate of $\beta$ obtained in the previous iteration; $\hat{A}$ is the estimated active set from the previous iteration; $\hat{\beta}^{\hat{A} \setminus \{j\}}$ is a vector whose $j$-th element is set to 0 while the other elements are the same as those of $\hat{\beta}^{\hat{A}}$; $\hat{t} = \arg\min_t L_n^{\mathrm{LR}}(\hat{\beta}^{\hat{A}} + t)$, where $t$ is a vector whose elements are all 0 except the $j$-th; and $\hat{d}_j = X_j^\top (y - X\hat{\beta})/n$. The iterative process exchanges variables with the aim of minimizing the sacrifices in the active set while maximizing the sacrifices in the inactive set. This approach allows abess to efficiently search for the optimal feature subset. Finally, abess selects an appropriate $s_{\max}$, optimizes the above problem for active set sizes $s = 1, \ldots, s_{\max}$, and uses the information criterion $\mathrm{GIC} = n \log L_n^{\mathrm{LR}} + s \log p \log \log n$ to adaptively choose the appropriate active set size $s$ and obtain its corresponding abess estimator.
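The sacrifice-based exchange step can be sketched in a few lines of Python. The following is a simplified single-exchange splicing loop for a fixed support size, written directly from the equations above; `abess_like_splicing` is a hypothetical helper for illustration, not the optimized C++ implementation shipped in the abess package (which also swaps several variables at a time and tunes s by GIC).

```python
import numpy as np

def abess_like_splicing(X, y, s, max_iter=20):
    """Simplified abess-style splicing for a fixed support size s."""
    n, p = X.shape
    active = set(np.argsort(np.abs(X.T @ y))[-s:])   # screening-style init
    col_norm = (X * X).sum(axis=0) / n               # X_j^T X_j / n
    beta = np.zeros(p)
    for _ in range(max_iter):
        A = sorted(active)
        beta = np.zeros(p)
        beta[A] = np.linalg.lstsq(X[:, A], y, rcond=None)[0]  # fit on active set
        d = X.T @ (y - X @ beta) / n                 # d_j for inactive variables
        # Backward sacrifice: loss increase if an active j is dropped.
        back = {j: col_norm[j] * beta[j] ** 2 / 2 for j in A}
        # Forward sacrifice: loss decrease if an inactive j is added.
        fwd = {j: d[j] ** 2 / (2 * col_norm[j])
               for j in range(p) if j not in active}
        if not fwd:
            break
        drop = min(back, key=back.get)
        add = max(fwd, key=fwd.get)
        if fwd[add] <= back[drop]:                   # no beneficial exchange left
            break
        active = (active - {drop}) | {add}
    return beta
```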
|
Abess : The splicing algorithm in abess can be employed for subset selection in other models.
|
Abess : The abess library (version 0.4.5) is an R and Python package based on C++ algorithms. It is open-source on GitHub. The library can be used for optimal subset selection in linear regression, (multi-)classification, and censored-response models. The abess package allows parameters to be chosen in a grouped format. Information and tutorials are available on the abess homepage.
|
Abess : abess can be applied in biostatistics, for example to assess the severity of COVID-19 patients, study antibiotic resistance in Mycobacterium tuberculosis, explore prognostic factors in neck pain, and develop prediction models for severe pain in patients after percutaneous nephrolithotomy. abess can also be applied to gene selection. In the field of data-driven partial differential equation (PDE) discovery, Thanasutives applied abess to automatically identify parsimonious governing PDEs.
|
Accumulated local effects : Accumulated local effects (ALE) is a machine learning interpretability method.
|
Accumulated local effects : ALE uses a conditional feature distribution as an input and generates augmented data, creating more realistic data than a marginal distribution would. It ignores far out-of-distribution (outlier) values. Unlike partial dependence plots and marginal plots, ALE is not defeated by the presence of correlated predictors. It analyzes differences in predictions rather than the predictions themselves: it calculates the average of the differences in model predictions over the augmented data, instead of the average of the predictions.
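The following short Python sketch shows one way to compute a first-order ALE curve for a single numeric feature, following the description above: predictions are differenced within quantile bins of the feature, and the local effects are then accumulated. The function signature is an assumption for illustration, not a specific library's API (packages such as PyALE or alibi provide production versions).

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """First-order ALE for column `feature`; `predict` maps (n, p) -> (n,)."""
    x = X[:, feature]
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))  # equal-mass bins
    bin_idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    effects = np.zeros(n_bins)
    for b in range(n_bins):
        rows = X[bin_idx == b]
        if len(rows) == 0:
            continue                      # empty bin: no local effect estimate
        lo, hi = rows.copy(), rows.copy()
        lo[:, feature] = edges[b]         # move bin members to the lower edge
        hi[:, feature] = edges[b + 1]     # ...and to the upper edge
        effects[b] = np.mean(predict(hi) - predict(lo))  # averaged differences
    ale = np.cumsum(effects)              # accumulate the local effects
    return edges, ale - ale.mean()        # center the curve, as is conventional
```

Because each difference is taken only on rows that actually fall in the bin, the model is never evaluated far outside the data distribution, which is what distinguishes ALE from a partial dependence plot.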
|
Accumulated local effects : Given a model that predicts house prices based on the distance from the city center and the size of the building area, ALE compares the differences in predictions for houses of different sizes. The result separates the impact of the size from otherwise correlated features.
|
Accumulated local effects : Defining evaluation windows is subjective. High correlations between features can defeat the technique. ALE requires more observations than PDP, and more uniformly distributed ones, so that the conditional distribution can be reliably determined. The technique may produce inadequate results if the data is highly sparse, which is more common with high-dimensional data (curse of dimensionality).
|
Accumulated local effects : Interpretability (machine learning)
|
Accumulated local effects : Munn, Michael (2022). Explainable AI for Practitioners. O'Reilly Media, Incorporated. ISBN 978-1-0981-1910-2. OCLC 1350433516.
|
Algorithms of Oppression : Algorithms of Oppression: How Search Engines Reinforce Racism is a 2018 book by Safiya Umoja Noble in the fields of information science, machine learning, and human-computer interaction.
|
Algorithms of Oppression : Noble earned an undergraduate degree in sociology from California State University, Fresno in the 1990s, then worked in advertising and marketing for fifteen years before going to the University of Illinois Urbana-Champaign for a Master of Library and Information Science degree in the early 2000s. The book's first inspiration came in 2011, when Noble Googled the phrase "black girls" and saw results for pornography on the first page. Noble's doctoral thesis, completed in 2012, was titled Searching for Black Girls: Old Traditions in New Media. At this time, Noble thought of the title "Algorithms of Oppression" for the eventual book. Noble became an assistant professor at University of California, Los Angeles in 2014. In 2017, she published an article on racist and sexist bias in search engines in The Chronicle of Higher Education. The book was published by New York University Press on February 20, 2018. By this time, changes to Google's algorithm had changed the most common results for a search of "black girls," though the underlying biases remain influential.
|
Algorithms of Oppression : Algorithms of Oppression addresses the relationship between search engines and discriminatory biases. She takes a Black intersectional feminist approach. Intersectional feminism takes into account the experiences of women of different races and sexualities when discussing the oppression of women. Noble argues that search algorithms are racist and perpetuate societal problems because they reflect the negative biases that exist in society and the people who create them. Noble rejects the idea that search engines are inherently neutral, explaining how algorithms in search engines privilege whiteness by depicting positive cues when key words like “white” are searched as opposed to “Asian,” “Hispanic,” or “Black.” Her main example surrounds the search results of "Black girls" versus "white girls" and the biases that are depicted in the results.
|
Algorithms of Oppression : Chapter 1 explores how Google search's auto suggestion feature is demoralizing, discussing example searches for terms like "black girls" (which returned pornography) and "Jew" (which returned anti-Semitic pages). Noble coins the term algorithmic oppression to describe data failures specific to people of color, women, and other marginalized groups. She discusses how Google could use human curation to eliminate slurs or inappropriate images from the first page of results, and criticizes Google's policy that unless pages are unlawful, Google will allow its algorithm to act without human curation. She identifies AdWords as a hypocritical use of curation to promote commercial interests, since it allows advertisers to pay for controversial or less-relevant topics to appear above the algorithm's selections. Chapter 2 examines Google's claims that they are not responsible for the content of search results, instead blaming the content creators and searchers. Noble highlights aspects of the algorithm which normalize whiteness and men. She argues that Google hides behind their algorithm, while reinforcing social inequalities and stereotypes for Black, Latina, and Asian women. Chapter 3 discusses how Google's search engine combines multiple sources to create threatening narratives about minorities. She explains a case study where she searched “black on white crimes” on Google. Noble highlights that the sources and information that were found after the search pointed to conservative sources that skewed information. These sources displayed racist and anti-black information from white supremacist sources. Ultimately, she believes this readily-available, false information fueled the actions of white supremacist Dylann Roof, who committed a massacre. Chapter 4 examines examples of women being shamed due to their activity in the porn industry, regardless if it was consensual or not. She critiques the internet's ability to influence one's future and compares U.S. privacy laws to those of the European Union, which provides citizens with “the right to forget or be forgotten.” She argues that these breaches of privacy disproportionately affect women and people of color. Chapter 5 moves away from Google and onto other information sources deemed credible and neutral. Noble says that prominent libraries, including the Library of Congress, reinforce hegemonies such as whiteness, heteronormativity, and patriarchy. As an example, she discusses a two-year effort to change the Library of Congress's catalog terminology from "illegal aliens" to "noncitizen" or "unauthorised immigrants". Noble argues all digital search engines reinforce discriminatory biases, highlighting how interconnected technology and society are. Chapter 6 discusses possible solutions for the problem of algorithmic bias. She insists that governments and corporations bear the most responsibility to reform their systemic issues, and rejects the neoliberal argument that algorithmic biases will disappear if more women and racial minorities enter the industry as software engineers. She critiques a mindset she calls “big-data optimism,” or the notion that large institutions solve inequalities. She argues that policies enacted by local and federal governments could reduce Google's “information monopoly” and regulate the ways in which search engines filter their results. 
To illustrate this point, she uses the example of a Black hairdresser whose business faces setbacks because the review site Yelp has used biased advertising practices and searching strategies against her. She closes the chapter by calling upon the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to “regulate decency,” or to limit the amount of racist, homophobic, or prejudiced rhetoric on the Internet. She urges the public to shy away from “colorblind” ideologies toward race, arguing that these erase the struggles faced by racial minorities. The conclusion synthesizes the previous chapters, and challenges the idea that the internet is a fully democratic or post-racial environment.
|
Algorithms of Oppression : Critical reception for Algorithms of Oppression has been largely positive. In the Los Angeles Review of Books, Emily Drabinski writes, "What emerges from these pages is the sense that Google’s algorithms of oppression comprise just one of the hidden infrastructures that govern our daily lives, and that the others are likely just as hard-coded with white supremacy and misogyny as the one that Noble explores." In PopMatters, Hans Rollman writes that Algorithms of Oppression "demonstrate[s] that search engines, and in particular Google, are not simply imperfect machines, but systems designed by humans in ways that replicate the power structures of the western countries where they are built, complete with all the sexism and racism that are built into those structures." In Booklist, reviewer Lesley Williams states, "Noble’s study should prompt some soul-searching about our reliance on commercial search engines and about digital social equity." In early February 2018, Algorithms of Oppression received press attention when the official Twitter account for the Institute of Electrical and Electronics Engineers expressed criticism of the book, saying that the results of a Google search suggested in its blurb did not match Noble's predictions. IEEE's outreach historian, Alexander Magoun, later revealed that he had not read the book, and issued an apology.
|
Algorithms of Oppression : Algorithmic bias Techlash
|
Algorithms of Oppression : Algorithms of Oppression: How Search Engines Reinforce Racism
|
Almeida–Pineda recurrent backpropagation : Almeida–Pineda recurrent backpropagation is an extension to the backpropagation algorithm that is applicable to recurrent neural networks. It is a type of supervised learning. It was described somewhat cryptically in Richard Feynman's senior thesis, and rediscovered independently in the context of artificial neural networks by both Fernando Pineda and Luis B. Almeida. A recurrent neural network for this algorithm consists of some input units, some output units and possibly some hidden units. For a given set of (input, target) states, the network is trained to settle into a stable activation state with the output units in the target state, based on a given input state clamped on the input units.
|
Bootstrap aggregating : Bootstrap aggregating, also called bagging (from bootstrap aggregating) or bootstrapping, is a machine learning (ML) ensemble meta-algorithm designed to improve the stability and accuracy of ML classification and regression algorithms. It also reduces variance and overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the ensemble averaging approach.
|
Bootstrap aggregating : Given a standard training set D of size n, bagging generates m new training sets D_i, each of size n′, by sampling from D uniformly and with replacement. By sampling with replacement, some observations may be repeated in each D_i. If n′ = n, then for large n the set D_i is expected to contain the fraction (1 − 1/e) (≈63.2%) of the unique samples of D, the rest being duplicates. This kind of sample is known as a bootstrap sample. Sampling with replacement ensures each bootstrap sample is independent from its peers, as it does not depend on previously chosen samples. Then, m models are fitted using the above m bootstrap samples and combined by averaging the output (for regression) or voting (for classification). Bagging leads to "improvements for unstable procedures", which include, for example, artificial neural networks, classification and regression trees, and subset selection in linear regression. Bagging was shown to improve preimage learning. On the other hand, it can mildly degrade the performance of stable methods such as k-nearest neighbors.
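The 63.2% figure follows from the probability that a given observation is never drawn in n tries, (1 − 1/n)^n → 1/e; a few lines of Python using NumPy confirm it empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sample = rng.integers(0, n, size=n)    # bootstrap: n draws with replacement
unique_fraction = len(np.unique(sample)) / n
print(f"{unique_fraction:.3f}")        # ≈ 0.632 = 1 - 1/e
```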
|
Bootstrap aggregating : While the techniques described above utilize random forests and bagging (otherwise known as bootstrapping), there are certain techniques that can be used in order to improve their execution and voting time, their prediction accuracy, and their overall performance. The following are key steps in creating an efficient random forest: Specify the maximum depth of trees: Instead of allowing the random forest to continue until all nodes are pure, it is better to cut it off at a certain point in order to further decrease chances of overfitting. Prune the dataset: Using an extremely large dataset may create results that are less indicative of the data provided than a smaller set that more accurately represents what is being focused on. Continue pruning the data at each node split rather than just in the original bagging process. Decide on accuracy or speed: Depending on the desired results, increasing or decreasing the number of trees within the forest can help. Increasing the number of trees generally provides more accurate results while decreasing the number of trees will provide quicker results.
|
Bootstrap aggregating : For classification, use a training set D, an inducer I, and the number of bootstrap samples m as input, and generate a classifier C* as output: create m new training sets D_i from D by sampling with replacement; build a classifier C_i from each set D_i using I; finally, the classifier C* is generated from the previously created set of classifiers C_1, …, C_m by plurality vote: for an input x, C*(x) is the classification predicted most often by the sub-classifiers, that is, C*(x) = argmax_{y ∈ Y} #{i : C_i(x) = y}.
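A minimal Python sketch of this procedure is given below; it assumes scikit-learn's DecisionTreeClassifier as the inducer I, though any base learner with fit and predict methods would serve.

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier  # assumed base inducer

def bagging_fit(X, y, m=25, seed=0):
    """Fit m classifiers, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)         # sample with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Classify each row by plurality vote over the sub-classifiers."""
    votes = np.array([mdl.predict(X) for mdl in models])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```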
|
Bootstrap aggregating : To illustrate the basic principles of bagging, below is an analysis on the relationship between ozone and temperature (data from Rousseeuw and Leroy (1986), analysis done in R). The relationship between temperature and ozone appears to be nonlinear in this dataset, based on the scatter plot. To mathematically describe this relationship, LOESS smoothers (with bandwidth 0.5) are used. Rather than building a single smoother for the complete dataset, 100 bootstrap samples were drawn. Each sample is composed of a random subset of the original data and maintains a semblance of the master set's distribution and variability. For each bootstrap sample, a LOESS smoother was fit. Predictions from these 100 smoothers were then made across the range of the data. The black lines represent these initial predictions. The lines lack agreement in their predictions and tend to overfit their data points: evident by the wobbly flow of the lines. By taking the average of 100 smoothers, each corresponding to a subset of the original dataset, we arrive at one bagged predictor (red line). The red line's flow is stable and does not overly conform to any data point(s).
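A rough Python re-creation of this experiment is sketched below, assuming the LOESS implementation from statsmodels; the arrays x and y stand in for the Rousseeuw and Leroy ozone/temperature measurements, which are not reproduced here.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def bagged_loess(x, y, n_boot=100, frac=0.5, seed=0):
    """Average LOESS smoothers fit to bootstrap samples (the red line)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(x.min(), x.max(), 200)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))  # one bootstrap sample
        fit = lowess(y[idx], x[idx], frac=frac)     # sorted (x, yhat) pairs
        preds.append(np.interp(grid, fit[:, 0], fit[:, 1]))
    return grid, np.mean(np.array(preds), axis=0)   # the bagged predictor
```

Plotting each row of preds reproduces the wobbly black lines; their mean is the stable bagged smoother.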
|
Bootstrap aggregating : Advantages: Many weak learners aggregated together typically outperform a single learner over the entire set, and the aggregate overfits less. Bagging reduces variance for high-variance, low-bias weak learners, which can improve efficiency (statistics). It can be performed in parallel, as each separate bootstrap sample can be processed on its own before aggregation. Disadvantages: For a weak learner with high bias, bagging will also carry high bias into its aggregate. There is a loss of interpretability of the model. Bagging can be computationally expensive depending on the dataset.
|
Bootstrap aggregating : The concept of bootstrap aggregating is derived from the concept of bootstrapping which was developed by Bradley Efron. Bootstrap aggregating was proposed by Leo Breiman who also coined the abbreviated term "bagging" (bootstrap aggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. He argued, "If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy".
|
Bootstrap aggregating : Boosting (machine learning) Bootstrapping (statistics) Cross-validation (statistics) Out-of-bag error Random forest Random subspace method (attribute bagging) Resampled efficient frontier Predictive analysis: Classification and regression trees
|
Bootstrap aggregating : Breiman, Leo (1996). "Bagging predictors". Machine Learning. 24 (2): 123–140. CiteSeerX 10.1.1.32.9399. doi:10.1007/BF00058655. S2CID 47328136. Alfaro, E.; Gámez, M.; García, N. (2012). "adabag: An R package for classification with AdaBoost.M1, AdaBoost-SAMME and Bagging". Kotsiantis, Sotiris (2014). "Bagging and boosting variants for handling classifications problems: a survey". Knowledge Eng. Review. 29 (1): 78–100. doi:10.1017/S0269888913000313. S2CID 27301684. Boehmke, Bradley; Greenwell, Brandon (2019). "Bagging". Hands-On Machine Learning with R. Chapman & Hall. pp. 191–202. ISBN 978-1-138-49568-5.
|
Characteristic samples : Characteristic samples is a concept in the field of grammatical inference, related to passive learning. In passive learning, an inference algorithm I is given a set S of pairs of strings and labels, and returns a representation R that is consistent with S. Characteristic samples consider the scenario where the goal is not only to find a representation consistent with S, but to find a representation that recognizes a specific target language. A characteristic sample of a language L is a set of pairs of the form (s, l(s)) where: l(s) = 1 if and only if s ∈ L, and l(s) = −1 if and only if s ∉ L. Given the characteristic sample S, the output of I on it is a representation R, e.g. an automaton, that recognizes L.
|
Characteristic samples : There are some classes that do not have polynomially sized characteristic samples. For example, from the first theorem in the Related theorems section, it has been shown that the following classes of languages do not have polynomially sized characteristic samples: CFG, the class of context-free grammar languages over Σ of cardinality larger than 1; LING, the class of linear grammar languages over Σ of cardinality larger than 1; SDG, the class of simple deterministic grammar languages; and NFA, the class of nondeterministic finite automaton languages.
|
Characteristic samples : Classes of representations that have characteristic samples relate to the following learning paradigms:
|
Characteristic samples : Grammar induction Passive learning Induction of regular languages Deterministic finite automaton
|
Constructing skill trees : Constructing skill trees (CST) is a hierarchical reinforcement learning algorithm which can build skill trees from a set of sample solution trajectories obtained from demonstration. CST uses an incremental MAP (maximum a posteriori) change point detection algorithm to segment each demonstration trajectory into skills and integrate the results into a skill tree. CST was introduced by George Konidaris, Scott Kuindersma, Andrew Barto and Roderic Grupen in 2010.
|
Constructing skill trees : CST consists of three main parts: change-point detection, alignment, and merging. The main focus of CST is online change-point detection. The change-point detection algorithm is used to segment data into skills and uses the sum of discounted rewards $R_t$ as the target regression variable. Each skill is assigned an appropriate abstraction. A particle filter is used to control the computational complexity of CST. The change-point detection algorithm is implemented as follows. The data for times $t \in T$ and models $Q$ with prior $p(q \in Q)$ are given. The algorithm is assumed to be able to fit a segment from time $j+1$ to $t$ using model $q$ with the fit probability $P(j, t, q)$. A linear regression model with Gaussian noise is used to compute $P(j, t, q)$. The Gaussian noise prior has mean zero and a variance which follows $\mathrm{InverseGamma}(\frac{v}{2}, \frac{u}{2})$. The prior for each weight follows $\mathrm{Normal}(0, \sigma^2 \delta)$. The fit probability $P(j, t, q)$ is computed by the following equation: $P(j,t,q) = \frac{\pi^{-n/2}}{\delta^{m}} \, |(A+D)^{-1}|^{1/2} \, \frac{u^{v/2}}{(y+u)^{(n+v)/2}} \, \frac{\Gamma(\frac{n+v}{2})}{\Gamma(\frac{v}{2})}$. Then, CST computes the probability of a change point at time $j$ with model $q$, $P_t(j, q)$, and $P_j^{\mathrm{MAP}}$ using a Viterbi algorithm: $P_t(j,q) = (1 - G(t-j-1)) \, P(j,t,q) \, p(q) \, P_j^{\mathrm{MAP}}$ and $P_j^{\mathrm{MAP}} = \max_{i,q} \frac{P_j(i,q) \, g(j-i)}{1 - G(j-i-1)}, \; \forall j < t$. The descriptions of the parameters and variables are as follows: $A = \sum_{i=j}^{t} \Phi(x_i) \Phi(x_i)^\top$; $\Phi(x_i)$ is a vector of $m$ basis functions evaluated at state $x_i$; $y = (\sum_{i=j}^{t} R_i^2) - b^\top (A+D)^{-1} b$; $b = \sum_{i=j}^{t} R_i \Phi(x_i)$; $R_i = \sum_{j=i}^{T} \gamma^{j-i} r_j$; $\Gamma$ is the Gamma function; $n = t - j$; $m$ is the number of basis functions of model $q$; and $D$ is an $m \times m$ matrix with $\delta^{-1}$ on the diagonal and zeros elsewhere. The skill length $l$ is assumed to follow a geometric distribution with parameter $p$: $g(l) = (1-p)^{l-1} p$ and $G(l) = 1 - (1-p)^{l}$, with $p = \frac{1}{k}$, where $k$ is the expected skill length. Using the method above, CST can segment data into a skill chain. The time complexity of the change-point detection is $O(NL)$ and the storage size is $O(Nc)$, where $N$ is the number of particles, $L$ is the time of computing $P(j, t, q)$, and there are $O(c)$ change points. The next step is alignment. CST needs to align the component skills because the change points do not occur in exactly the same places across demonstrations; segmenting a second trajectory after the first therefore introduces a bias on the location of the change points in the second trajectory, and this bias follows a mixture of Gaussians. The last step is merging. CST merges skill chains into a skill tree by allocating the same skill to a pair of trajectory segments. All trajectories have the same goal, and two chains are merged by starting at their final segments. If two segments are statistically similar, they are merged. This procedure is repeated until it fails to merge a pair of skill segments. The fit probabilities $P(j, t, q)$ are used to determine whether a pair of trajectory segments is modeled better as one skill or as two different skills.
|
Constructing skill trees : The following pseudocode describes the change-point detection algorithm:

particles := []
// Process each incoming data point
for t = 1:T do
    // Compute fit probabilities for all particles
    for p ∈ particles do
        p_tjq := (1 − G(t − p.pos − 1)) × p.fit_prob × model_prior(p.model) × p.prev_MAP
        p.MAP := p_tjq × g(t − p.pos) / (1 − G(t − p.pos − 1))
    end
    // Filter if necessary
    if the number of particles ≥ N then
        particles := particle_filter(p.MAP, M)
    end
    // Determine the Viterbi path
    if t = 1 then
        max_path := []
        max_MAP := 1/|Q|
    else
        max_particle := argmax_p p.MAP
        max_path := max_particle.path ∪ max_particle
        max_MAP := max_particle.MAP
    end
    // Create new particles for a changepoint at time t
    for q ∈ Q do
        new_p := create_particle(model=q, pos=t, prev_MAP=max_MAP, path=max_path)
        particles := particles ∪ new_p
    end
    // Update all particles
    for p ∈ particles do
        p := update_particle(current_state, current_reward, p)
    end
end
// Return the most likely path to the final point
return max_path

function update_particle(current_state, current_reward, particle) is
    p := particle
    r_t := current_reward
    // Initialization
    if t = 0 then
        p.A := zero_matrix(p.m, p.m)
        p.b := zero_vector(p.m)
        p.z := zero_vector(p.m)
        p.sum_r := 0
        p.tr1 := 0
        p.tr2 := 0
    end if
    // Compute the basis function vector for the current state
    Φ_t := p.Φ(current_state)
    // Update sufficient statistics
    p.A := p.A + Φ_t Φ_t^T
    p.z := γ p.z + Φ_t
    p.b := p.b + r_t p.z
    p.tr1 := 1 + γ² p.tr1
    p.sum_r := p.sum_r + r_t² p.tr1 + 2 γ r_t p.tr2
    p.tr2 := γ p.tr2 + r_t p.tr1
    p.fit_prob := compute_fit_prob(p, v, u, δ, γ)
|
Constructing skill trees : CST assumes that the demonstrated skills form a tree, that the domain reward function is known, and that the best model for merging a pair of skills is the model selected for representing both individually.
|
Constructing skill trees : CST is a much faster learning algorithm than skill chaining. CST can be applied to learning higher-dimensional policies. Even unsuccessful episodes can improve skills. Skills acquired using agent-centric features can be reused for other problems.
|
Constructing skill trees : CST has been used to acquire skills from human demonstration in the PinBall domain. It has also been used to acquire skills from human demonstration on a mobile manipulator.
|
Constructing skill trees : Prefrontal cortex basal ganglia working memory State–action–reward–state–action Sammon Mapping
|
Constructing skill trees : Konidaris, George; Scott Kuindersma; Andrew Barto; Roderic Grupen (2010). "Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories". Advances in Neural Information Processing Systems 23. Konidaris, George; Andrew Barto (2009). "Skill discovery in continuous reinforcement learning domains using skill chaining". Advances in Neural Information Processing Systems 22. Fearnhead, Paul; Zhen Liu (2007). "On-line Inference for Multiple Change Points". Journal of the Royal Statistical Society.
|
Curriculum learning : Curriculum learning is a technique in machine learning in which a model is trained on examples of increasing difficulty, where the definition of "difficulty" may be provided externally or discovered as part of the training process. This is intended to attain good performance more quickly, or to converge to a better local optimum if the global optimum is not found.
|
Curriculum learning : Most generally, curriculum learning is the technique of successively increasing the difficulty of examples in the training set that is presented to a model over multiple training iterations. This can produce better results than exposing the model to the full training set immediately under some circumstances; most typically, when the model is able to learn general principles from easier examples, and then gradually incorporate more complex and nuanced information as harder examples are introduced, such as edge cases. This has been shown to work in many domains, most likely as a form of regularization. There are several major variations in how the technique is applied: A concept of "difficulty" must be defined. This may come from human annotation or an external heuristic; for example in language modeling, shorter sentences might be classified as easier than longer ones. Another approach is to use the performance of another model, with examples accurately predicted by that model being classified as easier (providing a connection to boosting). Difficulty can be increased steadily or in distinct epochs, and in a deterministic schedule or according to a probability distribution. This may also be moderated by a requirement for diversity at each stage, in cases where easier examples are likely to be disproportionately similar to each other. Applications must also decide the schedule for increasing the difficulty. Simple approaches may use a fixed schedule, such as training on easy examples for half of the available iterations and then all examples for the second half. Other approaches use self-paced learning to increase the difficulty in proportion to the performance of the model on the current set. Since curriculum learning only concerns the selection and ordering of training data, it can be combined with many other techniques in machine learning. The success of the method assumes that a model trained for an easier version of the problem can generalize to harder versions, so it can be seen as a form of transfer learning. Some authors also consider curriculum learning to include other forms of progressively increasing complexity, such as increasing the number of model parameters. It is frequently combined with reinforcement learning, such as learning a simplified version of a game first. Some domains have shown success with anti-curriculum learning: training on the most difficult examples first. One example is the ACCAN method for speech recognition, which trains on the examples with the lowest signal-to-noise ratio first.
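As a concrete sketch of the simplest variant described above (a fixed schedule over externally supplied difficulty scores), the Python generator below yields progressively larger, harder training subsets; the staging scheme and parameter names are illustrative assumptions rather than any canonical recipe.

```python
import numpy as np

def curriculum_schedule(examples, difficulty, n_stages=4, epochs_per_stage=2, seed=0):
    """Yield one epoch's worth of training data at a time, easiest first."""
    rng = np.random.default_rng(seed)
    order = np.argsort(difficulty)                      # easy -> hard
    for stage in range(1, n_stages + 1):
        cutoff = int(len(examples) * stage / n_stages)  # grow the pool each stage
        pool = [examples[i] for i in order[:cutoff]]
        for _ in range(epochs_per_stage):
            rng.shuffle(pool)                           # shuffle within the stage
            yield list(pool)
```

A self-paced variant would replace the fixed cutoff with one driven by the model's current performance, and an anti-curriculum variant would simply reverse the sort order.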
|
Curriculum learning : The term "curriculum learning" was introduced by Yoshua Bengio et al. in 2009, with reference to the psychological technique of shaping in animals and structured education for humans: beginning with the simplest concepts and then building on them. The authors also note that the application of this technique in machine learning has its roots in the early study of neural networks, such as Jeffrey Elman's 1993 paper Learning and development in neural networks: the importance of starting small. Bengio et al. showed good results for problems in image classification, such as identifying geometric shapes with progressively more complex forms, and language modeling, such as training with a gradually expanding vocabulary. They conclude that, for curriculum strategies, "their beneficial effect is most pronounced on the test set", suggesting good generalization. The technique has since been applied to many other domains: natural language processing (part-of-speech tagging, intent detection, sentiment analysis, machine translation, speech recognition); image recognition (facial recognition, object detection); reinforcement learning (game-playing); graph learning; and matrix factorization.
|
Curriculum learning : Curriculum Learning: A Survey A Survey on Curriculum Learning Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey Curriculum learning at IEEE Xplore
|
Diffusion map : Diffusion maps is a dimensionality reduction or feature extraction algorithm introduced by Coifman and Lafon which computes a family of embeddings of a data set into Euclidean space (often low-dimensional) whose coordinates can be computed from the eigenvectors and eigenvalues of a diffusion operator on the data. The Euclidean distance between points in the embedded space is equal to the "diffusion distance" between probability distributions centered at those points. Different from linear dimensionality reduction methods such as principal component analysis (PCA), diffusion maps are part of the family of nonlinear dimensionality reduction methods which focus on discovering the underlying manifold that the data has been sampled from. By integrating local similarities at different scales, diffusion maps give a global description of the data-set. Compared with other methods, the diffusion map algorithm is robust to noise perturbation and computationally inexpensive.
|
Diffusion map : Following Coifman and Lafon, diffusion maps can be defined in four steps.
|
Diffusion map : The basic algorithm framework of diffusion map is as follows:
Step 1. Given the similarity matrix L.
Step 2. Normalize the matrix according to parameter α: $L^{(\alpha)} = D^{-\alpha} L D^{-\alpha}$.
Step 3. Form the normalized matrix $M = (D^{(\alpha)})^{-1} L^{(\alpha)}$.
Step 4. Compute the k largest eigenvalues of $M^t$ and the corresponding eigenvectors.
Step 5. Use the diffusion map to get the embedding $\Psi_t$.
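A compact NumPy sketch of these steps is given below, assuming a Gaussian kernel on pairwise distances to build the similarity matrix L (the kernel scale epsilon is an assumed parameter, not part of the framework above).

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, alpha=0.5, k=2, t=1):
    """Steps 1-5 above; returns an (n, k) diffusion embedding of rows of X."""
    # Step 1: similarity matrix L from a Gaussian kernel.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    L = np.exp(-sq / epsilon)
    # Step 2: alpha-normalization, L_alpha = D^-alpha L D^-alpha.
    d = L.sum(axis=1)
    L_alpha = L / np.outer(d ** alpha, d ** alpha)
    # Step 3: row-normalize into a Markov (diffusion) matrix M.
    M = L_alpha / L_alpha.sum(axis=1, keepdims=True)
    # Step 4: leading eigenpairs of M; M^t shares the same eigenvectors.
    vals, vecs = np.linalg.eig(M)
    idx = np.argsort(-vals.real)[1:k + 1]   # skip the trivial eigenvalue 1
    # Step 5: embedding Psi_t scales each eigenvector by its eigenvalue^t.
    return (vals.real[idx] ** t) * vecs.real[:, idx]
```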
|
Diffusion map : Nadler et al. showed how to design a kernel that reproduces the diffusion induced by a Fokker–Planck equation. They also explained that, when the data approximate a manifold, one can recover the geometry of this manifold by computing an approximation of the Laplace–Beltrami operator. This computation is completely insensitive to the distribution of the points and therefore provides a separation of the statistics and the geometry of the data. Since diffusion maps give a global description of the data-set, they can measure the distances between pairs of sample points in the manifold in which the data is embedded. Applications based on diffusion maps include face recognition, spectral clustering, low-dimensional representation of images, image segmentation, 3D model segmentation, speaker verification and identification, sampling on manifolds, anomaly detection, image inpainting, revealing brain resting-state network organization and so on. Furthermore, the diffusion maps framework has been productively extended to complex networks, revealing a functional organisation of networks which differs from the purely topological or structural one.
|
Diffusion map : Nonlinear dimensionality reduction Spectral clustering
|
Dominance-based rough set approach : The dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński. The main change compared with classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits one to deal with inconsistencies typical of the consideration of criteria and preference-ordered decision classes.
|
Dominance-based rough set approach : Multicriteria classification (sorting) is one of the problems considered within MCDA and can be stated as follows: given a set of objects evaluated by a set of criteria (attributes with preference-ordered domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of an object's evaluations on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem of classification; however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference-ordered. The problem of multicriteria classification is also referred to as the ordinal classification problem with monotonicity constraints and often appears in real-life applications when ordinal and monotone properties follow from the domain knowledge about the problem. As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes: bad, medium and good (notice that class good is preferred to medium and medium is preferred to bad). Each student is described by three criteria: level in Physics, Mathematics and Literature, each taking one of three possible values bad, medium and good. Criteria are preference-ordered, and improving the level in one of the subjects should not result in a worse global evaluation (class). As a more serious example, consider the classification of bank clients, from the viewpoint of bankruptcy risk, into classes safe and risky. This may involve such characteristics as "return on equity (ROE)", "return on investment (ROI)" and "return on sales (ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk. Thus, these attributes are criteria. Neglecting this information in knowledge discovery may lead to wrong conclusions.
|
Dominance-based rough set approach : On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table, in terms of decision rules. The decision rules are expressions of the form if [condition] then [consequent], which represent a form of dependency between condition criteria and decision criteria. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes; and approximate rules are generated from boundary regions. Certain rules have the following form: if $f(x, q_1) \geq r_1$ and $f(x, q_2) \geq r_2$ and … $f(x, q_p) \geq r_p$ then $x \in Cl_t^{\geq}$; or: if $f(x, q_1) \leq r_1$ and $f(x, q_2) \leq r_2$ and … $f(x, q_p) \leq r_p$ then $x \in Cl_t^{\leq}$. Possible rules have a similar syntax, however the consequent part of the rule has the form: $x$ could belong to $Cl_t^{\geq}$, or the form: $x$ could belong to $Cl_t^{\leq}$. Finally, approximate rules have the syntax: if $f(x, q_1) \geq r_1$ and $f(x, q_2) \geq r_2$ and … $f(x, q_k) \geq r_k$ and $f(x, q_{k+1}) \leq r_{k+1}$ and $f(x, q_{k+2}) \leq r_{k+2}$ and … $f(x, q_p) \leq r_p$ then $x \in Cl_s \cup Cl_{s+1} \cup \cdots \cup Cl_t$. The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table. Each decision rule should be minimal. Since a decision rule is an implication, by a minimal decision rule we understand an implication such that there is no other implication with an antecedent of at least the same weakness (in other words, a rule using a subset of elementary conditions or/and weaker elementary conditions) and a consequent of at least the same strength (in other words, a rule assigning objects to the same union or sub-union of classes). A set of decision rules is complete if it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We call a set of decision rules minimal if it is complete and non-redundant, i.e. exclusion of any rule from this set makes it non-complete. One of three induction strategies can be adopted to obtain a set of decision rules: generation of a minimal description, i.e. a minimal set of rules; generation of an exhaustive description, i.e. all rules for a given data matrix; or generation of a characteristic description, i.e. a set of rules covering relatively many objects each, however, all together not necessarily all objects from the decision table. The most popular rule induction algorithm for the dominance-based rough set approach is DOMLEM, which generates a minimal set of rules.
|
Dominance-based rough set approach : Consider the following problem of evaluating high school students. Each object (student) is described by three criteria $q_1, q_2, q_3$, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes: $Cl_1 = \{\mathrm{bad}\}$, $Cl_2 = \{\mathrm{medium}\}$ and $Cl_3 = \{\mathrm{good}\}$. Thus, the following unions of classes were approximated: $Cl_1^{\leq}$, i.e. the class of (at most) bad students; $Cl_2^{\leq}$, i.e. the class of at most medium students; $Cl_2^{\geq}$, i.e. the class of at least medium students; and $Cl_3^{\geq}$, i.e. the class of (at least) good students. Notice that the evaluations of objects $x_4$ and $x_6$ are inconsistent, because $x_4$ has better evaluations on all three criteria than $x_6$ but a worse global score. Therefore, the lower approximations of the class unions exclude the inconsistent objects: $\underline{P}(Cl_1^{\leq}) = Cl_1^{\leq} \setminus \{x_4\}$, $\underline{P}(Cl_2^{\leq}) = Cl_2^{\leq}$, $\underline{P}(Cl_2^{\geq}) = Cl_2^{\geq} \setminus \{x_6\}$ and $\underline{P}(Cl_3^{\geq}) = Cl_3^{\geq}$. Thus, only the classes $Cl_1^{\leq}$ and $Cl_2^{\geq}$ cannot be approximated precisely. Their upper approximations are $\overline{P}(Cl_1^{\leq}) = Cl_1^{\leq} \cup \{x_6\}$ and $\overline{P}(Cl_2^{\geq}) = Cl_2^{\geq} \cup \{x_4\}$, while their boundary regions are $Bn_P(Cl_1^{\leq}) = Bn_P(Cl_2^{\geq}) = \{x_4, x_6\}$. Of course, since $Cl_2^{\leq}$ and $Cl_3^{\geq}$ are approximated precisely, we have $\overline{P}(Cl_2^{\leq}) = Cl_2^{\leq}$, $\overline{P}(Cl_3^{\geq}) = Cl_3^{\geq}$ and $Bn_P(Cl_2^{\leq}) = Bn_P(Cl_3^{\geq}) = \emptyset$. The following minimal set of 10 rules can be induced from the decision table: if Physics ≤ bad then student ≤ bad; if Literature ≤ bad and Physics ≤ medium and Math ≤ medium then student ≤ bad; if Math ≤ bad then student ≤ medium; if Literature ≤ medium and Physics ≤ medium then student ≤ medium; if Math ≤ medium and Literature ≤ bad then student ≤ medium; if Literature ≥ good and Math ≥ medium then student ≥ good; if Physics ≥ good and Math ≥ good then student ≥ good; if Math ≥ good then student ≥ medium; if Physics ≥ good then student ≥ medium; if Math ≤ bad and Physics ≥ medium then student = bad ∨ medium. The last rule is approximate, while the rest are certain.
|
Dominance-based rough set approach : 4eMka2 is a decision support system for multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on the Laboratory of Intelligent Decision Support Systems (IDSS) website.
|
Dominance-based rough set approach : Rough sets Granular computing Multicriteria Decision Analysis (MCDA)
|
Dominance-based rough set approach : The International Rough Set Society Laboratory of Intelligent Decision Support Systems (IDSS) at Poznań University of Technology. Extensive list of DRSA references on the Roman Słowiński home page.
|
Dynamic time warping : In time series analysis, dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences, which may vary in speed. For instance, similarities in walking could be detected using DTW, even if one person was walking faster than the other, or if there were accelerations and decelerations during the course of an observation. DTW has been applied to temporal sequences of video, audio, and graphics data: indeed, any data that can be turned into a one-dimensional sequence can be analyzed with DTW. A well-known application has been automatic speech recognition, to cope with different speaking speeds. Other applications include speaker recognition and online signature recognition. It can also be used in partial shape matching applications. In general, DTW is a method that calculates an optimal match between two given sequences (e.g. time series) subject to certain restrictions and rules: every index from the first sequence must be matched with one or more indices from the other sequence, and vice versa; the first index from the first sequence must be matched with the first index from the other sequence (but it does not have to be its only match); the last index from the first sequence must be matched with the last index from the other sequence (but it does not have to be its only match); and the mapping of the indices from the first sequence to indices from the other sequence must be monotonically increasing, and vice versa, i.e. if j > i are indices from the first sequence, then there must not be two indices l > k in the other sequence such that index i is matched with index l and index j is matched with index k, and vice versa. We can plot each match between the sequences 1:M and 1:N as a path in an M × N matrix from (1, 1) to (M, N), such that each step is one of (0, 1), (1, 0), (1, 1). In this formulation, the number of possible matches is the Delannoy number. The optimal match is the match that satisfies all the restrictions and the rules and that has the minimal cost, where the cost is computed as the sum of absolute differences, for each matched pair of indices, between their values. The sequences are "warped" non-linearly in the time dimension to determine a measure of their similarity independent of certain non-linear variations in the time dimension. This sequence alignment method is often used in time series classification. Although DTW measures a distance-like quantity between two given sequences, it does not guarantee that the triangle inequality holds. In addition to a similarity measure between the two sequences, a so-called "warping path" is produced; by warping according to this path, the two signals may be aligned in time. The signal with an original set of points X(original), Y(original) is transformed to X(warped), Y(warped). This finds applications in genetic sequence alignment and audio synchronisation. In a related technique, sequences of varying speed may be averaged; see the section on averaging below. This is conceptually very similar to the Needleman–Wunsch algorithm.
|
Dynamic time warping : This example illustrates the implementation of the dynamic time warping algorithm when the two sequences s and t are strings of discrete symbols. For two symbols x and y, d(x, y) is a distance between the symbols, e.g. d(x, y) = |x − y|. The algorithm is implemented as a function DTWDistance(s: array [1..n], t: array [1..m]) that fills a table in which DTW[i, j] is the distance between s[1:i] and t[1:j] with the best alignment. We sometimes want to add a locality constraint. That is, we require that if s[i] is matched with t[j], then |i − j| is no larger than w, a window parameter. We can easily modify the above algorithm to add a locality constraint, giving a variant DTWDistance(s: array [1..n], t: array [1..m], w: int). However, this modification works only if |n − m| is no larger than w, i.e. the end point is within the window length from the diagonal. In order to make the algorithm work in general, the window parameter w must be adapted so that |n − m| ≤ w.
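A minimal Python sketch covering both variants (the unconstrained algorithm is the special case of an unbounded window); the absolute-difference cost is the one used above, while the function name and numeric example are illustrative:

```python
def dtw_distance(s, t, w=None):
    """DTW distance between numeric sequences s and t.

    If w is given, the locality constraint |i - j| <= w is enforced;
    w is widened to at least |len(s) - len(t)| so that the end point
    (n, m) stays reachable.
    """
    n, m = len(s), len(t)
    if w is None:
        w = max(n, m)               # no effective constraint
    w = max(w, abs(n - m))          # adapt the window
    INF = float("inf")
    # dtw[i][j] = best cost of aligning s[:i] with t[:j]
    dtw = [[INF] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(s[i - 1] - t[j - 1])
            dtw[i][j] = cost + min(dtw[i - 1][j],       # insertion
                                   dtw[i][j - 1],       # deletion
                                   dtw[i - 1][j - 1])   # match
    return dtw[n][m]

print(dtw_distance([1, 2, 3, 4], [1, 3, 4], w=1))  # 1.0
```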
|
Dynamic time warping : The DTW algorithm produces a discrete matching between existing elements of one series to another. In other words, it does not allow time-scaling of segments within the sequence. Other methods allow continuous warping. For example, Correlation Optimized Warping (COW) divides the sequence into uniform segments that are scaled in time using linear interpolation, to produce the best matching warping. The segment scaling causes potential creation of new elements, by time-scaling segments either down or up, and thus produces a more sensitive warping than DTW's discrete matching of raw elements.
|
Dynamic time warping : The time complexity of the DTW algorithm is O(NM), where N and M are the lengths of the two input sequences. The 50-year-old quadratic time bound was broken in 2016: an algorithm due to Gold and Sharir enables computing DTW in O(N²/log log N) time and space for two input sequences of length N. This algorithm can also be adapted to sequences of different lengths. Despite this improvement, it was shown that a strongly subquadratic running time of the form O(N^(2−ε)) for some ε > 0 cannot exist unless the Strong Exponential Time Hypothesis fails. While the dynamic programming algorithm for DTW requires O(NM) space in a naive implementation, the space consumption can be reduced to O(min(N, M)) using Hirschberg's algorithm.
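The linear-space idea is straightforward when only the distance, not the alignment path, is needed, since each row of the dynamic-programming table depends only on the previous row (recovering the path in linear space additionally requires Hirschberg's divide-and-conquer). A minimal sketch, keeping O(min(N, M)) space by making the shorter sequence the inner loop:

```python
def dtw_distance_lowmem(s, t):
    """DTW distance in O(min(len(s), len(t))) space (no path recovery)."""
    if len(s) < len(t):
        s, t = t, s                  # make t the shorter sequence
    INF = float("inf")
    prev = [INF] * (len(t) + 1)
    prev[0] = 0.0
    for i in range(1, len(s) + 1):
        curr = [INF] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            cost = abs(s[i - 1] - t[j - 1])
            curr[j] = cost + min(prev[j], curr[j - 1], prev[j - 1])
        prev = curr                  # only two rows are ever kept
    return prev[len(t)]
```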
|
Dynamic time warping : Fast techniques for computing DTW include PrunedDTW, SparseDTW, FastDTW, and MultiscaleDTW. A common task, retrieval of similar time series, can be accelerated by using lower bounds such as LB_Keogh, LB_Improved, or LB_Petitjean. However, the Early Abandon and Pruned DTW algorithm reduces the degree of acceleration that lower bounding provides and sometimes renders it ineffective. In a survey, Wang et al. reported slightly better results with the LB_Improved lower bound than with LB_Keogh, and found that other techniques were inefficient. Subsequent to this survey, the LB_Enhanced bound was developed, which is always tighter than LB_Keogh while also being more efficient to compute. LB_Petitjean is the tightest known lower bound that can be computed in linear time.
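As an illustration, a common formulation of LB_Keogh builds upper and lower envelopes of the query within a warping window w and charges only the parts of the candidate that fall outside them. A hedged sketch, assuming equal-length sequences and squared pointwise costs (the bound must use the same cost as the DTW variant it prunes):

```python
def lb_keogh(query, candidate, w):
    """LB_Keogh lower bound on windowed DTW(query, candidate)."""
    total = 0.0
    for i, c in enumerate(candidate):
        window = query[max(0, i - w): i + w + 1]
        lo, hi = min(window), max(window)   # lower / upper envelope at i
        if c > hi:
            total += (c - hi) ** 2          # candidate above the envelope
        elif c < lo:
            total += (lo - c) ** 2          # candidate below the envelope
    return total
```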
|
Dynamic time warping : Averaging for dynamic time warping is the problem of finding an average sequence for a set of sequences. NLAAF is an exact method to average two sequences using DTW. For more than two sequences, the problem is related to that of multiple alignment and requires heuristics. DBA is currently a reference method for averaging a set of sequences consistently with DTW. COMASA efficiently randomizes the search for the average sequence, using DBA as a local optimization process.
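A hedged sketch of one DBA-style refinement step: align every sequence to the current average with DTW, collect the values matched to each position of the average, and replace each position by the mean of its matched values; the function names are illustrative:

```python
def dtw_path(a, b):
    """Full-matrix DTW; returns the optimal alignment as (i, j) index pairs."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                      # backtrack from (n, m)
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda p: D[p[0]][p[1]])
    return path

def dba_step(average, sequences):
    """One DBA iteration: re-estimate each coordinate of the average."""
    buckets = [[] for _ in average]
    for seq in sequences:
        for i, j in dtw_path(average, seq):
            buckets[i].append(seq[j])           # values matched to position i
    return [sum(b) / len(b) for b in buckets]
```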
|
Dynamic time warping : A nearest-neighbour classifier can achieve state-of-the-art performance when using dynamic time warping as a distance measure.
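A minimal sketch of such a classifier; it assumes a dtw_distance(s, t, w) function like the one sketched earlier, and the name is illustrative:

```python
def nn1_dtw(train, query, w=None):
    """1-NN classification; train is a list of (sequence, label) pairs.
    Assumes dtw_distance(s, t, w) is defined as in the earlier sketch."""
    return min(train, key=lambda pair: dtw_distance(pair[0], query, w))[1]
```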
|
Dynamic time warping : Amerced Dynamic Time Warping (ADTW) is a variant of DTW designed to better control DTW's permissiveness in the alignments that it allows. The windows that classical DTW uses to constrain alignments introduce a step function. Any warping of the path is allowed within the window and none beyond it. In contrast, ADTW employs an additive penalty that is incurred each time that the path is warped. Any amount of warping is allowed, but each warping action incurs a direct penalty. ADTW significantly outperforms DTW with windowing when applied as a nearest neighbor classifier on a set of benchmark time series classification tasks.
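A hedged sketch of the recurrence as described above: the diagonal (non-warping) step pays only the pointwise cost, while each warping step (repeating an element of either series) incurs an additive penalty on top of it; the function name and penalty value are illustrative:

```python
def adtw_distance(s, t, penalty):
    """Amerced DTW: each off-diagonal (warping) step adds `penalty`."""
    INF = float("inf")
    n, m = len(s), len(t)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = min(D[i - 1][j - 1] + cost,            # no warping
                          D[i - 1][j] + cost + penalty,      # warp: repeat t[j]
                          D[i][j - 1] + cost + penalty)      # warp: repeat s[i]
    return D[n][m]
```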
|
Dynamic time warping : In functional data analysis, time series are regarded as discretizations of smooth (differentiable) functions of time. By viewing the observed samples as smooth functions, one can utilize continuous mathematics for analyzing data. Smoothness and monotonicity of time warp functions may be obtained, for instance, by integrating a time-varying radial basis function, thus obtaining a one-dimensional diffeomorphism. Optimal nonlinear time warping functions are computed by minimizing a measure of distance of the set of functions to their warped average. Roughness penalty terms for the warping functions may be added, e.g., by constraining the size of their curvature. The resultant warping functions are smooth, which facilitates further processing. This approach has been successfully applied to analyze patterns and variability of speech movements. Hidden Markov models (HMMs) are a related approach; it has been shown that the Viterbi algorithm used to search for the most likely path through the HMM is equivalent to stochastic DTW. DTW and related warping methods are typically used as pre- or post-processing steps in data analyses. If the observed sequences contain random variation in both their values (the shape of the observed sequences) and their timing (random temporal misalignment), the warping may overfit to noise, leading to biased results. A simultaneous model formulation with random variation in both values (vertical) and time-parametrization (horizontal) is an example of a nonlinear mixed-effects model. In human movement analysis, simultaneous nonlinear mixed-effects modeling has been shown to produce superior results compared to DTW.
|
Dynamic time warping : The tempo C++ library with Python bindings implements Early Abandoned and Pruned DTW as well as Early Abandoned and Pruned ADTW, and the DTW lower bounds LB_Keogh, LB_Enhanced and LB_Webb. The UltraFastMPSearch Java library implements the UltraFastWWSearch algorithm for fast warping window tuning. The lbimproved C++ library implements Fast Nearest-Neighbor Retrieval algorithms under the GNU General Public License (GPL). It also provides a C++ implementation of dynamic time warping, as well as various lower bounds. The FastDTW library is a Java implementation of DTW and a FastDTW implementation that provides optimal or near-optimal alignments with an O(N) time and memory complexity, in contrast to the O(N²) requirement for the standard DTW algorithm. FastDTW uses a multilevel approach that recursively projects a solution from a coarser resolution and refines the projected solution. A FastDTW fork (Java) is published to Maven Central. time-series-classification (Java) is a package for time series classification using DTW in Weka. The DTW suite provides Python (dtw-python) and R packages (dtw) with comprehensive coverage of the DTW algorithm family members, including a variety of recursion rules (also called step patterns), constraints, and substring matching. The mlpy Python library implements DTW. The pydtw Python library implements the Manhattan- and Euclidean-flavoured DTW measures, including the LB_Keogh lower bounds. The cudadtw C++/CUDA library implements subsequence alignment of Euclidean-flavoured DTW and z-normalized Euclidean distance similar to the popular UCR-Suite on CUDA-enabled accelerators. The JavaML machine learning library implements DTW. The ndtw C# library implements DTW with various options. Sketch-a-Char uses Greedy DTW (implemented in JavaScript) as part of a LaTeX symbol classifier program. MatchBox implements DTW to match mel-frequency cepstral coefficients of audio signals. Sequence averaging: a GPL Java implementation of DBA. The Gesture Recognition Toolkit (GRT), a C++ real-time gesture-recognition toolkit, implements DTW. The PyHubs software package implements DTW and nearest-neighbour classifiers, as well as their extensions (hubness-aware classifiers). The simpledtw Python library implements the classic O(NM) dynamic programming algorithm and is based on NumPy. It supports values of any dimension, as well as custom norm functions for the distances. It is licensed under the MIT license. The tslearn Python library implements DTW in the time-series context. The cuTWED CUDA Python library implements a state-of-the-art improved Time Warp Edit Distance using only linear memory, with large speedups. DynamicAxisWarping.jl is a Julia implementation of DTW and related algorithms such as FastDTW, SoftDTW, GeneralDTW and DTW barycenters. Multi_DTW implements DTW to match two 1-D arrays or 2-D speech files (2-D arrays). The dtwParallel (Python) package incorporates the main functionalities available in current DTW libraries and novel functionalities such as parallelization, computation of similarity (kernel-based) values, and consideration of data with different types of features (categorical, real-valued, etc.).
|
Dynamic time warping : Levenshtein distance Elastic matching Sequence alignment Multiple sequence alignment Wagner–Fischer algorithm Needleman–Wunsch algorithm Fréchet distance Nonlinear mixed-effects model
|
Elastic net regularization : In statistics and, in particular, in the fitting of linear or logistic regression models, the elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods. In many settings, elastic net regularization is more accurate than either method alone with regard to reconstruction.
|
Elastic net regularization : The elastic net method overcomes the limitations of the LASSO (least absolute shrinkage and selection operator) method, which uses the penalty function ‖β‖₁ = Σ_{j=1}^p |β_j|. Use of this penalty function has several limitations. For example, in the "large p, small n" case (high-dimensional data with few examples), the LASSO selects at most n variables before it saturates. Also, if there is a group of highly correlated variables, the LASSO tends to select one variable from the group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part ‖β‖² to the penalty, which when used alone is ridge regression (known also as Tikhonov regularization). The estimates from the elastic net method are defined by β̂ = argmin_β (‖y − Xβ‖² + λ₂‖β‖² + λ₁‖β‖₁). The quadratic penalty term makes the loss function strongly convex, so it has a unique minimum. The elastic net method includes the LASSO and ridge regression as special cases: setting λ₁ = λ, λ₂ = 0 recovers the LASSO, and setting λ₁ = 0, λ₂ = λ recovers ridge regression. Meanwhile, the naive version of the elastic net method finds an estimator in a two-stage procedure: first, for each fixed λ₂, it finds the ridge regression coefficients, and then performs a LASSO-type shrinkage. This kind of estimation incurs a double amount of shrinkage, which leads to increased bias and poor predictions. To improve the prediction performance, the coefficients of the naive version of the elastic net are sometimes rescaled by multiplying the estimated coefficients by (1 + λ₂). Examples of where the elastic net method has been applied are: support vector machines, metric learning, portfolio optimization, and cancer prognosis.
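A short usage sketch with scikit-learn, which folds the two λ's into the parameters alpha and l1_ratio; the mapping in the comments follows the library's documented objective (note the extra 1/(2n) factor on the data-fit term), and the synthetic data is illustrative:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)   # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=50)

# scikit-learn minimizes:
#   (1 / (2n)) * ||y - Xw||^2 + alpha * l1_ratio * ||w||_1
#              + 0.5 * alpha * (1 - l1_ratio) * ||w||^2
# so, up to the 1/(2n) scaling of the squared loss,
# lambda1 ~ alpha * l1_ratio and lambda2 ~ 0.5 * alpha * (1 - l1_ratio).
model = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False)
model.fit(X, y)
print(model.coef_)   # large weights on the first two features, rest near zero
```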
|
Elastic net regularization : It was proven in 2014 that the elastic net can be reduced to the linear support vector machine. A similar reduction was previously proven for the LASSO in 2014. The authors showed that for every instance of the elastic net, an artificial binary classification problem can be constructed such that the hyper-plane solution of a linear support vector machine (SVM) is identical to the solution β (after re-scaling). The reduction immediately enables the use of highly optimized SVM solvers for elastic net problems. It also enables the use of GPU acceleration, which is often already used for large-scale SVM solvers. The reduction is a simple transformation of the original data and regularization constants X ∈ ℝ^(n×p), y ∈ ℝ^n, λ₁ ≥ 0, λ₂ ≥ 0 into new artificial data instances and a regularization constant that specify a binary classification problem and the SVM regularization constant: X₂ ∈ ℝ^(2p×n), y₂ ∈ {−1, 1}^(2p), C ≥ 0. Here, y₂ consists of binary labels −1, 1. When 2p > n it is typically faster to solve the linear SVM in the primal, whereas otherwise the dual formulation is faster. Some authors have referred to the transformation as Support Vector Elastic Net (SVEN) and have provided MATLAB pseudo-code for it.
|
Elastic net regularization : "Glmnet: Lasso and elastic-net regularized generalized linear models" is a software which is implemented as an R source package and as a MATLAB toolbox. This includes fast algorithms for estimation of generalized linear models with ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two penalties (the elastic net) using cyclical coordinate descent, computed along a regularization path. JMP Pro 11 includes elastic net regularization, using the Generalized Regression personality with Fit Model. "pensim: Simulation of high-dimensional data and parallelized repeated penalized regression" implements an alternate, parallelised "2D" tuning method of the ℓ parameters, a method claimed to result in improved prediction accuracy. scikit-learn includes linear regression and logistic regression with elastic net regularization. SVEN, a Matlab implementation of Support Vector Elastic Net. This solver reduces the Elastic Net problem to an instance of SVM binary classification and uses a Matlab SVM solver to find the solution. Because SVM is easily parallelizable, the code can be faster than Glmnet on modern hardware. SpaSM, a Matlab implementation of sparse regression, classification and principal component analysis, including elastic net regularized regression. Apache Spark provides support for Elastic Net Regression in its MLlib machine learning library. The method is available as a parameter of the more general LinearRegression class. SAS (software) The SAS procedure Glmselect and SAS Viya procedure Regselect support the use of elastic net regularization for model selection.
|
Elastic net regularization : Regularization and Variable Selection via the Elastic Net (presentation)
|
Error-driven learning : In reinforcement learning, error-driven learning is a method for adjusting a model's (intelligent agent's) parameters based on the difference between its output results and the ground truth. These models stand out as they depend on environmental feedback, rather than explicit labels or categories. They are based on the idea that language acquisition involves the minimization of the prediction error (MPSE). By leveraging these prediction errors, the models consistently refine expectations and decrease computational complexity. Typically, these algorithms are implemented using the GeneRec algorithm. Error-driven learning has widespread applications in cognitive sciences and computer vision. These methods have also found successful application in natural language processing (NLP), including areas like part-of-speech tagging, parsing, named entity recognition (NER), machine translation (MT), speech recognition (SR), and dialogue systems.
|
Error-driven learning : Error-driven learning models are ones that rely on the feedback of prediction errors to adjust the expectations or parameters of a model. The key components of error-driven learning include the following: a set S of states representing the different situations that the learner can encounter; a set A of actions that the learner can take in each state; a prediction function P(s, a) that gives the learner's current prediction of the outcome of taking action a in state s; an error function E(o, p) that compares the actual outcome o with the prediction p and produces an error value; and an update rule U(p, e) that adjusts the prediction p in light of the error e.
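These components can be made concrete with a minimal delta-rule-style sketch; the linear prediction function, the learning rate and all names are illustrative, not a specific published model:

```python
def error_driven_update(weights, features, outcome, lr=0.1):
    """One error-driven step: predict, compare with the outcome, adjust."""
    prediction = sum(w * f for w, f in zip(weights, features))      # P(s, a)
    error = outcome - prediction                                    # E(o, p)
    return [w + lr * error * f for w, f in zip(weights, features)]  # U(p, e)

w = [0.0, 0.0]
for _ in range(100):
    w = error_driven_update(w, [1.0, 0.5], outcome=2.0)
print(w)  # converges so that 1.0*w[0] + 0.5*w[1] is approximately 2.0
```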
|
Error-driven learning : Error-driven learning algorithms refer to a category of reinforcement learning algorithms that leverage the disparity between the real output and the expected output of a system to regulate the system's parameters. Typically applied in supervised learning, these algorithms are provided with a collection of input-output pairs to facilitate the process of generalization. A widely utilized error-driven learning algorithm is GeneRec, a generalized recirculation algorithm primarily employed for gene prediction in DNA sequences. Many other error-driven learning algorithms are derived from alternative versions of GeneRec.
|
Error-driven learning : Error-driven learning has several advantages over other types of machine learning algorithms: They can learn from feedback and correct their mistakes, which makes them adaptive and robust to noise and changes in the data. They can handle large and high-dimensional data sets, as they do not require explicit feature engineering or prior knowledge of the data distribution. They can achieve high accuracy and performance, as they can learn complex and nonlinear relationships between the input and the output.
|
Error-driven learning : Although error-driven learning has its advantages, its algorithms also have the following limitations: They can suffer from overfitting, which means that they memorize the training data and fail to generalize to new and unseen data. This can be mitigated by using regularization techniques, such as adding a penalty term to the loss function, or reducing the complexity of the model. They can be sensitive to the choice of the error function, the learning rate, the initialization of the weights, and other hyperparameters, which can affect the convergence and the quality of the solution. This requires careful tuning and experimentation, or using adaptive methods that adjust the hyperparameters automatically. They can be computationally expensive and time-consuming, especially for nonlinear and deep models, as they require multiple iterations (repetitions) and calculations to update the weights of the system. This can be alleviated by using parallel and distributed computing, or using specialized hardware such as GPUs or TPUs.
|
Error-driven learning : Predictive coding
|
Evolutionary multimodal optimization : In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple (at least locally optimal) solutions of a problem, as opposed to a single best solution. Evolutionary multimodal optimization is a branch of evolutionary computation, which is closely related to machine learning. Wong provides a short survey; the chapter by Shir and the book by Preuss cover the topic in more detail.
|
Evolutionary multimodal optimization : Knowledge of multiple solutions to an optimization task is especially helpful in engineering, when due to physical (and/or cost) constraints, the best results may not always be realizable. In such a scenario, if multiple solutions (locally and/or globally optimal) are known, the implementation can be quickly switched to another solution and still obtain the best possible system performance. Multiple solutions could also be analyzed to discover hidden properties (or relationships) of the underlying optimization problem, which makes them important for obtaining domain knowledge. In addition, the algorithms for multimodal optimization usually not only locate multiple optima in a single run, but also preserve their population diversity, resulting in their global optimization ability on multimodal functions. Moreover, the techniques for multimodal optimization are usually borrowed as diversity maintenance techniques to other problems.
|
Evolutionary multimodal optimization : Classical techniques of optimization would need multiple restart points and multiple runs in the hope that a different solution may be discovered every run, with no guarantee, however. Evolutionary algorithms (EAs), due to their population-based approach, provide a natural advantage over classical optimization techniques. They maintain a population of possible solutions, which are processed every generation, and if the multiple solutions can be preserved over all these generations, then at termination of the algorithm we will have multiple good solutions, rather than only the best solution. Note that this is against the natural tendency of classical optimization techniques, which will always converge to the best solution, or a sub-optimal solution (in a rugged, "badly behaving" function). The challenge of using EAs for multimodal optimization lies in finding and maintaining multiple solutions. Niching is a generic term for techniques that find and preserve multiple stable niches, or favorable parts of the solution space, possibly around multiple solutions, so as to prevent convergence to a single solution. The field of evolutionary algorithms encompasses genetic algorithms (GAs), evolution strategies (ES), differential evolution (DE), particle swarm optimization (PSO), and other methods. Attempts have been made to solve multimodal optimization in all these realms, and most, if not all, of the various methods implement niching in some form or another.
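As an illustration of niching, a hedged sketch of fitness sharing, one classic niching technique: an individual's fitness is divided by a niche count so that crowded peaks stop out-competing the rest of the population; the sharing radius, the kernel and the toy population are illustrative:

```python
def shared_fitness(population, raw_fitness, sigma_share=0.1):
    """Fitness sharing: divide raw fitness by a niche count."""
    shared = []
    for i, xi in enumerate(population):
        niche_count = 0.0
        for xj in population:
            d = abs(xi - xj)
            if d < sigma_share:                  # triangular sharing kernel
                niche_count += 1.0 - d / sigma_share
        shared.append(raw_fitness[i] / niche_count)
    return shared

pop = [0.10, 0.11, 0.12, 0.90]   # three individuals crowd one peak
fit = [1.0, 1.0, 1.0, 1.0]
print(shared_fitness(pop, fit))  # the isolated individual scores highest
```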
|