PCVC Speech Dataset : The Kaggle page of the PCVC speech dataset; the PCVC paper on ResearchGate
|
Persian Speech Corpus : The Persian Speech Corpus is a Modern Persian speech corpus for speech synthesis. The corpus contains phonetic and orthographic transcriptions of about 2.5 hours of Persian speech aligned with the recorded speech at the phoneme level, including annotations of word boundaries. Previous spoken corpora of Persian include FARSDAT, which consists of read-aloud speech from newspaper texts by 100 Persian speakers, and the Telephone FARsi Spoken language DATabase (TFARSDAT), which comprises seven hours of read and spontaneous speech produced by 60 native speakers of Persian from ten regions of Iran. The Persian Speech Corpus was built using the same methodologies laid out in Nawar Halabi's doctoral project on Modern Standard Arabic at the University of Southampton. The work was funded by MicroLinkPC, which owns an exclusive license to commercialise the corpus, though the corpus is available for non-commercial use through its website. It is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The corpus was built for speech synthesis purposes, but it has also been used for building HMM-based voices in Persian. It can also be used to automatically align other speech corpora with their phonetic transcripts and could be used as part of a larger corpus for training speech recognition systems.
|
Persian Speech Corpus : The corpus is downloadable from its website and contains the following:
396 .wav files containing spoken utterances
396 .lab files containing text utterances
396 .TextGrid files containing the phoneme labels, with time stamps of the boundaries where these occur in the .wav files
phonetic-transcript.txt, in which every line has the form "[wav_filename]" "[Phoneme Sequence]"
orthographic-transcript.txt, in which every line has the form "[wav_filename]" "[Orthographic Transcript]"
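Both transcript files store one quoted pair per line, so they are straightforward to parse. A minimal sketch of a line parser for that documented format (the example filename and phoneme sequence below are invented for illustration):

```python
import shlex

def parse_transcript_line(line):
    """Split one transcript line of the form "[wav_filename]" "[transcript]"
    into its two quoted fields."""
    fields = shlex.split(line)  # shlex respects the double quotes
    if len(fields) != 2:
        raise ValueError(f"expected two quoted fields, got: {line!r}")
    wav_filename, transcript = fields
    return wav_filename, transcript

# Example with a made-up line in the documented format:
wav, phones = parse_transcript_line('"utt_0001.wav" "p e r s i a n"')
```

The same function works for both phonetic-transcript.txt and orthographic-transcript.txt, since both use the same two-field layout.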
|
Persian Speech Corpus : Comparison of datasets in machine learning
|
Persian Speech Corpus : The Persian Speech Corpus official website The Arabic Speech Corpus official website The Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
|
TIMIT : TIMIT is a corpus of phonemically and lexically transcribed speech of American English speakers of different sexes and dialects. Each transcribed element has been delineated in time. TIMIT was designed to further acoustic-phonetic knowledge and automatic speech recognition systems. It was commissioned by DARPA and corpus design was a joint effort between the Massachusetts Institute of Technology, SRI International, and Texas Instruments (TI). The speech was recorded at TI, transcribed at MIT, and verified and prepared for publishing by the National Institute of Standards and Technology (NIST). There is also a telephone bandwidth version called NTIMIT (Network TIMIT). TIMIT and NTIMIT are not freely available — either membership of the Linguistic Data Consortium, or a monetary payment, is required for access to the dataset.
|
TIMIT : TIMIT contains ~5 hours of speech: 10 sentences spoken by each of 630 speakers. The sentences were randomly sampled from a corpus of 2342 sentences. The speakers were native speakers of American English, classified under 8 major dialect regions: New England, Northern, North Midland, South Midland, Southern, New York City, Western, and Army Brat (moved around). The speakers were 70% male and 30% female. Recordings were made in a noise-isolated recording booth at Texas Instruments, using a semi-automatic computer system (STEROIDS) to control the presentation of prompts to the speaker and the recording. Two-channel recordings were made using a Sennheiser HMD 414 headset-mounted microphone and a Brüel & Kjær 1/2" far-field pressure microphone (#4165). The speech was digitized at a sample rate of 20 kHz and then downsampled to 16 kHz.
|
TIMIT : The TIMIT corpus was an early attempt to create a database of speech samples. It was published in 1988 on CD-ROM and consists of only 10 sentences per speaker. Two 'dialect' sentences were read by each speaker, as well as another 8 sentences selected from a larger set. Each sentence averages 3 seconds in length, and the corpus covers 630 different speakers. It was the first notable attempt at creating and distributing a speech corpus, and the overall project cost about US$1.5 million. An update was released in October 1990. It included the full 630-speaker corpus; checked and corrected transcriptions; word-alignment transcriptions; NIST SPHERE-headered waveform files and header manipulation software; a phonemic dictionary; new test and training subsets balanced for dialectal and phonetic coverage; and more extensive documentation. The full name of the project is the DARPA-TIMIT Acoustic-Phonetic Continuous Speech Corpus, and the acronym TIMIT stands for Texas Instruments/Massachusetts Institute of Technology. The main reason the corpus was created was to train speech recognition software. In the Blizzard Challenge, different systems must convert audio recordings into textual data, and the TIMIT corpus was used as a standardized baseline.
|
TIMIT : Comparison of datasets in machine learning
|
TIMIT : TIMIT Acoustic-Phonetic Continuous Speech Corpus
|
Training, validation, and test data sets : In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets. The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation. Successively, the fitted model is used to predict the responses for the observations in a second data set called the validation data set. The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model's hyperparameters (e.g. the number of hidden units—layers and layer widths—in a neural network). 
Validation data sets can be used for regularization by early stopping (stopping training when the error on the validation data set increases, as this is a sign of over-fitting to the training data set). This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun. Finally, the test data set is a data set used to provide an unbiased evaluation of a final model fit on the training data set. If the data in the test data set has never been used in training (for example in cross-validation), the test data set is also called a holdout data set. The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set). Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.
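The three-way partition described above can be sketched in a few lines; the 60/20/20 proportions and the synthetic data below are illustrative choices, not a prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 100 examples, 5 features each, with one target per example.
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)

# Shuffle indices, then carve out 60% training, 20% validation, 20% test.
idx = rng.permutation(len(X))
n_train = int(0.6 * len(X))
n_val = int(0.2 * len(X))

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

Shuffling before splitting matters when the data are ordered (e.g. by class or by collection time), since a contiguous split would otherwise give the three sets different distributions.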
|
Training, validation, and test data sets : A training data set is a data set of examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, a classifier. For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a good predictive model. The goal is to produce a trained (fitted) model that generalizes well to new, unknown data. The fitted model is evaluated using “new” examples from the held-out data sets (validation and test data sets) to estimate the model’s accuracy in classifying new data. To reduce the risk of issues such as over-fitting, the examples in the validation and test data sets should not be used to train the model. Most approaches that search through training data for empirical relationships tend to overfit the data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general. When a training set is continuously expanded with new data, then this is incremental learning.
|
Training, validation, and test data sets : A validation data set is a data set of examples used to tune the hyperparameters (i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set". An example of a hyperparameter for artificial neural networks includes the number of hidden units in each layer. It, as well as the testing set (as mentioned below), should follow the same probability distribution as the training data set. In order to avoid overfitting, when any classification parameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test data sets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such as accuracy, sensitivity, specificity, F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing. The basic process of using a validation data set for model selection (as part of training data set, validation data set, and test data set) is: Since our goal is to find the network having the best performance on new data, the simplest approach to the comparison of different networks is to evaluate the error function using data which is independent of that used for training. Various networks are trained by minimization of an appropriate error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function using an independent validation set, and the network having the smallest error with respect to the validation set is selected. This approach is called the hold out method. 
Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set. An application of this process is in early stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows, choosing the previous model (the one with minimum error).
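The hold-out method described above can be illustrated with polynomial models of increasing capacity standing in for the candidate networks; the synthetic quadratic data and the range of degrees are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression task: a quadratic signal plus noise.
x = rng.uniform(-3, 3, size=150)
y = 0.5 * x**2 - x + rng.normal(scale=0.3, size=x.shape)

# Hold-out split: train on 90 examples, validate on 30, keep 30 for the final test.
x_train, y_train = x[:90], y[:90]
x_val, y_val = x[90:120], y[90:120]
x_test, y_test = x[120:], y[120:]

def mse(coeffs, xs, ys):
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

# Candidate models of increasing capacity, each fit by least squares
# on the training set alone.
degrees = range(1, 9)
fits = {d: np.polyfit(x_train, y_train, d) for d in degrees}

# Model selection: pick the candidate with the smallest validation error.
best_degree = min(degrees, key=lambda d: mse(fits[d], x_val, y_val))

# Final, unbiased performance estimate on the untouched test set.
test_error = mse(fits[best_degree], x_test, y_test)
```

The degree-1 model underfits the quadratic signal and is rejected by the validation comparison, while the test set, used only once at the end, confirms the selected model's performance.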
|
Training, validation, and test data sets : A test data set is a data set that is independent of the training data set, but that follows the same probability distribution as the training data set. If a model fit to the training data set also fits the test data set well, minimal overfitting has taken place (see figure below). A better fitting of the training data set as opposed to the test data set usually points to over-fitting. A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier. To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy. In a scenario where both validation and test data sets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test data sets), the test data set might assess the model only once (e.g., in the holdout method). Note that some sources advise against such a method. However, when using a method such as cross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.
|
Training, validation, and test data sets : Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment", according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render valid", Collaborative International Dictionary of English). With this perspective, the most common use of the terms test set and validation set is the one described here. However, in both industry and academia, they are sometimes used interchangeably, on the view that the internal process is testing different models to improve them (test set as a development set) and the final model is the one that needs to be validated before real use with unseen data (validation set). "The literature on machine learning often reverses the meaning of 'validation' and 'test' sets. This is the most blatant example of the terminological confusion that pervades artificial intelligence research." Nevertheless, the important concept to keep is that the final set, whether called test or validation, should only be used in the final experiment.
|
Training, validation, and test data sets : In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into several training and validation data sets. This is known as cross-validation. To confirm the model's performance, an additional test data set held out from cross-validation is normally used. It is also possible to use cross-validation for the outer training/test split and, within each outer training set, a further inner cross-validation for hyperparameter tuning. This is known as nested cross-validation.
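Nested cross-validation can be written out by hand; the sketch below uses closed-form ridge regression as the model, with the fold counts, regularization grid, and synthetic data all chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a linear signal with noise.
X = rng.normal(size=(60, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=60)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def folds(n, k):
    return np.array_split(np.arange(n), k)

lambdas = [0.01, 0.1, 1.0, 10.0]
outer_scores = []

for outer_test in folds(len(X), 5):          # outer loop: performance estimate
    outer_train = np.setdiff1d(np.arange(len(X)), outer_test)
    Xtr, ytr = X[outer_train], y[outer_train]

    # Inner loop: choose lambda by cross-validation on the outer training set only.
    def inner_cv_error(lam):
        errs = []
        for inner_val in folds(len(Xtr), 4):
            inner_tr = np.setdiff1d(np.arange(len(Xtr)), inner_val)
            w = ridge_fit(Xtr[inner_tr], ytr[inner_tr], lam)
            errs.append(np.mean((Xtr[inner_val] @ w - ytr[inner_val]) ** 2))
        return np.mean(errs)

    best_lam = min(lambdas, key=inner_cv_error)

    # Refit on the whole outer training set with the chosen lambda,
    # then score on the held-out outer fold.
    w = ridge_fit(Xtr, ytr, best_lam)
    outer_scores.append(np.mean((X[outer_test] @ w - y[outer_test]) ** 2))

nested_cv_estimate = float(np.mean(outer_scores))
```

Because each outer test fold never influences the inner hyperparameter choice, the averaged outer score is an approximately unbiased estimate of generalization error.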
|
Training, validation, and test data sets : Omissions in the training of algorithms are a major cause of erroneous outputs. Types of such omissions include:
Particular circumstances or variations that were not included
Obsolete data
Ambiguous input information
Inability to adapt to new environments
Inability to request help from a human or another AI system when needed
An example of an omission of particular circumstances is a case where a boy was able to unlock a phone because his mother had registered her face under indoor, nighttime lighting, a condition that was not appropriately included in the training of the system. Use of relatively irrelevant input can include situations where algorithms use the background rather than the object of interest for object detection, such as being trained on pictures of sheep on grasslands, leading to a risk that a different object will be interpreted as a sheep if located on a grassland.
|
Training, validation, and test data sets : Statistical classification List of datasets for machine learning research Hierarchical classification == References ==
|
Amazon Rekognition : Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and Orlando, Florida police, as well as private entities.
|
Amazon Rekognition : Rekognition provides a number of computer vision capabilities, which can be divided into two categories: Algorithms that are pre-trained on data collected by Amazon or its partners, and algorithms that a user can train on a custom dataset. As of July 2019, Rekognition provides the following computer vision capabilities.
|
Amazon Rekognition : Amazon Lex Amazon Mechanical Turk Amazon Polly Amazon SageMaker Amazon Web Services Facial recognition system Timeline of Amazon Web Services == References ==
|
Angoss : Angoss Software Corporation, headquartered in Toronto, Ontario, Canada, with offices in the United States and the UK, acquired by Datawatch and now owned by Altair, was a provider of predictive analytics systems through software licensing and services. Angoss' customers represent industries including finance, insurance, mutual funds, retail, health sciences, telecom, and technology. The company was founded in 1984 and was publicly traded on the TSX Venture Exchange from 2008 to 2013 under the ticker symbol ANC. In June 2013, the private equity firm Peterson Partners acquired Angoss for $8.4 million.
|
Angoss : KnowledgeREADER is an integrated customer intelligence product combining visual text discovery and predictive analytics for customer experience management. KnowledgeSEEKER is a data mining product. Its features include data profiling, data visualization and decision tree analysis. It was first released in 1990. KnowledgeSTUDIO is a data mining and predictive analytics suite for the model development and deployment cycle. Its features include data profiling, data visualization, decision tree analysis, predictive modeling, implementation, scoring, validation, monitoring and scorecard development. KnowledgeEXCELERATOR is a visual data discovery software and prediction tool for business analysts and knowledge workers. StrategyBUILDER is an add-on module for KnowledgeSEEKER and KnowledgeSTUDIO and is a product to design, verify, and deploy predictive and business rules.
|
Angoss : FundGUARD is software as a service for marketing, sales targeting and predictive leads for mutual funds and wealth management companies. ClaimGUARD is a fraud and abuse detection service. Cloud on demand Software is offered for KnowledgeSEEKER, KnowledgeSTUDIO and its text analytics module. KnowledgeSCORE for Salesforce.com customer relationship management is a forecasting and predictive sales analytics system for Salesforce users.
|
Angoss : List of statistical packages Predictive analytics
|
Angoss : Official website
|
Anne O'Tate : Anne O'Tate is a free, web-based application that analyses sets of records identified on PubMed, the bibliographic database of articles from over 5,500 biomedical journals worldwide. While PubMed has its own wide range of search options to identify sets of records relevant to a researcher's query, it lacks the ability to analyse these sets of records further, a process for which the terms text mining and drill down have been used. Anne O'Tate is able to perform such analysis and can process sets of up to 25,000 PubMed records.
|
Anne O'Tate : Once a set of articles has been identified using Anne O'Tate with its PubMed-like interface and search syntax, the set can be analysed, and words and concepts mentioned in specific 'fields' (sections) of PubMed records can be displayed in order of frequency. The 'fields' which Anne O'Tate can display in this manner are:
|
Anne O'Tate : Anne O'Tate (a pun on the word 'annotate') was developed by Neil R. Smalheiser and a team of researchers from the University of Illinois at Chicago. It is part of the Arrowsmith Project, which developed tools such as "Arrowsmith" proper, a text-comparison application; "Adam", a database of medical abbreviations; "Author-ity", an author-disambiguation tool; "Compendium", a list of biomedical text mining tools; and Anne O'Tate. The Project is based on research led by Don R. Swanson at the University of Chicago, which hosted the original tool. Further research was led by Neil R. Smalheiser at the University of Illinois at Chicago, with funding from the National Institutes of Health.
|
Anne O'Tate : A wide range of text-mining applications for PubMed have been developed, each using its own interface, such as GoPubMed, ClusterMed, or PubReMiner. Only Anne O'Tate uses PubMed's standard interface, search syntax, and some of its functionality.
|
Anne O'Tate : Anne O'Tate PubMed Home Page Medical Subject Headings Fact Sheet "The Arrowsmith Project Homepage". University of Illinois at Chicago, Department of Psychiatry. December 20, 2007. Retrieved July 4, 2011.
|
Aphelion (software) : The Aphelion Imaging Software Suite is a software suite that includes three base products - Aphelion Lab, Aphelion Dev, and Aphelion SDK - for addressing image processing and image analysis applications. The suite also includes a set of extension programs to implement specific vertical applications that benefit from imaging techniques. The Aphelion software products can be used to prototype and deploy applications, or can be integrated, in whole or in part, into a user's system as processing and visualization libraries whose components are available as both DLLs and .Net components.
|
Aphelion (software) : The development of Aphelion started in 1995 as a joint project of a French company, ADCIS S.A., and an American company, Amerinex Applied Imaging, Inc. (AAI). Aphelion's image processing and analysis functions were made from operators available in the KBVision software developed and sold by Amerinex's predecessor, Amerinex Artificial Intelligence Inc. In the 1990s, the XLim software library was developed at the Center of Mathematical Morphology of Mines ParisTech, and both companies carried out its development tasks. The first version of Aphelion was completed and released in April 1996. Successive versions were released before the first official stable release in December 1996 at the Photonics East conference in Boston and the Solutions Vision show in Paris in January 1997, where at the latter it competed with Stemmer Imaging's CVB imaging toolbox. In 1998, version 2.3 of Aphelion for Windows 98 was released, and its user base was growing in both France and the United States. Version 3.0, totally rewritten to take advantage of Microsoft's then-recent ActiveX technology, was officially released in 2000. It also became available as a "Developer" version, for rapid prototyping of applications using its intuitive GUI and macro recording capability, and a "Core" version, including the full library as a set of ActiveX components to be used by software developers, integrators, and original equipment manufacturers (OEMs). As AAI turned its focus to security in 2001, ADCIS took the lead on developing Aphelion. AAI focused on millimeter wave scanners for concealed weapon detection at airports, and eventually merged with Millimetrics to become Millivision. In 2004, ADCIS specified version 4.0 of Aphelion. The set of image processing/analysis functions was rewritten one more time to be compatible with the .NET technology and the emergence of 64-bit architecture PCs.
In addition, the GUI was redesigned to address two usage types: a semi-automatic use, where the user is guided through the different steps of functions, and a fully automatic use, where the expert user can quickly invoke imaging functions. Its first release was presented at the IPOT exhibition in Birmingham, UK the same year. During the Vision Show in Paris in October 2008, the new Aphelion Lab product was launched for users who are not specialists in image processing. It is easier to use and includes fewer image processing functions. It was then included in the Aphelion Image Processing Suite, consisting of Aphelion Dev (replacing Aphelion Developer), Aphelion Lab, Aphelion SDK (replacing Aphelion Core), and a set of extensions. Nowadays, ADCIS is still working on the suite, and updated versions with new extensions and functionalities continually become available from the websites of both companies. In 2015, support was added for very large images and scan microscope images (virtual slides compounded into a very large JPEG 2000 image) for high-throughput imaging, and new specific extensions were also added. In late 2015, ADCIS announced Aphelion's port to tablets and smartphones, for vertical applications. The name "Aphelion" comes from the astronomical term of the same name - the point in a planet's orbit around the Sun where it lies farthest from the Sun - applied here in a metaphorical sense: Unix was the operating system used on scientific workstations in the 1990s, such as the workstations manufactured by market leader Sun Microsystems, from which the Windows-based Aphelion suite was quite removed.
|
Aphelion (software) : Aphelion is a software suite to be used for image processing and image analysis. It supports 2D and 3D, monochrome, color, and multi-band images. It is developed by ADCIS, a French software house located in Saint-Contest, Calvados, Normandy. Aphelion is widely used in the scientific and industrial communities to solve basic and complex imaging applications. First, the imaging application is quickly developed from the Graphical User Interface, involving a set of functions that can be automatically recorded into a macro command. The macro languages available in Aphelion (i.e. BasicScript, Python, and C#) help to process batches of images and prompt the user, if needed, for specific parameters that are applied to the imaging functions. All Aphelion image processing functions are written in C++, and the Aphelion user interface is written in C#. C++ functions can be called from the C# language thanks to the use of dedicated wrappers. The main principle of image processing is to automatically process the pixels of a digital image, then extract one or more objects of interest (e.g. cells in the field of biology, inclusions in the field of materials science) and compute one or more measurements on those objects to quantify the image and generate a verdict (good image, image with defects, cancerous cells). In other words, starting from an image, pixels are processed by a set of successive functions or operators until only measurements remain, which are used as the input to a third-party system or classification software that will classify the objects of interest extracted during the imaging process. An acquisition system such as a digital camera, a video camera, an optical or electron microscope, a medical scanner, or a smartphone can be used to capture images.
The set of values or pixels can be processed as a 1D image (1D signal), a 2D image (array of pixel values corresponding to a monochrome or color image), or a 3D image displayed using volume rendering (array of voxels in 3D space) or by displaying surfaces using 3D rendering. A 2D color image is made of 3-value pixels (typically Red, Green, and Blue information, or another color space), and a 3D image is made of monochrome, color (indexed colors are often used), multispectral, or hyperspectral data. When dealing with videos, an additional band is added corresponding to temporal information. The Aphelion Software Suite includes three base products and a set of optional extensions for specific applications:
Aphelion Lab: Entry-level package for non-experts in image processing. It helps to quickly segment an image in semi-automatic or manual ways and compute a set of measurements on objects of interest that have been extracted during the segmentation process. A set of wizards guides the user from image acquisition to report generation.
Aphelion Dev: Full imaging environment including over 450 functions to develop and deploy an application that involves image processing and analysis. It also includes a set of macro-command languages to automate any application to be invoked from the user interface. It also helps to run the imaging algorithm on more than one image, whether stored on disk, available on the network, or captured by an acquisition device. Aphelion libraries for image processing and visualization are provided in Aphelion Dev as DLLs and .Net components.
Aphelion SDK: A set of libraries to develop a stand-alone application with a custom interface based on the Aphelion libraries. This software development kit includes display, processing, and analysis functions that can be used by software developers and OEMs. It is provided as DLLs and .Net components.
The stand-alone application is typically developed in C# on one computer and then deployed on multiple PCs and systems. A set of optional extensions can be added to the Aphelion Dev product, depending on the application. An evaluation version of Aphelion can be run on a PC for 30 days. A permanent version of Aphelion is available based on a perpetual license. Upgrades are available through a maintenance agreement based on a yearly fee. Technical support is provided by the engineers who develop the product. The goal of image processing is usually to extract object(s) of interest in an image and then to classify them based on characteristics such as shape, density, position, etc. Using Aphelion, this goal is achieved by performing the following tasks:
Load an image from disk or acquire an image using an acquisition device.
Enhance the image by removing noise or modifying its contrast.
Segment the image by extracting the objects of interest to be measured and analyzed. Typically, for simple applications, a threshold is performed to generate a binary image. Then, morphological operators are applied to clean the image and keep only the objects of interest. Finally, a label value is given to each object based on its connectivity (4- or 8-connectivity when a square grid is used), and the background of the image is given the value zero. The set of objects can be manually edited by the user to remove artifacts and alter their edges.
Measure the objects in terms of shape, color, and densitometry, and then classify them using the measurements.
What has been developed for one image can be applied to a batch of images thanks to the use of the macro commands available in the Aphelion user interface. This helps to generate more measurements and obtain a more robust algorithm working on multiple images.
Statistical analysis can be performed on the measurements and classifiers can be trained if the number of objects is large enough and if descriptors or measurements are available to classify objects into classes or categories.
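The threshold, morphological clean-up, labeling, and measurement steps described above are generic; a minimal Python sketch of the same pipeline, using scipy.ndimage as a stand-in for Aphelion's operators and a synthetic image in place of an acquired one:

```python
import numpy as np
from scipy import ndimage

# Synthetic monochrome image: dark background with two bright rectangular "objects".
image = np.zeros((64, 64))
image[10:20, 10:22] = 1.0          # object 1
image[40:52, 30:40] = 1.0          # object 2
image += np.random.default_rng(3).normal(scale=0.05, size=image.shape)  # sensor noise

# 1. Threshold to generate a binary image.
binary = image > 0.5

# 2. Morphological opening to remove small noise artifacts.
cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

# 3. Label connected components (8-connectivity); the background keeps value 0.
labels, num_objects = ndimage.label(cleaned, structure=np.ones((3, 3)))

# 4. Measure each object: here, its area in pixels.
areas = ndimage.sum(cleaned, labels, index=range(1, num_objects + 1))
```

The per-object measurements (here only area; shape or densitometry descriptors would be computed the same way) can then feed a classifier, exactly as the text describes.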
|
Aphelion (software) : The Aphelion Imaging Software Suite is used by students, researchers, engineers, and software developers in many application domains involving image processing and computer vision, such as: security (surveillance, object tracking) remote sensing quality control for the industry and inspection applications materials science life sciences (medicine and biology) earth science (geology) theory (image processing, machine learning and optimization)
|
Aphelion (software) : All products of the Aphelion Imaging Software Suite run on PCs equipped with Windows (Vista, 7, 8, 8.1, or 10), 32- or 64-bit. Online help and video tutorials are available to the user.
|
Aphelion (software) : Below is a list of Aphelion optional extensions:
3D Image Processing and 3D Image Display: A set of extensions to display and process 3D images. The 3D display extension is based on the VTK software product.
3D Skeletonization: Extension to compute the 3D skeleton.
Image Registration: Extension to register images coming from different acquisition devices.
Classification Tools: Classification extension including "Fuzzy Logic" (fuzzy logic classification), "Neural Networks" (classification based on artificial neural networks), and "Random Forest" (classification based on random forests, derived from the R software product).
Kriging: Extension to remove image noise using geostatistics techniques.
Camera interface drivers and microscope interface software.
Virtual Image Capture and Virtual Image Stitcher: Two software products to capture multi-field images and stitch them into one single, very large image in the fields of optical and electron microscopy (image stitching).
Stereology Analyzer: Software to analyze a very large image using stereology. This extension is mainly used in the field of biology on images acquired by a scan microscope.
VisionTutor: Online image processing course including all the theory and application macro commands compatible with Aphelion.
The Aphelion user can add his/her own automatically recorded macro commands to the user interface to process a batch of images, and can also add plugins and libraries developed outside the Aphelion environment to the GUI.
|
Aphelion (software) : Official website Version history
|
BigDL : BigDL is a distributed deep learning framework for Apache Spark, created by Jason Dai at Intel. BigDL has its source code hosted on GitHub.
|
BigDL : Comparison of deep learning software == References ==
|
CellCognition : CellCognition is a free open-source computational framework for quantitative analysis of high-throughput fluorescence microscopy (time-lapse) images in the field of bioimage informatics and systems microscopy. The CellCognition framework uses image processing, computer vision and machine learning techniques for single-cell tracking and classification of cell morphologies. This enables measurements of the temporal progression of cell phases, modeling of cellular dynamics, and generation of phenotype maps.
|
CellCognition : CellCognition uses a computational pipeline which includes image segmentation, object detection, feature extraction, statistical classification, tracking of individual cells over time, detection of class-transition motifs (e.g. cells entering mitosis), and hidden Markov model (HMM) correction of classification errors on class labels. The software is written in Python 2.7 and binaries are available for Windows and Mac OS X.
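The HMM error-correction step of the pipeline described above can be sketched as follows. This is a minimal illustration with invented class names, transition probabilities, and emission probabilities; CellCognition derives such parameters from annotated training data.

```python
# Hedged sketch: smoothing noisy per-frame class labels with a hidden
# Markov model via the Viterbi algorithm. All probabilities here are
# invented for illustration.
import math

states = ["interphase", "mitosis"]
# Transitions favour staying in the same phase (phases are contiguous in time).
trans = {
    "interphase": {"interphase": 0.9, "mitosis": 0.1},
    "mitosis": {"interphase": 0.1, "mitosis": 0.9},
}
# Emission: probability that the frame-wise classifier outputs a label
# given the true phase (the classifier is right ~80% of the time here).
emit = {
    "interphase": {"interphase": 0.8, "mitosis": 0.2},
    "mitosis": {"interphase": 0.2, "mitosis": 0.8},
}
start = {"interphase": 0.5, "mitosis": 0.5}

def viterbi(observed):
    """Most likely true phase sequence given noisy per-frame labels."""
    v = [{s: math.log(start[s]) + math.log(emit[s][observed[0]]) for s in states}]
    back = []
    for obs in observed[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = v[-1][best] + math.log(trans[best][s]) + math.log(emit[s][obs])
            ptr[s] = best
        v.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# A single spurious "mitosis" frame inside an interphase run is smoothed away.
noisy = ["interphase", "interphase", "mitosis", "interphase", "interphase"]
corrected = viterbi(noisy)
```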
|
CellCognition : CellCognition (Version 1.0.1) was first released in December 2009 by scientists from the Gerlich Lab and the Buhmann group at the Swiss Federal Institute of Technology Zürich and the Ellenberg Lab at the European Molecular Biology Laboratory Heidelberg. The latest release is 1.6.1 and the software is developed and maintained by the Gerlich Lab at the Institute of Molecular Biotechnology.
|
CellCognition : CellCognition has been used in RNAi-based screening, applied in basic cell cycle study, and extended to unsupervised modeling.
|
CellCognition : Official website CellCognition on GitHub
|
DADiSP : DADiSP (Data Analysis and Display, pronounced day-disp) is a numerical computing environment developed by DSP Development Corporation which allows one to display and manipulate data series, matrices and images with an interface similar to a spreadsheet. DADiSP is used in the study of signal processing, numerical analysis, statistical and physiological data processing.
|
DADiSP : DADiSP is designed to perform technical data analysis in a spreadsheet-like environment. However, unlike a typical business spreadsheet that operates on a table of cells, each of which contains a single scalar value, a DADiSP Worksheet consists of multiple interrelated windows where each window contains an entire series or multi-column matrix. A window not only stores the data, but also displays the data in several interactive forms, including 2D graphs, XYZ plots, 3D surfaces, images and numeric tables. Like a traditional spreadsheet, the windows are linked such that a change to the data in one window automatically updates all dependent windows both numerically and graphically. Users manipulate data primarily through windows. A DADiSP window is normally referred to by the letter "W" followed by a window number, as in "W1". For example, the formula W1: 1..3 assigns the series {1, 2, 3} to "W1". The formula W2: W1*W1 sets a second window to compute the square of each value in "W1", so that "W2" contains the series {1, 4, 9}. If the values of "W1" change, the values of "W2" automatically update to the squares of the new values.
|
DADiSP : DADiSP includes a series based programming language called SPL (Series Processing Language) used to implement custom algorithms. SPL has a C/C++ like syntax and is incrementally compiled into intermediate bytecode, which is executed by a virtual machine. SPL supports both standard variables assigned with = and "hot" variables assigned with :=. For example, the statement A = 1..3 assigns the series {1, 2, 3} to the standard variable "A". The square of the values can be assigned with B = A * A. Variable "B" then contains the series {1, 4, 9}. If "A" changes, "B" does not change because "B" preserves the values as assigned without regard to the future state of "A". However, the statement A := 1..3 creates a "hot" variable. A hot variable is analogous to a window, except hot variables do not display their data. The assignment B := A * A computes the square of the values of "A" as before, but now if "A" changes, "B" automatically updates. Assigning a new series to "A" causes "B" to automatically update with the squares of the new values.
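The distinction between = and := can be mimicked outside DADiSP. The following is a rough Python sketch of hot-variable propagation; the Hot class and depends helper are inventions for illustration, not part of any DADiSP or SPL API.

```python
# Toy model of SPL's "hot" variables: a derived value recomputes itself
# whenever one of its sources changes.
class Hot:
    """A value that pushes changes to its dependents (the ':=' semantics)."""
    def __init__(self, value=None):
        self.value = value
        self.formula = None       # thunk that recomputes this node, if derived
        self.listeners = []       # nodes whose formulas depend on this one

    def set(self, value):
        self.value = value
        self._notify()

    def _notify(self):
        for node in self.listeners:
            node.value = node.formula()
            node._notify()        # propagate down the dependency chain

def depends(formula, *sources):
    """Create a derived hot variable recomputed whenever a source changes."""
    node = Hot()
    node.formula = formula
    for s in sources:
        s.listeners.append(node)
    node.value = formula()
    return node

# A := 1..3 ;  B := A * A
A = Hot([1, 2, 3])
B = depends(lambda: [x * x for x in A.value], A)
assert B.value == [1, 4, 9]

A.set([10, 20, 30])               # changing A automatically updates B
assert B.value == [100, 400, 900]
```

A standard variable assigned with = would simply copy the values once, with no listener registration, which is why it keeps its old contents when the source changes.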
|
DADiSP : DADiSP was originally developed in the early 1980s, as part of a research project at MIT to explore the aerodynamics of Formula One racing cars. The original goal of the project was to enable researchers to quickly explore data analysis algorithms without the need for traditional programming.
|
DADiSP : DADiSP 6.7 B02, Jan 2017 DADiSP 6.7 B01, Oct 2015 DADiSP 6.5 B05, Dec 2012 DADiSP 6.5, May 2010 DADiSP 6.0, Sep 2002 DADiSP 5.0, Oct 2000 DADiSP 4.1, Dec 1997 DADiSP 4.0, Jul 1995 DADiSP 3.01, Feb 1993 DADiSP 2.0, Feb 1992 DADiSP 1.05, May 1989 DADiSP 1.03, Apr 1987
|
DADiSP : List of numerical-analysis software Comparison of numerical-analysis software
|
DADiSP : Allen Brown, Zhang Jun: First Course In Digital Signal Processing Using DADiSP, Abramis, ISBN 9781845495022 Charles Stephen Lessard: Signal Processing of Random Physiological Signals (Google eBook), Morgan & Claypool Publishers
|
DADiSP : DSP Development Corporation (DADiSP vendor) DADiSP Online Help DADiSP Tutorials Getting Started with DADiSP Introduction to DADiSP
|
Data Mining Extensions : Data Mining Extensions (DMX) is a query language for data mining models supported by Microsoft's SQL Server Analysis Services product. Like SQL, it supports a data definition language (DDL), data manipulation language (DML) and a data query language (DQL), all three with SQL-like syntax. Whereas SQL statements operate on relational tables, DMX statements operate on data mining models. Similarly, Microsoft SQL Server supports the MDX language for OLAP databases. DMX is used to create and train data mining models, and to browse, manage, and predict against them. DMX is composed of data definition language (DDL) statements, data manipulation language (DML) statements, and functions and operators.
|
Data Mining Extensions : DMX Queries are formulated using the SELECT statement. They can extract information from existing data mining models in various ways.
|
Data Mining Extensions : The data definition language (DDL) part of DMX can be used to Create new data mining models and mining structures - CREATE MINING STRUCTURE, CREATE MINING MODEL Delete existing data mining models and mining structures - DROP MINING STRUCTURE, DROP MINING MODEL Export and import mining structures - EXPORT, IMPORT Copy data from one mining model to another - SELECT INTO
|
Data Mining Extensions : The data manipulation language (DML) part of DMX can be used to Train mining models - INSERT INTO Browse data in mining models - SELECT FROM Make predictions using mining model - SELECT ... FROM PREDICTION JOIN
|
Data Mining Extensions : A typical example is a singleton prediction query, which predicts, for a given customer supplied inline in the query, whether she will be interested in home loan products.
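Such a singleton prediction query might look as follows; it is shown here as a Python string for concreteness, and the mining model name and input columns are hypothetical.

```python
# Hypothetical DMX singleton prediction query; the mining model
# [HomeLoan Model] and its input columns are invented for illustration.
# PredictProbability and NATURAL PREDICTION JOIN are standard DMX constructs.
dmx = """
SELECT
  [HomeLoan Model].[Home Loan Buyer],
  PredictProbability([HomeLoan Model].[Home Loan Buyer]) AS [Probability]
FROM
  [HomeLoan Model]
NATURAL PREDICTION JOIN
(SELECT 35 AS [Age],
        'F' AS [Gender],
        75000 AS [Yearly Income]) AS [Customer]
"""
```

In practice such a query would be sent to a SQL Server Analysis Services instance through an ADOMD client; the inline SELECT supplies the single customer's attributes, hence "singleton".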
|
Data Mining Extensions : Data Mining Extensions (DMX) Reference, (at MSDN)
|
Data Version Control (software) : DVC is a free and open-source, platform-agnostic version control system for data, machine learning models, and experiments. It is designed to make ML models shareable, experiments reproducible, and to track versions of models, data, and pipelines. DVC works on top of Git repositories and cloud storage. The first (beta) version of DVC 0.6 was launched in May 2017. In May 2020, DVC 1.0 was publicly released by Iterative.ai.
|
Data Version Control (software) : DVC is designed to bring the best practices of software development into machine learning workflows. It does this by extending Git with cloud storage for datasets and machine learning models. Specifically, DVC makes machine learning operations: Codified: it codifies datasets and models by storing pointers to the data files in cloud storage. Reproducible: it allows users to reproduce experiments and rebuild datasets from raw data. These features also make it possible to automate the construction of datasets and the training, evaluation, and deployment of ML models.
|
Data Version Control (software) : DVC stores large files and datasets in separate storage, outside of Git. This storage can be on the user's computer or hosted on any major cloud storage provider, such as Amazon S3, Google Cloud Storage, and Microsoft Azure Blob Storage. DVC users may also set up a remote repository on any server and connect to it remotely. When a user stores their data and models in the remote repository, a small text file is created in their Git repository which points to the actual data in remote storage.
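The pointer-file mechanism can be sketched in a few lines. The layout below only loosely imitates DVC's .dvc files and its local cache, and dvc_add is a made-up helper, not DVC's actual API.

```python
# Sketch of pointer files over a content-addressed cache: the large file
# is hashed, copied into the cache under its hash, and only the small
# pointer file would be committed to Git.
import hashlib, shutil, tempfile
from pathlib import Path

def dvc_add(path, cache):
    """Hash a file into a content-addressed cache; return the pointer file."""
    md5 = hashlib.md5(path.read_bytes()).hexdigest()
    blob = cache / md5[:2] / md5[2:]               # content-addressed slot
    blob.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(path, blob)
    pointer = path.parent / (path.name + ".dvc")   # small file tracked by Git
    pointer.write_text(f"outs:\n- md5: {md5}\n  path: {path.name}\n")
    return pointer

# usage: "add" a small data file in a temporary directory
tmp = Path(tempfile.mkdtemp())
data = tmp / "data.csv"
data.write_text("a,b\n1,2\n")
ptr = dvc_add(data, tmp / "cache")
```

Because the pointer file contains only the hash and path, Git history stays small while any copy of the cache (local or remote) can materialize the actual data.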
|
Data Version Control (software) : DVC's features can be divided into three categories: data management, pipelines, and experiment tracking.
|
Data Version Control (software) : In 2022, Iterative released a free extension for Visual Studio Code (VS Code), a source-code editor made by Microsoft, which provides VS Code users with the ability to use DVC in their editors with additional user interface functionality.
|
Data Version Control (software) : In 2017, the first (beta) version of DVC 0.6 was publicly released (as a simple command line tool). It allowed data scientists to keep track of their machine learning processes and file dependencies in the simple form of git-like commands. It also allowed them to transform existing machine learning processes into reproducible DVC pipelines. DVC 0.6 solved most of the common problems that machine learning engineers and data scientists were facing: the reproducibility of machine learning experiments, as well as data versioning and low levels of collaboration between teams. Created by ex-Microsoft data scientist Dmitry Petrov, DVC aimed to integrate the best existing software development practices into machine learning operations. In 2018, Dmitry Petrov together with Ivan Shcheklein, an engineer and entrepreneur, founded Iterative.ai, an MLOps company that continued the development of DVC. Besides DVC, Iterative.ai is also behind open source tools like CML, MLEM, and Studio, the enterprise version of the open source tools. In June 2020, the Iterative.ai team released DVC 1.0. New features like multi-stage DVC files, run cache, plots, data transfer optimizations, hyperparameter tracking, and stable release cycles were added as a result of discussions and contributions from the community. In March 2021, DVC released DVC 2.0, which introduced ML experiments (experiment management), model checkpoints and metrics logging. ML experiments: To solve the problem of Git overhead, when hundreds of experiments need to be run in a single day and each experiment run requires additional Git commands, DVC 2.0 introduced the lightweight experiments feature. It allows its users to auto-track ML experiments and capture code changes. This eliminated the dependence upon additional services by saving data versions as metadata in Git, as opposed to relegating it to external databases or APIs. 
ML model checkpoints versioning: The new release also enables versioning of all checkpoints with corresponding code and data. Metrics logging: DVC 2.0 introduced a new open-source library DVC-Live that would provide functionality for tracking model metrics and organizing metrics in a way that DVC could visualize with navigation in Git history.
|
Data Version Control (software) : There are several open source projects that provide similar data version control capabilities to DVC, such as Git LFS, Dolt, Nessie, and lakeFS. These projects vary in how well they fit the different needs of data engineers and data scientists, such as scalability, supported file formats, support for tabular and unstructured data, supported data volumes, and more.
|
Data Version Control (software) : Official website dvc on GitHub VS Code extension
|
Deep Web Technologies : Deep Web Technologies is a software company that specializes in mining the Deep Web — the part of the Internet that is not directly searchable through ordinary web search engines. The company produces a proprietary software platform "Explorit" for searches. It also produces the federated search engine ScienceResearch.com, which provides free federated public searching of a large number of databases, and is also produced in specialized versions: Biznar for business research, Mednar for medical research, and customized versions for individual clients. In January 2020, Deep Web Technologies was acquired by UK technology company AMPLYFI Ltd. AMPLYFI Ltd was established in 2015 and provides business intelligence solutions to global corporations through artificial intelligence based software. AMPLYFI Ltd's CEO is Chris Ganje, and the company has offices in London and Cardiff.
|
Deep Web Technologies : Arnold, Stephen E. (June 10, 2008). "Deep Web Technologies: An Interview with Abe Lederman". ArnoldIT. Retrieved 2015-03-31. Mayfield, Dan (February 6, 2015). "For many companies, raising capital comes with one quirky rule". Albuquerque Business First. Retrieved 2015-03-31. Nguyen, Ivy (April 1, 2010). "Stanford collaborates on new database search tool". The Stanford Daily. Retrieved 2015-03-31. Quint, Barbara (June 11, 2010). "Interview with Deep Web Technologies' Abe Lederman". Unlimited Priorities. Retrieved 2015-03-31.
|
Distributed R : Distributed R is an open source, high-performance platform for the R language. It splits tasks between multiple processing nodes to reduce execution time and analyze large data sets. Distributed R enhances R by adding distributed data structures, parallelism primitives to run functions on distributed data, a task scheduler, and multiple data loaders. It is mostly used to implement distributed versions of machine learning tasks. Distributed R is written in C++ and R, and retains the familiar look and feel of R. As of February 2015, Hewlett-Packard (HP) provides enterprise support for Distributed R with proprietary additions such as a fast data loader from the Vertica database.
|
Distributed R : Distributed R was started in 2011 by Indrajit Roy, Shivaram Venkataraman, Alvin AuYoung, and Robert S. Schreiber as a research project at HP Labs. It was open sourced in 2014 under the GPLv2 license and is available on GitHub. In February 2015, Distributed R reached its first stable version 1.0, along with enterprise support from HP.
|
Distributed R : Distributed R is a platform to implement and execute distributed applications in R. The goal is to extend R for distributed computing, while retaining the simplicity and look-and-feel of R. Distributed R consists of the following components: Distributed data structures: Distributed R extends R's common data structures such as array, data.frame, and list to store data across multiple nodes. The corresponding Distributed R data structures are darray, dframe, and dlist. Many of the common data structure operations in R, such as colSums, rowSums, nrow and others, are also available on distributed data structures. Parallel loop: Programmers can use the parallel loop, called foreach, to manipulate distributed data structures and execute tasks in parallel. Programmers only specify the data structure and function to express applications, while the runtime schedules tasks and, if required, moves around data. Distributed algorithms: Distributed versions of common machine learning and graph algorithms, such as clustering, classification, and regression. Data loaders: Users can leverage Distributed R constructs to implement parallel connectors that load data from different sources. Distributed R already provides implementations to load data from files and databases to distributed data structures.
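The partitioned execution model described above can be approximated outside R. The sketch below uses Python threads in place of Distributed R's worker nodes, and darray and foreach here are toy stand-ins for the constructs described above, not Distributed R's API.

```python
# Toy model of the distributed-array + parallel-loop pattern: data is
# split into partitions, a function runs on each partition in parallel,
# and only the per-partition results travel back to the caller.
from concurrent.futures import ThreadPoolExecutor

def darray(values, nparts):
    """Split a flat list into nparts partitions (a toy distributed array)."""
    size = (len(values) + nparts - 1) // nparts
    return [values[i:i + size] for i in range(0, len(values), size)]

def foreach(partitions, func):
    """Apply func to each partition in parallel; only results travel back."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(func, partitions))

parts = darray(list(range(1, 9)), 4)   # [[1, 2], [3, 4], [5, 6], [7, 8]]
partial_sums = foreach(parts, sum)     # per-partition results
total = sum(partial_sums)              # combined on the "master"
```

In Distributed R the scheduler additionally decides task placement and moves data between nodes when needed, which this single-process sketch omits.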
|
Distributed R : HP Vertica provides tight integration between its database and the open source Distributed R platform. HP Vertica 7.1 includes features that enable fast, parallel loading from the Vertica database to Distributed R. This parallel Vertica loader can be more than five times faster than traditional ODBC based connectors. The Vertica database also supports deployment of machine learning models in the database. Distributed R users can call the distributed algorithms to create machine learning models, deploy them in the Vertica database, and use the models for in-database scoring and predictions. Architectural details of the Vertica database and Distributed R integration are described in the SIGMOD 2015 paper.
|
Distributed R : Official website
|
Dlib : Dlib is a general purpose cross-platform software library written in the programming language C++. Its design is heavily influenced by ideas from design by contract and component-based software engineering. Thus it is, first and foremost, a set of independent software components. It is open-source software released under a Boost Software License. Since development began in 2002, Dlib has grown to include a wide variety of tools. As of 2016, it contains software components for dealing with networking, threads, graphical user interfaces, data structures, linear algebra, machine learning, image processing, data mining, XML and text parsing, numerical optimization, Bayesian networks, and many other tasks. In recent years, much of the development has been focused on creating a broad set of statistical machine learning tools and in 2009 Dlib was published in the Journal of Machine Learning Research. Since then it has been used in a wide range of domains.
|
Dlib : Comparison of deep learning software
|
Dlib : Official website DLib: Library for Machine Learning
|
ELKI : ELKI (Environment for Developing KDD-Applications Supported by Index-Structures) is a data mining (KDD, knowledge discovery in databases) software framework developed for use in research and teaching. It was originally created by the database systems research unit at the Ludwig Maximilian University of Munich, Germany, led by Professor Hans-Peter Kriegel. The project has continued at the Technical University of Dortmund, Germany. It aims at allowing the development and evaluation of advanced data mining algorithms and their interaction with database index structures.
|
ELKI : The ELKI framework is written in Java and built around a modular architecture. Most currently included algorithms deal with clustering, outlier detection, and database indexing. The object-oriented architecture allows the combination of arbitrary algorithms, data types, distance functions, indexes, and evaluation measures. The Java just-in-time compiler optimizes all combinations to a similar extent, making benchmarking results more comparable if they share large parts of the code. When developing new algorithms or index structures, the existing components can be easily reused, and the type safety of Java detects many programming errors at compile time. ELKI has been used in data science to cluster sperm whale codas, for phoneme clustering, for anomaly detection in spaceflight operations, for bike sharing redistribution, and for traffic prediction.
|
ELKI : The university project is developed for use in teaching and research. The source code is written with extensibility and reusability in mind, but is also optimized for performance. The experimental evaluation of algorithms depends on many environmental factors, and implementation details can have a large impact on the runtime. ELKI aims at providing a shared codebase with comparable implementations of many algorithms. As a research project, it currently does not offer integration with business intelligence applications or an interface to common database management systems via SQL. The copyleft (AGPL) license may also be a hindrance to integration in commercial products; nevertheless it can be used to evaluate algorithms prior to developing one's own implementation for a commercial product. Furthermore, applying the algorithms requires knowledge about their usage, parameters, and study of the original literature. The audience is students, researchers, data scientists, and software engineers.
|
ELKI : ELKI is modeled around a database-inspired core, which uses a vertical data layout that stores data in column groups (similar to column families in NoSQL databases). This database core provides nearest neighbor search, range/radius search, and distance query functionality with index acceleration for a wide range of dissimilarity measures. Algorithms based on such queries (e.g. k-nearest-neighbor algorithm, local outlier factor and DBSCAN) can be implemented easily and benefit from the index acceleration. The database core also provides fast and memory efficient collections for object collections and associative structures such as nearest neighbor lists. ELKI makes extensive use of Java interfaces, so that it can be extended easily in many places. For example, custom data types, distance functions, index structures, algorithms, input parsers, and output modules can be added and combined without modifying the existing code. This includes the possibility of defining a custom distance function and using existing indexes for acceleration. ELKI uses a service loader architecture to allow publishing extensions as separate jar files. ELKI uses optimized collections for performance rather than the standard Java API. For loops, for example, are written in a style similar to C++ iterators: an iterator object is explicitly advanced and queried, rather than using Java's Iterable and Iterator interfaces. In contrast to typical Java iterators (which can only iterate over objects), this conserves memory, because the iterator can internally use primitive values for data storage. The reduced garbage collection improves the runtime. Optimized collections libraries such as GNU Trove3, Koloboke, and fastutil employ similar optimizations. ELKI includes data structures such as object collections and heaps (for, e.g., nearest neighbor search) using such optimizations.
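The memory argument behind primitive-valued collections can be illustrated outside Java. In the Python sketch below, the standard array module plays the role of ELKI's primitive collections, while a list of boxed float objects pays a per-object overhead analogous to Java's boxed Double values.

```python
# Boxed objects vs. a contiguous primitive buffer: the primitive buffer
# avoids per-element object headers and pointer indirection.
import array, sys

n = 10_000
boxed = [float(i) for i in range(n)]    # n separate Python float objects
primitive = array.array("d", range(n))  # one contiguous buffer of C doubles

boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)
primitive_bytes = sys.getsizeof(primitive)
# The primitive buffer is several times smaller, and produces no
# per-element garbage for the collector to trace.
```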
|
ELKI : The visualization module uses SVG for scalable graphics output, and Apache Batik for rendering of the user interface as well as lossless export into PostScript and PDF for easy inclusion in scientific publications in LaTeX. Exported files can be edited with SVG editors such as Inkscape. Since cascading style sheets are used, the graphics design can be restyled easily. Unfortunately, Batik is rather slow and memory intensive, so the visualizations are not very scalable to large data sets (for larger data sets, only a subsample of the data is visualized by default).
|
ELKI : Version 0.4, presented at the "Symposium on Spatial and Temporal Databases" 2011, which included various methods for spatial outlier detection, won the conference's "best demonstration paper award".
|
ELKI : Select included algorithms: Cluster analysis: K-means clustering (including fast algorithms such as Elkan, Hamerly, Annulus, and Exponion k-Means, and robust variants such as k-means--) K-medians clustering K-medoids clustering (PAM) (including FastPAM and approximations such as CLARA, CLARANS) Expectation-maximization algorithm for Gaussian mixture modeling Hierarchical clustering (including the fast SLINK, CLINK, NNChain and Anderberg algorithms) Single-linkage clustering Leader clustering DBSCAN (Density-Based Spatial Clustering of Applications with Noise, with full index acceleration for arbitrary distance functions) OPTICS (Ordering Points To Identify the Clustering Structure), including the extensions OPTICS-OF, DeLi-Clu, HiSC, HiCO and DiSH HDBSCAN Mean-shift clustering BIRCH clustering SUBCLU (Density-Connected Subspace Clustering for High-Dimensional Data) CLIQUE clustering ORCLUS and PROCLUS clustering COPAC, ERiC and 4C clustering CASH clustering DOC and FastDOC subspace clustering P3C clustering Canopy clustering algorithm Anomaly detection: k-Nearest-Neighbor outlier detection LOF (Local outlier factor) LoOP (Local Outlier Probabilities) OPTICS-OF DB-Outlier (Distance-Based Outliers) LOCI (Local Correlation Integral) LDOF (Local Distance-Based Outlier Factor) EM-Outlier SOD (Subspace Outlier Degree) COP (Correlation Outlier Probabilities) Frequent Itemset Mining and association rule learning Apriori algorithm Eclat FP-growth Dimensionality reduction Principal component analysis Multidimensional scaling T-distributed stochastic neighbor embedding (t-SNE) Spatial index structures and other search indexes: R-tree R*-tree M-tree k-d tree X-tree Cover tree iDistance NN descent Locality sensitive hashing (LSH) Evaluation: Precision and recall, F1 score, Average Precision Receiver operating characteristic (ROC curve) Discounted cumulative gain (including NDCG) Silhouette index Davies–Bouldin index Dunn index Density-based cluster validation (DBCV) 
Visualization Scatter plots Histograms Parallel coordinates (also in 3D, using OpenGL) Other: Statistical distributions and many parameter estimators, including robust MAD based and L-moment based estimators Dynamic time warping Change point detection in time series Intrinsic dimensionality estimators
|
ELKI : Version 0.1 (July 2008) contained several algorithms from cluster analysis and anomaly detection, as well as some index structures such as the R*-tree. The focus of the first release was on subspace clustering and correlation clustering algorithms. Version 0.2 (July 2009) added functionality for time series analysis, in particular distance functions for time series. Version 0.3 (March 2010) extended the choice of anomaly detection algorithms and visualization modules. Version 0.4 (September 2011) added algorithms for geo data mining and support for multi-relational database and index structures. Version 0.5 (April 2012) focuses on the evaluation of cluster analysis results, adding new visualizations and some new algorithms. Version 0.6 (June 2013) introduces a new 3D adaption of parallel coordinates for data visualization, apart from the usual additions of algorithms and index structures. Version 0.7 (August 2015) adds support for uncertain data types, and algorithms for the analysis of uncertain data. Version 0.7.5 (February 2019) adds additional clustering algorithms, anomaly detection algorithms, evaluation measures, and indexing structures. Version 0.8 (October 2022) adds automatic index creation, garbage collection, and incremental priority search, as well as many more algorithms such as BIRCH.
|
ELKI : scikit-learn: machine learning library in Python Weka: A similar project by the University of Waikato, with a focus on classification algorithms RapidMiner: An application available commercially (a restricted version is available as open source) KNIME: An open source platform which integrates various components for machine learning and data mining
|
ELKI : Comparison of statistical packages
|
ELKI : Official website of ELKI with download and documentation.
|
Feature Selection Toolbox : Feature Selection Toolbox (FST) is software primarily for feature selection in the machine learning domain, written in C++, developed at the Institute of Information Theory and Automation (UTIA), of the Czech Academy of Sciences.
|
Feature Selection Toolbox : The first generation of Feature Selection Toolbox (FST1) was a Windows application with user interface allowing users to apply several sub-optimal, optimal and mixture-based feature selection methods on data stored in a trivial proprietary textual flat file format.
|
Feature Selection Toolbox : The third generation of Feature Selection Toolbox (FST3) was a library without user interface, written to be more efficient and versatile than the original FST1. FST3 supports several standard data mining tasks, more specifically, data preprocessing and classification, but its main focus is on feature selection. In feature selection context, it implements several common as well as less usual techniques, with particular emphasis put on threaded implementation of various sequential search methods (a form of hill-climbing). Implemented methods include individual feature ranking, floating search, oscillating search (suitable for very high-dimension problems) in randomized or deterministic form, optimal methods of branch and bound type, probabilistic class distance criteria, various classifier accuracy estimators, feature subset size optimization, feature selection with pre-specified feature weights, criteria ensembles, hybrid methods, detection of all equivalent solutions, or two-criterion optimization. FST3 is more narrowly specialized than popular software like the Waikato Environment for Knowledge Analysis Weka, RapidMiner or PRTools. By default, techniques implemented in the toolbox are predicated on the assumption that the data is available as a single flat file in a simple proprietary format or in Weka format ARFF, where each data point is described by a fixed number of numeric attributes. FST3 is provided without user interface, and is meant to be used by users familiar both with machine learning and C++ programming. The older FST1 software is more suitable for simple experimenting or educational purposes because it can be used with no need to code in C++.
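Sequential forward selection, the simplest of the hill-climbing searches mentioned above, can be sketched as follows. The score function here is a toy criterion with made-up per-feature utilities, standing in for FST3's class-distance and classifier-accuracy criteria, and forward_selection is an illustrative helper, not part of the FST3 library.

```python
# Greedy sequential forward selection: repeatedly add the single feature
# that most improves the subset criterion until the target size is reached.
def score(subset):
    """Toy criterion: made-up per-feature utilities, penalizing subset size."""
    utility = {0: 0.5, 1: 0.3, 2: 0.4, 3: 0.1}
    return sum(utility[f] for f in subset) - 0.05 * len(subset) ** 2

def forward_selection(n_features, target_size):
    """Hill-climb: grow the subset one best feature at a time."""
    selected = set()
    while len(selected) < target_size:
        candidates = [f for f in range(n_features) if f not in selected]
        best = max(candidates, key=lambda f: score(selected | {f}))
        selected.add(best)
    return selected

chosen = forward_selection(4, 2)
```

Floating and oscillating search extend this scheme with conditional removal steps so the search can escape locally optimal subsets, which is where much of FST3's added value lies.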
|
Feature Selection Toolbox : In 1999, development of the first Feature Selection Toolbox version started at UTIA as part of a PhD thesis. It was originally developed in the Optima++ (later renamed Power++) RAD C++ environment. In 2002, development of the first FST generation was suspended, mainly due to the end of Sybase's support for the then-used development environment. In 2002–2008, the FST kernel was recoded and used for research experimentation within UTIA only. In 2009, a third recoding of the FST kernel from scratch began. In 2010, FST3 was made publicly available in the form of a C++ library without a GUI. The accompanying webpage collects feature selection related links, references, documentation and the original FST1 available for download. In 2011, an update of FST3 to version 3.1 added new methods (particularly a novel dependency-aware feature ranking suitable for very-high-dimensional recognition problems) and core code improvements.
|
Feature Selection Toolbox : Feature selection Pattern recognition Machine learning Data mining OpenNN, Open neural networks library for predictive analytics Weka, comprehensive and popular Java open-source software from University of Waikato RapidMiner, formerly Yet Another Learning Environment (YALE) a commercial machine learning framework PRTools of the Delft University of Technology Infosel++ specialized in information theory based feature selection Tooldiag a C++ pattern recognition toolbox List of numerical analysis software
|
FICO : FICO (legal name: Fair Isaac Corporation), originally Fair, Isaac and Company, is an American data analytics company based in Bozeman, Montana, focused on credit scoring services. It was founded by Bill Fair and Earl Isaac in 1956. Its FICO score, a measure of consumer credit risk, has become a fixture of consumer lending in the United States. In 2013, lenders purchased more than 10 billion FICO scores and about 30 million American consumers accessed their scores themselves. The company reported a revenue of $1.29 billion for the fiscal year of 2020.
|
FICO : FICO was founded in 1956 as Fair, Isaac and Company by engineer William R. "Bill" Fair and mathematician Earl Judson Isaac. The two met while working at the Stanford Research Institute in Menlo Park, California. Selling its first credit scoring system two years after the company's creation, FICO pitched its system to fifty American lenders. FICO went public in July 1987 and is traded on the New York Stock Exchange. The company debuted its first general-purpose FICO score in 1989. FICO scores are based on credit reports and "base" FICO scores range from 300 to 850, while industry-specific scores range from 250 to 900. Lenders use the scores to gauge a potential borrower's creditworthiness. Fannie Mae and Freddie Mac first began using FICO scores to help determine which American consumers qualified for mortgages bought and sold by the companies in 1995.
|
FICO : DynaMark (1992); Risk Management Technologies (1997); Prevision (1997); Nykamp Consulting Group (2001); HNC Software (2002); NAREX (2003); Diversified Healthcare Services (2003); Seurat (2003); London Bridge Software (2004); Braun Consulting (2004); RulesPower (2005); Dash Optimization (2008); Entiera (2012); Adeptra (2012); CR Software (2012); Infoglide (2013); InfoCentricity (2014); Karmasphere (2014); TONBELLER AG (2015); QuadMetrics (2016); GoOn (2018); EZMCOM (2019)
|
FICO : In March 2020, the US Department of Justice (DOJ) opened an antitrust investigation into FICO, which was reported to be closed in December 2020. In March 2024, US Senator Josh Hawley sent a letter to the DOJ's Antitrust Division urging it to open an investigation into FICO for anti-competitive practices, stating that the company "appears to be using its monopolistic power over the credit scoring market to increase costs for mortgage lenders." Between 2020 and 2023, at least 10 antitrust class action lawsuits were filed against FICO involving "business to business" purchases of FICO scores, with the plaintiffs alleging that FICO maintains monopoly power through anticompetitive agreements and charges artificially inflated prices for FICO scores. In September 2023, US District Judge Edmond Chang ruled that the plaintiffs, which include credit unions, banks, mortgage lenders, real estate brokerages, auto dealers, and other companies, had presented enough evidence that FICO had violated antitrust law to allow the lawsuits to proceed.
|
FICO : FICO is headquartered in Bozeman, Montana, with additional U.S. locations in San Jose, California; Roseville, Minnesota; San Diego, California; San Rafael, California; Fairfax, Virginia; and Austin, Texas. The company has international locations in Australia, Brazil, Canada, China, Germany, India, Italy, Japan, South Korea, Lithuania, Malaysia, the Philippines, Poland, Russia, Singapore, South Africa, Spain, Taiwan, Thailand, Turkey, and the United Kingdom.
|
FICO : A measure of credit risk, FICO scores are available through all of the major consumer reporting agencies in the United States: Equifax, Experian, and TransUnion. FICO scores are also offered in other markets, including Mexico and Canada, as well as through the fourth U.S. credit reporting bureau, PRBC.
|
FICO : Official website Business data for Fair Isaac Corporation: How Does FICO Calculate a Score?
|