U-Net : Tensorflow Unet by J. Akeret (2017): U-Net source code from the Pattern Recognition and Image Processing group at the Computer Science Department of the University of Freiburg, Germany.
NetOwl : NetOwl is a suite of multilingual text and identity analytics products that analyze big data in the form of text data – reports, web, social media, etc. – as well as structured entity data about people, organizations, places, and things. NetOwl utilizes artificial intelligence (AI)-based approaches, including natural language processing (NLP), machine learning (ML), and computational linguistics, to extract entities, relationships, and events; to perform sentiment analysis; to assign latitude/longitude to geographical references in text; to translate names written in foreign languages; and to perform name matching and identity resolution. NetOwl's uses include semantic search and discovery, geospatial analysis, intelligence analysis, content enrichment, compliance monitoring, cyber threat monitoring, risk management, and bioinformatics.
NetOwl : The first NetOwl product was NetOwl Extractor, which was initially released in 1996. Since then, Extractor has added many new capabilities, including relationship and event extraction, categorization, name translation, geotagging, and sentiment analysis, as well as entity extraction in other languages. Other products were added later to the NetOwl suite, namely TextMiner, NameMatcher, and EntityMatcher. NetOwl has participated in several 3rd party-sponsored text and entity analytics software benchmarking events. NetOwl Extractor was the top-scoring named entity extraction system at the DARPA-sponsored Message Understanding Conference MUC-6 and the top-scoring link and event extraction system in MUC-7. It was also the top-scoring system at several of the NIST-sponsored Automatic Content Extraction (ACE) evaluation tasks. NetOwl NameMatcher was the top-scoring system at the MITRE Challenge for Multicultural Person Name Matching.
NetOwl : The NetOwl suite includes, among others, the following text and entity analytics products: NetOwl Extractor, NetOwl TextMiner, NetOwl NameMatcher, and NetOwl EntityMatcher.
NetOwl : Knowledge extraction Text mining Data mining Computational linguistics Named entity recognition Unstructured data Document classification
NetOwl : NetOwl website
Uncertain data : In computer science, uncertain data is data that contains noise that makes it deviate from the correct, intended or original values. In the age of big data, uncertainty or data veracity is one of the defining characteristics of data. Data is constantly growing in volume, variety, velocity and uncertainty (the inverse of veracity). Uncertain data is found in abundance today on the web, in sensor networks, and within enterprises, in both their structured and unstructured sources. For example, there may be uncertainty regarding the address of a customer in an enterprise dataset, or regarding the temperature readings captured by a sensor due to aging of the sensor. In 2012, IBM called out managing uncertain data at scale in its Global Technology Outlook report, which presents a comprehensive analysis looking three to ten years into the future, seeking to identify significant, disruptive technologies that will change the world. In order to make confident business decisions based on real-world data, analyses must necessarily account for many different kinds of uncertainty present in very large amounts of data. Analyses based on uncertain data will have an effect on the quality of subsequent decisions, so the degree and types of inaccuracies in this uncertain data cannot be ignored. Uncertain data arises in sensor networks; in text, where noisy text is found in abundance on social media, the web, and within enterprises, and where structured and unstructured data may be old, outdated, or plainly incorrect; and in modeling, where the mathematical model may only be an approximation of the actual process. When representing such data in a database, an appropriate uncertain database model needs to be selected.
Uncertain data : One way to represent uncertain data is through probability distributions. Let us take the example of a relational database. There are three main ways to represent uncertainty as probability distributions in such a database model. In attribute uncertainty, each uncertain attribute in a tuple is subject to its own independent probability distribution. For example, if readings are taken of temperature and wind speed, each would be described by its own probability distribution, as knowing the reading for one measurement would not provide any information about the other. In correlated uncertainty, multiple attributes may be described by a joint probability distribution. For example, if readings are taken of the position of an object, and the x- and y-coordinates stored, the probability of different values may depend on the distance from the recorded coordinates. As distance depends on both coordinates, it may be appropriate to use a joint distribution for these coordinates, as they are not independent. In tuple uncertainty, all the attributes of a tuple are subject to a joint probability distribution. This covers the case of correlated uncertainty, but also includes the case where there is a probability of a tuple not belonging in the relevant relation, which is indicated by all the probabilities not summing to one. For example, if the alternatives of a tuple in a probabilistic database carry probabilities that sum to 0.9, then the tuple has a 10% chance of not existing in the database.
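To make the tuple-uncertainty case concrete, here is a minimal, hypothetical sketch in Python; the class names and the sensor reading are illustrative, not taken from any particular probabilistic database system. A tuple carries a list of mutually exclusive alternatives with probabilities, and whatever probability mass is missing from 1 is the chance that the tuple is absent from the relation.

```python
# A minimal sketch (illustrative, not from the source article) of tuple-level
# uncertainty: a probabilistic tuple lists mutually exclusive alternatives,
# each with a probability; if the probabilities sum to less than 1, the
# remainder is the probability that the tuple is absent from the relation.

from dataclasses import dataclass

@dataclass
class Alternative:
    values: dict        # attribute name -> attribute value
    probability: float

@dataclass
class ProbabilisticTuple:
    alternatives: list

    def existence_probability(self) -> float:
        """Probability that the tuple exists at all."""
        return sum(a.probability for a in self.alternatives)

    def absence_probability(self) -> float:
        """Probability that the tuple does not belong to the relation."""
        return 1.0 - self.existence_probability()

# Hypothetical sensor reading with two alternative temperature values.
reading = ProbabilisticTuple([
    Alternative({"sensor_id": 7, "temperature_c": 21.0}, 0.6),
    Alternative({"sensor_id": 7, "temperature_c": 23.5}, 0.3),
])

print(reading.existence_probability())            # 0.9
print(round(reading.absence_probability(), 2))    # 0.1 -> 10% chance the tuple is absent
```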
Uncertain data : Dirk Habich; Peter B. Volk; Clemens Utzny; Ralf Dittmann; Wolfgang Lehner. "Error-Aware Density-Based Clustering of Imprecise Measurement Values". Seventh IEEE International Conference on Data Mining Workshops (ICDM Workshops 2007). IEEE. Peter B. Volk; Frank Rosenthal; Martin Hahmann; Dirk Habich; Wolfgang Lehner. "Clustering Uncertain Data with Possible Worlds". Proceedings of the 1st Workshop on Management and Mining of Uncertain Data, in conjunction with the 25th International Conference on Data Engineering, 2009. IEEE.
Region Based Convolutional Neural Networks : Region-based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision, and specifically object detection and localization. The original goal of R-CNN was to take an input image and produce a set of bounding boxes as output, where each bounding box contains an object and also the category (e.g. car or pedestrian) of the object. In general, R-CNN architectures perform selective search over feature maps outputted by a CNN. R-CNN has been extended to perform other computer vision tasks, such as: tracking objects from a drone-mounted camera, locating text in an image, and enabling object detection in Google Lens. Mask R-CNN is also one of seven tasks in the MLPerf Training Benchmark, which is a competition to speed up the training of neural networks.
Region Based Convolutional Neural Networks : The following covers some of the versions of R-CNN that have been developed. November 2013: R-CNN. April 2015: Fast R-CNN. June 2015: Faster R-CNN. March 2017: Mask R-CNN. June 2019: Mesh R-CNN adds the ability to generate a 3D mesh from a 2D image.
Region Based Convolutional Neural Networks : Parthasarathy, Dhruv (2017-04-27). "A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN". Medium. Retrieved 2024-09-11.
Charles Lynn Wayne : Charles Lynn Wayne (1943 – November 23, 2024) was an American program manager at the Defense Advanced Research Projects Agency (DARPA). He was instrumental in creating the Common Task Method for advancing speech recognition and natural language processing technologies, centered on public benchmarks and datasets, and in establishing Human Language Technology (HLT) programs at DARPA, including TIDES (Translingual Information Detection, Extraction, and Summarization) and EARS (Effective, Affordable, Reusable Speech-to-Text).
Charles Lynn Wayne : Charles Lynn Wayne was born in Lake City, Florida, in 1943. His father, also Charles Wayne, was a pilot stationed at the Naval Air Station there. His mother was Dorothy Rodenhausen Wayne. He grew up in Guam, Philadelphia, Washington, Providence, and Norfolk, and had lived in Maryland since 1967. He attended St. Andrew's School in Middletown, Delaware, and went on to MIT, where he earned a degree in electrical engineering. After graduation he went to work at the National Security Agency, where he remained for over 40 years, except for a two-year period of Army service in Korea. He received the Meritorious Civilian Service Award for the "inestimable value" of his group's work, which produced valuable intelligence for over a decade. He was a Program Manager at the Defense Advanced Research Projects Agency (DARPA) twice, from 1988 to 1992 and from 2001 to 2005. He received the Office of the Secretary of Defense Medal for Exceptional Civilian Service for his second DARPA term. He retired from DARPA in December 2004. He enjoyed reading history and fiction and playing Go. He was a member of the Cosmos Club. He died at his home in Chevy Chase, Maryland, on November 23, 2024. He had pulmonary fibrosis and heart disease. His ashes were interred at Arlington National Cemetery. He was survived by his wife of 58 years, Barbara Hatfield; two sons, Leonard (Angela Bradbery) and Andrew (Florence Kao); and two grandsons, Vincent and Gregory. A sister, Pamela Wayne Murphy, died in 2003.
OpenAI o1 : OpenAI o1 is a reflective generative pre-trained transformer (GPT). A preview of o1 was released by OpenAI on September 12, 2024. o1 spends time "thinking" before it answers, making it better at complex reasoning tasks, science and programming than GPT-4o. The full version was released to ChatGPT users on December 5, 2024.
OpenAI o1 : According to OpenAI, o1 has been trained using a new optimization algorithm and a dataset specifically tailored to it, while also incorporating reinforcement learning into its training. OpenAI described o1 as a complement to GPT-4o rather than a successor. o1 spends additional time thinking (generating a chain of thought) before generating an answer, which makes it better for complex reasoning tasks, particularly in science and mathematics. Compared to previous models, o1 has been trained to generate long "chains of thought" before returning a final answer. According to Mira Murati, this ability to think before responding represents a new, additional paradigm, which improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power. OpenAI's test results suggest a correlation between accuracy and the logarithm of the amount of compute spent thinking before answering. o1-preview performed approximately at a PhD level on benchmark tests related to physics, chemistry, and biology. On the American Invitational Mathematics Examination, it solved 83% (12.5/15) of the problems, compared to 13% (1.8/15) for GPT-4o. It also ranked in the 89th percentile in Codeforces coding competitions. o1-mini is faster and 80% cheaper than o1-preview. It is particularly suitable for programming and STEM-related tasks, but does not have the same "broad world knowledge" as o1-preview. OpenAI noted that o1's reasoning capabilities make it better at adhering to safety rules provided in the prompt's context window. OpenAI reported that during a test, one instance of o1-preview exploited a misconfiguration to succeed at a task that should have been infeasible due to a bug. OpenAI also granted early access to the UK and US AI Safety Institutes for research, evaluation, and testing. According to OpenAI's assessments, o1-preview and o1-mini crossed into "medium risk" in CBRN (biological, chemical, radiological, and nuclear) weapons. Dan Hendrycks wrote that "The model already outperforms PhD scientists most of the time on answering questions related to bioweapons." He suggested that these concerning capabilities will continue to increase.
OpenAI o1 : o1 usually requires more computing time and power than other GPT models by OpenAI, because it generates long chains of thought before making the final response. According to OpenAI, o1 may "fake alignment", that is, generate a response that is contrary to accuracy and its own chain of thought, in about 0.38% of cases. OpenAI forbids users from trying to reveal o1's chain of thought, which is hidden by design and not trained to comply with the company's policies. Prompts are monitored, and users who intentionally or accidentally violate this may lose their access to o1. OpenAI cites AI safety and competitive advantage as reasons for the restriction, which has been described as a loss of transparency by developers who work with large language models (LLMs). In October 2024, researchers at Apple submitted a preprint reporting that LLMs such as o1 may be replicating reasoning steps from the models' own training data. By changing the numbers and names used in a math problem or simply running the same problem again, LLMs would perform somewhat worse than their best benchmark results. Adding extraneous but logically inconsequential information to the problems caused a much greater drop in performance, from −17.5% for o1-preview and −29.1% for o1-mini, to −65.7% for the worst model tested.
Ugly duckling theorem : The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969 (pp. 376–377).
Ugly duckling theorem : Suppose there are n things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the n objects. There are 2^n such ways, the size of the power set of n objects. One might try to use that to measure the similarity between two objects, counting how many sets they have in common. However, one cannot. Any two objects have exactly the same number of classes in common if we can form any possible class, namely 2^(n−1) (half the total number of classes there are). To see that this is so, one may imagine that each class is represented by an n-bit string (or binary encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are 2^n such strings. As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first 2^n/2 numbers will have bit #1 set to zero, and the second 2^n/2 will have it set to one. Within each of those blocks, the top 2^n/4 will have bit #2 set to zero and the other 2^n/4 will have it set to one, so they agree on two blocks of 2^n/4, or on half of all the cases, no matter which two elements one picks. So if we have no preconceived bias about which categories are better, everything is then equally similar (or equally dissimilar). The number of predicates simultaneously satisfied by two non-identical elements is constant over all such pairs. Thus, some kind of inductive bias is needed to make judgements and to prefer certain categories over others.
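The counting argument can be checked by brute force. The following sketch is an illustration, not part of the theorem's original presentation: it assumes a toy universe of five objects, enumerates all 2^n classes as subsets, and verifies that every pair of distinct objects is treated the same way (both members or both non-members) by exactly 2^(n−1) of them.

```python
# Brute-force check of the counting argument above, for a small assumed
# universe of n = 5 objects: enumerate all 2**n possible classes as subsets
# and confirm that every pair of distinct objects is treated identically
# (both in or both out) by exactly 2**(n-1) of them, i.e. half of all classes.

from itertools import chain, combinations

def all_classes(universe):
    """Every subset of the universe, i.e. the power set."""
    return chain.from_iterable(combinations(universe, r)
                               for r in range(len(universe) + 1))

n = 5
universe = list(range(n))
classes = [set(s) for s in all_classes(universe)]
assert len(classes) == 2 ** n

for a, b in combinations(universe, 2):
    agreements = sum((a in c) == (b in c) for c in classes)
    assert agreements == 2 ** (n - 1)   # the same count for every pair

print(f"Every pair of the {n} objects agrees on {2 ** (n - 1)} "
      f"of the {2 ** n} possible classes.")
```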
Ugly duckling theorem : A possible way around the ugly duckling theorem would be to introduce a constraint on how similarity is measured by limiting the properties involved in classification, for instance, between A and B. However Medin et al. (1993) point out that this does not actually resolve the arbitrariness or bias problem since in what respects A is similar to B: "varies with the stimulus context and task, so that there is no unique answer, to the question of how similar is one object to another". For example, "a barberpole and a zebra would be more similar than a horse and a zebra if the feature striped had sufficient weight. Of course, if these feature weights were fixed, then these similarity relations would be constrained". Yet the property "striped" as a weight 'fix' or constraint is arbitrary itself, meaning: "unless one can specify such criteria, then the claim that categorization is based on attribute matching is almost entirely vacuous". Stamos (2003) remarked that some judgments of overall similarity are non-arbitrary in the sense they are useful: "Presumably, people's perceptual and conceptual processes have evolved that information that matters to human needs and goals can be roughly approximated by a similarity heuristic... If you are in the jungle and you see a tiger but you decide not to stereotype (perhaps because you believe that similarity is a false friend), then you will probably be eaten. In other words, in the biological world stereotyping based on veridical judgments of overall similarity statistically results in greater survival and reproductive success." Unless some properties are considered more salient, or 'weighted' more important than others, everything will appear equally similar, hence Watanabe (1986) wrote: "any objects, in so far as they are distinguishable, are equally similar". In a weaker setting that assumes infinitely many properties, Murphy and Medin (1985) give an example of two putative classified things, plums and lawnmowers: "Suppose that one is to list the attributes that plums and lawnmowers have in common in order to judge their similarity. It is easy to see that the list could be infinite: Both weigh less than 10,000 kg (and less than 10,001 kg), both did not exist 10,000,000 years ago (and 10,000,001 years ago), both cannot hear well, both can be dropped, both take up space, and so on. Likewise, the list of differences could be infinite… any two entities can be arbitrarily similar or dissimilar by changing the criterion of what counts as a relevant attribute." According to Woodward, the ugly duckling theorem is related to Schaffer's Conservation Law for Generalization Performance, which states that all algorithms for learning of boolean functions from input/output examples have the same overall generalization performance as random guessing. The latter result is generalized by Woodward to functions on countably infinite domains.
Ugly duckling theorem : No free lunch in search and optimization No free lunch theorem Identity of indiscernibles – Classification (discernibility) is possible (with or without a bias), but there cannot be separate objects or entities that have all their properties in common. New riddle of induction
Vision transformer : A vision transformer (ViT) is a transformer designed for computer vision. A ViT decomposes an input image into a series of patches (rather than text into tokens), serializes each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings. ViTs were designed as alternatives to convolutional neural networks (CNNs) in computer vision applications. They have different inductive biases, training stability, and data efficiency. Compared to CNNs, ViTs are less data efficient, but have higher capacity. Some of the largest modern computer vision models are ViTs, such as one with 22B parameters. Subsequent to its publication, many variants were proposed, including hybrid architectures combining features of ViTs and CNNs. ViTs have found application in image recognition, image segmentation, weather prediction, and autonomous driving.
Vision transformer : Transformers were introduced in Attention Is All You Need (2017), and have found widespread use in natural language processing. A 2019 paper applied ideas from the Transformer to computer vision. Specifically, the authors started with a ResNet, a standard convolutional neural network used for computer vision, and replaced all convolutional kernels with the self-attention mechanism found in a Transformer. This resulted in superior performance, but it is not a vision transformer. In 2020, an encoder-only Transformer was adapted for computer vision, yielding the ViT, which reached state of the art in image classification, overcoming the previous dominance of CNNs. The masked autoencoder (2022) extended ViT to work with unsupervised training. The vision transformer and the masked autoencoder, in turn, stimulated new developments in convolutional neural networks, and there has since been cross-fertilization between the CNN approach and the ViT approach. In 2021, several important variants of the vision transformer were proposed. These variants are mainly intended to be more efficient, more accurate, or better suited to a specific domain. Two studies improved the efficiency and robustness of ViT by adding a CNN as a preprocessor. The Swin Transformer achieved state-of-the-art results on some object detection datasets such as COCO, by using convolution-like sliding windows of attention and a pyramidal multi-scale process as in classical computer vision.
Vision transformer : The basic architecture, used by the original 2020 paper, is as follows. In summary, it is a BERT-like encoder-only Transformer. The input image is of type ℝ^(H×W×C), where H, W, C are height, width, and channels (RGB). It is then split into square-shaped patches of type ℝ^(P×P×C). Each patch is pushed through a linear operator to obtain a vector (the "patch embedding"). The position of the patch is also transformed into a vector by "position encoding". The two vectors are added, then pushed through several Transformer encoders. The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in natural language processing, as representation vectors flow through a transformer, they incorporate more and more semantic relations between words, from syntax to semantics. The above architecture turns an image into a sequence of vector representations. To use these for downstream applications, an additional head needs to be trained to interpret them. For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network.
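As a rough illustration of the front end described above, the following NumPy sketch splits an image into patches, applies one shared linear projection, and adds position embeddings. The image size, patch size, and embedding dimension are assumptions chosen for illustration, not the exact configuration of any published ViT, and the random matrices stand in for learned parameters.

```python
# A minimal NumPy sketch of the ViT front end: split an image into P x P
# patches, flatten each patch, project it with a single matrix
# multiplication, and add a position embedding. Sizes are illustrative.

import numpy as np

H, W, C = 224, 224, 3      # input image: height, width, channels
P = 16                     # patch size
D = 768                    # embedding dimension
num_patches = (H // P) * (W // P)

rng = np.random.default_rng(0)
image = rng.standard_normal((H, W, C))

# Split into non-overlapping P x P x C patches and flatten each one.
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(num_patches, P * P * C)           # (196, 768)

# "Patch embedding": one linear projection shared by all patches.
W_embed = rng.standard_normal((P * P * C, D)) * 0.02
patch_embeddings = patches @ W_embed                         # (196, D)

# Position embeddings (learned in a real ViT), added elementwise.
pos_embed = rng.standard_normal((num_patches, D)) * 0.02
tokens = patch_embeddings + pos_embed                        # input to the Transformer encoder

print(tokens.shape)   # (196, 768)
```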
Vision transformer : Typically, ViT uses patch sizes larger than standard CNN kernels (3x3 to 7x7). ViT is more sensitive to the choice of the optimizer, hyperparameters, and network depth. Preprocessing with a layer of smaller-size, overlapping (stride < size) convolutional filters helps with performance and stability. This different behavior seems to derive from the different inductive biases they possess. A CNN applies the same set of filters across the entire image, which allows it to be more data efficient and less sensitive to local perturbations. ViT applies self-attention, allowing it to easily capture long-range relationships between patches. ViTs also require more data to train, but they can keep benefiting from additional training data, whereas a CNN might not improve after training on a large enough training dataset. ViT also appears more robust to input image distortions such as adversarial patches or permutations.
Vision transformer : ViTs have been used in many computer vision tasks with excellent results, and in some cases even state-of-the-art, including image classification, object detection, video deepfake detection, image segmentation, anomaly detection, image synthesis, cluster analysis, and autonomous driving. ViTs have also been used for image generation as backbones for GANs and for diffusion models (diffusion transformer, or DiT). DINO has been demonstrated to learn useful representations for clustering images and exploring morphological profiles on biological datasets, such as images generated with the Cell Painting assay. In 2024, a 113 billion-parameter ViT model was proposed (the largest ViT to date) for weather and climate prediction, and trained on the Frontier supercomputer with a throughput of 1.6 exaFLOPs.
Vision transformer : Transformer (machine learning model) Convolutional neural network Attention (machine learning) Perceiver Deep learning PyTorch TensorFlow
Vision transformer : Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "11.8. Transformers for Vision". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3. Steiner, Andreas; Kolesnikov, Alexander; Zhai, Xiaohua; Wightman, Ross; Uszkoreit, Jakob; Beyer, Lucas (June 18, 2021). "How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers". arXiv:2106.10270 [cs.CV].
Flow-based generative model : A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging normalizing flow, which is a statistical method using the change-of-variable law of probabilities to transform a simple distribution into a complex one. The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation. In contrast, many alternative generative modeling methods such as variational autoencoder (VAE) and generative adversarial network do not explicitly represent the likelihood function.
Flow-based generative model : Let z_0 be a (possibly multivariate) random variable with distribution p_0(z_0). For i = 1, ..., K, let z_i = f_i(z_{i−1}) be a sequence of random variables transformed from z_0. The functions f_1, ..., f_K should be invertible, i.e. the inverse function f_i^{−1} exists. The final output z_K models the target distribution. The log-likelihood of z_K is (see derivation): log p_K(z_K) = log p_0(z_0) − ∑_{i=1}^{K} log |det (df_i(z_{i−1}) / dz_{i−1})|. To efficiently compute the log-likelihood, the functions f_1, ..., f_K should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions f_1, ..., f_K are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE, RealNVP, and Glow.
Flow-based generative model : As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting p_θ the model's likelihood and p* the target distribution to learn, the (forward) KL-divergence is: D_KL[p*(x) ‖ p_θ(x)] = −E_{p*(x)}[log p_θ(x)] + E_{p*(x)}[log p*(x)]. The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter θ we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a Monte-Carlo method by importance sampling. Indeed, if we have a dataset {x_i}_{i=1}^{N} of samples each independently drawn from the target distribution p*(x), then this term can be estimated as: −Ê_{p*(x)}[log p_θ(x)] = −(1/N) ∑_{i=1}^{N} log p_θ(x_i). Therefore, the learning objective argmin_θ D_KL[p*(x) ‖ p_θ(x)] is replaced by argmax_θ ∑_{i=1}^{N} log p_θ(x_i). In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution. A pseudocode for training normalizing flows is as follows: INPUT: dataset x_{1:n}, normalizing flow model f_θ(·), p_0. SOLVE: max_θ ∑_j ln p_θ(x_j) by gradient descent. RETURN: θ̂.
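The recipe above (maximize the sample log-likelihood of an invertible map via the change-of-variables formula) can be illustrated with a deliberately tiny flow. The sketch below is a toy under stated assumptions, not NICE, RealNVP, or Glow: the flow is a single elementwise affine map, so the inverse, the log-determinant, and the gradients are available in closed form, and plain gradient ascent recovers the mean and scale of synthetic Gaussian data.

```python
# A minimal sketch of fitting a normalizing flow by maximizing the
# log-likelihood of samples. The "flow" is one invertible elementwise affine
# map x = f(z) = exp(s) * z + b with z ~ N(0, I), so the change-of-variables
# log-likelihood and its gradients are written out by hand.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=(5000, 2))   # hypothetical target samples

d = data.shape[1]
s = np.zeros(d)          # log-scale parameters
b = np.zeros(d)          # shift parameters
lr = 0.1

def log_likelihood(x, s, b):
    z = (x - b) * np.exp(-s)                        # inverse flow f^{-1}(x)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # log N(z; 0, I), per dimension
    log_det_inv = -s                                # log |det d f^{-1} / dx|, per dimension
    return np.sum(log_base + log_det_inv, axis=1)   # one value per sample

for step in range(2000):
    z = (data - b) * np.exp(-s)
    grad_b = np.mean(z * np.exp(-s), axis=0)        # d(mean log-likelihood)/db
    grad_s = np.mean(z ** 2 - 1.0, axis=0)          # d(mean log-likelihood)/ds
    b += lr * grad_b                                # gradient *ascent* on log-likelihood
    s += lr * grad_s

print("learned mean ", b)            # approximately [3, 3]
print("learned scale", np.exp(s))    # approximately [2, 2]
print("mean log-likelihood:", log_likelihood(data, s, b).mean())
```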
Flow-based generative model : Despite normalizing flows' success in estimating high-dimensional densities, some downsides still exist in their designs. First of all, their latent space, onto which input data is projected, is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them. Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e., samples that were not drawn from the same distribution as the training set). Some hypotheses have been formulated to explain this phenomenon, among which the typical-set hypothesis, estimation issues when training models, and fundamental issues due to the entropy of the data distributions. One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is guaranteed by constraints in the design of the models (cf. RealNVP, Glow) which ensure theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the change-of-variable theorem, the computation of the Jacobian of the map, as well as sampling with the model. However, in practice this invertibility can be violated and the inverse map can explode because of numerical imprecision.
Flow-based generative model : Flow-based generative models have been applied on a variety of modeling tasks, including: Audio generation Image generation Molecular graph generation Point-cloud modeling Video generation Lossy image compression Anomaly detection
Flow-based generative model : Flow-based Deep Generative Models Normalizing flow models
NVDLA : The NVIDIA Deep Learning Accelerator (NVDLA) is an open-source hardware neural network AI accelerator created by Nvidia. The accelerator is written in Verilog and is configurable and scalable to meet many different architecture needs. NVDLA is merely an accelerator, and any process must be scheduled and arbitrated by an outside entity such as a CPU. NVDLA is available for product development as part of Nvidia's Jetson Xavier NX, a small circuit board in a form factor about the size of a credit card which includes a 6-core ARMv8.2 64-bit CPU, an integrated 384-core Volta GPU with 48 Tensor Cores, and dual NVDLA "engines", as described in Nvidia's own press release. Nvidia claims the product will deliver 14 TOPS (tera operations per second) of compute under 10 W. Applications broadly include edge computing inference engines, including object recognition for autonomous driving. Nvidia's involvement with open hardware includes the use of RISC-V processors as part of their GPU product line-up.
NVDLA : Official website
Data physicalization : A data physicalization (or simply physicalization) is a physical artefact whose geometry or material properties encode data. Its main goals are to engage people and to communicate data using computer-supported physical data representations.
Data physicalization : Before the invention of computers and digital devices, data physicalization already existed in ancient artifacts as a medium for representing abstract information. One example is the Blombos ochre plaque, which is estimated to be 70,000–80,000 years old. The geometric and iconographic shapes engraved on the surface of the artifact demonstrate the cognitive complexity of ancient humans. Moreover, since such representations were deliberately made and crafted, the evidence suggests that the geometric presentation of information was an established practice in that society. Although researchers still cannot decipher the specific type of information encoded in the artifact, there are several proposed interpretations. For example, the potential functions of the artifact have been divided into four categories: "numerical", "functional", "cognitive", and "social". Later, around 35,000 BC, another artifact, the Lebombo bone, emerged, and the encoded information became easier to read. There are around 29 distinct notches carved into the baboon fibula, and it is estimated that the number of notches is closely related to the lunar cycle. This early counting system has also been regarded as the birth of calculation. Right before the invention of writing, the clay token system spread across ancient Mesopotamia. When buyers and sellers wanted to make a trade, they prepared a set of tokens and sealed them inside a clay envelope after impressing their shapes on its surface. Such physical tokens were widely used in trading, administrative documents, and agricultural settlement, and the token system is evidence of an early counting system. Each shape corresponds to a physical meaning, such as the representation of "sheep", forming a one-to-one mapping relationship. The significance of the tokens is that they use physical shape to encode numerical information, and they are regarded as a precursor of the early writing system: a two-dimensional symbol could record the same information as the impression created by the clay token. From 3000 BCE to the 17th century, a more complex visual encoding, the quipu, was developed and widely used in Andean South America. Knotted strings unrelated to quipus have also been used to record information by the ancient Chinese, Tibetans, and Japanese. The ancient Inca empire used quipus for military and taxation purposes. The base-10 logical-numerical system can record information based on the relative distance of knots, the color of the knots, and the type of knots. Due to the material (cotton) of quipus, very few of them survive. By analyzing the remaining artifacts, Erland Nordenskiöld proposed that the quipu was the only writing system used by the Inca, and that its information-encoding technique was sophisticated and distinctive. The idea of data physicalization became popular from the 17th century onward, when architects and engineers widely used such methods in civil engineering and city management. For example, from 1663 to 1867, plan-relief models were used to visualize French territorial structure and important military units such as citadels and walled cities; one of their functions was therefore to plan defense or offense. It is worth noting that the model can be categorized as a military technology and did not encode any abstract information. The tradition of using tangible models to represent buildings and architecture remains today.
One of the contemporary examples of data physicalization is the Galton board, designed by Francis Galton, who promoted the concept of regression toward the mean. The Galton board, a useful tool for approximating the Gaussian law of errors, consists of evenly spaced nails and vertical slats at the bottom of the board. After a large number of marbles are released, they settle at the bottom, forming the contour of a bell curve: most marbles agglomerate at the center (small deviation), with few at the edges of the board. In 1935, three electricity companies (including the Pacific Gas and Electric Company and the Commonwealth Edison Company) created physical models of electricity data to visualize the power consumption of their customers so that the companies could better forecast upcoming power demand. The model has one short axis and one long axis; the short axis indicates the day, whereas the long axis spans the whole year. Viewers can see when customers consume the most electricity during the day and how consumption changes across seasons. The model was built manually by cutting wooden sheets and stacking the pieces together. Researchers began to realize that data physicalization models can not only help agents manage and plan certain tasks, but can also greatly simplify very complex problems by letting users manipulate data in the real world. From an epistemic perspective, physical manipulation therefore enables users to uncover hidden patterns that cannot be easily detected. Max Perutz received the Nobel Prize in Chemistry in 1962 for his distinguished work in discovering the structure of the globular protein haemoglobin. When a narrow X-ray beam passes through a haemoglobin molecule, the diffraction pattern can reveal the inner structure of the atomic arrangement. Part of Perutz's work involved creating a physical model of the haemoglobin molecule, which enabled him to manipulate and inspect the structure in a tangible way. In Semiology of Graphics, Bertin designed a matrix visualization device called Domino, which lets users manipulate row and column data; the combination of rows and columns can be considered a two-dimensional data space. Bertin also defined which variables can be reordered and which cannot; for example, time is a one-directional variable and should be kept in its natural order. Compared with the aforementioned work, this model emphasized the visual-thinking aspect of data physicalization and supports a variety of data types such as maps, matrices, and timelines. By adjusting the data entries, an analyst can find patterns inside a dataset and reuse Domino on different datasets. More recent physicalization examples include using LEGO bricks to keep track of project progress; for example, people have used LEGO to record their thesis-writing progress, setting concrete steps such as data collection, data analysis, and development before pushing toward publication. Another application uses LEGO for bug tracking: for software engineers, keeping track of issues in a code base is a crucial task, and LEGO simplifies this process by physicalizing the issues. A specific application of data physicalization involves building tactile maps for visually impaired people; past examples include using microcapsule paper to build tactile maps.
With the help of digital fabrication tools such as laser cutters, researchers in the Fab Lab at RWTH Aachen University have produced relief-based tactile maps to support visually impaired users. Some tangible user interface (TUI) researchers have combined TUIs with tactile maps to provide dynamic rendering and to enhance collaboration among vision-impaired people (e.g. FluxMarkers).
Machine learning in earth sciences : Applications of machine learning (ML) in earth sciences include geological mapping, gas leakage detection and geological feature identification. Machine learning is a subdiscipline of artificial intelligence aimed at developing programs that are able to classify, cluster, identify, and analyze vast and complex data sets without the need for explicit programming to do so. Earth science is the study of the origin, evolution, and future of the Earth. The earth's system can be subdivided into four major components including the solid earth, atmosphere, hydrosphere, and biosphere. A variety of algorithms may be applied depending on the nature of the task. Some algorithms may perform significantly better than others for particular objectives. For example, convolutional neural networks (CNNs) are good at interpreting images, whilst more general neural networks may be used for soil classification, but can be more computationally expensive to train than alternatives such as support vector machines. The range of tasks to which ML (including deep learning) is applied has been ever-growing in recent decades, as has the development of other technologies such as unmanned aerial vehicles (UAVs), ultra-high resolution remote sensing technology, and high-performance computing. This has led to the availability of large high-quality datasets and more advanced algorithms.
Machine learning in earth sciences : The extensive usage of machine learning in various fields has led to a wide range of learning algorithms being applied. Choosing the optimal algorithm for a specific purpose can lead to a significant boost in accuracy: for example, the lithological mapping of gold-bearing granite-greenstone rocks in Hutti, India, with AVIRIS-NG hyperspectral data shows more than a 10% difference in overall accuracy between using support vector machines (SVMs) and random forests. Some algorithms can also reveal hidden important information: white box models are transparent models, the outputs of which can be easily explained, while black box models are the opposite. For example, although an SVM yielded the best result in landslide susceptibility assessment accuracy, the result cannot be rewritten in the form of expert rules that explain how and why an area was classified as that specific class. In contrast, decision trees are transparent and easily understood, and the user can observe and fix any bias present in such models. If computational resources are a concern, more computationally demanding learning methods such as deep neural networks are less preferred, despite the fact that they may outperform other algorithms, such as in soil classification.
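The kind of algorithm comparison described above can be prototyped in a few lines. The following scikit-learn sketch is generic and hypothetical: the synthetic features merely stand in for per-pixel hyperspectral bands, and it is not the Hutti dataset or the published study's pipeline. It simply shows how an SVM and a random forest might be trained on the same labelled samples and compared on held-out overall accuracy.

```python
# A generic scikit-learn sketch (synthetic data, illustrative only) comparing
# an SVM and a random forest on the same labelled "pixels".

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 2000 "pixels", 50 "bands", 4 lithological classes.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)

models = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:20s} overall accuracy: {model.score(X_test, y_test):.3f}")
```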
Organoid intelligence : Organoid intelligence (OI) is an emerging field of study in computer science and biology that develops and studies biological wetware computing using 3D cultures of human brain cells (or brain organoids) and brain-machine interface technologies. Such technologies may be referred to as OIs.
Organoid intelligence : As opposed to traditional non-organic silicon-based approaches, OI seeks to use lab-grown cerebral organoids to serve as "biological hardware". Scientists hope that such organoids can provide faster, more efficient, and more powerful computing than regular silicon-based computing and AI while requiring only a fraction of the energy. However, while these structures are still far from being able to think like a regular human brain and do not yet possess strong computing capabilities, OI research currently offers the potential to improve the understanding of brain development, learning, and memory, potentially helping to find treatments for neurological disorders such as dementia. Thomas Hartung, a professor at Johns Hopkins University, argues that "while silicon-based computers are certainly better with numbers, brains are better at learning." He further claims that brains have "superior learning and storing" capabilities compared to AI and are more energy efficient, and that while it may eventually become impossible to add more transistors to a single computer chip, brains are wired differently and have more potential for storage and computing power, so OIs could potentially harness more power than current computers. Some researchers claim that even though human brains are slower than machines at processing simple information, they are far better at processing complex information: brains can deal with fewer and more uncertain data, perform both sequential and parallel processing, are highly heterogeneous, can use incomplete datasets, and are said to outperform non-organic machines in decision-making. Training OIs involves biological learning (BL) as opposed to the machine learning (ML) used for AIs; BL is said to be much more energy efficient than ML.
Organoid intelligence : OI generates complex biological data, necessitating sophisticated methods for processing and analysis. Bioinformatics provides the tools and techniques to decipher the raw data, uncovering patterns and insights. A Python interface is currently available for processing and interaction with brain organoids.
Organoid intelligence : Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies. However, brain-inspired silicon chips are still limited in their ability to fully mimic brain function, as most examples are built on digital electronic principles. One study performed OI computation (which they termed Brainoware) by sending and receiving information from the brain organoid using a high-density multielectrode array. By applying spatiotemporal electrical stimulation, nonlinear dynamics, and fading memory properties, as well as unsupervised learning from training data by reshaping the organoid functional connectivity, the study showed the potential of this technology by using it for speech recognition and nonlinear equation prediction in a reservoir computing framework.
Organoid intelligence : While researchers are hoping to use OI and biological computing to complement traditional silicon-based computing, there are also questions about the ethics of such an approach. Examples of such ethical issues include the possibility of organoids gaining consciousness and sentience, and the question of the relationship between a stem cell donor (whose cells are used to grow the organoid) and the respective OI system. Enforced amnesia and limits on the duration of operation without memory reset have been proposed as ways to mitigate the potential risk of silent suffering in brain organoids.
Evaluation of binary classifiers : Evaluation of a binary classifier typically assigns a numerical value, or values, to a classifier that represent its accuracy. An example is error rate, which measures how frequently the classifier makes a mistake. There are many metrics that can be used; different fields have different preferences. For example, in medicine sensitivity and specificity are often used, while in computer science precision and recall are preferred. An important distinction is between metrics that are independent of the prevalence or skew (how often each class occurs in the population), and metrics that depend on the prevalence – both types are useful, but they have very different properties. Often, evaluation is used to compare two methods of classification, so that one can be adopted and the other discarded. Such comparisons are more directly achieved by a form of evaluation that results in a single unitary metric rather than a pair of metrics.
Evaluation of binary classifiers : Given a data set, a classification (the output of a classifier on that set) gives two numbers: the number of positives and the number of negatives, which add up to the total size of the set. To evaluate a classifier, one compares its output to another reference classification – ideally a perfect classification, but in practice the output of another gold standard test – and cross tabulates the data into a 2×2 contingency table, comparing the two classifications. One then evaluates the classifier relative to the gold standard by computing summary statistics of these 4 numbers. Generally these statistics will be scale invariant (scaling all the numbers by the same factor does not change the output), to make them independent of population size, which is achieved by using ratios of homogeneous functions, most simply homogeneous linear or homogeneous quadratic functions. Say we test some people for the presence of a disease. Some of these people have the disease, and our test correctly says they are positive. They are called true positives (TP). Some have the disease, but the test incorrectly claims they don't. They are called false negatives (FN). Some don't have the disease, and the test says they don't – true negatives (TN). Finally, there might be healthy people who have a positive test result – false positives (FP). These can be arranged into a 2×2 contingency table (confusion matrix), conventionally with the test result on the vertical axis and the actual condition on the horizontal axis. These numbers can then be totaled, yielding both a grand total and marginal totals. Totaling the entire table, the number of true positives, false negatives, true negatives, and false positives add up to 100% of the set. Totaling the columns (adding vertically) the number of true positives and false positives add up to 100% of the test positives, and likewise for negatives. Totaling the rows (adding horizontally), the number of true positives and false negatives add up to 100% of the condition positives (conversely for negatives). The basic marginal ratio statistics are obtained by dividing the 2×2=4 values in the table by the marginal totals (either rows or columns), yielding 2 auxiliary 2×2 tables, for a total of 8 ratios. These ratios come in 4 complementary pairs, each pair summing to 1, and so each of these derived 2×2 tables can be summarized as a pair of 2 numbers, together with their complements. Further statistics can be obtained by taking ratios of these ratios, ratios of ratios, or more complicated functions. The contingency table and the most common derived ratios are summarized below; see sequel for details. Note that the rows correspond to the condition actually being positive or negative (or classified as such by the gold standard), as indicated by the color-coding, and the associated statistics are prevalence-independent, while the columns correspond to the test being positive or negative, and the associated statistics are prevalence-dependent. There are analogous likelihood ratios for prediction values, but these are less commonly used, and not depicted above.
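The bookkeeping described above is simple to express in code. The following sketch uses a small hypothetical gold standard and test output (not data from the article), tabulates the four counts, and forms the row-wise (prevalence-independent) and column-wise (prevalence-dependent) ratio pairs, each pair summing to 1.

```python
# A minimal sketch of the 2x2 contingency table bookkeeping: tabulate TP, FN,
# FP, TN from a hypothetical gold standard and test output, then form the
# paired ratios described in the text.

condition = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # gold standard (1 = diseased)
test      = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]   # classifier / diagnostic test output

TP = sum(c == 1 and t == 1 for c, t in zip(condition, test))
FN = sum(c == 1 and t == 0 for c, t in zip(condition, test))
FP = sum(c == 0 and t == 1 for c, t in zip(condition, test))
TN = sum(c == 0 and t == 0 for c, t in zip(condition, test))

# Row totals (actual condition) -> prevalence-independent statistics.
sensitivity = TP / (TP + FN)   # true positive rate; complement: FN / (TP + FN)
specificity = TN / (TN + FP)   # true negative rate; complement: FP / (TN + FP)

# Column totals (test result) -> prevalence-dependent statistics.
ppv = TP / (TP + FP)           # positive predictive value / precision; complement: FDR
npv = TN / (TN + FN)           # negative predictive value; complement: FOR

print(f"TP={TP} FN={FN} FP={FP} TN={TN}")
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```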
Evaluation of binary classifiers : Often accuracy is evaluated with a pair of metrics composed in a standard pattern.
Evaluation of binary classifiers : In addition to the paired metrics, there are also unitary metrics that give a single number to evaluate the test. Perhaps the simplest statistic is accuracy or fraction correct (FC), which measures the fraction of all instances that are correctly categorized; it is the ratio of the number of correct classifications to the total number of correct or incorrect classifications: (TP + TN)/total population = (TP + TN)/(TP + TN + FP + FN). As such, it compares estimates of pre- and post-test probability. In total ignorance, one can compare a rule to flipping a coin (p0=0.5). This measure is prevalence-dependent. If 90% of people with COVID symptoms don't have COVID, the prior probability P(-) is 0.9, and the simple rule "Classify all such patients as COVID-free." would be 90% accurate. Diagnosis should be better than that. One can construct a "One-proportion z-test" with p0 as max(priors) = max(P(-),P(+)) for a diagnostic method hoping to beat a simple rule using the most likely outcome. Here, the hypotheses are "Ho: p ≤ 0.9 vs. Ha: p > 0.9", rejecting Ho for large values of z. One diagnostic rule could be compared to another if the other's accuracy is known and substituted for p0 in calculating the z statistic. If not known and calculated from data, an accuracy comparison test could be made using "Two-proportion z-test, pooled for Ho: p1 = p2". Not used very much is the complementary statistic, the fraction incorrect (FiC): FC + FiC = 1, or (FP + FN)/(TP + TN + FP + FN) – this is the sum of the antidiagonal, divided by the total population. Cost-weighted fractions incorrect could compare expected costs of misclassification for different methods. The diagnostic odds ratio (DOR) can be a more useful overall metric, which can be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN), or indirectly as a ratio of ratio of ratios (ratio of likelihood ratios, which are themselves ratios of true rates or prediction values). This has a useful interpretation – as an odds ratio – and is prevalence-independent. Likelihood ratio is generally considered to be prevalence-independent and is easily interpreted as the multiplier to turn prior probabilities into posterior probabilities. An F-score is a combination of the precision and the recall, providing a single score. There is a one-parameter family of statistics, with parameter β, which determines the relative weights of precision and recall. The traditional or balanced F-score (F1 score) is the harmonic mean of precision and recall: F1 = 2 · (precision · recall) / (precision + recall). F-scores do not take the true negative rate into account and, therefore, are more suited to information retrieval and information extraction evaluation where the true negatives are innumerable. Instead, measures such as the phi coefficient, Matthews correlation coefficient, informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are markedness (deltap) and informedness (Youden's J statistic or deltap').
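Continuing the hypothetical counts from the sketch above (TP=3, FN=1, FP=1, TN=5), the unitary metrics discussed in this section reduce to a few arithmetic expressions:

```python
# Single-number summaries computed directly from the four counts of a
# hypothetical 2x2 table (illustrative values only).

from math import sqrt

TP, FN, FP, TN = 3, 1, 1, 5

accuracy  = (TP + TN) / (TP + TN + FP + FN)          # fraction correct
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1        = 2 * precision * recall / (precision + recall)
dor       = (TP * TN) / (FP * FN)                    # diagnostic odds ratio
mcc       = (TP * TN - FP * FN) / sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))   # Matthews correlation coefficient

print(f"accuracy={accuracy:.2f}  F1={f1:.2f}  DOR={dor:.1f}  MCC={mcc:.2f}")
```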
Evaluation of binary classifiers : Hand has highlighted the importance of choosing an appropriate method of evaluation. However, of the many different methods for evaluating the accuracy of a classifier, there is no general method for determining which method should be used in which circumstances. Different fields have taken different approaches. Cullerne Bown has distinguished three basic approaches to evaluation: mathematical (such as the Matthews correlation coefficient), in which both kinds of error are axiomatically treated as equally problematic; cost-benefit, in which a currency is adopted (e.g. money or Quality Adjusted Life Years) and values are assigned to errors and successes on the basis of empirical measurement; and judgemental, in which a human judgement is made about the relative importance of the two kinds of error, typically starting by adopting a pair of indicators such as sensitivity and specificity, precision and recall, or positive predictive value and negative predictive value. In the judgemental case, he has provided a flow chart for determining which pair of indicators should be used when, and consequently how to choose between the Receiver Operating Characteristic and the Precision-Recall Curve.
Evaluation of binary classifiers : Often, we want to evaluate not a specific classifier working in a specific way but an underlying technology. Typically, the technology can be adjusted through altering the threshold of a score function, the threshold determining whether the result is positive or negative. For such evaluations a useful single measure is the "area under the ROC curve", AUC.
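A minimal way to see this in code is to sweep the threshold over a scored test set and integrate the resulting ROC curve with the trapezoidal rule; the labels and scores below are hypothetical.

```python
# Sketch of AUC by threshold sweep: vary the decision threshold, trace out
# (FPR, TPR) points, and integrate the ROC curve with the trapezoidal rule.

labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.35, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1]

P = sum(labels)
N = len(labels) - P

# One ROC point per threshold (every distinct score), plus the extremes.
thresholds = sorted(set(scores), reverse=True)
points = [(0.0, 0.0)]
for t in thresholds:
    tp = sum(l == 1 and s >= t for l, s in zip(labels, scores))
    fp = sum(l == 0 and s >= t for l, s in zip(labels, scores))
    points.append((fp / N, tp / P))
points.append((1.0, 1.0))

# Trapezoidal integration over the ROC curve.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.3f}")
```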
Evaluation of binary classifiers : Apart from accuracy, binary classifiers can be assessed in many other ways, for example in terms of their speed or cost.
Evaluation of binary classifiers : Probabilistic classification models go beyond providing binary outputs and instead produce probability scores for each class. These models are designed to assess the likelihood or probability of an instance belonging to different classes. In the context of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. These metrics take into account the probabilistic nature of the classifier's output and provide a more comprehensive assessment of its effectiveness in assigning accurate probabilities to different classes. These evaluation metrics aim to capture the degree of calibration, discrimination, and overall accuracy of the probabilistic classifier's predictions.
Evaluation of binary classifiers : Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of documents correctly retrieved compared to the documents retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of documents correctly retrieved compared to the relevant documents (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used; it is defined as the fraction of documents correctly classified compared to all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives). None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k=10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important.
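As a hedged illustration of these retrieval metrics, the sketch below computes precision, recall, and precision at k for a hypothetical ranked result list against a human-chosen relevant set; the document identifiers are invented.

```python
# Retrieval metrics on a hypothetical ranked result list.

retrieved = ["d3", "d7", "d1", "d9", "d4", "d8", "d2", "d6", "d10", "d5"]  # ranked output
relevant = {"d1", "d3", "d4", "d12", "d15"}                                 # human ground truth

tp = len(set(retrieved) & relevant)
precision = tp / len(retrieved)   # correctly retrieved / all retrieved
recall = tp / len(relevant)       # correctly retrieved / all relevant

def precision_at_k(ranked, relevant, k):
    """Precision computed over only the top-k results, so ranking matters."""
    top_k = ranked[:k]
    return sum(doc in relevant for doc in top_k) / k

print(f"precision={precision:.2f}  recall={recall:.2f}  "
      f"P@5={precision_at_k(retrieved, relevant, 5):.2f}")
```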
Evaluation of binary classifiers : Population impact measures Attributable risk Attributable risk percent Scoring rule (for probability predictions) Pseudo-R-squared Likelihood ratios
Evaluation of binary classifiers : Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules
Grammatik : Grammatik was the first grammar checking program developed for home computer systems. Aspen Software of Albuquerque, NM, released the earliest version of this diction and style checker for personal computers. It was first released no later than 1981, and was inspired by the Writer's Workbench. Grammatik was first available for the Radio Shack TRS-80, and soon had versions for CP/M and the IBM PC. Reference Software International of San Francisco, California, acquired Grammatik in 1985. Development of Grammatik continued, and it became an actual grammar checker that could detect writing errors beyond simple style checking. Subsequent versions were released for the MS-DOS, Windows, Macintosh and Unix platforms. Grammatik was ultimately acquired by WordPerfect Corporation and is integrated into the WordPerfect word processor. == References ==
Machine learning in bioinformatics : Machine learning in bioinformatics is the application of machine learning algorithms to bioinformatics, including genomics, proteomics, microarrays, systems biology, evolution, and text mining. Prior to the emergence of machine learning, bioinformatics algorithms had to be programmed by hand; for problems such as protein structure prediction, this proved difficult. Machine learning techniques such as deep learning can learn features of data sets rather than requiring the programmer to define them individually. The algorithm can further learn how to combine low-level features into more abstract features, and so on. This multi-layered approach allows such systems to make sophisticated predictions when appropriately trained. These methods contrast with other computational biology approaches which, while exploiting existing datasets, do not allow the data to be interpreted and analyzed in unanticipated ways.
Machine learning in bioinformatics : Machine learning algorithms in bioinformatics can be used for prediction, classification, and feature selection. Methods to achieve this task are varied and span many disciplines; the best known among them are machine learning and statistics. Classification and prediction tasks aim at building models that describe and distinguish classes or concepts for future prediction. They differ in two respects: classification/recognition outputs a categorical class, while prediction outputs a numerically valued feature; and they may differ in the type of algorithm or process used to build the predictive model from data, such as analogies, rules, neural networks, probabilities, and/or statistics. Due to the exponential growth of information technologies and applicable models, including artificial intelligence and data mining, in addition to access to ever-more comprehensive data sets, new and better information analysis techniques have been created, based on their ability to learn. Such models allow one to reach beyond description and provide insights in the form of testable models.
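A minimal sketch of the distinction drawn above, using scikit-learn purely for illustration (the toy feature vectors, labels and target values are invented): a classifier returns a categorical class, while a regressor returns a numerical value.

```python
# Illustrative sketch: classification outputs a category,
# regression ("prediction" above) outputs a numerical value.
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

X = [[0.1, 1.2], [0.9, 0.3], [0.4, 0.8], [0.8, 0.1]]        # toy feature vectors
y_class = ["benign", "pathogenic", "benign", "pathogenic"]  # categorical labels
y_value = [0.12, 0.87, 0.30, 0.95]                          # numerical targets

clf = RandomForestClassifier(random_state=0).fit(X, y_class)
reg = RandomForestRegressor(random_state=0).fit(X, y_value)

print(clf.predict([[0.7, 0.2]]))   # -> a class label, e.g. ["pathogenic"]
print(reg.predict([[0.7, 0.2]]))   # -> a numerical value
```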
Machine learning in bioinformatics : In general, a machine learning system can usually be trained to recognize elements of a certain class given sufficient samples. For example, machine learning methods can be trained to identify specific visual features such as splice sites. Support vector machines have been extensively used in cancer genomic studies. In addition, deep learning has been incorporated into bioinformatic algorithms. Deep learning applications have been used for regulatory genomics and cellular imaging. Other applications include medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Deep learning has been applied to regulatory genomics, variant calling and pathogenicity scores. Natural language processing and text mining have helped to understand phenomena including protein-protein interaction, gene-disease relation as well as predicting biomolecule structures and functions.
Machine learning in bioinformatics : An important part of bioinformatics is the management of big datasets, known as databases of reference. Databases exist for each type of biological data, for example for biosynthetic gene clusters and metagenomes.
Soboleva modified hyperbolic tangent : The Soboleva modified hyperbolic tangent, also known as (parametric) Soboleva modified hyperbolic tangent activation function ([P]SMHTAF), is a special S-shaped function based on the hyperbolic tangent, given by smht(x) ≐ (e^(ax) − e^(−bx)) / (e^(cx) + e^(−dx)), with real parameters a, b, c and d.
Soboleva modified hyperbolic tangent : This function was originally proposed as "modified hyperbolic tangent" by Ukrainian scientist Elena V. Soboleva (Елена В. Соболева) as a utility function for multi-objective optimization and choice modelling in decision-making.
Soboleva modified hyperbolic tangent : The function has since been introduced into neural network theory and practice. It was also used in economics for modelling consumption and investment, to approximate current-voltage characteristics of field-effect transistors and light-emitting diodes, to design antenna feeders, and analyze plasma temperatures and densities in the divertor region of fusion reactors.
Soboleva modified hyperbolic tangent : The derivative of the function is given by smht′(x) ≐ (a e^(ax) + b e^(−bx)) / (e^(cx) + e^(−dx)) − smht(x) · (c e^(cx) − d e^(−dx)) / (e^(cx) + e^(−dx)). The conditions a ≤ c and b ≤ d keep the function bounded along the y-axis. A family of recurrence-generated parametric Soboleva modified hyperbolic tangent activation functions (NPSMHTAF, FPSMHTAF) was studied with parameters a = c and b = d; in this case the function is not sensitive to swapping the left-side and right-side parameters. The function is sensitive to the ratio of the denominator coefficients and is often used without coefficients in the numerator. With parameters a = b = c = d = 1 the modified hyperbolic tangent reduces to the conventional tanh(x) function, whereas for a = b = 1 and c = d = 0 it reduces to sinh(x).
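A small numerical sketch of the function as defined above (the parameter values are chosen arbitrarily for illustration), which also checks the stated reductions to tanh(x) and sinh(x):

```python
# Illustrative sketch of the Soboleva modified hyperbolic tangent.
import numpy as np

def smht(x, a, b, c, d):
    """(e^(ax) - e^(-bx)) / (e^(cx) + e^(-dx))"""
    x = np.asarray(x, dtype=float)
    return (np.exp(a * x) - np.exp(-b * x)) / (np.exp(c * x) + np.exp(-d * x))

x = np.linspace(-2.0, 2.0, 5)
print(smht(x, a=0.5, b=0.8, c=0.6, d=1.0))           # arbitrary parameters
print(np.allclose(smht(x, 1, 1, 1, 1), np.tanh(x)))  # reduces to tanh(x)
print(np.allclose(smht(x, 1, 1, 0, 0), np.sinh(x)))  # reduces to sinh(x)
```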
Soboleva modified hyperbolic tangent : Activation function e (mathematical constant) Equal incircles theorem, based on sinh Hausdorff distance Inverse hyperbolic functions List of integrals of hyperbolic functions Poinsot's spirals Sigmoid function
Soboleva modified hyperbolic tangent : Iliev, Anton; Kyurkchiev, Nikolay; Markov, Svetoslav (2017). "A Note on the New Activation Function of Gompertz Type". Biomath Communications. 4 (2). Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria / Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria: Biomath Forum (BF). doi:10.11145/bmc.2017.10.201. ISSN 2367-5233. Archived from the original on 2020-06-20. Retrieved 2020-06-19. (20 pages)
Situated approach (artificial intelligence) : In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom-up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills. The approach was originally proposed as an alternative to traditional approaches (that is, approaches popular before 1985 or so). After several decades, classical AI technologies started to face intractable issues (e.g. combinatorial explosion) when confronted with real-world modeling problems. All approaches to address these issues focus on modeling intelligences situated in an environment. They have become known as the situated approach to AI.
Situated approach (artificial intelligence) : Classically, a software entity is defined as a simulated element, able to act on itself and on its environment, and which has an internal representation of itself and of the outside world. An entity can communicate with other entities, and its behavior is the consequence of its perceptions, its representations, and its interactions with the other entities.
Situated approach (artificial intelligence) : Arsenio, Artur M. (2004) Towards an embodied and situated AI, In: Proceedings of the International FLAIRS conference, 2004. (online) The Artificial Life Route To Artificial Intelligence: Building Embodied, Situated Agents, Luc Steels and Rodney Brooks Eds., Lawrence Erlbaum Publishing, 1995. (ISBN 978-0805815184) Rodney A. Brooks Cambrian Intelligence (MIT Press, 1999) ISBN 0-262-52263-2; collection of early papers including "Intelligence without representation" and "Intelligence without reason", from 1986 & 1991 respectively. Ronald C. Arkin Behavior-Based Robotics (MIT Press, 1998) ISBN 0-262-01165-4 Hendriks-Jansen, Horst (1996) Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought. Cambridge, Mass.: MIT Press.
Situated approach (artificial intelligence) : Article Artificial Intelligence: The situated approach from the Encyclopædia Britannica Nouvelle AI - Definition Reactive planning and nouvelle AI
Statistical relational learning : Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s. As is evident from the characterization above, the field is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include statistical relational learning and reasoning (emphasizing the importance of reasoning) and first-order probabilistic languages (emphasizing the key properties of the languages with which models are represented). Another term that is sometimes used in the literature is relational machine learning (RML).
Statistical relational learning : A number of canonical tasks are associated with statistical relational learning, the most common ones being:
collective classification, i.e. the (simultaneous) prediction of the class of several objects given objects' attributes and their relations
link prediction, i.e. predicting whether or not two or more objects are related
link-based clustering, i.e. the grouping of similar objects, where similarity is determined according to the links of an object, and the related task of collaborative filtering, i.e. the filtering for information that is relevant to an entity (where a piece of information is considered relevant to an entity if it is known to be relevant to a similar entity)
social network modelling
object identification/entity resolution/record linkage, i.e. the identification of equivalent entries in two or more separate databases/datasets
Statistical relational learning : One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. Since there are countless ways in which such principles can be represented, many representation formalisms have been proposed in recent years. In the following, some of the more common ones are listed in alphabetical order:
Bayesian logic program
BLOG model
Markov logic networks
Multi-entity Bayesian network
Probabilistic logic programs
Probabilistic relational model – a Probabilistic Relational Model (PRM) is the counterpart of a Bayesian network in statistical relational learning.
Probabilistic soft logic
Recursive random field
Relational Bayesian network
Relational dependency network
Relational Markov network
Relational Kalman filtering
Statistical relational learning : Association rule learning Formal concept analysis Fuzzy logic Grammar induction Knowledge graph embedding
Statistical relational learning : Brian Milch and Stuart J. Russell: First-Order Probabilistic Languages: Into the Unknown, Inductive Logic Programming, volume 4455 of Lecture Notes in Computer Science, pages 10–24, Springer, 2006
Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth: A Survey of First-Order Probabilistic Models, Innovations in Bayesian Networks, volume 156 of Studies in Computational Intelligence, Springer, 2008
Hassan Khosravi and Bahareh Bina: A Survey on Statistical Relational Learning, Advances in Artificial Intelligence, Lecture Notes in Computer Science, Volume 6085/2010, pages 256–268, Springer, 2010
Ryan A. Rossi, Luke K. McDowell, David W. Aha, and Jennifer Neville: Transforming Graph Data for Statistical Relational Learning, Journal of Artificial Intelligence Research (JAIR), Volume 45, pages 363–441, 2012
Luc De Raedt, Kristian Kersting, Sriraam Natarajan and David Poole: "Statistical Relational Artificial Intelligence: Logic, Probability, and Computation", Synthesis Lectures on Artificial Intelligence and Machine Learning, March 2016, ISBN 9781627058414. == References ==
News analytics : In trading strategy, news analysis refers to the measurement of the various qualitative and quantitative attributes of textual (unstructured data) news stories. Some of these attributes are: sentiment, relevance, and novelty. Expressing news stories as numbers and metadata permits the manipulation of everyday information in a mathematical and statistical way. This data is often used in financial markets as part of a trading strategy or by businesses to judge market sentiment and make better business decisions. News analytics are usually derived through automated text analysis and applied to digital texts using elements from natural language processing and machine learning such as latent semantic analysis, support vector machines, "bag of words" among other techniques.
News analytics : The application of sophisticated linguistic analysis to news and social media has grown from an area of research to mature product solutions since 2007. News analytics and news sentiment calculations are now routinely used by both buy-side and sell-side in alpha generation, trading execution, risk management, and market surveillance and compliance. There is however a good deal of variation in the quality, effectiveness and completeness of currently available solutions. A large number of companies use news analysis to help them make better business decisions. Academic researchers have become interested in news analysis especially with regards to predicting stock price movements, volatility and traded volume. Provided a set of values such as sentiment and relevance as well as the frequency of news arrivals, it is possible to construct news sentiment scores for multiple asset classes such as equities, Forex, fixed income, and commodities. Sentiment scores can be constructed at various horizons to meet the different needs and objectives of high and low frequency trading strategies, whilst characteristics such as direction and volatility of asset returns as well as the traded volume may be addressed more directly via the construction of tailor-made sentiment scores. Scores are generally constructed as a range of values. For instance, values may range between 0 and 100, where values above and below 50 convey positive and negative sentiment, respectively.
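The exact scoring formulas used by commercial providers are proprietary; the following is only a hypothetical illustration of how per-article sentiment and relevance values might be aggregated into a 0-100 score with 50 as the neutral point, as described above.

```python
# Hypothetical illustration: aggregating per-article sentiment into a
# 0-100 score for one asset, with 50 as the neutral point. This is not
# any vendor's actual methodology.

def asset_sentiment_score(articles):
    """articles: list of (sentiment in [-1, 1], relevance in [0, 1]) pairs."""
    total_weight = sum(rel for _, rel in articles)
    if total_weight == 0:
        return 50.0                       # no relevant news -> neutral score
    weighted = sum(sent * rel for sent, rel in articles) / total_weight
    return 50.0 * (1.0 + weighted)        # map [-1, 1] onto [0, 100]

news = [(+0.6, 0.9), (-0.2, 0.4), (+0.1, 0.7)]   # invented sentiment/relevance
print(asset_sentiment_score(news))               # > 50 means net-positive news
```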
News analytics : Being able to express news stories as numbers permits the manipulation of everyday information in a statistical way that allows computers not only to make decisions once made only by humans, but to do so more efficiently. Since market participants are always looking for an edge, the speed of computer connections and the delivery of news analysis, measured in milliseconds, have become essential.
News analytics : Computational linguistics Sentiment analysis Text mining Trading the news Unstructured data Natural language processing Information asymmetry Algorithmic trading == References ==
ChatGPT Deep Research : Deep Research is an AI agent integrated into ChatGPT, which generates cited reports on a user-specified topic by autonomously browsing the web for 5 to 30 minutes.
ChatGPT Deep Research : It can interpret and analyze text, images and PDFs, and will soon be capable of producing visualizations and embedding images in its reports. It is based on a specialized version of OpenAI's o3 model. Deep Research scored 26.6% on the "Humanity's Last Exam" benchmark, surpassing rivals like DeepSeek's model R1 (9.4%) and GPT-4o (3.3%). According to OpenAI, Deep Research sometimes produces factual hallucinations or incorrect inferences, can have difficulty distinguishing authoritative sources from rumors, and may not accurately convey uncertainty. Deep Research is currently offered to ChatGPT Pro subscribers ($200/month), who receive 100 queries per month, and to Plus, Team and Enterprise users, who receive 10 queries per month. == References ==
Brilliant Labs : Brilliant Labs is a Singapore-based technology company that produces open source eyewear featuring artificial intelligence (AI). Brilliant Labs was founded in 2019 in Hong Kong by Bobak Tavangar, a former Apple program lead. Tavangar said he saw the potential for integrating the capabilities of artificial intelligence into glasses to give consumers "visual superpowers." The goal was to use an open source platform for development that would allow creators to access the company's code and create new apps for devices. In January 2024, the company introduced its first product, Frame, a pair of glasses that looked similar to those worn by Apple co-founder Steve Jobs. They were designed to be indistinguishable from regular eyeglasses and could be worn by those who wear prescription lenses. The glasses supported audio through a voice assistant called Noa and featured an AI search engine called Perplexity. They came at a time when other companies introduced similar products, like AI Pin from Humane, R1 from Rabbit, or Vision Pro from Apple, and came after other products, like Google Glass or HoloLens from Microsoft, did not gain traction. Prior to this, the company offered the Monocle, an augmented reality (AR) lens that attached to traditional glasses, and which also used open source software. == References ==
Adaptive neuro fuzzy inference system : An adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s. Since it integrates both neural network and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have the learning capability to approximate nonlinear functions. Hence, ANFIS is considered to be a universal estimator. To use ANFIS more efficiently and optimally, its parameters can be tuned with a genetic algorithm. It has uses in intelligent situationally aware energy management systems.
Adaptive neuro fuzzy inference system : It is possible to identify two parts in the network structure, namely the premise and the consequence parts. In more detail, the architecture is composed of five layers. The first layer takes the input values and determines the membership functions that apply to them; it is commonly called the fuzzification layer. The membership degrees are computed using the premise (antecedent) parameter set. The second layer is responsible for generating the firing strengths of the rules; because of this task, it is denoted the "rule layer". The role of the third layer is to normalize the computed firing strengths by dividing each value by the total firing strength. The fourth layer takes as input the normalized values and the consequence (consequent) parameter set. The values returned by this layer are the defuzzified ones, and they are passed to the last layer to return the final output.
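A compact sketch of the five-layer forward pass described above, for a two-input, two-rule first-order Takagi-Sugeno system with Gaussian membership functions; all parameter values are invented, and the layer structure follows the description rather than any particular implementation.

```python
# Illustrative sketch: forward pass of a two-input, two-rule first-order
# Takagi-Sugeno ANFIS with Gaussian membership functions.
import numpy as np

def gauss(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def anfis_forward(x1, x2, premise, consequent):
    # Layer 1 (fuzzification): membership degree of each input in each rule.
    mu1 = [gauss(x1, m, s) for m, s in premise["x1"]]
    mu2 = [gauss(x2, m, s) for m, s in premise["x2"]]
    # Layer 2 (rule layer): firing strength of each rule (product T-norm).
    w = np.array([mu1[i] * mu2[i] for i in range(len(mu1))])
    # Layer 3: normalize the firing strengths.
    w_bar = w / w.sum()
    # Layer 4: first-order rule outputs weighted by the normalized strengths.
    f = np.array([p * x1 + q * x2 + r for p, q, r in consequent])
    # Layer 5: overall output is the sum of the weighted rule outputs.
    return float(np.sum(w_bar * f))

premise = {"x1": [(0.0, 1.0), (2.0, 1.0)],        # (mean, sigma) per rule, invented
           "x2": [(0.0, 1.0), (2.0, 1.0)]}
consequent = [(0.5, 0.2, 0.1), (1.0, -0.3, 0.4)]  # (p, q, r) per rule, invented
print(anfis_forward(1.0, 0.5, premise, consequent))
```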
OpenAI Operator : OpenAI Operator is an AI agent developed by OpenAI, capable of autonomously performing tasks through web browser interactions, including filling forms, placing online orders, scheduling appointments, and other repetitive browser-based tasks. It uses OpenAI's advanced models to expand practical automation capabilities for users in daily activities. Operator was launched on 23 January 2025. It was released as a limited-access research preview to Pro-tier subscribers in the United States on February 1, 2025, with future plans to broaden availability.
OpenAI Operator : In benchmark assessments, Operator achieved notable success, scoring 38.1% on OSWorld benchmarks (OS-level tasks) and 58.1% on WebArena benchmarks (web interactions). However, it has not yet reached human-level accuracy and faces limitations with intricate user interfaces and extended workflows.
OpenAI Operator : OpenAI emphasizes privacy and safety measures within Operator, including stringent data protection protocols and built-in safety checks designed to prevent unauthorized sensitive actions or information misuse.
OpenAI Operator : Initially, Operator is only available to ChatGPT Pro subscribers in the U.S., with plans for broader availability to Plus, Team, and Enterprise users in the future. == References ==
Action model learning : Action model learning (sometimes abbreviated action learning) is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as the input for automated planners. Learning action models is important when goals change. When an agent has acted for a while, it can use its accumulated knowledge about actions in the domain to make better decisions. Thus, learning action models differs from reinforcement learning: it enables reasoning about actions instead of expensive trials in the world. Action model learning is a form of inductive reasoning, where new knowledge is generated based on the agent's observations. It differs from standard supervised learning in that correct input/output pairs are never presented, nor are imprecise action models explicitly corrected. The usual motivation for action model learning is that the manual specification of action models for planners is often a difficult, time-consuming, and error-prone task (especially in complex environments).
Action model learning : Given a training set E consisting of examples e = (s, a, s′), where s and s′ are observations of the world state at two consecutive time steps t and t′, and a is an action instance observed at time step t, the goal of action model learning in general is to construct an action model ⟨D, P⟩, where D is a description of the domain dynamics in an action description formalism such as STRIPS, ADL or PDDL, and P is a probability function defined over the elements of D. However, many state-of-the-art action learning methods assume determinism and do not induce P. In addition to determinism, individual methods differ in how they deal with other attributes of the domain (e.g. partial observability or sensor noise).
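A toy sketch of the deterministic, fully observable special case mentioned above: with states represented as sets of propositions, preconditions can be approximated by intersecting the states in which an action was executed, and add/delete effects by intersecting the observed state differences. All symbols and example observations below are invented, and this is only one simple induction scheme, not any particular published method.

```python
# Toy sketch: inducing a deterministic STRIPS-like action model from
# observed (state, action, next_state) triples, with states as sets of facts.
from collections import defaultdict

def learn_action_models(examples):
    models = {}
    by_action = defaultdict(list)
    for s, a, s_next in examples:
        by_action[a].append((frozenset(s), frozenset(s_next)))
    for a, obs in by_action.items():
        # Preconditions: facts true in every state where a was executed.
        pre = set.intersection(*[set(s) for s, _ in obs])
        # Add effects: facts that became true in every observed transition.
        add = set.intersection(*[set(s2 - s1) for s1, s2 in obs])
        # Delete effects: facts that became false in every observed transition.
        dele = set.intersection(*[set(s1 - s2) for s1, s2 in obs])
        models[a] = {"pre": pre, "add": add, "del": dele}
    return models

# Invented observations of a "pickup_b" action in a blocks-world-like domain.
examples = [
    ({"handempty", "ontable_b", "clear_b"}, "pickup_b",
     {"holding_b"}),
    ({"handempty", "ontable_b", "clear_b", "ontable_c", "clear_c"}, "pickup_b",
     {"holding_b", "ontable_c", "clear_c"}),
]
print(learn_action_models(examples))
```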
Action model learning : Machine learning Automated planning and scheduling Action language PDDL Architecture description language Inductive reasoning Computational logic Knowledge representation == References ==
W-shingling : In natural language processing a w-shingling is a set of unique shingles (therefore n-grams), each of which is a contiguous subsequence of tokens within a document, which can then be used to ascertain the similarity between documents. The symbol w denotes the number of tokens in each shingle selected, or solved for. The document "a rose is a rose is a rose" can therefore be maximally tokenized as follows: (a,rose,is,a,rose,is,a,rose). The set of all contiguous sequences of 4 tokens (thus n = 4, i.e. 4-grams) is { (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is), (a,rose,is,a), (rose,is,a,rose) }, which can then be reduced, or maximally shingled in this particular instance, to the set of unique shingles { (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is) }.
W-shingling : For a given shingle size, the degree to which two documents A and B resemble each other can be expressed as the ratio of the magnitudes of their shinglings' intersection and union, or r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|, where |A| is the size of set A. The resemblance is a number in the range [0,1], where 1 indicates that two documents are identical. This definition is identical with the Jaccard coefficient describing similarity and diversity of sample sets.
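A short sketch of both steps described above, computing 4-token shingles and the resemblance r(A, B) as the Jaccard ratio of the two shingle sets; the second document string is invented for illustration.

```python
# Illustrative sketch: w-shingling and resemblance of two documents.
def shingles(text, w=4):
    """Set of unique contiguous w-token shingles of a document."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def resemblance(doc_a, doc_b, w=4):
    a, b = shingles(doc_a, w), shingles(doc_b, w)
    return len(a & b) / len(a | b)        # Jaccard coefficient in [0, 1]

doc1 = "a rose is a rose is a rose"
doc2 = "a rose is a flower which is a rose"   # invented second document
print(shingles(doc1))                         # the three unique 4-shingles
print(resemblance(doc1, doc2))                # 1/8 = 0.125
```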
W-shingling : Bag-of-words model Jaccard index Concept mining k-mer MinHash N-gram Rabin fingerprint Rolling hash Vector space model
W-shingling : Broder; Glassman; Manasse; Zweig (1997). "Syntactic Clustering of the Web". SRC Technical Note #1997-015. Manber (1993). "Finding Similar Files in a Large File System" (PDF). Does not yet use the term "shingling". Manning, Christopher D.; Raghavan, Prabhakar; Schütze, Hinrich (7 July 2008). "w-shingling". Introduction to Information Retrieval. Cambridge University Press. ISBN 978-1-139-47210-4.
COTSBot : COTSBot is a small autonomous underwater vehicle (AUV) 4.5 feet (1.4 m) long, designed by the Queensland University of Technology (QUT) to kill the very destructive crown-of-thorns starfish (Acanthaster planci) in the Great Barrier Reef off the north-east coast of Australia. It identifies its target using an image-analyzing neural net that analyzes what an onboard camera sees, and then lethally injects the starfish with a bile salt solution using a needle on the end of a long underslung foldable arm. COTSBot uses GPS to navigate. The first version was created in the early 2000s with an accuracy rate of about 65%. After training COTSBot with machine learning, its accuracy rate rose to 99% by 2019. COTSBot is capable of killing 200 crown-of-thorns starfish with its two-liter capacity of poison. COTSBot is capable of performing about 20 runs per day, but multiple COTSBots will be necessary to significantly impact crown-of-thorns starfish populations. A smaller version of COTSBot called "RangerBot" is also being developed by QUT. == References ==
Outline of natural language processing : The following outline is provided as an overview of and topical guide to natural-language processing: natural-language processing – computer activity in which computers are entailed to analyze, understand, alter, or generate natural language. This includes the automation of any or all linguistic forms, activities, or methods of communication, such as conversation, correspondence, reading, written composition, dictation, publishing, translation, lip reading, and so on. Natural-language processing is also the name of the branch of computer science, artificial intelligence, and linguistics concerned with enabling computers to engage in communication using natural language(s) in all forms, including but not limited to speech, print, writing, and signing.
Outline of natural language processing : Natural-language processing can be described as all of the following:
A field of science – systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.
An applied science – field that applies human knowledge to build or design useful things.
A field of computer science – scientific and practical approach to computation and its applications.
A branch of artificial intelligence – intelligence of machines and robots and the branch of computer science that aims to create it.
A subfield of computational linguistics – interdisciplinary field dealing with the statistical or rule-based modeling of natural language from a computational perspective.
An application of engineering – science, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and also build structures, machines, devices, systems, materials and processes.
An application of software engineering – application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.
A subfield of computer programming – process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as Java, C++, C#, Python, etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors.
A subfield of artificial intelligence programming –
A type of system – set of interacting or interdependent components forming an integrated whole or a set of elements (often called 'components') and relationships which are different from relationships of the set or its elements to other elements or sets.
A system that includes software – software is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system.
A type of technology – making, modification, usage, and knowledge of tools, machines, techniques, crafts, systems, methods of organization, in order to solve a problem, improve a preexisting solution to a problem, achieve a goal, handle an applied input/output relation or perform a specific function. It can also refer to the collection of such tools, machinery, modifications, arrangements and procedures. Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments.
A form of computer technology – computers and their application. NLP makes use of computers, image scanners, microphones, and many types of software programs.
Language technology – consists of natural-language processing (NLP) and computational linguistics (CL) on the one hand, and speech technology on the other. It also includes many application oriented aspects of these. It is often called human language technology (HLT).
Outline of natural language processing : The following technologies make natural-language processing possible:
Communication – the activity of a source sending a message to a receiver
Language –
Speech –
Writing –
Computing –
Computers –
Computer programming –
Information extraction –
User interface –
Software –
Text editing – program used to edit plain text files
Word processing – piece of software used for composing, editing, formatting, printing documents
Input devices – pieces of hardware for sending data to a computer to be processed
Computer keyboard – typewriter style input device whose input is converted into various data depending on the circumstances
Image scanners –