GPT4-Chan : The release of GPT-4chan to the public drew strong reactions from several audiences. On the /pol/ board, the model's posts and replies attracted heavy attention and engagement from other users, most of whom were unaware of the model's identity and nature. Some users praised the model for its intelligence, creativity, and humor and agreed with its opinions and views; some challenged it for its ignorance, inconsistency, and absurdity and disputed its claims and arguments; others tried to troll, bait, or expose the model, attempting to trick or test it with various questions and scenarios. The model's posts and replies also generated considerable controversy and conflict among users, who often engaged in heated debates and fights with each other. On Hugging Face, the model's page received many visits and requests from users who wanted to experiment with the model, along with feedback and reviews from users who rated and commented on it. Amid the controversy, however, access to the model on Hugging Face was first gated and then disabled over concerns about the potential harm it could cause. The release of GPT-4chan also sparked substantial media coverage and public attention, as news outlets and social media platforms reported and commented on the project. On YouTube, the video documenting the project received many views and interactions from viewers who followed it. Furthermore, a petition condemning the deployment of GPT-4chan gained over 300 signatures from technology experts.
|
Microsoft Copilot : Microsoft Copilot (or simply Copilot) is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana. The service was introduced in February 2023 under the name Bing Chat, as a built-in feature for Microsoft Bing and Microsoft Edge. Over the course of 2023, Microsoft began to unify the Copilot branding across its various chatbot products, cementing the "copilot" analogy. At its Build 2023 conference, Microsoft announced its plans to integrate Copilot into Windows 11, allowing users to access it directly through the taskbar. In January 2024, a dedicated Copilot key was announced for Windows keyboards. Copilot utilizes the Microsoft Prometheus model, built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot's conversational interface style resembles that of ChatGPT. The chatbot is able to cite sources, create poems, generate songs, and use numerous languages and dialects. Microsoft operates Copilot on a freemium model. Users on its free tier can access most features, while priority access to newer features, including custom chatbot creation, is provided to paid subscribers under the "Microsoft Copilot Pro" paid subscription service. Several default chatbots are available in the free version of Microsoft Copilot, including the standard Copilot chatbot as well as Microsoft Designer, which is oriented towards using its Image Creator to generate images based on text prompts.
|
Microsoft Copilot : In 2019, Microsoft partnered with OpenAI and began investing billions of dollars into the organization. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. In September 2020, Microsoft announced that it had licensed OpenAI's GPT-3 exclusively. Others can still receive output from its public API, but Microsoft has exclusive access to the underlying model. In November 2022, OpenAI launched ChatGPT, a chatbot which was based on GPT-3.5. ChatGPT gained worldwide attention following its release, becoming a viral Internet sensation. On January 23, 2023, Microsoft announced a multi-year US$10 billion investment in OpenAI. On February 6, Google announced Bard (later rebranded as Gemini), a ChatGPT-like chatbot service, fearing that ChatGPT could threaten Google's place as a go-to source for information. Multiple media outlets and financial analysts described Google as "rushing" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling Copilot, as well as to avoid playing "catch-up" to Microsoft.
|
Microsoft Copilot : Tom Warren, a senior editor at The Verge, has noted the conceptual similarity of Copilot to other Microsoft assistant features like Cortana and Clippy. Warren also believes that large language models, as they develop further, could change how users work and collaborate. Rowan Curran, an analyst at Forrester, states that the integration of AI into productivity software may lead to improvements in user experience. Concerns over the speed of Microsoft's recent release of AI-powered products and investments have led to questions about ethical responsibilities in the testing of such products. One ethical concern the public has voiced is that GPT-4 and similar large language models may reinforce racial or gender bias. Individuals, including Tom Warren, have also voiced concerns about Copilot after witnessing the chatbot produce several instances of hallucination. In June 2024, Copilot was found to have repeated misinformation about the 2024 United States presidential debates. In response to these concerns, Jon Friedman, the Corporate Vice President of Design and Research at Microsoft, stated that Microsoft was "applying [the] learning" from experience with Bing to "mitigate [the] risks" of Copilot. Microsoft claimed that it was gathering a team of researchers and engineers to identify and alleviate any potential negative impacts. The stated aim was to achieve this by refining training data, blocking queries about sensitive topics, and limiting harmful information. Microsoft stated that it intended to employ InterpretML and Fairlearn to detect and rectify data bias, provide links to its sources, and state any applicable constraints.
|
Microsoft Copilot : Tabnine – coding assistant; Tay (chatbot) – chatbot developed by Microsoft; Zo (chatbot) – chatbot developed by Microsoft
|
Microsoft Copilot : Official website; Media related to Microsoft Copilot at Wikimedia Commons; Microsoft Copilot Terms of Use (archived 2024-10-01 at the Wayback Machine, Archive Today, Megalodon, and Ghostarchive)
|
Artificial intelligence and elections : As artificial intelligence (AI) has become more mainstream, there is growing concern about how this will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors.
|
Artificial intelligence and elections : Generative AI capabilities allow the creation of misleading content. Examples include text-to-video, deepfake videos, text-to-image, AI-altered images, text-to-speech, voice cloning, and text-to-text. In the context of an election, a deepfake video of a candidate may propagate information that the candidate does not endorse, and chatbots could spread misinformation about election locations, times, or voting methods. In contrast to the manipulation campaigns of the past, these techniques require little technical skill and their output can spread rapidly.
|
Artificial intelligence and elections : AI has begun to be used in election interference by foreign governments. Governments thought to be using AI to interfere in external elections include Russia, Iran and China. Russia was thought to be the most prolific nation targeting the 2024 presidential election, with its influence operations "spreading synthetic images, video, audio and text online", according to U.S. intelligence officials. Iran has reportedly generated fake social media posts and stories targeting audiences "across the political spectrum on polarizing issues during the presidential election". The Chinese government has used "broader influence operations" that aim to shape its global image and "amplify divisive topics in the U.S. such as drug use, immigration, and abortion". For example, Spamouflage has increasingly used generative AI for influence operations. Outside of the US elections, a deepfake video of Moldova's pro-Western president Maia Sandu showed her "throwing her support behind a political party friendly to Russia"; officials in Moldova "believe the Russian government is behind the activity". Slovakia's liberal party leader had audio clips faked in which he appeared to discuss "vote rigging and raising the price of beer". The Chinese government has also used AI to stir concerns about US interference in Taiwan: a fake clip seen on social media showed the vice chairman of the U.S. House Armed Services Committee promising "stronger U.S. military support for Taiwan if the incumbent party's candidates were elected in January".
|
Artificial intelligence and elections : As the use of AI and its associated tools in political campaigning and messaging increases, many ethical concerns have been raised. Campaigns have used AI in a number of ways, including speech writing, fundraising, voter behaviour prediction, fake robocalls and the generation of fake news. There are currently no US federal rules governing the use of AI in campaigning, so its use can undermine public trust. Yet according to one expert: "A lot of the questions we're asking about AI are the same questions we've asked about rhetoric and persuasion for thousands of years." As insight into how AI is used has grown, concerns have broadened beyond the generation of misinformation or fake news. Its use by politicians and political parties for "purposes that are not overtly malicious" can also raise ethical worries. For instance, the use of 'softfakes' has become more common. These can be images, videos or audio clips that have been edited, often by campaign teams, "to make a political candidate seem more appealing." An example can be found in Indonesia's presidential election, where the winning candidate created and promoted cartoonish avatars to rebrand himself. How citizens obtain information has been increasingly shaped by AI, especially through online platforms and social media. These platforms are part of complex and opaque systems that can have a "significant impact on freedom of expression", with the generalisation of AI in campaigns also placing great pressure on "voters' mental security". As AI use in political campaigning becomes common, together with globalization, more 'universalized' content can be deployed, so that territorial boundaries matter less. Where AI collides with people's reasoning processes, "dangerous behaviours" can emerge that disrupt important levels of society and nation states.
|
Artificial intelligence and elections : Chinese interference in the 2024 United States elections; List of elections in 2025; Donald Trump 2024 presidential campaign § Use of artificial intelligence; Russian interference in the 2024 United States elections
|
Artificial intelligence and elections : "Smashing Security: Keeping the lights on after a ransomware attack" - podcast including discussion on the use of AI in the Indian elections (17m37s - 29m11s). 25 April 2024.
|
XLNet : XLNet is an autoregressive Transformer language model designed as an improvement over BERT, with 340M parameters and trained on 33 billion words. It was released on 19 June 2019 under the Apache 2.0 license. It achieved state-of-the-art results on a variety of natural language processing tasks, including language modeling, question answering, and natural language inference.
|
XLNet : The main idea of XLNet is to model language autoregressively like the GPT models, but to allow for all possible permutations of a sentence. Concretely, consider the following sentence: "My dog is cute." In standard autoregressive language modeling, the model is tasked with predicting the probability of each word conditioned on the previous words as its context. The joint probability of a sequence of words $x_1, \ldots, x_T$ is factorized using the chain rule: $\Pr(x_1, \ldots, x_T) = \Pr(x_1)\Pr(x_2 \mid x_1)\Pr(x_3 \mid x_1, x_2)\cdots\Pr(x_T \mid x_1, \ldots, x_{T-1})$. For example, the sentence "My dog is cute" is factorized as $\Pr(\text{My}, \text{dog}, \text{is}, \text{cute}) = \Pr(\text{My})\Pr(\text{dog} \mid \text{My})\Pr(\text{is} \mid \text{My}, \text{dog})\Pr(\text{cute} \mid \text{My}, \text{dog}, \text{is})$. Schematically, we can write it as: <MASK> <MASK> <MASK> <MASK> → My <MASK> <MASK> <MASK> → My dog <MASK> <MASK> → My dog is <MASK> → My dog is cute. For XLNet, however, the model is required to predict the words in a randomly sampled order. Suppose we have sampled the order 3241; then, schematically, the model must perform the following prediction task: <MASK> <MASK> <MASK> <MASK> → <MASK> <MASK> is <MASK> → <MASK> dog is <MASK> → <MASK> dog is cute → My dog is cute. By considering all permutations, XLNet is able to capture longer-range dependencies and better model the bidirectional context of words.
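To make the permutation objective concrete, here is a minimal Python sketch (purely illustrative: the function name and the literal <MASK> strings are inventions for exposition; a real XLNet implementation samples a fresh random order per sequence and realizes the masking with attention masks, not string substitution). It reproduces the order-3241 example above:

```python
def permutation_lm_steps(tokens, order):
    """Illustrate XLNet-style permutation language modeling: each token is
    predicted from the tokens already revealed in the factorization order,
    regardless of their positions in the original sentence."""
    steps = []
    revealed = set()
    for pos in order:
        masked = [t if p in revealed else "<MASK>" for p, t in enumerate(tokens)]
        steps.append((" ".join(masked), tokens[pos]))
        revealed.add(pos)
    return steps

# the order 3241 from the text, written 0-indexed as [2, 1, 3, 0]
for context, target in permutation_lm_steps(["My", "dog", "is", "cute"], [2, 1, 3, 0]):
    print(f"{context}  ->  predict '{target}'")
# <MASK> <MASK> <MASK> <MASK>  ->  predict 'is'
# <MASK> <MASK> is <MASK>  ->  predict 'dog'
# <MASK> dog is <MASK>  ->  predict 'cute'
# <MASK> dog is cute  ->  predict 'My'
```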
|
XLNet : Two models were released: XLNet-Large, cased: 340M parameters, 24-layer, 1024-hidden, 16-heads; XLNet-Base, cased: 110M parameters, 12-layer, 768-hidden, 12-heads. XLNet was trained on a dataset that amounted to 32.89 billion tokens after tokenization with SentencePiece. The dataset was composed of BooksCorpus, English Wikipedia, Giga5, ClueWeb 2012-B, and Common Crawl. Training ran on 512 TPU v3 chips for 5.5 days. At the end of training, the model still under-fitted the data, meaning it could have achieved lower loss with more training. Training took 0.5 million steps with an Adam optimizer, linear learning rate decay, and a batch size of 8192.
|
XLNet : BERT (language model); Transformer (machine learning model); Generative pre-trained transformer
|
Algorithmic bias : Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (2018) and the Artificial Intelligence Act (proposed 2021, approved 2024). As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design. Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service.
|
Algorithmic bias : Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate output. For a rigorous technical introduction, see Algorithm. Advances in computer hardware have led to an increased ability to process, store and transmit data, which has in turn boosted the design and adoption of technologies such as machine learning and artificial intelligence. By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more. Contemporary social scientists are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality. The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a credit score algorithm may deny a loan without being unfair if it consistently weighs relevant financial criteria. If the algorithm recommends loans to one group of users but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, the algorithm can be described as biased. This bias may be intentional or unintentional (for example, it can arise from biased data obtained from a worker who previously did the job the algorithm will now do).
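The repeatability criterion above can be shown with a toy check. The sketch below is hypothetical throughout (the scoring rule, feature names, threshold, and data are invented, not drawn from any real credit model); it demonstrates how a consistent outcome gap between two groups of near-identical applicants would surface:

```python
def approve(applicant):
    # Hypothetical scoring rule: weighs two relevant financial criteria,
    # but also leaks an unrelated criterion (postcode group) into the score.
    score = 0.5 * applicant["income"] + 0.5 * applicant["repayment_history"]
    if applicant["postcode_group"] == "B":
        score -= 10
    return score >= 50

# Two groups of applicants, identical except for the unrelated attribute.
applicants = [
    {"income": 60, "repayment_history": 45, "postcode_group": g}
    for g in ("A", "B") for _ in range(100)
]

rates = {}
for group in ("A", "B"):
    members = [a for a in applicants if a["postcode_group"] == group]
    rates[group] = sum(approve(a) for a in members) / len(members)

# A repeatable non-zero gap on near-identical inputs is the signature of bias.
print(f"approval rate A={rates['A']:.2f}, B={rates['B']:.2f}, "
      f"gap={rates['A'] - rates['B']:.2f}")
```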
|
Algorithmic bias : Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a database according to human-designed cataloging criteria. Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users. Beyond assembling and processing data, bias can emerge as a result of design. For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an uncertainty bias, offering more confident assessments when larger data sets are available. This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations.
|
Algorithmic bias : Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding.
|
Algorithmic bias : A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were common points of concern, addressed through a blend of technical solutions, transparency and monitoring, right to remedy, increased oversight, and diversity and inclusion efforts.
|
Algorithmic bias : Algorithmic wage discrimination; Ethics of artificial intelligence; Fairness (machine learning); Hallucination (artificial intelligence); Misaligned goals in artificial intelligence; Predictive policing; SenseTime
|
Algorithmic bias : Baer, Tobias (2019). Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists. New York: Apress. ISBN 9781484248843. Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. ISBN 9781479837243.
|
Automatic acquisition of sense-tagged corpora : The knowledge acquisition bottleneck is perhaps the major impediment to solving the word-sense disambiguation (WSD) problem. Unsupervised learning methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised learning methods depend heavily on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises.
|
Automatic acquisition of sense-tagged corpora : Therefore, one of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: Web search engines implement simple and robust IR techniques that can be successfully used when mining the Web for information to be employed in WSD. The most direct way of using the Web (and other corpora) to enhance WSD performance is the automatic acquisition of sense-tagged corpora, the fundamental resource to feed supervised WSD algorithms. Although this is far from being commonplace in the WSD literature, a number of different and effective strategies to achieve this goal have already been proposed. Some of these strategies are: acquisition by direct Web searching (searches for monosemous synonyms, hypernyms, hyponyms, words from parsed glosses, etc.), the Yarowsky bootstrapping algorithm (sketched below), acquisition via Web directories, and acquisition via cross-language meaning evidence.
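As a concrete illustration of the bootstrapping strategy, here is a deliberately small Yarowsky-style sketch (illustrative only: the function, toy contexts, and seed words are invented, and a faithful implementation would rank rules in a log-likelihood decision list and apply the one-sense-per-discourse heuristic):

```python
from collections import Counter

def yarowsky_bootstrap(contexts, seeds, rounds=3, min_count=1):
    """Toy bootstrapping for one ambiguous word: start from seed
    collocations per sense, label contexts, promote new collocations."""
    colloc = {sense: set(words) for sense, words in seeds.items()}
    labels = [None] * len(contexts)
    for _ in range(rounds):
        # Step 1: label every context containing a known collocation.
        # (Ties go to the last matching sense; a real decision list
        # would instead apply the single highest-ranked rule.)
        for i, ctx in enumerate(contexts):
            for sense, words in colloc.items():
                if words & set(ctx):
                    labels[i] = sense
        # Step 2: promote frequent words not already claimed by another sense.
        for sense in colloc:
            counts = Counter(w for ctx, lab in zip(contexts, labels)
                             if lab == sense for w in ctx)
            other = set().union(*(colloc[s] for s in colloc if s != sense))
            colloc[sense] |= {w for w, c in counts.items()
                              if c >= min_count and w not in other}
    return labels, colloc

# contexts of the ambiguous word "bank" (the target word itself excluded)
contexts = [["river", "water"], ["loan", "money"],
            ["water", "flow"], ["money", "deposit"]]
labels, rules = yarowsky_bootstrap(contexts, {"shore": {"river"}, "finance": {"loan"}})
print(labels)  # ['shore', 'finance', 'shore', 'finance']
```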
|
Pedagogical agent : A pedagogical agent is a concept borrowed from computer science and artificial intelligence and applied to education, usually as part of an intelligent tutoring system (ITS). It is a simulated human-like interface between the learner and the content, in an educational environment. A pedagogical agent is designed to model the type of interactions between a student and another person. Mabanza and de Wet define it as "a character enacted by a computer that interacts with the user in a socially engaging manner". A pedagogical agent can be assigned different roles in the learning environment, such as tutor or co-learner, depending on the desired purpose of the agent. "A tutor agent plays the role of a teacher, while a co-learner agent plays the role of a learning companion".
|
Pedagogical agent : The history of pedagogical agents is closely aligned with the history of computer animation. As computer animation progressed, it was adopted by educators to enhance computerized learning by including a lifelike interface between the program and the learner. The first versions of a pedagogical agent were more cartoon than person, like Microsoft's Clippy, which helped users of Microsoft Office load and use the program's features in 1997. With developments in computer animation, however, pedagogical agents can now look lifelike. By 2006 there was a call to develop modular, reusable agents to decrease the time and expertise required to create a pedagogical agent, and in 2009 there was a call to enact agent standards. The standardization and re-usability of pedagogical agents has become less of an issue since animation tools dropped in cost and became widely available. Individualized pedagogical agents can be found across disciplines including medicine, math, law, language learning, automotive, and the armed forces. They are used in applications directed at every age, from preschool to adult.
|
Pedagogical agent : It has been suggested by researchers that pedagogical agents may take on different roles in the learning environment, such as supplanting, scaffolding, coaching, testing, or demonstrating or modelling a procedure. A pedagogical agent acting as a tutor has not been demonstrated to add any benefit to an educational strategy in equivalent lessons with and without the agent. According to Richard Mayer, there is some support in research for pedagogical agents increasing learning, but only as presenters of social cues. A co-learner pedagogical agent is believed to increase the student's self-efficacy. By pointing out important features of instructional content, a pedagogical agent can fulfill the signaling function, which research on multimedia learning has shown to enhance learning. Research has demonstrated that human-human interaction may not be completely replaced by pedagogical agents, but learners may prefer the agents to non-agent multimedia systems. This finding is supported by social agency theory. Much like the varying effectiveness of the pedagogical agent roles in the learning environment, agents that take into account the user's affect have had mixed results. Research has shown that pedagogical agents that make use of the user's affect can increase knowledge retention, motivation, and perceived self-efficacy. However, with such a broad range of modalities in affective expression, it is often difficult to utilize them, and detecting a user's affective state with precision remains challenging, as displays of affect differ across individuals.
|
Pedagogical agent : AI: Artificial Intelligence Research at USC Information Sciences Institute; Stanford University: Interactive Animated Pedagogical Agents
|
Bioserenity : BioSerenity is a medtech company created in 2014 that develops ambulatory medical devices to help diagnose and monitor patients with chronic diseases such as epilepsy. The medical devices combine medical sensors, smart clothing, a smartphone app for patient-reported outcomes, and a web platform that performs data analysis with medical artificial intelligence to detect digital biomarkers. The company initially focused on neurology, a domain in which it reported contributing to the diagnosis of 30,000 patients per year. It now also operates in sleep disorders and cardiology. BioSerenity reported that it provides pharmaceutical companies with solutions for companion diagnostics.
|
Bioserenity : BioSerenity was founded in 2014 by Pierre-Yves Frouin. The company was initially hosted at the ICM Institute (Institut du Cerveau et de la Moelle épinière) in Paris, France. Fundraising: on June 8, 2015, the company raised a $4 million seed round with Kurma Partners and IdInvest Partners; on September 20, 2017, a $17 million series A round with LBO France, IdInvest Partners and BPI France; on June 18, 2019, a $70 million series B round with Dassault Systèmes, IdInvest Partners, LBO France and BPI France; and on November 13, 2023, a €24 million series C round with Jolt Capital. Acquisitions: in 2019, BioSerenity announced the acquisition of the American company SleepMed, working with over 200 hospitals. In 2020, BioSerenity was one of five French manufacturers (alongside Savoy, BB Distrib, Celluloses de Brocéliande and Chargeurs) producing sanitary equipment, including FFP2 masks, at the request of the French government. As of 2021, the Neuronaute was used by approximately 30,000 patients per year.
|
Bioserenity : BioSerenity is one of the Disrupt 100; it joined the Next40; it was selected by Microsoft and AstraZeneca for their AI Factory for Health initiative; and it was accelerated at Stanford University's StartX program.
|
Bioserenity : Official website; FDA clearances for the Neuronaute, Cardioskin, and Accusom
|
Data-driven model : Data-driven models are a class of computational models that primarily rely on historical data collected throughout a system's or process' lifetime to establish relationships between input, internal, and output variables. Commonly found in numerous articles and publications, data-driven models have evolved from earlier statistical models, overcoming limitations posed by strict assumptions about probability distributions. These models have gained prominence across various fields, particularly in the era of big data, artificial intelligence, and machine learning, where they offer valuable insights and predictions based on the available data.
|
Data-driven model : These models have evolved from earlier statistical models, which were based on certain assumptions about probability distributions that often proved to be overly restrictive. The emergence of data-driven models in the 1950s and 1960s coincided with the development of digital computers, advancements in artificial intelligence research, and the introduction of new approaches in non-behavioural modelling, such as pattern recognition and automatic classification.
|
Data-driven model : Data-driven models encompass a wide range of techniques and methodologies that aim to intelligently process and analyse large datasets. Examples include fuzzy logic, fuzzy and rough sets for handling uncertainty, neural networks for approximating functions, global optimization and evolutionary computing, statistical learning theory, and Bayesian methods. These models have found applications in various fields, including economics, customer relations management, financial services, medicine, and the military, among others. Machine learning, a subfield of artificial intelligence, is closely related to data-driven modelling as it also focuses on using historical data to create models that can make predictions and identify patterns. In fact, many data-driven models incorporate machine learning techniques, such as regression, classification, and clustering algorithms, to process and analyse data. In recent years, the concept of data-driven models has gained considerable attention in the field of water resources, with numerous applications, academic courses, and scientific publications using the term as a generalization for models that rely on data rather than physics. This classification has been featured in various publications and has even spurred the development of hybrid models in the past decade. Hybrid models attempt to quantify the degree of physically based information used in hydrological models and determine whether the process of building the model is primarily driven by physics or purely data-based. As a result, data-driven models have become an essential topic of discussion and exploration within water resources management and research. The term "data-driven modelling" (DDM) refers to the overarching paradigm of using historical data in conjunction with advanced computational techniques, including machine learning and artificial intelligence, to create models that can reveal underlying trends and patterns and, in some cases, make predictions. Data-driven models can be built with or without detailed knowledge of the underlying processes governing the system behavior, which makes them particularly useful when such knowledge is missing or fragmented.
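As a minimal sketch of this contrast (everything below is invented for illustration: a synthetic rainfall-runoff style series, with ordinary least squares standing in for the machine learning techniques named above), a data-driven model is fit purely from historical input-output pairs, with no physically based process equations:

```python
import numpy as np

# Synthetic "historical observations": rainfall drives runoff with a lag.
rng = np.random.default_rng(0)
rainfall = rng.gamma(2.0, 5.0, size=500)                         # input series
runoff = 0.6 * rainfall + 0.3 * np.roll(rainfall, 1) + rng.normal(0, 1, 500)

# The model sees only data: current and lagged rainfall as features.
X = np.column_stack([rainfall[1:], rainfall[:-1], np.ones(499)])
y = runoff[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit from data alone

pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"learned coefficients: {coef.round(2)}, RMSE: {rmse:.2f}")
```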
|
Self-organizing map : A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s and morphogenesis models dating back to Alan Turing in the 1950s. SOMs create internal representations reminiscent of the cortical homunculus, a distorted representation of the human body, based on a neurological "map" of the areas and proportions of the human brain dedicated to processing sensory functions, for different parts of the body.
|
Self-organizing map : Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the "input space") to generate a lower-dimensional representation of the input data (the "map space"). Second, mapping classifies additional input data using the generated map. In most cases, the goal of training is to represent an input space with p dimensions as a map space with two dimensions. Specifically, an input space with p variables is said to have p dimensions. A map space consists of components called "nodes" or "neurons", which are arranged as a hexagonal or rectangular grid with two dimensions. The number of nodes and their arrangement are specified beforehand based on the larger goals of the analysis and exploration of the data. Each node in the map space is associated with a "weight" vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists in moving weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced from the map space. After training, the map can be used to classify additional observations for the input space by finding the node with the closest weight vector (smallest distance metric) to the input space vector.
|
Self-organizing map : The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain. The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights. The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually administered several times as iterations. The training utilizes competitive learning. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector. The magnitude of the change decreases with time and with the grid-distance from the BMU. The update formula for a neuron v with weight vector $W_v(s)$ is $W_v(s+1) = W_v(s) + \theta(u, v, s) \cdot \alpha(s) \cdot (D(t) - W_v(s))$, where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), α(s) is a monotonically decreasing learning coefficient, and θ(u, v, s) is the neighborhood function which gives the distance between the neuron u and the neuron v in step s. Depending on the implementations, t can scan the training data set systematically (t is 0, 1, 2...T−1, then repeat, T being the training sample's size), be randomly drawn from the data set (bootstrap sampling), or implement some other sampling method (such as jackknifing). The neighborhood function θ(u, v, s) (also called the function of lateral interaction) depends on the grid-distance between the BMU (neuron u) and neuron v. In the simplest form, it is 1 for all neurons close enough to the BMU and 0 for others, but Gaussian and Mexican-hat functions are common choices, too. Regardless of the functional form, the neighborhood function shrinks with time. At the beginning, when the neighborhood is broad, the self-organizing takes place on the global scale. When the neighborhood has shrunk to just a couple of neurons, the weights converge to local estimates. In some implementations, the learning coefficient α and the neighborhood function θ decrease steadily with increasing s; in others (in particular those where t scans the training data set), they decrease in a step-wise fashion, once every T steps. This process is repeated for each input vector for a (usually large) number of cycles λ. The network winds up associating output nodes with groups or patterns in the input data set. If these patterns can be named, the names can be attached to the associated nodes in the trained net. During mapping, there will be one single winning neuron: the neuron whose weight vector lies closest to the input vector. This can be determined simply by calculating the Euclidean distance between the input vector and the weight vector.
While representing input data as vectors has been emphasized in this article, any kind of object which can be represented digitally, which has an appropriate distance measure associated with it, and in which the necessary operations for training are possible can be used to construct a self-organizing map. This includes matrices, continuous functions or even other self-organizing maps.
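The training and mapping phases described above fit in a short NumPy sketch (a minimal illustration, not reference code: the function names, grid size, decay schedules, and toy data are all invented; inputs are drawn at random and the neighborhood is Gaussian):

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_steps=10_000, alpha0=0.5, seed=0):
    """Minimal SOM trainer implementing the update rule above:
    W_v(s+1) = W_v(s) + theta(u, v, s) * alpha(s) * (D(t) - W_v(s))."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    sigma0 = max(grid_h, grid_w) / 2.0
    weights = rng.uniform(data.min(0), data.max(0), size=(grid_h, grid_w, dim))
    yy, xx = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")

    for s in range(n_steps):
        frac = s / n_steps
        alpha = alpha0 * (1.0 - frac)          # decaying learning coefficient
        sigma = sigma0 * (1.0 - frac) + 1e-3   # shrinking neighborhood radius
        x = data[rng.integers(n)]              # random draw (bootstrap sampling)
        dists = np.linalg.norm(weights - x, axis=2)
        u = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
        grid_d2 = (yy - u[0]) ** 2 + (xx - u[1]) ** 2
        theta = np.exp(-grid_d2 / (2 * sigma ** 2))          # Gaussian neighborhood
        weights += theta[..., None] * alpha * (x - weights)
    return weights

def map_sample(weights, x):
    """Mapping phase: grid coordinates of the winning neuron for input x."""
    return np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                            weights.shape[:2])

# toy data: three Gaussian blobs in 3-D
data = np.vstack([np.random.default_rng(i).normal(m, 0.1, (100, 3))
                  for i, m in enumerate([0.1, 0.5, 0.9])])
som = train_som(data)
print(map_sample(som, np.array([0.1, 0.1, 0.1])))
```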
|
Self-organizing map : There are two ways to interpret a SOM. Because in the training phase weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons. Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar ones apart. This may be visualized by a U-Matrix (Euclidean distance between weight vectors of neighboring cells) of the SOM. The other way is to think of neuronal weights as pointers to the input space. They form a discrete approximation of the distribution of training samples. More neurons point to regions with high training sample concentration and fewer where the samples are scarce. SOM may be considered a nonlinear generalization of Principal components analysis (PCA). It has been shown, using both artificial and real geophysical data, that SOM has many advantages over the conventional feature extraction methods such as Empirical Orthogonal Functions (EOF) or PCA. Originally, SOM was not formulated as a solution to an optimisation problem. Nevertheless, there have been several attempts to modify the definition of SOM and to formulate an optimisation problem which gives similar results. For example, Elastic maps use the mechanical metaphor of elasticity to approximate principal manifolds: the analogy is an elastic membrane and plate.
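The U-Matrix reading described above is straightforward to compute from a trained weight grid. A minimal sketch, assuming weights shaped (grid_h, grid_w, dim) as in the hypothetical trainer above:

```python
import numpy as np

def u_matrix(weights):
    """U-Matrix sketch: for each node, the mean Euclidean distance between
    its weight vector and those of its 4-connected grid neighbors.
    High values mark cluster boundaries; low values mark cluster interiors."""
    h, w, _ = weights.shape
    u = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            neighbors = [weights[i + di, j + dj]
                         for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                         if 0 <= i + di < h and 0 <= j + dj < w]
            u[i, j] = np.mean([np.linalg.norm(weights[i, j] - nb)
                               for nb in neighbors])
    return u
```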
|
Self-organizing map : Banking system financial analysis; financial investment; project prioritization and selection; seismic facies analysis for oil and gas exploration; failure mode and effects analysis; finding representative data in large datasets, e.g. representative species for ecological communities or representative days for energy system models
|
Self-organizing map : The generative topographic map (GTM) is a potential alternative to SOMs. In the sense that a GTM explicitly requires a smooth and continuous mapping from the input space to the map space, it is topology preserving. However, in a practical sense, this measure of topological preservation is lacking. The growing self-organizing map (GSOM) is a growing variant of the self-organizing map. The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary based on a heuristic. By using a value called the spread factor, the data analyst has the ability to control the growth of the GSOM. The conformal map approach uses conformal mapping to interpolate each training sample between grid nodes in a continuous surface. A one-to-one smooth mapping is possible in this approach. The time adaptive self-organizing map (TASOM) network is an extension of the basic SOM. The TASOM employs adaptive learning rates and neighborhood functions. It also includes a scaling parameter to make the network invariant to scaling, translation and rotation of the input space. The TASOM and its variants have been used in several applications including adaptive clustering, multilevel thresholding, input space approximation, and active contour modeling. Moreover, a Binary Tree TASOM (BTASOM), resembling a binary tree with nodes composed of TASOM networks, has been proposed, in which the number of levels and the number of nodes adapt to the environment. The elastic map approach borrows from spline interpolation the idea of minimizing elastic energy. In learning, it minimizes the sum of quadratic bending and stretching energy with the least-squares approximation error. The oriented and scalable map (OS-Map) generalises the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential, so one can specify the orientation either in the map space or in the data space. The SOM has a fixed scale (= 1), so that the maps "optimally describe the domain of observation"; a map could, however, cover the domain twice or n-fold, which entails a notion of scale. The OS-Map regards the scale as a statistical description of how many best-matching nodes an input has in the map.
|
Self-organizing map : Deep learning; Hybrid Kohonen self-organizing map; Learning vector quantization; Liquid state machine; Neocognitron; Neural gas; Sparse coding; Sparse distributed memory; Topological data analysis
|
Self-organizing map : Kohonen, Teuvo (January 2013). "Essentials of the self-organizing map". Neural Networks. 37: 52–65. doi:10.1016/j.neunet.2012.09.018. PMID 23067803. S2CID 17289060. Kohonen, Teuvo (2001). Self-organizing maps: with 22 tables. Springer Series in Information Sciences (3 ed.). Berlin Heidelberg: Springer. ISBN 978-3-540-67921-9. Kohonen, Teuvo (1988). "Self-Organization and Associative Memory". Springer Series in Information Sciences. 8. doi:10.1007/978-3-662-00784-6. ISBN 978-3-540-18314-3. ISSN 0720-678X. Kaski, Samuel; Kangas, Jari; Kohonen, Teuvo (1998). "Bibliography of self-organizing map (SOM) papers: 1981–1997". Neural Computing Surveys. 1 (3&4): 1–176. Oja, Merja; Kaski, Samuel; Kohonen, Teuvo (2003). "Bibliography of self-organizing map (SOM) papers: 1998–2001 addendum". Neural Computing Surveys. 3 (1): 1–156.
|
Self-organizing map : Media related to Self-organizing maps at Wikimedia Commons
|
Preference learning : Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information. Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.
|
Preference learning : The main task in preference learning concerns problems in "learning to rank". According to the type of preference information observed, the tasks are categorized as three main problems in the book Preference Learning: label ranking, instance ranking, and object ranking.
|
Preference learning : There are two practical representations of the preference information A ≻ B. One is assigning A and B two real numbers a and b respectively, such that a > b. The other is assigning a binary value V(A, B) ∈ {0, 1} for all pairs (A, B), denoting whether A ≻ B or B ≻ A. Corresponding to these two different representations, two different techniques are applied to the learning process.
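The first (real-valued) representation can be sketched concretely. The snippet below is a toy illustration rather than an established API: it learns a linear utility f(x) = w·x from observed pairs a ≻ b via logistic regression on feature differences (a Bradley-Terry style model); the function name, items, and features are invented:

```python
import numpy as np

def fit_utility(pairs, dim, lr=0.1, epochs=200):
    """Learn w so that w.a > w.b whenever a is preferred to b.
    pairs: list of (a, b) feature-vector tuples, each meaning a is preferred."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for a, b in pairs:
            d = a - b
            p = 1.0 / (1.0 + np.exp(-w @ d))   # model's probability that a wins
            w += lr * (1.0 - p) * d            # gradient ascent on log-likelihood
    return w

# toy items with features [quality, price]; preferences favor quality over price
items = {"A": np.array([0.9, 0.2]), "B": np.array([0.5, 0.5]), "C": np.array([0.3, 0.9])}
observed = [(items["A"], items["B"]), (items["B"], items["C"]), (items["A"], items["C"])]
w = fit_utility(observed, dim=2)
scores = {k: float(w @ v) for k, v in items.items()}
print(sorted(scores, key=scores.get, reverse=True))  # expected ranking: A, B, C
```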
|
Preference learning : Preference learning can be used in ranking search results according to feedback of user preference. Given a query and a set of documents, a learning model is used to find the ranking of documents corresponding to their relevance to this query. More discussion of research in this field can be found in Tie-Yan Liu's survey paper. Another application of preference learning is recommender systems: an online store may analyze a customer's purchase record to learn a preference model and then recommend similar products to customers, and Internet content providers can make use of users' ratings to provide more of the content users prefer.
|
Multiple instance learning : In machine learning, multiple-instance learning (MIL) is a type of supervised learning. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag may be labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either (i) induce a concept that will label individual instances correctly or (ii) learn how to label bags without inducing the concept. Babenko (2008) gives a simple example for MIL. Imagine several people, each of whom has a key chain that contains a few keys. Some of these people are able to enter a certain room, and some are not. The task is then to predict whether a certain key or a certain key chain can get you into that room. To solve this problem we need to find the exact key that is common to all the "positive" key chains. If we can correctly identify this key, we can also correctly classify an entire key chain: positive if it contains the required key, or negative if it does not.
|
Multiple instance learning : Depending on the type and variation in training data, machine learning can be roughly categorized into three frameworks: supervised learning, unsupervised learning, and reinforcement learning. Multiple instance learning (MIL) falls under the supervised learning framework, where every training instance has a label, either discrete or real valued. MIL deals with problems with incomplete knowledge of labels in training sets. More precisely, in multiple-instance learning, the training set consists of labeled "bags", each of which is a collection of unlabeled instances. A bag is positively labeled if at least one instance in it is positive, and is negatively labeled if all instances in it are negative. The goal of the MIL is to predict the labels of new, unseen bags.
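The bag-labeling rule above is simply a logical OR over instance labels, which a few lines make explicit (a toy sketch mirroring the key-chain example: the key names and the opens_room concept are invented):

```python
def bag_label(instance_labels):
    """Standard MI assumption: a bag is positive iff it contains at least
    one positive instance (logical OR over the instance labels)."""
    return int(any(instance_labels))

def predict_bag(bag, instance_classifier):
    """Label a bag with any single-instance classifier under the same rule:
    positive if any instance in the bag is predicted positive."""
    return int(any(instance_classifier(x) for x in bag))

def opens_room(key):
    # hypothetical hidden target concept from the key-chain example
    return key == "key_42"

chains = [["key_1", "key_42"], ["key_7"], ["key_42"], ["key_3", "key_9"]]
print([predict_bag(chain, opens_room) for chain in chains])  # [1, 0, 1, 0]
```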
|
Multiple instance learning : Keeler et al., in their work in the early 1990s, were the first to explore the area of MIL. The actual term multi-instance learning was introduced in the middle of the 1990s by Dietterich et al. while they were investigating the problem of drug activity prediction. They tried to create a learning system that could predict whether a new molecule was qualified to make some drug or not by analyzing a collection of known molecules. Molecules can have many alternative low-energy states, but only one, or some of them, are qualified to make a drug. The problem arose because scientists could only determine whether a molecule is qualified or not, but they could not say exactly which of its low-energy shapes are responsible for that. One of the proposed ways to solve this problem was to use supervised learning and regard all the low-energy shapes of a qualified molecule as positive training instances and all the low-energy shapes of unqualified molecules as negative instances. Dietterich et al. showed that such a method would suffer from high false positive noise, from low-energy shapes mislabeled as positive, and thus was not really useful. Their approach instead was to regard each molecule as a labeled bag, and all the alternative low-energy shapes of that molecule as instances in the bag, without individual labels, thus formulating multiple-instance learning. The solution to the multiple instance learning problem that Dietterich et al. proposed is the axis-parallel rectangle (APR) algorithm. It attempts to search for appropriate axis-parallel rectangles constructed by the conjunction of the features. They tested the algorithm on the Musk dataset, a concrete test dataset for drug activity prediction and the most popularly used benchmark in multiple-instance learning. The APR algorithm achieved the best result, but APR was designed with the Musk data in mind. The problem of multi-instance learning is not unique to drug discovery. In 1998, Maron and Ratan found another application of multiple instance learning to scene classification in machine vision, and devised the Diverse Density framework. Given an image, an instance is taken to be one or more fixed-size subimages, and the bag of instances is taken to be the entire image. An image is labeled positive if it contains the target scene (a waterfall, for example) and negative otherwise. Multiple instance learning can be used to learn the properties of the subimages which characterize the target scene. From there on, these frameworks have been applied to a wide spectrum of applications, ranging from image concept learning and text categorization to stock market prediction.
|
Multiple instance learning : Take image classification, for example (Amores, 2013). Given an image, we want to know its target class based on its visual content. For instance, the target class might be "beach", where the image contains both "sand" and "water". In MIL terms, the image is described as a bag $X = \{X_1, \ldots, X_N\}$, where each $X_i$ is the feature vector (called an instance) extracted from the corresponding i-th region in the image and $N$ is the total number of regions (instances) partitioning the image. The bag is labeled positive ("beach") if it contains both "sand" region instances and "water" region instances. Examples of where MIL is applied include: molecule activity; predicting binding sites of calmodulin binding proteins; predicting function for alternatively spliced isoforms (Li, Menon et al. (2014), Eksi et al. (2013)); image classification (Maron & Ratan (1998)); text or document categorization (Kotzias et al. (2015)); predicting functional binding sites of microRNA targets (Bandyopadhyay, Ghosh et al. (2015)); medical image classification (Zhu et al. (2016), P.J. Sudharshan et al. (2019)). Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.
|
Multiple instance learning : If the space of instances is $X$, then the set of bags is the set of functions $\mathbb{N}^X = \{f : X \to \mathbb{N}\}$, which is isomorphic to the set of multi-subsets of $X$. For each bag $B \in \mathbb{N}^X$ and each instance $x \in X$, $B(x)$ is viewed as the number of times $x$ occurs in $B$. Let $Y$ be the space of labels; then a "multiple instance concept" is a map $c : \mathbb{N}^X \to Y$. The goal of MIL is to learn such a concept. The remainder of the article will focus on binary classification, where $Y = \{0, 1\}$.
|
Multiple instance learning : Most of the work on multiple instance learning, including the early papers of Dietterich et al. (1997) and Maron & Lozano-Pérez (1997), makes an assumption regarding the relationship between the instances within a bag and the class label of the bag. Because of its importance, that assumption is often called the standard MI assumption.
|
Multiple instance learning : There are two major flavors of algorithms for Multiple Instance Learning: instance-based and metadata-based, or embedding-based algorithms. The term "instance-based" denotes that the algorithm attempts to find a set of representative instances based on an MI assumption and classify future bags from these representatives. By contrast, metadata-based algorithms make no assumptions about the relationship between instances and bag labels, and instead try to extract instance-independent information (or metadata) about the bags in order to learn the concept. For a survey of some of the modern MI algorithms see Foulds and Frank.
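A metadata/embedding-based approach can be sketched in a few lines (illustrative only: embed_bag and its mean-and-max summary are one arbitrary choice of bag-level statistics, not a method named in the text); once bags are embedded, any ordinary single-instance classifier applies:

```python
import numpy as np

def embed_bag(bag):
    """Map a bag of instance feature vectors to one fixed-length vector
    (here: per-feature mean and max), so a standard classifier can be
    trained on bags without any assumption linking instances to labels."""
    arr = np.asarray(bag)
    return np.concatenate([arr.mean(axis=0), arr.max(axis=0)])

# After embedding, bags become plain labeled vectors:
bags = [[[0.1, 0.2], [0.9, 0.8]], [[0.2, 0.1], [0.3, 0.2]]]
labels = [1, 0]
X = np.stack([embed_bag(b) for b in bags])   # shape (n_bags, 2 * dim)
# X and labels can now be fed to any standard classifier (e.g. an SVM).
print(X)
```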
|
Multiple instance learning : So far this article has considered multiple instance learning exclusively in the context of binary classifiers. However, the generalizations of single-instance binary classifiers can carry over to the multiple-instance case. One such generalization is the multiple-instance multiple-label problem (MIML), where each bag can now be associated with any subset of the space of labels. Formally, if $X$ is the space of features and $Y$ is the space of labels, an MIML concept is a map $c : \mathbb{N}^X \to 2^Y$. Zhou and Zhang (2006) propose a solution to the MIML problem via a reduction to either a multiple-instance or multiple-concept problem. Another obvious generalization is to multiple-instance regression. Here, each bag is associated with a single real number as in standard regression. Much like the standard assumption, MI regression assumes there is one instance in each bag, called the "prime instance", which determines the label for the bag (up to noise). The ideal goal of MI regression would be to find a hyperplane which minimizes the square loss of the prime instances in each bag, but the prime instances are hidden. In fact, Ray and Page (2001) show that finding a best fit hyperplane which fits one instance from each bag is intractable if there are fewer than three instances per bag, and instead develop an algorithm for approximation. Many of the algorithms developed for MI classification may also provide good approximations to the MI regression problem.
|
Multiple instance learning : Supervised learning Multi-label classification
|
Multiple instance learning : Recent reviews of the MIL literature include: Amores (2013), which provides an extensive review and comparative study of the different paradigms, Foulds & Frank (2010), which provides a thorough review of the different assumptions used by different paradigms in the literature. Dietterich, Thomas G; Lathrop, Richard H; Lozano-Pérez, Tomás (1997). "Solving the multiple instance problem with axis-parallel rectangles". Artificial Intelligence. 89 (1–2): 31–71. doi:10.1016/S0004-3702(96)00034-3. Herrera, Francisco; Ventura, Sebastián; Bello, Rafael; Cornelis, Chris; Zafra, Amelia; Sánchez-Tarragó, Dánel; Vluymans, Sarah (2016). Multiple Instance Learning. doi:10.1007/978-3-319-47759-6. ISBN 978-3-319-47758-9. S2CID 24047205. Amores, Jaume (2013). "Multiple instance classification: Review, taxonomy and comparative study". Artificial Intelligence. 201: 81–105. doi:10.1016/j.artint.2013.06.003. Foulds, James; Frank, Eibe (2010). "A review of multi-instance learning assumptions". The Knowledge Engineering Review. 25: 1–25. CiteSeerX 10.1.1.148.2333. doi:10.1017/S026988890999035X. S2CID 8601873. Keeler, James D.; Rumelhart, David E.; Leow, Wee-Kheng (1990). "Integrated segmentation and recognition of hand-printed numerals". Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems (NIPS 3). Morgan Kaufmann Publishers. pp. 557–563. ISBN 978-1-55860-184-0. Li, Hong-Dong; Menon, Rajasree; Omenn, Gilbert S; Guan, Yuanfang (2014). "The emerging era of genomic data integration for analyzing splice isoform function". Trends in Genetics. 30 (8): 340–7. doi:10.1016/j.tig.2014.05.005. PMC 4112133. PMID 24951248. Eksi, Ridvan; Li, Hong-Dong; Menon, Rajasree; Wen, Yuchen; Omenn, Gilbert S; Kretzler, Matthias; Guan, Yuanfang (2013). "Systematically Differentiating Functions for Alternatively Spliced Isoforms through Integrating RNA-seq Data". PLOS Computational Biology. 9 (11): e1003314. Bibcode:2013PLSCB...9E3314E. doi:10.1371/journal.pcbi.1003314. PMC 3820534. PMID 24244129. Maron, O.; Ratan, A.L. (1998). "Multiple-instance learning for natural scene classification". Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann Publishers. pp. 341–349. ISBN 978-1-55860-556-5. Kotzias, Dimitrios; Denil, Misha; De Freitas, Nando; Smyth, Padhraic (2015). "From Group to Individual Labels Using Deep Features". Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15. pp. 597–606. doi:10.1145/2783258.2783380. ISBN 9781450336642. S2CID 7729996. Ray, Soumya; Page, David (2001). Multiple instance regression (PDF). ICML. Bandyopadhyay, Sanghamitra; Ghosh, Dip; Mitra, Ramkrishna; Zhao, Zhongming (2015). "MBSTAR: Multiple instance learning for predicting specific functional binding sites in microRNA targets". Scientific Reports. 5: 8004. Bibcode:2015NatSR...5E8004B. doi:10.1038/srep08004. PMC 4648438. PMID 25614300. Zhu, Wentao; Lou, Qi; Vang, Yeeleng Scott; Xie, Xiaohui (2017). "Deep Multi-instance Networks with Sparse Label Assignment for Whole Mammogram Classification". Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017. Lecture Notes in Computer Science. Vol. 10435. pp. 603–11. arXiv:1612.05968. doi:10.1007/978-3-319-66179-7_69. ISBN 978-3-319-66178-0. S2CID 9623929.
|
Cognitive computing : Cognitive computing refers to technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.
|
Cognitive computing : At present, there is no widely agreed-upon definition for cognitive computing in either academia or industry. In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain. In this sense, cognitive computing is a new type of computing with the goal of building more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Cognitive computing applications link data analysis and adaptive page displays (AUI) to adjust content for a particular type of audience. As such, cognitive computing hardware and applications strive to be more affective and more influential by design. The term "cognitive system" also applies to any artificial construct able to perform a cognitive process, where a cognitive process is the transformation of data, information, knowledge, or wisdom to a new level in the DIKW pyramid. While many cognitive systems employ techniques originating in artificial intelligence research, cognitive systems, themselves, may not be artificially intelligent. For example, a neural network trained to recognize cancer on an MRI scan may achieve a higher success rate than a human doctor; such a system is certainly a cognitive system, but it is not artificially intelligent. Cognitive systems may be engineered to feed on dynamic data in real time, or near real time, and may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
|
Cognitive computing : Cognitive computing-branded technology platforms typically specialize in the processing and analysis of large, unstructured datasets.
|
Cognitive computing : Education: Even if cognitive computing cannot take the place of teachers, it can still be a heavy driving force in the education of students. Applied in the classroom, it essentially gives each individual student a personalized assistant. This cognitive assistant can relieve some of the stress that teachers face while also enhancing the student's overall learning experience. Teachers may not be able to pay individual attention to each and every student, and this is the gap that cognitive computers can fill. Some students may need more help with a particular subject, and for many students, interaction with a teacher can cause anxiety and discomfort. With the help of cognitive computing tutors, students need not face that uneasiness and can gain the confidence to learn and do well in the classroom. While a student is in class with their personalized assistant, the assistant can develop various techniques, like creating lesson plans, to tailor and aid the student and their needs.
Healthcare: Numerous tech companies are developing cognitive computing technology for use in the medical field. The ability to classify and identify is one of the main goals of these cognitive devices, and this trait can be very helpful in the study of identifying carcinogens. Such a cognitive detection system would be able to assist the examiner in interpreting countless numbers of documents in less time than without the technology. This technology can also evaluate information about the patient, looking through every medical record in depth and searching for indications that could be the source of their problems.
Commerce: Together with artificial intelligence, cognitive computing has been used in warehouse management systems to collect, store, organize and analyze all related supplier data. All of this aims at improving efficiency, enabling faster decision-making, monitoring inventory, and detecting fraud.
Human cognitive augmentation: In situations where humans are using or working collaboratively with cognitive systems, called a human/cog ensemble, results achieved by the ensemble are superior to results obtainable by the human working alone; the human is therefore cognitively augmented. In cases where the human/cog ensemble achieves results at, or superior to, the level of a human expert, the ensemble has achieved synthetic expertise. In a human/cog ensemble, the "cog" is a cognitive system employing virtually any kind of cognitive computing technology.
Other use cases: Speech recognition, sentiment analysis, face detection, risk assessment, fraud detection, behavioral recommendations.
|
Cognitive computing : Cognitive computing, in conjunction with big data and algorithms that comprehend customer needs, can be a major advantage in economic decision-making. The powers of cognitive computing and artificial intelligence hold the potential to affect almost every task that humans are capable of performing. This can negatively affect employment for humans, as the need for human labor shrinks. It would also increase the inequality of wealth: the people at the head of the cognitive computing industry would grow significantly richer, while workers without ongoing, reliable employment would become less well off. The more industries start to use cognitive computing, the more difficult it will be for humans to compete. Increased use of the technology will also increase the amount of work that AI-driven robots and machines can perform; only extraordinarily talented, capable and motivated humans would be able to keep up with the machines. The influence of competitive individuals, in conjunction with artificial intelligence/cognitive computing, has the potential to change the course of humankind.
|
Cognitive computing : Automation Affective computing Analytics Artificial intelligence Artificial neural network Brain computer interface Cognitive computer Cognitive reasoning Cognitive science Enterprise cognitive system Semantic Web Social neuroscience Synthetic intelligence Usability Neuromorphic engineering AI accelerator
|
Incremental heuristic search : Incremental heuristic search algorithms combine both incremental and heuristic search to speed up searches of sequences of similar search problems, which is important in domains that are only incompletely known or change dynamically. Incremental search has been studied at least since the late 1960s. Incremental search algorithms reuse information from previous searches to speed up the current search and solve search problems potentially much faster than solving them repeatedly from scratch. Similarly, heuristic search has also been studied at least since the late 1960s. Heuristic search algorithms, often based on A*, use heuristic knowledge in the form of approximations of the goal distances to focus the search and solve search problems potentially much faster than uninformed search algorithms. Incremental heuristic search combines these two ideas to find shortest paths repeatedly as the world changes. The resulting search problems, sometimes called dynamic path planning problems, are graph search problems where paths have to be found repeatedly because the topology of the graph, its edge costs, the start vertex or the goal vertices change over time. So far, three main classes of incremental heuristic search algorithms have been developed: The first class restarts A* at the point where its current search deviates from the previous one (example: Fringe Saving A*). The second class updates the h-values (heuristic, i.e. approximate distance to goal) from the previous search during the current search to make them more informed (example: Generalized Adaptive A*); a sketch of this class appears below. The third class updates the g-values (distance from start) from the previous search during the current search to correct them when necessary, which can be interpreted as transforming the A* search tree from the previous search into the A* search tree for the current search (examples: Lifelong Planning A*, D*, D* Lite). All three classes of incremental heuristic search algorithms are different from other replanning algorithms, such as planning by analogy, in that their plan quality does not deteriorate with the number of replanning episodes.
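The following is a minimal sketch of the second class, in the spirit of (Generalized) Adaptive A*: after a search, the h-value of every expanded state s is raised to g(goal) − g(s), which keeps the heuristic admissible while making later, similar searches more focused. The grid world and all function names are illustrative assumptions, not code from the cited systems.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Plain A*: returns (g-values, set of expanded states)."""
    g, closed = {start: 0.0}, set()
    frontier = [(h.get(start, 0.0), start)]
    while frontier:
        _, s = heapq.heappop(frontier)
        if s in closed:
            continue
        closed.add(s)
        if s == goal:
            return g, closed
        for t, cost in neighbors(s):
            if g[s] + cost < g.get(t, float("inf")):
                g[t] = g[s] + cost
                heapq.heappush(frontier, (g[t] + h.get(t, 0.0), t))
    return g, closed

def update_h(h, g, goal, expanded):
    """Class-2 reuse: for every expanded state s, raise h(s) to g(goal) - g(s).
    The updated h-values stay admissible but are more informed, so the next,
    similar search expands fewer states."""
    for s in expanded:
        h[s] = max(h.get(s, 0.0), g[goal] - g[s])

# Demo on a 4-connected 10x10 grid: search once from (0, 0), then reuse the
# updated h-values after the agent has advanced to (5, 5).
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 10 and 0 <= q[1] < 10:
            yield q, 1.0

goal, h = (9, 9), {}
g, expanded = a_star((0, 0), goal, neighbors, h)
update_h(h, g, goal, expanded)
_, fresh = a_star((5, 5), goal, neighbors, {})  # uninformed re-search
_, reused = a_star((5, 5), goal, neighbors, h)  # reuses updated h-values
print(len(fresh), ">", len(reused))             # reuse expands fewer states
```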
|
Incremental heuristic search : Incremental heuristic search has been extensively used in robotics, where a large number of path-planning systems are based on either D* (typically earlier systems) or D* Lite (current systems), two different incremental heuristic search algorithms.
|
Incremental heuristic search : Maxim Likhachev's page Sven Koenig's web page Anthony Stentz's web page
|
Cover's theorem : Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is named after the information theorist Thomas M. Cover, who stated it in 1965, referring to it as the counting function theorem.
|
Cover's theorem : Let the number of homogeneously linearly separable sets of N points in d dimensions be defined as a counting function C(N, d) of the number of points N and the dimensionality d. The theorem states that C(N, d) = 2 \sum_{k=0}^{d-1} \binom{N-1}{k}. It requires, as a necessary and sufficient condition, that the points are in general position. Simply put, this means that the points should be as linearly independent (non-aligned) as possible. This condition is satisfied "with probability 1", or almost surely, for random point sets, while it may easily be violated for real data, since these are often structured along smaller-dimensionality manifolds within the data space. The function C(N, d) follows two different regimes depending on the relationship between N and d. For N ≤ d + 1, the sum runs over an entire row of Pascal's triangle, so the function is exponential in N: C(N, d) = 2^N. This essentially means that any set of labelled points in general position and in number no larger than the dimensionality + 1 is linearly separable; in jargon, it is said that a linear classifier shatters any point set with N ≤ d + 1. This limiting quantity is also known as the Vapnik–Chervonenkis dimension of the linear classifier. For N > d + 1, the counting function grows less than exponentially. This means that, given a sample of fixed size N, a random set of labelled points is more likely to be linearly separable in a larger dimensionality d. Conversely, with fixed dimensionality, for larger sample sizes the number of linearly separable sets of random points will be smaller; in other words, the probability of finding a linearly separable sample decreases with N. A consequence of the theorem is that, given a set of training data that is not linearly separable, one can with high probability transform it into a training set that is linearly separable by projecting it into a higher-dimensional space via some non-linear transformation, or: A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.
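To make the two regimes concrete, here is a short numeric check of the counting function (an illustrative sketch in Python; `math.comb` supplies the binomial coefficients):

```python
from math import comb

def C(N: int, d: int) -> int:
    """Cover's counting function: the number of homogeneously linearly
    separable dichotomies of N points in general position in d dimensions."""
    return 2 * sum(comb(N - 1, k) for k in range(d))

# For N <= d + 1, every one of the 2^N labellings is separable:
assert C(4, 5) == 2 ** 4
# For N > d + 1, only a fraction of labellings remains separable:
print(C(20, 5), "of", 2 ** 20, "labellings")  # 10072 of 1048576
```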
|
Cover's theorem : The counting formula follows by induction on the recurrence C(N + 1, d) = C(N, d) + C(N, d − 1). To show that, with N fixed, increasing d may turn a set of points from non-separable to separable, a deterministic mapping may be used: suppose there are N points. Lift them onto the vertices of the simplex in the (N − 1)-dimensional real space. Since every partition of the samples into two sets is then separable by a linear separator, the property follows.
|
Cover's theorem : The 1965 paper contains multiple theorems. Theorem 6: Let X ∪ {y} = {x_1, ⋯, x_N, y} be in ϕ-general position in d-space, where ϕ = (ϕ_1, ϕ_2, ⋯, ϕ_d). Then y is ambiguous with respect to C(N, d − 1) dichotomies of X relative to the class of all ϕ-surfaces. Corollary: If each of the ϕ-separable dichotomies of X has equal probability, then the probability A(N, d) that y is ambiguous with respect to a random ϕ-separable dichotomy of X is C(N, d − 1) / C(N, d). If N/d → β, then in the limit N → ∞ this probability converges to 1 for 0 ≤ β ≤ 2 and to 1/(β − 1) for β ≥ 2. This can be interpreted as a bound on the memory capacity of a single perceptron unit, where d is the number of input weights into the perceptron: in the limit of large d, the perceptron would almost certainly be able to memorize up to 2d binary labels, but almost certainly fail to memorize any more than that. (MacKay 2003, p. 490)
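The ambiguity probability and its sharp transition at β = 2 can be checked numerically; the following sketch reuses the counting function from above (again illustrative, not from the cited sources):

```python
from math import comb

def C(N, d):
    # Cover's counting function, as defined above.
    return 2 * sum(comb(N - 1, k) for k in range(d))

def A(N, d):
    # Probability that a new point is ambiguous with respect to a random
    # separable dichotomy: C(N, d-1) / C(N, d).
    return C(N, d - 1) / C(N, d)

d = 200
for beta in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(beta, round(A(int(beta * d), d), 3))
# Prints values near 1 for beta <= 2 and roughly 1/(beta - 1) above;
# the transition at beta = 2 sharpens as d grows.
```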
|
Cover's theorem : Support vector machine Kernel method
|
Cover's theorem : Haykin, Simon (2009). Neural Networks and Learning Machines (Third ed.). Upper Saddle River, New Jersey: Pearson Education Inc. pp. 232–236. ISBN 978-0-13-147139-9. Cover, T.M. (1965). "Geometrical and Statistical properties of systems of linear inequalities with applications in pattern recognition" (PDF). IEEE Transactions on Electronic Computers. EC-14 (3): 326–334. doi:10.1109/pgec.1965.264137. S2CID 18251470. Archived from the original (PDF) on 2019-12-20. Mehrotra, K.; Mohan, C. K.; Ranka, S. (1997). Elements of artificial neural networks (2nd ed.). MIT Press. ISBN 0-262-13328-8. (Section 3.5) MacKay, David J. C. (2003). "40. Capacity of a Single Neuron". Information theory, inference, and learning algorithms. Cambridge: Cambridge University Press. ISBN 978-0-521-64298-9.
|
Latent semantic mapping : Latent semantic mapping (LSM) is a data-driven framework for modeling globally meaningful relationships implicit in large volumes of (often textual) data. It is a generalization of latent semantic analysis (LSA). In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents. LSM was derived from earlier work on latent semantic analysis, which has three main characteristics: discrete entities, usually in the form of words and documents, are mapped onto continuous vectors; the mapping involves a form of global correlation pattern; and dimensionality reduction is an important aspect of the analysis process. These generic properties have been identified as potentially useful in a variety of different contexts, which has encouraged great interest in LSM. Mac OS X v10.5 and later includes a framework implementing latent semantic mapping.
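As an illustrative stand-in for the three characteristics above, here is the LSA-style pipeline that LSM generalizes, sketched with scikit-learn (the library choice and toy corpus are assumptions, not part of Apple's LSM framework): discrete documents are mapped to continuous vectors via a global term-document matrix, then reduced in dimensionality.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "stocks fell as markets slid",
    "investors sold shares and stocks",
]

# Map discrete words/documents onto a term-document matrix (global correlations).
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)            # shape: (n_docs, n_terms)

# Reduce dimensionality; nearby vectors then share latent "meaning".
svd = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = svd.fit_transform(X)       # shape: (n_docs, 2)
print(doc_vectors)                       # pet docs cluster apart from finance docs
```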
|
Latent semantic mapping : Latent semantic analysis
|
Latent semantic mapping : Bellegarda, J.R. (2005). "Latent semantic mapping [information retrieval]". IEEE Signal Processing Magazine. 22 (5): 70–80. Bibcode:2005ISPM...22...70B. doi:10.1109/MSP.2005.1511825. S2CID 17327041. J. Bellegarda (2006). "Latent semantic mapping: Principles and applications". ICASSP 2006. Archived from the original on 2013-08-24. Retrieved 2013-08-24.
|
Human Problem Solving : Human Problem Solving (1972) is a book by Allen Newell and Herbert A. Simon.
|
Human Problem Solving : Problem solving
|
Intelligent control : Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.
|
Intelligent control : Intelligent control can be divided into the following major sub-domains: Neural network control Machine learning control Reinforcement learning Bayesian control Fuzzy control Neuro-fuzzy control Expert systems Genetic control New control techniques are created continuously as new models of intelligent behavior emerge and the computational methods to support them are developed.
|
Intelligent control : Action selection AI effect Applications of artificial intelligence Artificial intelligence systems integration Function approximation Hybrid intelligent system Lists List of emerging technologies Outline of artificial intelligence
|
Intelligent control : Antsaklis, P.J. (1993). Passino, K.M. (ed.). An Introduction to Intelligent and Autonomous Control. Kluwer Academic Publishers. ISBN 0-7923-9267-1. Archived from the original on 10 April 2009. Liu, J.; Wang, W.; Golnaraghi, F.; Kubica, E. (2010). "A Novel Fuzzy Framework for Nonlinear System Control". Fuzzy Sets and Systems. 161 (21): 2746–2759. doi:10.1016/j.fss.2010.04.009.
|
Intelligent control : Jeffrey T. Spooner, Manfredi Maggiore, Raul Ordóñez, and Kevin M. Passino, Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques, John Wiley & Sons, NY. Farrell, J.A.; Polycarpou, M.M. (2006). Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches. Wiley. ISBN 978-0-471-72788-0. Schramm, G. (1998). Intelligent Flight Control - A Fuzzy Logic Approach. TU Delft Press. ISBN 90-901192-4-8.
|
Multilayer perceptron : In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable. MLPs are trained using backpropagation and are colloquially referred to as "vanilla" networks. MLPs grew out of an effort to improve single-layer perceptrons, which could only be applied to linearly separable data. A perceptron traditionally used a Heaviside step function as its nonlinear activation function; however, the backpropagation algorithm requires that modern MLPs use continuous activation functions such as sigmoid or ReLU. Multilayer perceptrons form the basis of deep learning, and are applicable across a vast set of diverse domains.
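As a minimal illustration (not from the cited sources), a one-hidden-layer MLP trained by backpropagation can learn XOR, a classic example of data that is not linearly separable and hence out of reach for a single-layer perceptron; numpy is assumed, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a single-layer perceptron cannot fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through a continuous nonlinearity (required by backprop).
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```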
|
Multilayer perceptron : In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks. In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. In 1962, Rosenblatt published many variants and experiments on perceptrons in his book Principles of Neurodynamics, including up to two trainable layers learning by "back-propagating errors". However, this was not the backpropagation algorithm, and he did not have a general method for training multiple layers. In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published the Group Method of Data Handling, one of the first deep learning methods, which was used to train an eight-layer neural net in 1971. In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify nonlinearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers. Backpropagation was independently developed multiple times in the early 1970s. The earliest published instance was Seppo Linnainmaa's master's thesis (1970). Paul Werbos developed it independently in 1971, but had difficulty publishing it until 1982. In 1986, David E. Rumelhart et al. popularized backpropagation. In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio and co-authors. In 2021, a very simple NN architecture combining two deep MLPs with skip connections and layer normalizations was designed and called MLP-Mixer; its realizations, featuring 19 to 431 million parameters, were shown to be comparable to vision transformers of similar size on ImageNet and similar image classification tasks.
|
Multilayer perceptron : Weka: open-source data mining software with a multilayer perceptron implementation. Neuroph Studio documentation, which implements this algorithm and a few others.
|
Computational intelligence : In computer science, computational intelligence (CI) refers to concepts, paradigms, algorithms and implementations of systems that are designed to show "intelligent" behavior in complex and changing environments. These systems are aimed at mastering complex tasks in a wide variety of technical or commercial areas and offer solutions that recognize and interpret patterns, control processes, support decision-making or autonomously manoeuvre vehicles or robots in unknown environments, among other things. These concepts and paradigms are characterized by the ability to learn or adapt to new situations, to generalize, to abstract, to discover and associate. Nature-analog or nature-inspired methods play a key role, such as in neuroevolution for Computational Intelligence. CI approaches primarily address those complex real-world problems for which mathematical or traditional modeling is not appropriate for various reasons: the processes cannot be described exactly with complete knowledge, the processes are too complex for mathematical reasoning, they contain some uncertainties during the process, such as unforeseen changes in the environment or in the process itself, or the processes are simply stochastic in nature. Thus, CI techniques are properly aimed at processes that are ill-defined, complex, nonlinear, time-varying and/or stochastic. A recent definition of the IEEE Computational Intelligence Society describes CI as the theory, design, application and development of biologically and linguistically motivated computational paradigms. Traditionally the three main pillars of CI have been Neural Networks, Fuzzy Systems and Evolutionary Computation. ... CI is an evolving field and at present in addition to the three main constituents, it encompasses computing paradigms like ambient intelligence, artificial life, cultural learning, artificial endocrine networks, social reasoning, and artificial hormone networks. ... Over the last few years there has been an explosion of research on Deep Learning, in particular deep convolutional neural networks. Nowadays, deep learning has become the core method for artificial intelligence. In fact, some of the most successful AI systems are based on CI. However, as CI is an emerging and developing field, there is no final definition of CI, especially in terms of the list of concepts and paradigms that belong to it. The general requirements for the development of an "intelligent system" are ultimately always the same, namely the simulation of intelligent thinking and action in a specific area of application. To do this, the knowledge about this area must be represented in a model so that it can be processed. The quality of the resulting system depends largely on how well the model was chosen in the development process. Sometimes data-driven methods are suitable for finding a good model and sometimes logic-based knowledge representations deliver better results. Hybrid models are usually used in real applications. According to current textbooks, the following methods and paradigms, which largely complement each other, can be regarded as parts of CI: Fuzzy systems Neural networks and, in particular, convolutional neural networks Evolutionary computation and, in particular, multi-objective evolutionary optimization Swarm intelligence Bayesian networks Artificial immune systems Learning theory Probabilistic methods
|
Computational intelligence : Artificial intelligence (AI) is used in the media, but also by some of the scientists involved, as a kind of umbrella term for the various techniques associated with it or with CI. Craenen and Eiben state that attempts to define or at least describe CI can usually be assigned to one or more of the following groups: a "relative definition" comparing CI to AI; a conceptual treatment of key notions and their roles in CI; or a listing of the (established) areas that belong to it. The relationship between CI and AI has been a frequently discussed topic during the development of CI. While the above list implies that they are synonyms, the vast majority of AI/CI researchers working on the subject consider them to be distinct fields, where either CI is an alternative to AI, AI includes CI, or CI includes AI. The first of these three views goes back to Zadeh, the founder of fuzzy set theory, who differentiated machine intelligence into hard and soft computing techniques, which are used in artificial intelligence on the one hand and computational intelligence on the other. In hard computing (HC) and AI, inaccuracy and uncertainty are undesirable characteristics of a system, while soft computing (SC) and thus CI focus on dealing with these characteristics. Another frequently mentioned distinguishing feature is the representation of information in symbolic form in AI and in sub-symbolic form in CI techniques. Hard computing is a conventional computing method based on the principles of certainty and accuracy, and it is deterministic. It requires a precisely stated analytical model of the task to be processed and a prewritten program, i.e. a fixed set of instructions. The models used are based on Boolean logic (also called crisp logic), where e.g. an element can be either a member of a set or not, with nothing in between. When applied to real-world tasks, systems based on HC result in specific control actions defined by a mathematical model or algorithm. If an unforeseen situation occurs that is not included in the model or algorithm used, the action will most likely fail. Soft computing, on the other hand, is based on the fact that the human mind is capable of storing information and processing it in a goal-oriented way, even if it is imprecise and lacks certainty. SC is based on the model of the human brain with probabilistic thinking, fuzzy logic and multi-valued logic. Soft computing can process a wealth of data and perform a large number of computations, which may not be exact, in parallel. For hard problems for which no satisfying exact solutions based on HC are available, SC methods can be applied successfully. SC methods are usually stochastic in nature, i.e., they are randomly defined processes that can be analyzed statistically but not with precision. Up to now, the results of some CI methods, such as deep learning, cannot be verified and it is also not clear what they are based on. This problem represents an important scientific issue for the future. AI and CI are catchy terms, but they are also so similar that they can be confused. The meaning of both terms has developed and changed over a long period of time, with AI being used first. Bezdek describes this impressively and concludes that such buzzwords are frequently used and hyped by the scientific community, science management and (science) journalism. This is not least because AI and biological intelligence are emotionally charged terms, and it is still difficult to find a generally accepted definition of the basic term intelligence.
|
Computational intelligence : In 1950, Alan Turing, one of the founding fathers of computer science, developed a test for computer intelligence known as the Turing test. In this test, a person can ask questions via a keyboard and a monitor without knowing whether his counterpart is a human or a computer. A computer is considered intelligent if the interrogator cannot distinguish the computer from a human. This illustrates the discussion about intelligent computers at the beginning of the computer age. The term Computational Intelligence was first used as the title of the journal of the same name in 1985 and later by the IEEE Neural Networks Council (NNC), founded in 1989 by a group of researchers interested in the development of biological and artificial neural networks. On November 21, 2001, the NNC became the IEEE Neural Networks Society, which in turn became the IEEE Computational Intelligence Society two years later, after including new areas of interest such as fuzzy systems and evolutionary computation. The NNC helped organize the first IEEE World Congress on Computational Intelligence in Orlando, Florida in 1994. At this conference the first clear definition of Computational Intelligence was introduced by Bezdek: A system is computationally intelligent when it: deals with only numerical (low-level) data, has pattern-recognition components, does not use knowledge in the AI sense; and additionally when it (begins to) exhibit (1) computational adaptivity; (2) computational fault tolerance; (3) speed approaching human-like turnaround and (4) error rates that approximate human performance. Today, with machine learning and deep learning in particular utilizing a breadth of supervised, unsupervised, and reinforcement learning approaches, the CI landscape has been greatly enhanced with novel intelligent approaches.
|
Computational intelligence : The main applications of Computational Intelligence include computer science, engineering, data analysis and bio-medicine.
|
Computational intelligence : According to bibliometric studies, computational intelligence plays a key role in research. All the major academic publishers accept manuscripts in which a combination of fuzzy logic, neural networks and evolutionary computation is discussed. On the other hand, computational intelligence is largely absent from university curricula; the number of technical universities at which students can attend such a course is limited. Only British Columbia, the Technical University of Dortmund (involved in the European fuzzy boom) and Georgia Southern University offer courses in this domain. One reason major universities ignore the topic is a lack of resources: the existing computer science courses are so packed that at the end of the semester there is no room for fuzzy logic. Sometimes it is taught as a subproject in existing introductory courses, but in most cases universities prefer courses about classical AI concepts based on Boolean logic, Turing machines and toy problems like blocks world. For some time now, with the rise of STEM education, the situation has changed somewhat. There are some efforts in which multidisciplinary approaches are preferred, allowing students to understand complex adaptive systems. These objectives are discussed only on a theoretical basis; university curricula have not yet been adapted.
|
Computational intelligence : IEEE Transactions on Neural Networks and Learning Systems IEEE Transactions on Fuzzy Systems IEEE Transactions on Evolutionary Computation IEEE Transactions on Emerging Topics in Computational Intelligence IEEE Transactions on Autonomous Mental Development IEEE/ACM Transactions on Computational Biology and Bioinformatics IEEE Transactions on Computational Intelligence and AI in Games Applied Computational Intelligence and Soft Computing
|
Computational intelligence : Computational Intelligence: An Introduction by Andries Engelbrecht. Wiley & Sons. ISBN 0-470-84870-7 Computational Intelligence: A Logical Approach by David Poole, Alan Mackworth, Randy Goebel. Oxford University Press. ISBN 0-19-510270-3 Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121
|
U-Net : U-Net is a convolutional neural network that was developed for image segmentation. The network is based on a fully convolutional neural network whose architecture was modified and extended to work with fewer training images and to yield more precise segmentation. Segmentation of a 512 × 512 image takes less than a second on a modern (2015) GPU using the U-Net architecture. The U-Net architecture has also been employed in diffusion models for iterative image denoising. This technology underlies many modern image generation models, such as DALL-E, Midjourney, and Stable Diffusion.
|
U-Net : The U-Net architecture stems from the so-called "fully convolutional network". The main idea is to supplement a usual contracting network by successive layers, where pooling operations are replaced by upsampling operators. Hence these layers increase the resolution of the output. A successive convolutional layer can then learn to assemble a precise output based on this information. One important modification in U-Net is that there are a large number of feature channels in the upsampling part, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting part, and yields a u-shaped architecture. The network only uses the valid part of each convolution without any fully connected layers. To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory.
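The mirror extrapolation at the image border can be illustrated with numpy's reflect padding (an illustrative snippet, not the authors' code):

```python
import numpy as np

# A small "image" tile; U-Net's valid convolutions shrink the output, so the
# input is extended by mirroring before tiling large images.
tile = np.arange(16, dtype=float).reshape(4, 4)
padded = np.pad(tile, pad_width=2, mode="reflect")
print(padded.shape)  # (8, 8): 2 mirrored pixels of context on every side
```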
|
U-Net : The network consists of a contracting path and an expansive path, which gives it the u-shaped architecture. The contracting path is a typical convolutional network that consists of repeated application of convolutions, each followed by a rectified linear unit (ReLU) and a max pooling operation. During the contraction, the spatial information is reduced while feature information is increased. The expansive pathway combines the feature and spatial information through a sequence of up-convolutions and concatenations with high-resolution features from the contracting path.
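To make the u-shape concrete, here is a heavily reduced sketch assuming PyTorch: one downsampling level instead of the paper's four, zero-padded rather than valid convolutions, and arbitrary channel counts. It shows the contracting path, the up-convolution, and the skip concatenation described above.

```python
import torch
from torch import nn

def block(c_in, c_out):
    # Two 3x3 convolutions, each followed by ReLU (padding=1 for simplicity,
    # rather than the paper's unpadded "valid" convolutions).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)                        # contracting path
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # expansive path
        self.dec1 = block(64, 32)                          # 64 = 32 skip + 32 up
        self.head = nn.Conv2d(32, out_ch, 1)               # per-pixel classes

    def forward(self, x):
        s1 = self.enc1(x)                  # high-resolution features
        bottom = self.enc2(self.pool(s1))  # reduced spatial, richer features
        up = self.up(bottom)
        # Skip connection: concatenate contracting-path features channel-wise.
        return self.head(self.dec1(torch.cat([up, s1], dim=1)))

x = torch.randn(1, 1, 64, 64)
print(TinyUNet()(x).shape)  # torch.Size([1, 2, 64, 64])
```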
|
U-Net : There are many applications of U-Net in biomedical image segmentation, such as brain image segmentation (BRATS) and liver image segmentation (SLIVER07), as well as protein binding site prediction. U-Net implementations have also found use in the physical sciences, for example in the analysis of micrographs of materials. Variations of the U-Net have also been applied for medical image reconstruction. Some variants and applications of U-Net are as follows: pixel-wise regression using U-Net and its application to pansharpening; 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation; TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation; image-to-image translation to estimate fluorescent stains; and binding site prediction of protein structure.
|
U-Net : U-Net was created by Olaf Ronneberger, Philipp Fischer, and Thomas Brox in 2015 and reported in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation". It is an improvement and development of the fully convolutional network (FCN): Evan Shelhamer, Jonathan Long, Trevor Darrell (2014). "Fully convolutional networks for semantic segmentation".
|