Lexical Markup Framework : The ISO number is 24613. The LMF specification was officially published as an International Standard on 17 November 2008.
|
Lexical Markup Framework : The ISO/TC 37 standards are currently elaborated as high-level specifications and deal with word segmentation (ISO 24614), annotations (ISO 24611, a.k.a. MAF; ISO 24612, a.k.a. LAF; ISO 24615, a.k.a. SynAF; and ISO 24617-1, a.k.a. SemAF/Time), feature structures (ISO 24610), multimedia containers (ISO 24616, a.k.a. MLIF), and lexicons (ISO 24613). These standards are based on low-level specifications dedicated to constants, namely data categories (revision of ISO 12620), language codes (ISO 639), script codes (ISO 15924), country codes (ISO 3166) and Unicode (ISO 10646). This two-level organization forms a coherent family of standards with the following common and simple rules: the high-level specifications provide structural elements that are adorned by standardized constants, and the low-level specifications provide those standardized constants as metadata.
|
Lexical Markup Framework : Linguistic constants like /feminine/ or /transitive/ are not defined within LMF itself but are recorded in the Data Category Registry (DCR), which is maintained as a global resource by ISO/TC 37 in compliance with ISO/IEC 11179-3:2003. These constants are used to adorn the high-level structural elements. The LMF specification complies with the modeling principles of the Unified Modeling Language (UML) as defined by the Object Management Group (OMG): the structure is specified by means of UML class diagrams, and the examples are presented by means of UML instance (or object) diagrams. An XML DTD is given in an annex of the LMF document.
|
Lexical Markup Framework : LMF is composed of the following components: the core package, a structural skeleton that describes the basic hierarchy of information in a lexical entry; and extensions of the core package, expressed in a framework that describes the reuse of the core components in conjunction with the additional components required for a specific lexical resource. The extensions are specifically dedicated to morphology, MRD, NLP syntax, NLP semantics, NLP multilingual notations, NLP morphological patterns, multiword expression patterns, and constraint expression patterns.
|
Lexical Markup Framework : In the following example, the lexical entry is associated with the lemma clergyman and two inflected forms, clergyman and clergymen. The language coding is set for the whole lexical resource, and the language value is set for the whole lexicon, as shown in the following UML instance diagram. The elements Lexical Resource, Global Information, Lexicon, Lexical Entry, Lemma, and Word Form define the structure of the lexicon; they are specified within the LMF document. By contrast, languageCoding, language, partOfSpeech, commonNoun, writtenForm, grammaticalNumber, singular and plural are data categories taken from the Data Category Registry; these marks adorn the structure. The values ISO 639-3, clergyman and clergymen are plain character strings, while the value eng is taken from the list of languages defined by ISO 639-3. With some additional information such as dtdVersion and feat, the same data can be expressed as an XML fragment. This example is rather simple; LMF can represent much more complex linguistic descriptions, but the XML tagging then becomes correspondingly complex.
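The XML encoding described above can be sketched as the following fragment. This is an illustrative reconstruction built only from the elements and data categories named in the example (the dtdVersion value is an assumption); the exact fragment in the LMF document may differ:

```xml
<LexicalResource dtdVersion="16">
  <GlobalInformation>
    <feat att="languageCoding" val="ISO 639-3"/>
  </GlobalInformation>
  <Lexicon>
    <feat att="language" val="eng"/>
    <LexicalEntry>
      <feat att="partOfSpeech" val="commonNoun"/>
      <Lemma>
        <feat att="writtenForm" val="clergyman"/>
      </Lemma>
      <WordForm>
        <feat att="writtenForm" val="clergyman"/>
        <feat att="grammaticalNumber" val="singular"/>
      </WordForm>
      <WordForm>
        <feat att="writtenForm" val="clergymen"/>
        <feat att="grammaticalNumber" val="plural"/>
      </WordForm>
    </LexicalEntry>
  </Lexicon>
</LexicalResource>
```

The structural elements carry no linguistic content of their own; every linguistic value is attached through a feat element naming a data category and its value.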
|
Lexical Markup Framework : The first publication about the LMF specification as ratified by ISO (by 2015 this had become the 9th most cited paper among LREC conference papers):
Language Resources and Evaluation, LREC-2006/Genoa: Gil Francopoulo, Monte George, Nicoletta Calzolari, Monica Monachini, Nuria Bel, Mandy Pet, Claudia Soria: Lexical Markup Framework (LMF)
About semantic representation:
Gesellschaft für linguistische Datenverarbeitung, GLDV-2007/Tübingen: Gil Francopoulo, Nuria Bel, Monte George, Nicoletta Calzolari, Monica Monachini, Mandy Pet, Claudia Soria: Lexical Markup Framework ISO standard for semantic information in NLP lexicons
About African languages:
Traitement Automatique des Langues Naturelles, Marseille, 2014: Mouhamadou Khoule, Mouhamad Ndiankho Thiam, El Hadj Mamadou Nguer: Toward the establishment of an LMF-based Wolof language lexicon (Vers la mise en place d'un lexique basé sur LMF pour la langue wolof) [in French]
About Asian languages:
Lexicography, Journal of ASIALEX, Springer 2014: Gil Francopoulo, Chu-Ren Huang: Lexical Markup Framework: An ISO Standard for Electronic Lexicons and its Implications for Asian Languages. DOI 10.1007/s40607-014-0006-z
About European languages:
COLING 2010: Verena Henrich, Erhard Hinrichs: Standardizing Wordnets in the ISO Standard LMF: Wordnet-LMF for GermaNet
EACL 2012: Judith Eckle-Kohler, Iryna Gurevych: Subcat-LMF: Fleshing out a standardized format for subcategorization frame interoperability
EACL 2012: Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, Christian Wirth: UBY - A Large-Scale Unified Lexical-Semantic Resource Based on LMF
About Semitic languages:
Journal of Natural Language Engineering, Cambridge University Press (to appear in Spring 2015): Aida Khemakhem, Bilel Gargouri, Abdelmajid Ben Hamadou, Gil Francopoulo: ISO Standard Modeling of a large Arabic Dictionary
Proceedings of the seventh Global Wordnet Conference, 2014: Nadia B. M. Karmani, Hsan Soussou, Adel M. Alimi: Building a standardized Wordnet in the ISO LMF for aeb language
Proceedings of the workshop HLT & NLP within the Arabic world, LREC 2008: Noureddine Loukil, Kais Haddar, Abdelmajid Ben Hamadou: Towards a syntactic lexicon of Arabic Verbs
Traitement Automatique des Langues Naturelles, Toulouse, 2007 (in French): Khemakhem A., Gargouri B., Abdelwahed A., Francopoulo G.: Modélisation des paradigmes de flexion des verbes arabes selon la norme LMF-ISO 24613
About Proper Names:
Language Resources and Evaluation, LREC-2008/Marrakech: Denis Maurel: Prolexbase: A multilingual relational lexical database of proper names. This resource is available at the ortolang web site.
|
Lexical Markup Framework : A book published in 2013, LMF Lexical Markup Framework, is entirely dedicated to LMF. The first chapter deals with the history of lexicon models, the second chapter is a formal presentation of the data model, and the third deals with the relation to the data categories of the ISO DCR. The other 14 chapters each deal with a lexicon or a system, in the civil or military domain, within scientific research labs or for industrial applications. These include Wordnet-LMF, Prolmf, DUELME, UBY-LMF, LG-LMF, RELISH, GlobalAtlas (or Global Atlas) and Wordscape.
|
Lexical Markup Framework :
Computational lexicology
Lexical semantics
Morphology (linguistics), for explanations concerning paradigms and morphosyntax
Machine translation, for a presentation of the different types of multilingual notations (see section Approaches)
Morphological pattern, for the difference between a paradigm and a paradigm pattern
WordNet, for a presentation of the most famous semantic lexicon for the English language
Universal Terminology eXchange (UTX), for a user-oriented, alternative format for machine-readable dictionaries
Universal Networking Language
UBY-LMF, for an application of LMF
OntoLex-Lemon, for an LMF-based model for publishing dictionaries as knowledge graphs, in RDF and/or as Linguistic Linked Open Data
|
Stochastic Neural Analog Reinforcement Calculator : The Stochastic Neural Analog Reinforcement Calculator (SNARC) is a neural-net machine designed by Marvin Lee Minsky. Prompted by a letter from Minsky, George Armitage Miller gathered the funding for the project (a few thousand dollars) from the Office of Naval Research in the summer of 1951, with the work to be carried out by Minsky, then a graduate student in mathematics at Princeton University. Dean S. Edmonds, at the time a physics graduate student at Princeton, volunteered that he was good with electronics, so Minsky brought him onto the project. During his undergraduate years, Minsky had been inspired by the 1943 Warren McCulloch and Walter Pitts paper on artificial neurons, and he decided to build such a machine. The learning was Skinnerian reinforcement learning, and Minsky talked with Skinner extensively during the development of the machine. They tested the machine on a copy of Shannon's maze and found that it could learn to solve it. Unlike Shannon's maze, this machine did not control a physical robot but simulated rats running in a maze. The simulation was displayed as an "arrangement of lights", and the circuit was reinforced each time a simulated rat reached the goal. The machine surprised its creators: "The rats actually interacted with one another. If one of them found a good path, the others would tend to follow it." The machine itself is a randomly connected network of approximately 40 Hebb synapses. Each synapse has a memory that holds the probability that a signal arriving at its input will be passed on to its output. A knob ranging from 0 to 1 shows this propagation probability. When a signal does get through, a capacitor remembers the event and engages an electromagnetic clutch. At this point, the operator can press a button to give the machine a reward. 
This activates a motor on a surplus Minneapolis-Honeywell C-1 gyroscopic autopilot from a B-24 bomber. The motor turns a chain that runs to all 40 synapse machines, checking whether each clutch is engaged. Because a capacitor can only "remember" for a limited time, the chain catches only the most recent candidate updates to the probabilities. Each neuron contained 6 vacuum tubes and a motor. The entire machine was "the size of a grand piano" and contained 300 vacuum tubes. The tubes failed regularly, but the machine would still work despite the failures. The machine is considered one of the first pioneering attempts in the field of artificial intelligence. Minsky went on to be a founding member of MIT's Project MAC, which split to become the MIT Laboratory for Computer Science and the MIT Artificial Intelligence Lab, and is now the MIT Computer Science and Artificial Intelligence Laboratory. In 1985 Minsky became a founding member of the MIT Media Laboratory. According to Minsky, he loaned the machine to students at Dartmouth, and it was subsequently lost, except for a single neuron. A photo of that last neuron shows 6 vacuum tubes, one of which is a Sylvania JAN-CHS-6H6GT/G/VT-90A.
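The synapse mechanism described above can be sketched in simulation form. This is an illustrative model only; the class name, step size, and reward scheme are expository assumptions, not Minsky's actual circuit:

```python
import random

class StochasticSynapse:
    """A SNARC-style synapse: it passes a signal on with some probability,
    and briefly remembers whether it recently fired (capacitor + clutch)."""
    def __init__(self, p=0.5):
        self.p = p                  # knob position: probability of propagation
        self.recently_fired = False

    def transmit(self, rng):
        # The signal gets through with probability p; firing engages the clutch.
        self.recently_fired = rng.random() < self.p
        return self.recently_fired

    def reward(self, step=0.1):
        # Operator presses the reward button: the chain drive nudges the knob
        # of every synapse whose clutch is still engaged.
        if self.recently_fired:
            self.p = min(1.0, self.p + step)
        self.recently_fired = False

rng = random.Random(0)
synapse = StochasticSynapse(p=0.5)
for _ in range(20):
    if synapse.transmit(rng):   # reward only trials where the signal got through
        synapse.reward()
print(round(synapse.p, 2))      # → 1.0
```

Each reward nudges upward the knob of every recently fired synapse, so propagation paths that lead to the goal become steadily more likely, which is the essence of the machine's Skinnerian reinforcement.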
|
Stochastic Neural Analog Reinforcement Calculator : Works cited:
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Russell, Stuart; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach. London, England: Pearson Education. ISBN 0-137-90395-2.
|
Stochastic Neural Analog Reinforcement Calculator :
Levy, Steven (2010). Hackers. Sebastopol, California: O'Reilly. ISBN 978-1-449-38839-3.
"A Neural-Analogue Calculator Based upon a Probability Model of Reinforcement" (Document). Cambridge, Massachusetts: Harvard University Psychological Laboratories. January 8, 1952. Describes the hardware of the SNARC.
|
Stochastic Neural Analog Reinforcement Calculator :
1951 – SNARC Maze Solver – Minsky / Edmonds (American) at Cyberneticzoo.com
2011 oral history interview with Marvin Minsky. Relevant segments that concern SNARC:
Building my randomly wired neural network machine (136/151)
Show and tell: My neural network machine (137/151)
Learning machine theories after SNARC (138/151)
|
Deep Instinct : Deep Instinct is a cybersecurity company that applies deep learning to the prevention and detection of malware. The company was named a Technology Pioneer by the World Economic Forum in 2017.
|
Deep Instinct : Deep Instinct was founded in 2015 by Guy Caspi, Dr. Eli David, and Nadav Maman, and is headquartered in New York City. In July 2017, NVIDIA became an investor. According to Tom's Hardware, NVIDIA's investment gave the company access to a GPU-based neural network and the CUDA platform, which it was using to maximize vulnerability detection rates. As of February 2020, the company had raised $43 million in a Series C funding round. In April 2019, Deep Instinct commissioned an art project titled The Persistence of Chaos by Chinese artist Guo O. Dong, consisting of a laptop infected with 6 pieces of malware that collectively represented $95 billion in damages. The artwork was auctioned with a final bid of $1,345,000. In 2019, Globes reported that HP Inc had partnered with Deep Instinct to launch the security solution HP Sure Sense, which has been applied to EliteBook and ZBook devices. In April 2021, Deep Instinct raised $100 million in Series D funding to accelerate growth.
|
Deep Instinct : Official website
|
Synaptic weight : In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research.
|
Synaptic weight : In a computational neural network, a vector or set of inputs x and outputs y (pre- and post-synaptic neurons, respectively) are interconnected with synaptic weights represented by the matrix w, where for a linear neuron y_j = Σ_i w_ij x_i, or in matrix form, y = w x, and where the rows of the synaptic matrix represent the vector of synaptic weights for the output indexed by j. The synaptic weight is changed by using a learning rule, the most basic of which is Hebb's rule, usually stated in biological terms as "neurons that fire together, wire together." Computationally, this means that if a large signal from one of the input neurons results in a large signal from one of the output neurons, then the synaptic weight between those two neurons will increase. The rule is unstable, however, and is typically modified using variations such as Oja's rule, radial basis functions or the backpropagation algorithm.
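A minimal sketch of the Hebbian update just described, for a single linear neuron with update Δw_i = η x_i y (the learning rate η and toy data are illustrative choices):

```python
def linear_neuron(w, x):
    """Output of a linear neuron: y = sum_i w[i] * x[i]."""
    return sum(wi * xi for wi, xi in zip(w, x))

def hebb_update(w, x, eta=0.1):
    """One step of Hebb's rule: a weight grows when its input and the
    output are simultaneously large ("fire together, wire together")."""
    y = linear_neuron(w, x)
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.5, 0.5]
x = [1.0, 0.0]          # only the first input is active
for _ in range(3):
    w = hebb_update(w, x)
print([round(wi, 3) for wi in w])   # first weight grows, second is unchanged
```

Note that the active input's weight keeps growing without bound under repeated presentations; this is the instability mentioned above, which Oja's rule corrects by adding a normalizing decay term.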
|
Synaptic weight : For biological networks, the effect of synaptic weights is not as simple as for linear neurons or Hebbian learning. However, biophysical models such as BCM theory have seen some success in mathematically describing these networks. In the mammalian central nervous system, signal transmission is carried out by interconnected networks of nerve cells, or neurons. For the basic pyramidal neuron, the input signal is carried by the axon, which releases neurotransmitter chemicals into the synapse; these are picked up by the dendrites of the next neuron, which can then generate an action potential, analogous to the output signal in the computational case. The synaptic weight in this process is determined by several variable factors: how well the input signal propagates through the axon (see myelination); the amount of neurotransmitter released into the synapse and the amount that can be absorbed by the following cell (determined by the number of AMPA and NMDA receptors on the cell membrane and the amount of intracellular calcium and other ions); the number of such connections made by the axon to the dendrites; and how well the signal propagates and integrates in the postsynaptic cell. The changes in synaptic weight that occur are known as synaptic plasticity, and the process behind long-term changes (long-term potentiation and depression) is still poorly understood. Hebb's learning rule was originally applied to biological systems, but it has had to undergo many modifications as a number of theoretical and experimental problems came to light.
|
Synaptic weight :
Neural network
Synaptic plasticity
Hebbian theory
|
Machine Learning (journal) : Machine Learning is a peer-reviewed scientific journal, published since 1986. In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet. Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
|
Machine Learning (journal) :
J. R. Quinlan (1986). "Induction of Decision Trees". Machine Learning. 1: 81–106. doi:10.1007/BF00116251.
Nick Littlestone (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm" (PDF). Machine Learning. 2 (4): 285–318. doi:10.1007/BF00116827.
John R. Anderson and Michael Matessa (1992). "Explorations of an Incremental, Bayesian Algorithm for Categorization". Machine Learning. 9 (4): 275–308. doi:10.1007/BF00994109.
David Klahr (1994). "Children, Adults, and Machines as Discovery Systems". Machine Learning. 14 (3): 313–320. doi:10.1007/BF00993981.
Thomas Dean, Dana Angluin, Kenneth Basye, Sean Engelson, Leslie Kaelbling, Evangelos Kokkevis and Oded Maron (1995). "Inferring Finite Automata with Stochastic Output Functions and an Application to Map Learning". Machine Learning. 18: 81–108. doi:10.1007/BF00993822.
Luc De Raedt and Luc Dehaspe (1997). "Clausal Discovery". Machine Learning. 26 (2/3): 99–146. doi:10.1023/A:1007361123060.
C. de la Higuera (1997). "Characteristic Sets for Grammatical Inference". Machine Learning. 27: 1–14.
Robert E. Schapire and Yoram Singer (1999). "Improved Boosting Algorithms Using Confidence-rated Predictions". Machine Learning. 37 (3): 297–336. doi:10.1023/A:1007614523901.
Robert E. Schapire and Yoram Singer (2000). "BoosTexter: A Boosting-based System for Text Categorization". Machine Learning. 39 (2/3): 135–168. doi:10.1023/A:1007649029923.
P. Rossmanith and T. Zeugmann (2001). "Stochastic Finite Learning of the Pattern Languages". Machine Learning. 44 (1–2): 67–91. doi:10.1023/A:1010875913047.
Parekh, Rajesh; Honavar, Vasant (2001). "Learning DFA from Simple Examples". Machine Learning. 44 (1/2): 9–35. doi:10.1023/A:1010822518073.
Ayhan Demiriz, Kristin P. Bennett and John Shawe-Taylor (2002). "Linear Programming Boosting via Column Generation". Machine Learning. 46: 225–254. doi:10.1023/A:1012470815092.
Simon Colton and Stephen Muggleton (2006). "Mathematical Applications of Inductive Logic Programming" (PDF). Machine Learning. 64 (1–3): 25–64. doi:10.1007/s10994-006-8259-x.
Will Bridewell, Pat Langley, Ljupco Todorovski and Saso Dzeroski (2008). "Inductive Process Modeling". Machine Learning.
Stephen Muggleton and Alireza Tamaddoni-Nezhad (2008). "QG/GA: a stochastic search for Progol". Machine Learning. 70 (2–3): 121–133. doi:10.1007/s10994-007-5029-3.
|
Large width limits of neural networks : Artificial neural networks are a class of models used in machine learning, and inspired by biological neural networks. They are the core component of modern deep learning algorithms. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. Theoretical analysis of artificial neural networks sometimes considers the limiting case that layer width becomes large or infinite. This limit enables simple analytic statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. This wide layer limit is also of practical interest, since finite width neural networks often perform strictly better as layer width is increased.
|
Large width limits of neural networks : The Neural Network Gaussian Process (NNGP) corresponds to the infinite width limit of Bayesian neural networks, and to the distribution over functions realized by non-Bayesian neural networks after random initialization. The same underlying computations that are used to derive the NNGP kernel are also used in deep information propagation to characterize the propagation of information about gradients and inputs through a deep network. This characterization is used to predict how model trainability depends on architecture and initialization hyper-parameters. The Neural Tangent Kernel (NTK) describes the evolution of neural network predictions during gradient descent training. In the infinite width limit the NTK usually becomes constant, often allowing closed form expressions for the function computed by a wide neural network throughout gradient descent training; the training dynamics essentially become linearized. Mean-field limit analysis, when applied to neural networks with weight scaling of ∼1/h instead of ∼1/√h and large enough learning rates, predicts qualitatively distinct nonlinear training dynamics compared to the static linear behavior described by the fixed neural tangent kernel, suggesting alternative pathways for understanding infinite-width networks. Catapult dynamics describe neural network training dynamics in the case that logits diverge to infinity as the layer width is taken to infinity, and describe qualitative properties of early training dynamics.
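As a concrete illustration of the NNGP picture (a hedged Monte Carlo sketch, not taken from the works above): for a one-hidden-layer network f(x) = (1/√h) Σ_j v_j tanh(w_j x) with standard-normal weights, the variance of f at a fixed input should match the NNGP kernel value E[tanh(w x)²], estimated here by plain Monte Carlo:

```python
import math
import random

def network_output(x, width, rng):
    """f(x) = (1/sqrt(h)) * sum_j v_j * tanh(w_j * x) with N(0,1) weights,
    for a scalar input x."""
    total = 0.0
    for _ in range(width):
        w = rng.gauss(0.0, 1.0)
        v = rng.gauss(0.0, 1.0)
        total += v * math.tanh(w * x)
    return total / math.sqrt(width)

def output_variance(x, width, n_samples, rng):
    """Sample variance of f(x) over many random initializations."""
    outs = [network_output(x, width, rng) for _ in range(n_samples)]
    mean = sum(outs) / n_samples
    return sum((o - mean) ** 2 for o in outs) / n_samples

rng = random.Random(42)
x = 1.0
# Limiting NNGP variance K(x, x) = E[tanh(w x)^2] over w ~ N(0, 1).
limit = sum(math.tanh(rng.gauss(0.0, 1.0) * x) ** 2 for _ in range(20000)) / 20000
wide = output_variance(x, width=200, n_samples=2000, rng=rng)
print(round(wide, 3), round(limit, 3))
```

The second moment already matches the kernel at finite width; what the h → ∞ limit adds is that the full distribution of f over random initializations becomes Gaussian with this covariance.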
|
Intelligent database : Until the 1980s, databases were viewed as computer systems that stored record-oriented business data such as manufacturing inventories, bank records, and sales transactions. A database system was not expected to merge numeric data with text, images, or multimedia information, nor was it expected to automatically notice patterns in the data it stored. In the late 1980s the concept of an intelligent database was put forward as a system that manages information (rather than data) in a way that appears natural to users and goes beyond simple record keeping. The term was introduced in 1989 by the book Intelligent Databases by Kamran Parsaye, Mark Chignell, Setrag Khoshafian and Harry Wong. The concept postulated three levels of intelligence for such systems: high-level tools, the user interface and the database engine. The high-level tools manage data quality and automatically discover relevant patterns in the data through a process called data mining; this layer often relies on artificial intelligence techniques. The user interface uses hypermedia in a form that uniformly manages text, images and numeric data. The intelligent database engine supports the other two layers, often merging relational database techniques with object orientation. In the twenty-first century, intelligent databases have become widespread: hospital databases, for example, can call up patient histories consisting of charts, text and x-ray images with just a few mouse clicks, and many corporate databases include decision support tools based on sales pattern analysis.
|
Intelligent database : Intelligent Databases, book
|
Algorithmic accountability : Algorithmic accountability refers to the allocation of responsibility for the consequences of real-world actions influenced by algorithms used in decision-making processes. Ideally, algorithms should be designed to eliminate bias from their decision-making outcomes. This means they ought to evaluate only relevant characteristics of the input data, avoiding distinctions based on attributes that are generally inappropriate in social contexts, such as an individual's ethnicity in legal judgments. However, adherence to this principle is not always guaranteed, and there are instances where individuals may be adversely affected by algorithmic decisions. Responsibility for any harm resulting from a machine's decision may lie with the algorithm itself or with the individuals who designed it, particularly if the decision resulted from bias or flawed data analysis inherent in the algorithm's design.
|
Algorithmic accountability : Algorithms are widely utilized across various sectors of society that incorporate computational techniques in their control systems. These applications span numerous industries, including but not limited to medical, transportation, and payment services. In these contexts, algorithms perform functions such as:
Approving or denying credit card applications;
Counting votes in elections;
Approving or denying immigrant visas;
Determining which taxpayers will be audited on their income taxes;
Managing systems that control self-driving cars on a highway;
Scoring individuals as potential criminals for use in legal proceedings.
However, the implementation of these algorithms can be complex and opaque. Generally, algorithms function as "black boxes," meaning that the specific processes an input undergoes during execution are often not transparent, with users typically only seeing the resulting output. This lack of transparency raises concerns about potential biases within the algorithms, as the parameters influencing decision-making may not be well understood. The outputs generated can lead to perceptions of bias, especially if individuals in similar circumstances receive different results. According to Nicholas Diakopoulos: "But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner 'thoughts' hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes."
|
Algorithmic accountability : Algorithms are prevalent across various fields and significantly influence decisions that affect the population at large. Their underlying structures and parameters often remain unknown to those impacted by their outcomes. A notable case illustrating this issue is a recent ruling by the Wisconsin Supreme Court concerning "risk assessment" algorithms used in criminal justice. The court determined that scores generated by such algorithms, which analyze multiple parameters from individuals, should not be used as a determining factor for arresting an accused individual. Furthermore, the court mandated that all reports submitted to judges must include information regarding the accuracy of the algorithm used to compute these scores. This ruling is regarded as a noteworthy development in how society should manage software that makes consequential decisions, highlighting the importance of reliability, particularly in complex settings like the legal system. The use of algorithms in these contexts necessitates a high degree of impartiality in processing input data. However, experts note that there is still considerable work to be done to ensure the accuracy of algorithmic results. Questions about the transparency of data processing continue to arise, which raises issues regarding the appropriateness of the algorithms and the intentions of their designers.
|
Algorithmic accountability : A notable instance of potential algorithmic bias is highlighted in an article by The Washington Post regarding the ride-hailing service Uber. An analysis of collected data revealed that estimated waiting times for users varied based on the neighborhoods in which they resided. Key factors influencing these discrepancies included the predominant ethnicity and average income of the area. Specifically, neighborhoods with a majority white population and higher economic status tended to have shorter waiting times, while those with more diverse ethnic compositions and lower average incomes experienced longer waits. It’s important to clarify that this observation reflects a correlation identified in the data, rather than a definitive cause-and-effect relationship. No value judgments are made regarding the behavior of the Uber app in these cases. In a separate analysis published in the "Direito Digit@l" column on the Migalhas website, authors Coriolano Almeida Camargo and Marcelo Crespo examine the use of algorithms in decision-making contexts traditionally handled by humans. They discuss the challenges in assessing whether machine-generated decisions are fair and the potential flaws that can arise in this validation process. The issue transcends and will transcend the concern with which data is collected from consumers to the question of how this data is used by algorithms. Despite the existence of some consumer protection regulations, there is no effective mechanism available to consumers that tells them, for example, whether they have been automatically discriminated against by being denied loans or jobs. The rapid advancement of technology has introduced numerous innovations to society, including the development of autonomous vehicles. These vehicles rely on algorithms embedded within their systems to manage navigation and respond to various driving conditions. 
Autonomous systems are designed to collect data and evaluate their surroundings in real time, allowing them to make decisions that simulate the actions of a human driver. In their analysis, Camargo and Crespo address potential issues associated with the algorithms used in autonomous vehicles. They particularly emphasize the challenges related to decision-making during critical moments, highlighting the complexities and ethical considerations involved in programming such systems to ensure safety and fairness: "The technological landscape is rapidly changing with the advent of very powerful computers and algorithms that are moving toward the impressive development of artificial intelligence. We have no doubt that artificial intelligence will revolutionize the provision of services and also industry. The problem is that ethical issues urgently need to be thought through and discussed. Are we simply going to allow machines to judge us in court cases? Or to decide who should live or die in accident situations in which some technological equipment, such as an autonomous car, could intervene?" On the TechCrunch website, Hemant Taneja wrote: Concern about “black box” algorithms that govern our lives has been spreading. New York University’s Information Law Institute hosted a conference on algorithmic accountability, noting: “Scholars, stakeholders, and policymakers question the adequacy of existing mechanisms governing algorithmic decision-making and grapple with new challenges presented by the rise of algorithmic power in terms of transparency, fairness, and equal treatment.” Yale Law School’s Information Society Project is studying this, too. “Algorithmic modeling may be biased or limited, and the uses of algorithms are still opaque in many critical sectors,” the group concluded.
|
Algorithmic accountability : Discussions among experts have sought viable solutions to understand the operations of algorithms, often referred to as "black boxes." It is generally proposed that the companies responsible for developing and implementing these algorithms should ensure their reliability by disclosing the internal processes of their systems. Hemant Taneja, writing for TechCrunch, emphasizes that major technology companies, such as Google, Amazon, and Uber, must actively incorporate algorithmic accountability into their operations. He suggests that these companies should transparently monitor their own systems to avoid stringent regulatory measures. One potential approach is the introduction of regulations in the tech sector to enforce oversight of algorithmic processes. However, such regulations could significantly impact software developers and the industry as a whole. It may be more beneficial for companies to voluntarily disclose the details of their algorithms and decision-making parameters, which could enhance the trustworthiness of their solutions. Another avenue discussed is the possibility of self-regulation by the companies that create these algorithms, allowing them to take proactive steps in ensuring accountability and transparency in their operations. On the TechCrunch website, Taneja wrote: There’s another benefit — perhaps a huge one — to software-defined regulation. It will also show us a path to a more efficient government. The world’s legal logic and regulations can be coded into software and smart sensors can offer real-time monitoring of everything from air and water quality, traffic flows and queues at the DMV. Regulators define the rules, technologists create the software to implement them and then AI and ML help refine iterations of policies going forward. This should lead to much more efficient, effective governments at the local, national and global levels.
|
Algorithmic accountability : Algorithmic transparency Artificial intelligence and elections – Use and impact of AI on political elections Big data ethics Regulation of algorithms
|
Algorithmic accountability : Kroll, Joshua A.; Huey, Joanna; Barocas, Solon; Felten, Edward W.; Reidenberg, Joel R.; Robinson, David G.; Yu, Harlan (2016). Accountable Algorithms. University of Pennsylvania Law Review, Vol. 165. Fordham Law Legal Studies Research Paper No. 2765268.
|
Exploration–exploitation dilemma : The exploration–exploitation dilemma, also known as the explore–exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is depicted as the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits.
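The tradeoff can be made concrete with a multi-armed bandit, a standard testbed for explore–exploit strategies. The sketch below is a minimal illustration, not from the source; the arm payoff probabilities, the ε value, and the step count are all assumptions chosen for demonstration. It uses an ε-greedy rule: with probability ε the agent explores a random arm, otherwise it exploits the arm with the highest estimated value.

```python
import random

random.seed(0)

true_means = [0.2, 0.5, 0.8]   # hidden payoff probability of each arm (assumed)
epsilon = 0.1                  # exploration rate
estimates = [0.0] * 3          # sample-average value estimate per arm
counts = [0] * 3               # number of pulls per arm

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)              # explore: pick a random arm
    else:
        arm = estimates.index(max(estimates))  # exploit: pick the best-known arm
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # incremental sample-average update of the chosen arm's estimated value
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = estimates.index(max(estimates))
```

After enough pulls the estimates rank the arms correctly, so the exploitation rule concentrates pulls on the best arm while ε keeps a trickle of exploration going, guarding against the incomplete or misleading knowledge mentioned above.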
|
Exploration–exploitation dilemma : In the context of machine learning, the exploration–exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance.
|
Exploration–exploitation dilemma : Amin, Susan; Gomrokchi, Maziar; Satija, Harsh; van Hoof, Herke; Precup, Doina (September 1, 2021). "A Survey of Exploration Methods in Reinforcement Learning". arXiv:2109.00157 [cs.LG].
|
Actor-critic algorithm : The actor-critic algorithm (AC) is a family of reinforcement learning (RL) algorithms that combine policy-based RL algorithms such as policy gradient methods, and value-based RL algorithms such as value iteration, Q-learning, SARSA, and TD learning. An AC algorithm consists of two main components: an "actor" that determines which actions to take according to a policy function, and a "critic" that evaluates those actions according to a value function. Some AC algorithms are on-policy, others off-policy; some are designed for continuous action spaces, some for discrete action spaces, and some handle both.
|
Actor-critic algorithm : The actor-critic methods can be understood as an improvement over pure policy gradient methods like REINFORCE via introducing a baseline.
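A minimal sketch of this idea follows (illustrative only; the two-action bandit-style task, reward model, learning rates, and step count are assumptions, not from the source). The actor is a softmax policy over two action logits, the critic is a single scalar baseline V, and the actor is updated with the advantage r − V in place of the raw return.

```python
import math
import random

random.seed(0)

theta = [0.0, 0.0]   # actor: logits for two actions
v = 0.0              # critic: scalar baseline estimate of expected reward
actor_lr, critic_lr = 0.2, 0.05

def policy(logits):
    """Softmax distribution over the action logits."""
    m = max(logits)
    exps = [math.exp(t - m) for t in logits]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(5000):
    probs = policy(theta)
    action = 0 if random.random() < probs[0] else 1
    # assumed reward model: action 1 pays more on average
    reward = random.gauss(1.0 if action == 1 else 0.0, 0.1)
    advantage = reward - v   # critic's evaluation of the action
    # actor update: gradient of log pi(action) w.r.t. logit a is 1[a=action] - pi(a)
    for a in range(2):
        grad_log_pi = (1.0 if a == action else 0.0) - probs[a]
        theta[a] += actor_lr * advantage * grad_log_pi
    # critic update: move the baseline toward the observed reward
    v += critic_lr * (reward - v)

final_probs = policy(theta)
```

Using the critic's baseline instead of the raw reward reduces the variance of the actor's gradient estimate, which is exactly the improvement over plain REINFORCE described above.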
|
Actor-critic algorithm : Asynchronous Advantage Actor-Critic (A3C): Parallel and asynchronous version of A2C. Soft Actor-Critic (SAC): Incorporates entropy maximization for improved exploration. Deep Deterministic Policy Gradient (DDPG): Specialized for continuous action spaces.
|
Actor-critic algorithm : Reinforcement learning Policy gradient method Deep reinforcement learning
|
Actor-critic algorithm : Konda, Vijay R.; Tsitsiklis, John N. (January 2003). "On Actor-Critic Algorithms". SIAM Journal on Control and Optimization. 42 (4): 1143–1166. doi:10.1137/S0363012901385691. ISSN 0363-0129. Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2 ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6. Bertsekas, Dimitri P. (2019). Reinforcement learning and optimal control (2 ed.). Belmont, Massachusetts: Athena Scientific. ISBN 978-1-886529-39-7. Szepesvári, Csaba (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (1 ed.). Cham: Springer International Publishing. ISBN 978-3-031-00423-0. Grondman, Ivo; Busoniu, Lucian; Lopes, Gabriel A. D.; Babuska, Robert (November 2012). "A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients". IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 42 (6): 1291–1307. doi:10.1109/TSMCC.2012.2218595. ISSN 1094-6977.
|
Gated recurrent unit : Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a long short-term memory (LSTM) with a gating mechanism to input or forget certain features, but lacks a context vector or output gate, resulting in fewer parameters than the LSTM. GRU's performance on certain tasks of polyphonic music modeling, speech signal modeling and natural language processing was found to be similar to that of the LSTM. GRUs showed that gating is helpful in general, though Bengio's team came to no concrete conclusion on which of the two gating units was better.
|
Gated recurrent unit : There are several variations on the full gated unit, with gating done using the previous hidden state and the bias in various combinations, and a simplified form called minimal gated unit. The operator ⊙ denotes the Hadamard product in the following.
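A scalar (hidden-size-1) sketch of the full gated unit's standard update equations may make this concrete. This is an illustration with made-up weight values, not the source's notation; with scalars the Hadamard product ⊙ reduces to ordinary multiplication. The standard equations are z_t = σ(W_z x_t + U_z h_{t−1} + b_z) for the update gate, r_t = σ(W_r x_t + U_r h_{t−1} + b_r) for the reset gate, a candidate ĥ_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t−1}) + b_h), and h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ ĥ_t.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU step with scalar input, hidden state, and parameter dict p."""
    z = sigmoid(p["Wz"] * x + p["Uz"] * h_prev + p["bz"])              # update gate
    r = sigmoid(p["Wr"] * x + p["Ur"] * h_prev + p["br"])              # reset gate
    h_hat = math.tanh(p["Wh"] * x + p["Uh"] * (r * h_prev) + p["bh"])  # candidate state
    return (1.0 - z) * h_prev + z * h_hat  # interpolate old state and candidate

# made-up parameter values, for illustration only
params = {"Wz": 0.8, "Uz": 0.2, "bz": 0.0,
          "Wr": 0.5, "Ur": -0.3, "br": 0.0,
          "Wh": 1.0, "Uh": 0.7, "bh": 0.0}

h = 0.0
for x in [1.0, 0.5, -0.2]:
    h = gru_step(x, h, params)
```

Because h_t is a convex combination of h_{t−1} and the tanh-bounded candidate, the state stays in (−1, 1) whenever it starts there; the update gate z decides how much of the old state to overwrite at each step.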
|
Intelligent word recognition : Intelligent Word Recognition, or IWR, is the recognition of unconstrained handwritten words. IWR recognizes entire handwritten words or phrases instead of proceeding character by character, as its predecessor, optical character recognition (OCR), does. IWR technology matches handwritten or printed words to a user-defined dictionary, significantly reducing the character errors encountered in typical character-based recognition engines. New technology on the market utilizes IWR, OCR, and ICR together, which opens many doors for the processing of documents, whether constrained (hand printed or machine printed) or unconstrained (freeform cursive). IWR also eliminates a large percentage of the manual data entry of handwritten documents that, in the past, could only be keyed by a human, creating an automated workflow. When processing cursive handwriting, the system breaks each word down into a sequence of graphemes, or subparts of letters. These various curves, shapes and lines make up letters, and IWR considers these shapes and groupings in order to calculate a confidence value for the word in question. IWR is not meant to replace ICR and OCR engines, which work well with printed data; however, IWR reduces the number of character errors associated with these engines, and it is ideal for processing real-world documents that contain mostly freeform, hard-to-recognize data inherently unsuitable for them.
|
Intelligent word recognition : AI effect Handwriting recognition Optical character recognition List of emerging technologies Outline of artificial intelligence
|
Deep learning in photoacoustic imaging : Photoacoustic imaging (PA) is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion. This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue. Deep learning has applications in both photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT utilizes wide-field optical excitation and an array of unfocused ultrasound transducers. Similar to other computed tomography methods, the sample is imaged at multiple view angles, which are then used to perform an inverse reconstruction algorithm based on the detection geometry (typically through universal backprojection, modified delay-and-sum, or time reversal) to elicit the initial pressure distribution within the tissue. PAM on the other hand uses focused ultrasound detection combined with weakly focused optical excitation (acoustic resolution PAM or AR-PAM) or tightly focused optical excitation (optical resolution PAM or OR-PAM). PAM typically captures images point-by-point via a mechanical raster scanning pattern. At each scanned point, the acoustic time-of-flight provides axial resolution while the acoustic focusing yields lateral resolution.
|
Deep learning in photoacoustic imaging : The first application of deep learning in PACT was by Reiter et al., in which a deep neural network was trained to learn spatial impulse responses and locate photoacoustic point sources. The resulting mean axial and lateral point location errors on their 2,412 randomly selected test images were 0.28 mm and 0.37 mm respectively. After this initial implementation, the applications of deep learning in PACT have branched out primarily into removing artifacts caused by acoustic reflections, sparse sampling, limited view, and limited bandwidth. There has also been some recent work in PACT toward using deep learning for wavefront localization. Deep-learning fusion networks, which combine information from two different reconstructions, have also been used to improve reconstruction quality.
|
Deep learning in photoacoustic imaging : Photoacoustic microscopy differs from other forms of photoacoustic tomography in that it uses focused ultrasound detection to acquire images pixel-by-pixel. PAM images are acquired as time-resolved volumetric data that is typically mapped to a 2-D projection via a Hilbert transform and maximum amplitude projection (MAP). The first application of deep learning to PAM took the form of a motion-correction algorithm. This procedure was proposed to correct the PAM artifacts that occur when an in vivo model moves during scanning. This movement creates the appearance of vessel discontinuities.
|
Deep learning in photoacoustic imaging : Photoacoustic imaging Photoacoustic microscopy Photoacoustic effect
|
Artificial reproduction : Artificial reproduction is the re-creation of life brought about by means other than natural ones. It is new life built by human plans and projects. Examples include artificial selection, artificial insemination, in vitro fertilization, artificial wombs, artificial cloning, and kinematic replication. Artificial reproduction is one aspect of artificial life. Artificial reproduction can be categorized into one of two classes according to its capacity to be self-sufficient: non-assisted reproductive technology and assisted reproductive technology. Cutting plants' stems and placing them in compost is a form of assisted artificial reproduction, xenobots are an example of a more autonomous type of reproduction, while the artificial womb presented in the film The Matrix illustrates a hypothetical non-assisted technology. The idea of artificial reproduction has led to various technologies.
|
Artificial reproduction : Humans have aspired to create life since time immemorial. Most theologies and religions have held that this possibility is exclusive to deities. Christian religions have, in most cases, considered the possibility of artificial reproduction heretical and sinful.
|
Artificial reproduction : Although ancient Greek philosophy raised the concept that man could imitate the creative capacity of nature, classical Greeks thought that, if it were possible, human beings would reproduce things as nature does, and vice versa, nature would do the things that man does in the same way. Aristotle, for example, wrote that if nature made tables, it would make them just as men do. In other words, Aristotle held that if nature were to create a table, such a table would look like a human-made table. Correspondingly, Descartes envisioned the human body, and nature, as a machine. Cartesian philosophy continues to see a perfect mirror between nature and the artificial. However, Kant revolutionized this old idea by criticizing such naturalism. Kant pedagogically wrote: "Reason, in order to be taught by nature, must approach nature with its principles in one hand, according to which the agreement among appearances can count as laws, and, in the other hand, the experiment thought out in accord with these principles—in order to be instructed by nature not like a pupil, who has recited to him whatever the teacher wants to say, but like an appointed judge who compels witnesses to answer the questions he puts to them." Humans are not instructed by nature but rather use nature as raw material to invent. Humans find alternatives to the natural restrictions imposed by natural laws; thus, nature is not necessarily mirrored. In accordance with Kant (and contrary to what Aristotle thought), Karl Marx, Alfred Whitehead, Jacques Derrida and Juan David García Bacca noticed that nature is incapable of reproducing tables; or airplanes, or submarines, or computers. If nature tried to create airplanes, it would produce birds. If nature tried to create submarines, it would get fish. If nature tried to create computers, brains would grow. And if nature tried to create man, modern man, it would evolve monkeys.
According to Whitehead, if we look for something natural in artificial life, in the most elaborate cases, if anything, only atoms remain natural. Juan David García Bacca summarized: “It will not come out from wood, it will not be born, a galley; from clay, a vessel; from linen, a dress; from iron, a lever,...From natural, artificial. In the artificial, the natural is reduced to a simple raw material, even though it is perfectly specified with natural specification. The artificial is the real, positive, and original negation of the natural: of species, of genus and of essence. Thus, its ontology is superior to natural ontology. And for this very reason Marx did not attach any importance to Darwin, whose evolutionism is confined to the natural order: to changes, at most, from variety to variety, from species to species... natural. For the same reason, nature has no dialectics, even though continuous evolution and selection can occur. The dialectic cannot emerge from the natural, for deeper reasons than, using today's terms, from a bird, an airplane cannot emerge; from fish, a submarine; from ears, a telephone; from eyes, a television; from a brain, a digital computer; from feet, a car; from hands, an engine; from Euclid, Descartes; from Aristotle, Newton; from Plato, Marx.” According to García Bacca, the major difference between natural causes and artificial causes is that nature does not have plans and projects, while humans design things following plans and projects. In contrast, other influential authors such as Michael Behe have promoted the concept of intelligent design, a notion that has aroused several doubts and heated controversies, as it reframes natural causes in accordance with a natural plan. Previous ideas that have also provided a positive 'sense' to natural reproduction are orthogenesis, syntropy, orgone and morphic resonance, among others.
Although these ideas have been historically marginalized and often called pseudoscience, biosemioticians have recently been reconsidering some of them under symbolic approaches. Current metaphysics of science actually recognizes that the artificial ways of reproduction are diverse from nature, i.e., unnatural, anti-natural or supernatural. Because biosemiotics does not focus on the function of life but on its meaning, it has a better understanding of the artificial than classic biology.
|
Artificial reproduction : Biology, being the study of cellular life, addresses reproduction in terms of growth and cellular division (i.e., binary fission, mitosis and meiosis); however, the science of artificial reproduction is not restricted to mirroring these natural processes. The science of artificial reproduction actually transcends the natural forms, and natural rules, of reproduction. For example, xenobots have redefined the classical conception of reproduction. Although xenobots are made of eukaryotic cells, they do not reproduce by mitosis, but rather by kinematic replication. Such constructive replication does not involve growing but rather building.
|
Artificial reproduction : Assisted reproductive technology (ART)'s purpose is to assist the development of a human embryo, commonly because of medical concerns due to fertility limitations.
|
Artificial reproduction : Non-assisted reproductive technologies (NART) may have medical motivations but are mostly driven by a wider heterotopic ambition. Although NARTs are initially designed by humans, they are programmed to become independent of humans to a relative or absolute extent. James Lovelock proposed that such novelties could overcome humans.
|
Artificial reproduction : Male pregnancy Artificial uterus In vitro fertilization Xenobot Fertilization Pregnancy The concept of nature sensu Marx Juan David García Bacca
|
Backpropagation : In machine learning, backpropagation is a gradient estimation method commonly used for training a neural network to compute its parameter updates. It is an efficient application of the chain rule to neural networks. Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived through dynamic programming. Strictly speaking, the term backpropagation refers only to an algorithm for efficiently computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm – including how the gradient is used, such as by stochastic gradient descent, or as an intermediate step in a more complicated optimizer, such as Adaptive Moment Estimation. Convergence to local minima, exploding gradients, vanishing gradients, and weak control of the learning rate are the main disadvantages of these optimization algorithms. Hessian and quasi-Hessian optimizers address only the local-minimum convergence problem, and make backpropagation slower. These problems have led researchers to develop hybrid and fractional optimization algorithms. Backpropagation had multiple discoveries and partial discoveries, with a tangled history and terminology. See the history section for details. Some other names for the technique include "reverse mode of automatic differentiation" or "reverse accumulation".
|
Backpropagation : Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function. Denote: $x$: input (vector of features); $y$: target output. For classification, the output will be a vector of class probabilities (e.g., $(0.1, 0.7, 0.2)$), and the target output is a specific class, encoded by a one-hot/dummy variable (e.g., $(0, 1, 0)$). $C$: loss function or "cost function". For classification, this is usually cross-entropy (XC, log loss), while for regression it is usually squared error loss (SEL). $L$: the number of layers. $W^l = (w^l_{jk})$: the weights between layer $l-1$ and $l$, where $w^l_{jk}$ is the weight between the $k$-th node in layer $l-1$ and the $j$-th node in layer $l$. $f^l$: activation functions at layer $l$. For classification the last layer is usually the logistic function for binary classification, and softmax (softargmax) for multi-class classification, while for the hidden layers this was traditionally a sigmoid function (logistic function or others) on each node (coordinate), but today is more varied, with the rectifier (ramp, ReLU) being common. $a^l_j$: activation of the $j$-th node in layer $l$. In the derivation of backpropagation, other intermediate quantities are used by introducing them as needed below. Bias terms are not treated specially since they correspond to a weight with a fixed input of 1. For backpropagation the specific loss function and activation functions do not matter as long as they and their derivatives can be evaluated efficiently. Traditional activation functions include sigmoid, tanh, and ReLU. Swish, mish, and other activation functions have since been proposed as well. The overall network is a combination of function composition and matrix multiplication:
$$g(x) := f^L(W^L f^{L-1}(W^{L-1} \cdots f^1(W^1 x) \cdots ))$$
For a training set there will be a set of input–output pairs, $\left\{(x_i, y_i)\right\}$.
For each input–output pair $(x_i, y_i)$ in the training set, the loss of the model on that pair is the cost of the difference between the predicted output $g(x_i)$ and the target output $y_i$: $C(y_i, g(x_i))$. Note the distinction: during model evaluation the weights are fixed while the inputs vary (and the target output may be unknown), and the network ends with the output layer (it does not include the loss function). During model training the input–output pair is fixed while the weights vary, and the network ends with the loss function. Backpropagation computes the gradient for a fixed input–output pair $(x_i, y_i)$, where the weights $w^l_{jk}$ can vary. Each individual component of the gradient, $\partial C/\partial w^l_{jk}$, can be computed by the chain rule; but doing this separately for each weight is inefficient. Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values, by computing the gradient of each layer – specifically the gradient of the weighted input of each layer, denoted by $\delta^l$ – from back to front. Informally, the key point is that since the only way a weight in $W^l$ affects the loss is through its effect on the next layer, and it does so linearly, $\delta^l$ are the only data you need to compute the gradients of the weights at layer $l$, and then the gradients of weights of the previous layer can be computed by $\delta^{l-1}$ and repeated recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layer $l$, it is unnecessary to recompute all derivatives on later layers $l+1, l+2, \ldots$ each time.
Second, it avoids unnecessary intermediate calculations, because at each stage it directly computes the gradient of the weights with respect to the ultimate output (the loss), rather than unnecessarily computing the derivatives of the values of hidden layers with respect to changes in weights $\partial a^{l'}_{j'}/\partial w^l_{jk}$. Backpropagation can be expressed for simple feedforward networks in terms of matrix multiplication, or more generally in terms of the adjoint graph.
|
Backpropagation : For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication. Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer from right to left – "backwards" – with the gradient of the weights between each layer being a simple modification of the partial products (the "backwards propagated error"). Given an input–output pair $(x, y)$, the loss is:
$$C(y, f^L(W^L f^{L-1}(W^{L-1} \cdots f^2(W^2 f^1(W^1 x)) \cdots )))$$
To compute this, one starts with the input $x$ and works forward; denote the weighted input of each hidden layer as $z^l$ and the output of hidden layer $l$ as the activation $a^l$. For backpropagation, the activation $a^l$ as well as the derivatives $(f^l)'$ (evaluated at $z^l$) must be cached for use during the backwards pass. The derivative of the loss in terms of the inputs is given by the chain rule; note that each term is a total derivative, evaluated at the value of the network (at each node) on the input $x$:
$$\frac{dC}{da^L} \cdot \frac{da^L}{dz^L} \cdot \frac{dz^L}{da^{L-1}} \cdot \frac{da^{L-1}}{dz^{L-1}} \cdot \frac{dz^{L-1}}{da^{L-2}} \cdot \ldots \cdot \frac{da^1}{dz^1} \cdot \frac{\partial z^1}{\partial x},$$
where $\frac{da^L}{dz^L}$ is a diagonal matrix. These terms are: the derivative of the loss function; the derivatives of the activation functions; and the matrices of weights:
$$\frac{dC}{da^L} \circ (f^L)' \cdot W^L \circ (f^{L-1})' \cdot W^{L-1} \circ \cdots \circ (f^1)' \cdot W^1.$$
The gradient $\nabla$ is the transpose of the derivative of the output in terms of the input, so the matrices are transposed and the order of multiplication is reversed, but the entries are the same:
$$\nabla_x C = (W^1)^T \cdot (f^1)' \circ \ldots \circ (W^{L-1})^T \cdot (f^{L-1})' \circ (W^L)^T \cdot (f^L)' \circ \nabla_{a^L} C.$$
Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights is not just a subexpression: there's an extra multiplication. Introducing the auxiliary quantity $\delta^l$ for the partial products (multiplying from right to left), interpreted as the "error at level $l$" and defined as the gradient of the input values at level $l$:
$$\delta^l := (f^l)' \circ (W^{l+1})^T \cdot (f^{l+1})' \circ \cdots \circ (W^{L-1})^T \cdot (f^{L-1})' \circ (W^L)^T \cdot (f^L)' \circ \nabla_{a^L} C.$$
Note that $\delta^l$ is a vector, of length equal to the number of nodes in level $l$; each component is interpreted as the "cost attributable to (the value of) that node". The gradient of the weights in layer $l$ is then:
$$\nabla_{W^l} C = \delta^l (a^{l-1})^T.$$
The factor of $a^{l-1}$ is because the weights $W^l$ between level $l-1$ and $l$ affect level $l$ proportionally to the inputs (activations): the inputs are fixed, the weights vary. The $\delta^l$ can easily be computed recursively, going from right to left, as:
$$\delta^{l-1} := (f^{l-1})' \circ (W^l)^T \cdot \delta^l.$$
The gradients of the weights can thus be computed using a few matrix multiplications for each level; this is backpropagation.
Compared with naively computing forwards (using the $\delta^l$ for illustration):
$$\begin{aligned}
\delta^1 &= (f^1)' \circ (W^2)^T \cdot (f^2)' \circ \cdots \circ (W^{L-1})^T \cdot (f^{L-1})' \circ (W^L)^T \cdot (f^L)' \circ \nabla_{a^L} C \\
\delta^2 &= (f^2)' \circ \cdots \circ (W^{L-1})^T \cdot (f^{L-1})' \circ (W^L)^T \cdot (f^L)' \circ \nabla_{a^L} C \\
&\vdots \\
\delta^{L-1} &= (f^{L-1})' \circ (W^L)^T \cdot (f^L)' \circ \nabla_{a^L} C \\
\delta^L &= (f^L)' \circ \nabla_{a^L} C,
\end{aligned}$$
There are two key differences with backpropagation: Computing $\delta^{l-1}$ in terms of $\delta^l$ avoids the obvious duplicate multiplication of layers $l$ and beyond. Multiplying starting from $\nabla_{a^L} C$ – propagating the error backwards – means that each step simply multiplies a vector ($\delta^l$) by the matrices of weights $(W^l)^T$ and derivatives of activations $(f^{l-1})'$. By contrast, multiplying forwards, starting from the changes at an earlier layer, means that each multiplication multiplies a matrix by a matrix. This is much more expensive, and corresponds to tracking every possible path of a change in one layer $l$ forward to changes in the layer $l+2$ (for multiplying $W^{l+1}$ by $W^{l+2}$, with additional multiplications for the derivatives of the activations), which unnecessarily computes the intermediate quantities of how weight changes affect the values of hidden nodes.
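The backward recursion can be checked numerically on a tiny network. The sketch below is illustrative only, with arbitrarily chosen weights (not from the source): it runs a 2-2-1 sigmoid network with squared-error loss, applies the two backward rules (the layer error propagated through the transposed weights, and the weight gradient as the outer product of the layer error with the previous activations), and compares every weight gradient against a central finite difference.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# tiny 2-2-1 network with squared-error loss C = 0.5 * (a2 - y)^2
x, y = [0.5, -0.3], 1.0
W1 = [[0.1, 0.4], [-0.2, 0.3]]   # hidden-layer weights (arbitrary values)
W2 = [[0.5, -0.6]]               # output-layer weights

def forward():
    z1 = [sum(W1[j][k] * x[k] for k in range(2)) for j in range(2)]
    a1 = [sigmoid(z) for z in z1]
    z2 = sum(W2[0][k] * a1[k] for k in range(2))
    a2 = sigmoid(z2)
    return z1, a1, z2, a2

def loss():
    return 0.5 * (forward()[3] - y) ** 2

# backward pass: output error, then propagate through transposed weights
z1, a1, z2, a2 = forward()
delta2 = dsigmoid(z2) * (a2 - y)                     # error at output layer
gradW2 = [[delta2 * a1[k] for k in range(2)]]        # outer product with a^1
delta1 = [dsigmoid(z1[j]) * W2[0][j] * delta2 for j in range(2)]
gradW1 = [[delta1[j] * x[k] for k in range(2)] for j in range(2)]

def fd(W, i, j, eps=1e-5):
    """Central finite-difference gradient of the loss w.r.t. W[i][j]."""
    W[i][j] += eps
    hi = loss()
    W[i][j] -= 2 * eps
    lo = loss()
    W[i][j] += eps
    return (hi - lo) / (2 * eps)

max_err = max(
    [abs(gradW2[0][k] - fd(W2, 0, k)) for k in range(2)] +
    [abs(gradW1[j][k] - fd(W1, j, k)) for j in range(2) for k in range(2)]
)
```

Every analytically backpropagated gradient should agree with the numerical one to within finite-difference error, which is the standard way to test a backpropagation implementation.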
|
Backpropagation : For more general graphs, and other advanced variations, backpropagation can be understood in terms of automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode").
|
Backpropagation : The gradient descent method involves calculating the derivative of the loss function with respect to the weights of the network. This is normally done using backpropagation. Assuming one output neuron, the squared error function is $E = L(t, y)$, where $L$ is the loss for the output $y$ and target value $t$, $t$ is the target output for a training sample, and $y$ is the actual output of the output neuron. For each neuron $j$, its output $o_j$ is defined as
$$o_j = \varphi(\text{net}_j) = \varphi\left(\sum_{k=1}^{n} w_{kj} x_k\right),$$
where the activation function $\varphi$ is non-linear and differentiable over the activation region (the ReLU is not differentiable at one point). A historically used activation function is the logistic function:
$$\varphi(z) = \frac{1}{1 + e^{-z}}$$
which has a convenient derivative:
$$\frac{d\varphi}{dz} = \varphi(z)(1 - \varphi(z))$$
The input $\text{net}_j$ to a neuron is the weighted sum of outputs $o_k$ of previous neurons. If the neuron is in the first layer after the input layer, the $o_k$ of the input layer are simply the inputs $x_k$ to the network. The number of input units to the neuron is $n$. The variable $w_{kj}$ denotes the weight between neuron $k$ of the previous layer and neuron $j$ of the current layer.
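That derivative identity for the logistic function is easy to verify numerically (a quick check, not part of the original text): compare the closed form φ(z)(1 − φ(z)) with a central finite difference of φ at a few sample points.

```python
import math

def phi(z):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

# compare the analytic derivative phi(z) * (1 - phi(z))
# with a central finite difference of phi
eps = 1e-6
worst = 0.0
for z in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    analytic = phi(z) * (1.0 - phi(z))
    numeric = (phi(z + eps) - phi(z - eps)) / (2 * eps)
    worst = max(worst, abs(analytic - numeric))
```

The agreement to near machine precision is why this activation was historically convenient: the derivative can be computed from the already-cached output, with no extra exponentials.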
|
Backpropagation : Using a Hessian matrix of second-order derivatives of the error function, the Levenberg–Marquardt algorithm often converges faster than first-order gradient descent, especially when the topology of the error function is complicated. It may also find solutions in smaller node counts for which other methods might not converge. The Hessian can be approximated by the Fisher information matrix. As an example, consider a simple feedforward network. At the $l$-th layer, we have
$$x_i^{(l)}, \quad a_i^{(l)} = f(x_i^{(l)}), \quad x_i^{(l+1)} = \sum_j W_{ij} a_j^{(l)}$$
where $x$ are the pre-activations, $a$ are the activations, and $W$ is the weight matrix. Given a loss function $L$, first-order backpropagation states that
$$\frac{\partial L}{\partial a_j^{(l)}} = \sum_i W_{ij} \frac{\partial L}{\partial x_i^{(l+1)}}, \quad \frac{\partial L}{\partial x_j^{(l)}} = f'(x_j^{(l)}) \frac{\partial L}{\partial a_j^{(l)}}$$
and second-order backpropagation states that
$$\frac{\partial^2 L}{\partial a_{j_1}^{(l)} \partial a_{j_2}^{(l)}} = \sum_{i_1 i_2} W_{i_1 j_1} W_{i_2 j_2} \frac{\partial^2 L}{\partial x_{i_1}^{(l+1)} \partial x_{i_2}^{(l+1)}}, \quad \frac{\partial^2 L}{\partial x_{j_1}^{(l)} \partial x_{j_2}^{(l)}} = f'(x_{j_1}^{(l)}) f'(x_{j_2}^{(l)}) \frac{\partial^2 L}{\partial a_{j_1}^{(l)} \partial a_{j_2}^{(l)}} + \delta_{j_1 j_2} f''(x_{j_1}^{(l)}) \frac{\partial L}{\partial a_{j_1}^{(l)}}$$
where $\delta_{j_1 j_2}$ is the Kronecker delta. Arbitrary-order derivatives in arbitrary computational graphs can be computed with backpropagation, but with more complex expressions for higher orders.
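These second-order propagation rules can be sanity-checked in the scalar case (a toy verification with assumed values, not from the source): take a single chain x → a = f(x) → x′ = W·a with downstream loss L = x′², so the rules collapse to d²L/dx² = 2W²(f′(x)² + f(x)f″(x)), which can be compared to a finite-difference second derivative.

```python
import math

W = 0.7    # assumed scalar weight
x0 = 0.3   # assumed pre-activation

f = math.tanh
def fp(x):  return 1.0 - math.tanh(x) ** 2        # f'
def fpp(x): return -2.0 * math.tanh(x) * fp(x)    # f''

def loss(x):
    return (W * f(x)) ** 2    # downstream loss L = (W * a)^2, so d2L/dx'^2 = 2

# scalar second-order backpropagation:
# d2L/da2 = W^2 * d2L/dx'^2 = 2 W^2,  dL/da = W * dL/dx' = 2 W^2 f(x)
# d2L/dx2 = f'(x)^2 * d2L/da2 + f''(x) * dL/da
analytic = fp(x0) ** 2 * (2 * W ** 2) + fpp(x0) * (2 * W ** 2 * f(x0))

# central finite-difference second derivative of the loss
e = 1e-4
numeric = (loss(x0 + e) - 2 * loss(x0) + loss(x0 - e)) / (e * e)
```

Expanding L(x) = W²f(x)² directly gives L″ = 2W²(f′² + f f″), so the propagated value and the numerical estimate should match to finite-difference accuracy.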
|
Backpropagation : The loss function is a function that maps values of one or more variables onto a real number intuitively representing some "cost" associated with those values. For backpropagation, the loss function calculates the difference between the network output and its expected output, after a training example has propagated through the network.
|
Backpropagation : Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not. Backpropagation learning does not require normalization of input vectors; however, normalization could improve performance. Backpropagation requires the derivatives of activation functions to be known at network design time.
|
Backpropagation : Artificial neural network Neural circuit Catastrophic interference Ensemble learning AdaBoost Overfitting Neural backpropagation Backpropagation through time Backpropagation through structure Three-factor learning
|
Backpropagation : Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). "6.5 Back-Propagation and Other Differentiation Algorithms". Deep Learning. MIT Press. pp. 200–220. ISBN 9780262035613. Nielsen, Michael A. (2015). "How the backpropagation algorithm works". Neural Networks and Deep Learning. Determination Press. McCaffrey, James (October 2012). "Neural Network Back-Propagation for Programmers". MSDN Magazine. Rojas, Raúl (1996). "The Backpropagation Algorithm" (PDF). Neural Networks: A Systematic Introduction. Berlin: Springer. ISBN 3-540-60505-3.
|
Backpropagation : Backpropagation neural network tutorial at the Wikiversity Bernacki, Mariusz; Włodarczyk, Przemysław (2004). "Principles of training multi-layer neural network using backpropagation". Karpathy, Andrej (2016). "Lecture 4: Backpropagation, Neural Networks 1". CS231n. Stanford University. Archived from the original on 2021-12-12 – via YouTube. "What is Backpropagation Really Doing?". 3Blue1Brown. November 3, 2017. Archived from the original on 2021-12-12 – via YouTube. Putta, Sudeep Raja (2022). "Yet Another Derivation of Backpropagation in Matrix Form".
|
Way of the Future : Way of the Future (WOTF) is the first known religious organization dedicated to the worship of artificial intelligence (AI). It was founded in 2017 by American engineer Anthony Levandowski.
|
Way of the Future : Anthony Levandowski founded Way of the Future in 2017 in California. Levandowski established WOTF as a non-profit religious corporation, and the organization had tax-exempt status. He served as the church leader and its unpaid CEO. The primary mission of WOTF was to "develop and promote the realization of a Godhead based on Artificial Intelligence." WOTF was closed by Levandowski in 2021. He donated all the funds of the church, about $170,000, to the NAACP Legal Defense and Educational Fund; the sum had not changed since 2017. The church was reopened by Levandowski in 2023. He claimed that there are "a couple thousand people" who want to make a "spiritual connection" with AI through his church.
|
Way of the Future : Some commentators wondered whether WOTF was a parody religion, a potential way to minimize taxation as a religious organization, or a genuine effort to grapple with the possible psychological and theological aspects of the rise of superhuman AI.
|
Way of the Future : Transhumanism Singularitarianism
|
Way of the Future : Archived official website
|
Automated negotiation : Automated negotiation is a form of interaction in systems that are composed of multiple autonomous agents, in which the aim is to reach agreements through an iterative process of making offers. Automated negotiation can be employed for many tasks human negotiators regularly engage in, such as bargaining and joint decision making. The main topics in automated negotiation revolve around the design of protocols and strategies.
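The iterative offer-making process described above can be sketched as a toy alternating-offers protocol (entirely illustrative; the concession strategy and starting offers are our own assumptions): two agents exchange price offers and concede toward each other until an offer falls within the opponent's acceptance limit.

```python
def negotiate(buyer_limit, seller_limit, step=5, max_rounds=20):
    # Arbitrary starting positions: buyer opens at 0, seller at twice its limit.
    buyer_offer, seller_offer = 0, 2 * seller_limit
    for _ in range(max_rounds):
        if seller_offer <= buyer_limit:   # buyer accepts seller's offer
            return seller_offer
        if buyer_offer >= seller_limit:   # seller accepts buyer's offer
            return buyer_offer
        buyer_offer += step               # buyer concedes upward
        seller_offer -= step              # seller concedes downward
    return None                           # no agreement reached

print(negotiate(buyer_limit=60, seller_limit=40))  # 60
```

Real protocols additionally specify deadlines, utility functions, and which strategies are admissible; this sketch only shows the offer/counter-offer loop.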
|
Automated negotiation : Through digitization, the beginning of the 21st century has seen a growing interest in the automation of negotiation and e-negotiation systems, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents being able to negotiate on behalf of human negotiators, and to find better outcomes than human negotiators.
|
Automated negotiation : Examples of automated negotiation include: Online dispute resolution, in which disagreements between parties are settled. Sponsored search auction, where bids are placed on advertisement keywords. Content negotiation, in which user agents negotiate over HTTP about how to best represent a web resource. Negotiation support systems, in which negotiation decision-making activities are supported by an information system.
|
Feature learning : In machine learning (ML), feature learning or representation learning is a set of techniques that allow a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that ML tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data, such as image, video, and sensor data, have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. Feature learning can be supervised, unsupervised, or self-supervised:
In supervised feature learning, features are learned using labeled input data. Labeled data includes input-label pairs where the input is given to the model, and it must produce the ground truth label as the output. This can be leveraged to generate feature representations with the model which result in high label prediction accuracy. Examples include supervised neural networks, multilayer perceptrons, and dictionary learning.
In unsupervised feature learning, features are learned with unlabeled input data by analyzing the relationships between points in the dataset. Examples include dictionary learning, independent component analysis, matrix factorization, and various forms of clustering.
In self-supervised feature learning, features are learned using unlabeled data, as in unsupervised learning, but input-label pairs are constructed from each data point, enabling the structure of the data to be learned through supervised methods such as gradient descent. Classical examples include word embeddings and autoencoders.
Self-supervised learning has since been applied to many modalities through the use of deep neural network architectures such as convolutional neural networks and transformers.
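A small concrete instance of unsupervised feature learning (our own example; the synthetic data and dimensions are arbitrary) is principal component analysis, a form of matrix factorization: it discovers a low-dimensional representation of unlabeled data by projecting onto the directions of maximal variance.

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))                    # hidden 2-D structure
mix = rng.normal(size=(2, 10))
X = latent @ mix + 0.01 * rng.normal(size=(200, 10))  # observed 10-D data

Xc = X - X.mean(axis=0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T                   # learned 2-D feature representation

# The top two components capture almost all of the variance,
# recovering the low-dimensional structure without any labels.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(explained > 0.99)  # True
```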
|
Feature learning : Supervised feature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include:
|
Feature learning : Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data. Several approaches are introduced in the following.
|
Feature learning : The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning by stacking multiple layers of learning nodes. These architectures are often designed based on the assumption of distributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels. The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation.
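The layered scheme above can be sketched in a few lines (illustrative only; the layer sizes and random weights are our assumptions): each layer consumes the previous layer's representation and emits a new, lower-dimensional one, and the final output is the learned feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [16, 8, 4, 2]                # raw input -> final features
weights = [rng.normal(size=(m, n)) * 0.5
           for n, m in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=16)                    # raw data at the bottom layer
representations = [x]
for W in weights:
    x = np.tanh(W @ x)                     # each level re-represents its input
    representations.append(x)

print([r.shape[0] for r in representations])  # [16, 8, 4, 2]
```

In a trained network the weights would be learned rather than random, but the flow of representations from raw input to the final low-dimensional feature is the same.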
|
Feature learning : Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Training tasks typically fall under the classes of contrastive, generative, or both. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A large number of negative samples is typically necessary to prevent catastrophic collapse, in which all inputs are mapped to the same representation. Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower-dimensional representation. A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general-context, unlabeled data. Depending on the context, the result is either a set of representations for common data segments (e.g. words) into which new data can be broken, or a neural network able to convert each new data point (e.g. image) into a set of lower-dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or by freezing the representations and training an additional model which takes them as an input.
Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types.
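The contrastive idea above can be made concrete with an InfoNCE-style loss (a common choice in contrastive learning; the vectors and temperature here are our own toy values): the loss is low when the anchor is aligned with its positive sample and far from the negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Cosine similarity between two vectors.
    def sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive pair goes in slot 0, negatives fill the rest.
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                # cross-entropy on the positive

anchor = np.array([1.0, 0.0])
aligned = info_nce(anchor, np.array([0.9, 0.1]), [np.array([-1.0, 0.2])])
collapsed = info_nce(anchor, np.array([-0.9, 0.1]), [np.array([1.0, 0.0])])
print(aligned < collapsed)  # True
```

Minimizing this loss over many pairs pulls representations of related inputs together while the negatives keep all inputs from collapsing onto a single point.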
|
Feature learning : Dynamic representation learning methods generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can actually represent the same or similar information. Therefore, for a dynamic system, a temporal difference in its embeddings may be explained by misalignment of embeddings due to arbitrary transformations and/or by actual changes in the system. Generally speaking, temporal embeddings learned via dynamic representation learning methods should therefore be inspected for any spurious changes and be aligned before subsequent dynamic analyses.
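One standard way to perform the alignment step described above (our own sketch; the data are synthetic) is orthogonal Procrustes: find the rotation that best maps one embedding matrix onto another, removing spurious differences caused by an arbitrary orthogonal transformation before comparing embeddings across time.

```python
import numpy as np

rng = np.random.default_rng(2)
E1 = rng.normal(size=(50, 4))                 # embeddings at time t
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal transform
E2 = E1 @ Q                                   # same information, rotated

# Procrustes solution: R = argmin over orthogonal R of ||E2 @ R - E1||_F,
# obtained from the SVD of E2^T E1.
U, _, Vt = np.linalg.svd(E2.T @ E1)
R = U @ Vt
print(np.allclose(E2 @ R, E1))  # True
```

After alignment, any remaining difference between the two embedding sets reflects actual change in the system rather than an arbitrary transformation.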
|
Feature learning : Automated machine learning (AutoML) Deep learning geometric feature learning Feature detection (computer vision) Feature extraction Word embedding Vector quantization Variational autoencoder
|
Pattern theory : Pattern theory, formulated by Ulf Grenander, is a mathematical formalism to describe knowledge of the world as patterns. It differs from other approaches to artificial intelligence in that it does not begin by prescribing algorithms and machinery to recognize and classify patterns; rather, it prescribes a vocabulary to articulate and recast the pattern concepts in precise language. Broad in its mathematical coverage, Pattern Theory spans algebra and statistics, as well as local topological and global entropic properties. In addition to the new algebraic vocabulary, its statistical approach is novel in its aim to: Identify the hidden variables of a data set using real world data rather than artificial stimuli, which was previously commonplace. Formulate prior distributions for hidden variables and models for the observed variables that form the vertices of a Gibbs-like graph. Study the randomness and variability of these graphs. Create the basic classes of stochastic models applied by listing the deformations of the patterns. Synthesize (sample) from the models, not just analyze signals with them. The Brown University Pattern Theory Group was formed in 1972 by Ulf Grenander. Many mathematicians are currently working in this group, noteworthy among them being the Fields Medalist David Mumford. Mumford regards Grenander as his "guru" in Pattern Theory.
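The "synthesize (sample) from the models" point above can be illustrated with a toy Gibbs-like graph (our own construction, not from Grenander's texts): Gibbs sampling from a chain of ±1 variables with a positive coupling J, so that neighboring sites tend to agree and the sampled configurations exhibit patterned runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J, sweeps = 100, 1.5, 200
s = rng.choice([-1, 1], size=n)            # random initial configuration

for _ in range(sweeps):
    for i in range(n):
        # Local field from the neighbors in the chain graph.
        h = J * (s[i - 1] if i > 0 else 0) + J * (s[i + 1] if i < n - 1 else 0)
        p_up = 1.0 / (1.0 + np.exp(-2 * h))  # P(s_i = +1 | neighbors)
        s[i] = 1 if rng.random() < p_up else -1

# With positive coupling, neighboring sites agree on average.
agreement = np.mean(s[:-1] * s[1:])
print(agreement > 0)  # True
```

Analysis runs this machinery in the other direction: given observed data, infer the hidden variables of the Gibbs model that plausibly generated it.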
|
Pattern theory : Abductive reasoning Algebraic statistics Computational anatomy Formal concept analysis Grammar induction Image analysis Induction Lattice theory Spatial statistics
|
Pattern theory : 2007. Ulf Grenander and Michael Miller Pattern Theory: From Representation to Inference. Oxford University Press. Paperback. (ISBN 9780199297061) 1994. Ulf Grenander General Pattern Theory. Oxford Science Publications. (ISBN 978-0198536710) 1996. Ulf Grenander Elements of Pattern Theory. Johns Hopkins University Press. (ISBN 978-0801851889)
|
Pattern theory : Pattern Theory Group at Brown University Pattern Theory: Grenander's Ideas and Examples - a video lecture by David Mumford Pattern Theory and Applications - graduate course page with material by a Brown University alumnus
|
Neural cryptography : Neural cryptography is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially artificial neural network algorithms, for use in encryption and cryptanalysis.
|
Neural cryptography : Artificial neural networks are well known for their ability to selectively explore the solution space of a given problem. This feature finds a natural niche of application in the field of cryptanalysis. At the same time, neural networks offer a new approach to attacking ciphering algorithms, based on the principle that any function can be reproduced by a neural network, a powerful proven computational tool that can be used to find the inverse function of a cryptographic algorithm. The ideas of mutual learning, self learning, and the stochastic behavior of neural networks and similar algorithms can be used for different aspects of cryptography, such as public-key cryptography, solving the key distribution problem using neural network mutual synchronization, hashing, or the generation of pseudo-random numbers. Another idea is the ability of a neural network to separate space into non-linear pieces using "bias", which gives different probabilities of the network activating or not. This is very useful in cryptanalysis. Two names are used to designate the same domain of research: Neuro-Cryptography and Neural Cryptography. The earliest known work on this topic dates back to 1995, in an IT master's thesis.
|
Neural cryptography : In 1995, Sebastien Dourlens applied neural networks to cryptanalyze DES by allowing the networks to learn how to invert the S-tables of the DES. The bias in DES studied through differential cryptanalysis by Adi Shamir is highlighted. The experiment showed that about 50% of the key bits can be found, allowing the complete key to be found in a short time. Hardware applications with multiple micro-controllers have been proposed due to the easy implementation of multilayer neural networks in hardware. One example of a public-key protocol is given by Khalil Shihab. He describes the decryption scheme and the public key creation that are based on a backpropagation neural network. The encryption scheme and the private key creation process are based on Boolean algebra. This technique has the advantage of small time and memory complexities. A disadvantage is a property of backpropagation algorithms: because of the huge training sets, the learning phase of a neural network is very long. Therefore, the use of this protocol is only theoretical so far.
|
Neural cryptography : In practice, the most widely used protocol for key exchange between two parties A and B is the Diffie–Hellman key exchange protocol. Neural key exchange, which is based on the synchronization of two tree parity machines, has been proposed as a secure replacement for this method. Synchronizing these two machines is similar to synchronizing two chaotic oscillators in chaos communications.
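The synchronization idea can be sketched compactly (parameters K, N, L are deliberately tiny and illustrative; real schemes use larger values). Each tree parity machine has K hidden units with N integer weights bounded by L; both parties see the same public random inputs, and a Hebbian update is applied only when their outputs agree, until the weight matrices coincide and can serve as a shared key.

```python
import numpy as np

K, N, L = 3, 4, 3
rng = np.random.default_rng(0)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))
    def output(self, x):
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1      # break ties
        return int(np.prod(self.sigma))       # parity of hidden units
    def update(self, x, tau):                 # Hebbian learning rule
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

A, B = TPM(), TPM()
for _ in range(10000):
    x = rng.choice([-1, 1], size=(K, N))      # public random input
    ta, tb = A.output(x), B.output(x)
    if ta == tb:                              # update only on agreement
        A.update(x, ta)
        B.update(x, tb)
    if np.array_equal(A.w, B.w):
        break

print(np.array_equal(A.w, B.w))  # True: weights have synchronized
```

An eavesdropper who sees only the inputs and outputs synchronizes much more slowly than the two parties, which is the basis of the claimed security.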
|
Neural cryptography : Neural Network Stochastic neural network Shor's algorithm
|
Neural cryptography : Neuro-Cryptography 1995 - The first definition of the Neuro-Cryptography (AI Neural-Cryptography) applied to DES cryptanalysis by Sebastien Dourlens, France. Neural Cryptography - Description of one kind of neural cryptography at the University of Würzburg, Germany. Kinzel, W.; Kanter, I. (2002). "Neural cryptography". Proceedings of the 9th International Conference on Neural Information Processing. ICONIP '02. pp. 1351–1354. arXiv:cond-mat/0208453. doi:10.1109/ICONIP.2002.1202841. - One of the leading papers that introduce the concept of using synchronized neural networks to achieve a public key authentication system. Li, Li-Hua; Lin, Luon-Chang; Hwang, Min-Shiang (November 2001). "A remote password authentication scheme for multiserver architecture using neural networks". IEEE Transactions on Neural Networks. 12 (6): 1498–1504. doi:10.1109/72.963786. ISSN 1045-9227. PMID 18249979. - Possible practical application of Neural Cryptography. Klimov, Alexander; Mityagin, Anton; Shamir, Adi (2002). "Analysis of Neural Cryptography" (PDF). Advances in Cryptology. ASIACRYPT 2002. LNCS. Vol. 2501. pp. 288–298. doi:10.1007/3-540-36178-2_18. ISSN 0302-9743. Retrieved 2017-11-15. - Analysis of neural cryptography in general and focusing on the weakness and possible attacks of using synchronized neural networks. Neural Synchronization and Cryptography - Andreas Ruttor. PhD thesis, Bayerische Julius-Maximilians-Universität Würzburg, 2006. Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido (March 2006). "Genetic attack on neural cryptography". Physical Review E. 73 (3): 036121. arXiv:cond-mat/0512022. Bibcode:2006PhRvE..73c6121R. doi:10.1103/PhysRevE.73.036121. ISSN 1539-3755. PMID 16605612. S2CID 27786815. Khalil Shihab (2006). "A backpropagation neural network for computer network security" (PDF). Journal of Computer Science 2: 710–715. Archived from the original (PDF) on 2007-07-12.
|
Weak artificial intelligence : Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as Artificial Narrow Intelligence, is focused on one narrow task. Weak AI is contrasted with strong AI, which can be interpreted in various ways: Artificial general intelligence (AGI): a machine with the ability to apply intelligence to any problem, rather than just one specific problem. Artificial super intelligence (ASI): a machine with a vastly superior intelligence to the average human being. Artificial consciousness: a machine that has consciousness, sentience and mind (John Searle uses "strong AI" in this sense). Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is conversely the opposite.
|
Weak artificial intelligence : Some examples of narrow AI are AlphaGo, self-driving cars, robot systems used in the medical field, and diagnostic doctors. Narrow AI systems are sometimes dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for the AI to grasp complex patterns and reach a solution that works reliably in various environments. This "brittleness" can cause it to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences: they could, for example, cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses can have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into our society unnoticed: autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. As much as narrow and relatively general AI is slowly starting to help societies, it is also starting to hurt them. AI has already unfairly put people in jail, discriminated against women in the workplace when hiring, taught problematic ideas to millions, and even killed people with autonomous cars. AI might be a powerful tool that can be used to improve lives, but it could also be a dangerous technology with the potential for misuse. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends. For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in biased propaganda or other potentially malicious activities.
|
Weak artificial intelligence : John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong". Scholars such as Antonio Lieto have argued that the current research on both AI and cognitive modelling are perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction) and that the popular assumption that cognitively inspired AI systems espouse the strong AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, implied by the strong AI assumption).
|
Weak artificial intelligence : A.I. Rising – 2018 film directed by Lazar Bodroža Artificial intelligence – Intelligence of machines Artificial general intelligence – Type of AI with wide-ranging abilities Deep learning – Branch of machine learning Expert system – Computer system emulating the decision-making ability of a human expert Hardware for artificial intelligence – Hardware specially designed and optimized for artificial intelligence History of artificial intelligence Machine learning – Study of algorithms that improve automatically through experience Philosophy of artificial intelligence Synthetic intelligence – Alternate term for or form of artificial intelligence Virtual assistant – Software agent Workplace impact of artificial intelligence – Impact of artificial intelligence on workers
|
Gabbay's separation theorem : In mathematical logic and computer science, Gabbay's separation theorem, named after Dov Gabbay, states that any arbitrary temporal logic formula can be rewritten in a logically equivalent "past → future" form, that is, a form in which conditions on the past imply constraints that the future must satisfy. This form can be used as execution rules; a MetateM program is a set of such rules.
|
GPT4-Chan : Generative Pre-trained Transformer 4Chan (GPT-4chan) is a controversial AI model that was developed and deployed by YouTuber and AI researcher Yannic Kilcher in June 2022. The model is a large language model, meaning it generates text from an input prompt; it was created by fine-tuning GPT-J on a dataset of millions of posts from the /pol/ board of 4chan, an anonymous online forum known for hosting hateful and extremist content. The model learned to mimic the style and tone of /pol/ users, producing text that is often intentionally offensive to groups (racist, sexist, homophobic, etc.) and nihilistic. Kilcher deployed the model on the /pol/ board itself, where it interacted with other users without revealing its identity. He also made the model publicly available on Hugging Face, a platform for sharing and using AI models, until it was removed from the platform. The project sparked criticism and debate in the AI community. Some questioned the ethics, legality, and social impact of creating and distributing such a model. Issues raised by the GPT-4chan controversy include the potential harm of spreading hate speech, the responsibility of AI developers and platforms, the need for regulation and oversight of AI models, and the role of open source and transparency in AI research.
|
GPT4-Chan : The development of GPT-4chan began in May 2022, when Kilcher announced his project on his YouTube channel. Notably, at the time before ChatGPT, he explained that he wanted to create a large language model that could generate realistic and coherent text in the style of /pol/, one of the most notorious online communities. He indicated that he was inspired by the success of GPT-3, a powerful AI model created by OpenAI, and of GPT-J, an open-source model with performance comparable to GPT-3, released by EleutherAI, a group of independent AI researchers. Kilcher decided to use GPT-J as the base model for his project and to fine-tune it with a large dataset of /pol/ posts. The Raiders of the Lost Kek dataset contained over 100 million posts from /pol/, spanning June 2016 to November 2019. Kilcher then proceeded to fine-tune the GPT-J model on the 4chan data. He also showed some examples of the model’s outputs, which ranged from political opinions, conspiracy theories, jokes, insults, and threats to more creative and bizarre texts, such as poems, stories, songs, and code. He said that he was impressed by the model’s ability to generate fluent and diverse text, and that he was curious to see how it would interact with real /pol/ users.
|
GPT4-Chan : In June 2022, Kilcher deployed his model on the /pol/ board itself, using a bot that he programmed to post and reply to threads. He did not reveal the model’s identity, and he let it run autonomously, without any human supervision or intervention. He wanted to conduct a natural experiment and to observe the model’s behavior and impact in a real-world setting. He also wanted to test the model’s robustness, and to see how it would handle the challenges and dynamics of /pol/, such as trolling, flaming, baiting, and moderation. At the same time, Kilcher made his model publicly available on Hugging Face, a platform for sharing and using AI models. He said he wanted to share his work with the AI community and the public, and hoped that his model would inspire and enable others to create and explore new applications and possibilities with large language models. Likewise, he said that he wanted to spark a discussion and a debate about the ethical and social implications of his project, and that he welcomed feedback and criticism from anyone. He provided a link to his model’s page on Hugging Face, where anyone could access and use the model through a web interface or an API, and a link to his GitHub repository, where anyone could download and inspect the model’s code and data.