Jabberwacky : Jabberwacky is a chatbot created by British programmer Rollo Carpenter and launched in 1997. Its stated aim is to "simulate natural human chat in an interesting, entertaining and humorous manner". It is an early attempt at creating an artificially intelligent chatbot through human interaction.
|
Jabberwacky : The stated purpose of the project is to create an artificial intelligence that is capable of passing the Turing Test. It is designed to mimic human interaction and to carry out conversations with users. It is not designed to carry out any other functions. Unlike more traditional AI programs, the learning technology is intended as a form of entertainment rather than being used for computer support systems or corporate representation. Recent developments do allow a more scripted, controlled approach to sit atop the general conversational AI, aiming to bring together the best of both approaches, and usage in the fields of sales and marketing is underway. The ultimate intention is that the program move from a text-based system to be wholly voice-operated, learning directly from sound and other sensory inputs. Its creator believes that it can be incorporated into objects around the home such as robots or talking pets, intending both to be useful and entertaining, keeping people company.
|
Jabberwacky : Cleverbot is the evolved version of the older Jabberwacky chatterbot, or chatbot, originally launched in 1997 on the web. While Cleverbot.com continued to work in 2023, the Jabberwacky website, tagged as "legacy only," stopped working on December 31, 2022, was restored around June 1, 2023 for a little more than two weeks, and then stopped working again.
|
Jabberwacky : 1981 – The first incarnation of this project is created as a program hard-coded on a Sinclair ZX81.
1988 – Learning AI project founded as 'Thoughts'.
1997 – Launched on the Internet as 'Jabberwacky'.
October 2003 – Jabberwacky is awarded third place in the Loebner Prize, beaten by Juergen Pirner's Jabberwock (a German-based chat program).
September 2004 – Jabberwacky is awarded second place in the Loebner Prize, beaten by the computer chat program A.L.I.C.E.
September 2005 – George, a character within Jabberwacky, wins the Loebner Prize.
September 2006 – Joan, another Jabberwacky character, wins the Loebner Prize.
October 2008 – A new variant of Jabberwacky, fuzzier and with deeper context, is launched under the name Cleverbot.
January 2023 – The legacy website repeatedly displayed a 504 Gateway Time-out, with at least one redirect to boibot.com on February 28, 2023. According to the Internet Archive's Wayback Machine, the site was last active and working on December 31, 2022. No media outlet covered the change or noted any press release about it.
June 2023 – The Jabberwacky site became accessible once more until it stopped working again the week of June 18, 2023. As with the earlier outage, no media outlet covered the changes or noted any press release explaining the five-month downtime, the restoration, or the new outage.
|
Jabberwacky : Artificial Linguistic Internet Computer Entity (ALICE) Chatterbot Loebner Prize
|
Jabberwacky : www.jabberwacky.com – the official website. Jabberwacky entry to the Loebner Prize 2005 (archived 30 September 2005 at the Wayback Machine).
|
John M. Jumper : John Michael Jumper (born 1985) is an American chemist and computer scientist. Jumper and Demis Hassabis were awarded the 2024 Nobel Prize in Chemistry for protein structure prediction. He currently serves as director at Google DeepMind. Jumper and his colleagues created AlphaFold, an artificial intelligence (AI) model that predicts protein structures from their amino acid sequences with high accuracy. Jumper stated that the AlphaFold team plans to release 100 million protein structures. The scientific journal Nature included Jumper as one of the ten "people who mattered" in science in its annual Nature's 10 listing for 2021.
|
John M. Jumper : Jumper received a Bachelor of Science with majors in physics and mathematics from Vanderbilt University in 2007, a Master of Philosophy in theoretical condensed matter physics from the University of Cambridge in 2010 on a Marshall Scholarship, a Master of Science in theoretical chemistry from the University of Chicago in 2012, and a Doctor of Philosophy in theoretical chemistry from the University of Chicago in 2017. His doctoral advisors at the University of Chicago were Tobin R. Sosnick and Karl Freed.
|
John M. Jumper : Jumper's research investigates algorithms for protein structure prediction.
|
Kaggle : Kaggle is a data science competition platform and online community for data scientists and machine learning practitioners under Google LLC. Kaggle enables users to find and publish datasets, explore and build models in a web-based data science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges.
|
Kaggle : Kaggle was founded by Anthony Goldbloom and Ben Hamner in April 2010. Jeremy Howard, one of the first Kaggle users, joined in November 2010 and served as President and Chief Scientist. Also on the team was Nicholas Gruen, serving as the founding chair. In 2011, the company raised $12.5 million and Max Levchin became the chairman. On March 8, 2017, Fei-Fei Li, Chief Scientist at Google, announced that Google was acquiring Kaggle. In June 2017, Kaggle surpassed 1 million registered users, and as of October 2023, it has over 15 million users in 194 countries. In 2022, founders Goldbloom and Hamner stepped down from their positions and D. Sculley became the CEO. In February 2023, Kaggle introduced Models, allowing users to discover and use pre-trained models through deep integrations with the rest of Kaggle's platform.
|
Kaggle : Data science competition platform Anthony Goldbloom Hugging Face
|
Kaggle : "Competition shines light on dark matter", Office of Science and Technology Policy, Whitehouse website, June 2011 "May the best algorithm win...", The Wall Street Journal, March 2011 "Kaggle contest aims to boost Wikipedia editors", New Scientist, July 2011 Archived 2016-03-22 at the Wayback Machine "Verification of systems biology research in the age of collaborative competition", Nature Nanotechnology, September 2011
|
KataGo : KataGo is a free and open-source computer Go program, capable of defeating top-level human players. First released on 27 February 2019, it is developed by David Wu, who also developed the Arimaa playing program bot_Sharp which defeated three top human players to win the Arimaa AI Challenge in 2015. KataGo's first release was trained by David Wu using resources provided by his employer Jane Street Capital, but it is now trained by a distributed effort. Members of the computer Go community provide computing resources by running the client, which generates self-play games and rating games, and submits them to a server. The self-play games are used to train newer networks and the rating games to evaluate the networks' relative strengths. KataGo supports the Go Text Protocol, with various extensions, thus making it compatible with popular GUIs such as Lizzie. As an alternative, it also implements a custom "analysis engine" protocol, which is used by the KaTrain GUI, among others. KataGo is widely used by strong human go players, including the South Korean national team, for training purposes. KataGo is also used as the default analysis engine in the online Go website AI Sensei, as well as OGS (the Online Go Server).
|
KataGo : Based on techniques used by DeepMind's AlphaGo Zero, KataGo implements Monte Carlo tree search with a convolutional neural network providing position evaluation and policy guidance. Compared to AlphaGo, KataGo introduces many refinements that enable it to learn faster and play more strongly. Notable features of KataGo that are absent in many other Go-playing programs include score estimation; support for small boards, arbitrary values of komi, and handicaps; and the ability to use various Go rulesets and adjust its play and evaluation for the small differences between them.
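The search scheme described above can be illustrated with the PUCT selection rule from AlphaGo Zero-style Monte Carlo tree search, in which the network's policy prior steers exploration and accumulated value estimates steer exploitation. This is a minimal sketch: the node layout and exploration constant are illustrative, not KataGo's actual implementation, which adds many refinements.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, as in AlphaGo Zero-style search.

    Each child is a dict with:
      prior  -- policy-network probability P(a) for the move
      visits -- visit count N(a)
      value  -- total action value W(a); Q(a) = W(a)/N(a)
    c_puct is an exploration constant (the value here is illustrative).
    """
    total_visits = sum(ch["visits"] for ch in children)

    def score(ch):
        q = ch["value"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return q + u

    return max(children, key=score)

# A node with two candidate moves: one already well explored, one unvisited
# but with a high policy prior. PUCT initially favors the high-prior move.
children = [
    {"prior": 0.1, "visits": 10, "value": 6.0},   # Q = 0.6, well explored
    {"prior": 0.6, "visits": 0,  "value": 0.0},   # unvisited, high prior
]
best = puct_select(children)
```

As visits accumulate on the high-prior child, its exploration bonus shrinks and the search gradually shifts toward moves whose evaluated value holds up.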
|
KataGo : In 2022, KataGo was used as the target for adversarial attack research, designed to demonstrate the "surprising failure modes" of AI systems. The researchers were able to trick KataGo into ending the game prematurely. Adversarial training improves defense against adversarial attacks, though not perfectly.
|
KataGo : KataGo on GitHub KataGo on Sensei's Library KataGo Training website
|
Kindwise : FlowerChecker, also known as Kindwise, is a company that uses machine learning to identify natural objects from images, including plants and their diseases as well as insects and mushrooms. It is based in Brno, Czech Republic, and was founded in 2014 by Ondřej Veselý, Jiří Řihák, and Ondřej Vild, who were Ph.D. students at the time.
|
Kindwise : FlowerChecker offers multiple products. Plant.id is a machine learning-based plant identification API launched in 2018; a plant disease identification API, plant.health, followed in April 2022. The plant.id API is suitable for integration into other software, such as mobile apps or tools that map urban trees from remote-sensing imagery. Other products, insect.id, mushroom.id and crop.health, are machine learning-based identification APIs for insects, fungi and economically important plants, respectively, and each includes an online public demo. The FlowerChecker app was discontinued in October 2024 after 10 years of operation.
|
Kindwise : In 2019, FlowerChecker won the Idea of the Year award in the AI Awards organized by the Confederation of Industry of the Czech Republic. In 2020, an academic study comparing ten free automated image recognition apps showed that plant.id's performance excelled in most of the parameters studied. In an independent study comparing different image-based species recognition models and their suitability for recognizing invasive alien species, plant.id achieved the highest accuracy compared to other tools. In a subsequent study, plant.id was utilized to evaluate urban forest biodiversity using remote-sensing imagery, achieving the highest accuracy in tree species identification among the compared methods.
|
Kindwise : FlowerChecker cooperates with the Nature Conservation Agency of the Czech Republic on a biodiversity mapping project. FlowerChecker plans to adapt its services to participate in the control of invasive species. In 2022, the company entered a consortium to develop a weeder capable of in-row weed detection and removal. In 2025, it received funding for the development of a technology for the removal of invasive species.
|
Kinect : Kinect is a discontinued line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control. Kinect was originally developed as a motion controller peripheral for Xbox video game consoles, distinguished from competitors (such as Nintendo's Wii Remote and Sony's PlayStation Move) by not requiring physical controllers. The first-generation Kinect was based on technology from Israeli company PrimeSense, and unveiled at E3 2009 as a peripheral for Xbox 360 codenamed "Project Natal". It was first released on November 4, 2010, and would go on to sell eight million units in its first 60 days of availability. The majority of the games developed for Kinect were casual, family-oriented titles, which helped to attract new audiences to Xbox 360, but did not result in wide adoption by the console's existing, overall userbase. As part of the 2013 unveiling of Xbox 360's successor, Xbox One, Microsoft unveiled a second-generation version of Kinect with improved tracking capabilities. Microsoft also announced that Kinect would be a required component of the console, and that it would not function unless the peripheral is connected. The requirement proved controversial among users and critics due to privacy concerns, prompting Microsoft to backtrack on the decision. However, Microsoft still bundled the new Kinect with Xbox One consoles upon their launch in November 2013. 
A market for Kinect-based games still did not emerge after the Xbox One's launch; Microsoft would later offer Xbox One hardware bundles without Kinect included, and later revisions of the console removed the dedicated ports used to connect it (requiring a powered USB adapter instead). Microsoft ended production of Kinect for Xbox One in October 2017. Kinect has also been used as part of non-game applications in academic and commercial environments, as it was cheaper and more robust than other depth-sensing technologies at the time. While Microsoft initially objected to such applications, it later released software development kits (SDKs) for the development of Microsoft Windows applications that use Kinect. In 2020, Microsoft released Azure Kinect as a continuation of the technology integrated with the Microsoft Azure cloud computing platform. Part of the Kinect technology was also used within Microsoft's HoloLens project. Microsoft discontinued the Azure Kinect developer kits in October 2023.
|
Kinect : While announcing Kinect's discontinuation in an interview with Fast Co. Design on October 25, 2017, Microsoft stated that 35 million units had been sold since its release. 24 million units of Kinect had been shipped by February 2013. Having sold 8 million units in its first 60 days on the market, Kinect claimed the Guinness World Record of being the "fastest selling consumer electronics device". According to Wedbush analyst Michael Pachter, Kinect bundles accounted for about half of all Xbox 360 console sales in December 2010 and for more than two-thirds in February 2011. More than 750,000 Kinect units were sold during the week of Black Friday 2011.
|
Kinect : Kinect competed with several motion controllers on other home consoles, such as Wii Remote, Wii Remote Plus and Wii Balance Board for the Wii and Wii U, PlayStation Move and PlayStation Eye for the PlayStation 3, and PlayStation Camera for the PlayStation 4. While the Xbox 360 Kinect's controller-less nature enabled it to offer a motion-controlled experience different from the wand-based controls of the Wii and PlayStation Move, this has occasionally hindered developers from developing certain motion-controlled games that could target all three seventh-generation consoles and still provide the same experience regardless of console. Examples of seventh-generation motion-controlled games that were released on Wii and PlayStation 3, but had a version for Xbox 360 cancelled or ruled out from the start, due to issues with translating wand controls to the camera-based movement of the Kinect, include Dead Space: Extraction, The Lord of the Rings: Aragorn's Quest and Phineas and Ferb: Across the 2nd Dimension.
|
Kinect : The machine learning work on human motion capture within Kinect won the 2011 MacRobert Award for engineering innovation. Kinect won T3's "Gadget of the Year" award for 2011, along with its "Gaming Gadget of the Year" prize. The Microsoft Kinect for Windows Software Development Kit was ranked second in "The 10 Most Innovative Tech Products of 2011" at the Popular Mechanics Breakthrough Awards ceremony in New York City. Microsoft Kinect for Windows won Innovation of the Year in the 2012 Seattle 2.0 Startup Awards.
|
Kinect : Dreameye EyeToy Xbox Live Vision
|
Kinect : Official website for Xbox + Kinect Official website for Kinect for Windows
|
Komodo (chess) : Komodo and Dragon by Komodo Chess (also known as Dragon or Komodo Dragon) are UCI chess engines developed by Komodo Chess, which is a part of Chess.com. The engines were originally authored by Don Dailey and GM Larry Kaufman. Dragon is a commercial chess engine, but Komodo is free for non-commercial use. Dragon is consistently ranked near the top of most major chess engine rating lists, along with Stockfish and Leela Chess Zero.
|
Komodo (chess) : Komodo vs Hannibal, nTCEC - Stage 2b - Season 1, Round 4.1, ECO: A10, 1–0 (Archived 2016-03-04 at the Wayback Machine) – Komodo sacrifices an exchange for positional gain. Gull vs Komodo, nTCEC - Stage 3 - Season 2, Round 2.2, ECO: E10, 0–1 (Archived 2016-03-04 at the Wayback Machine)
|
Komodo (chess) : Official website
|
Leela Chess Zero : Leela Chess Zero (abbreviated as LCZero, lc0) is a free, open-source chess engine and volunteer computing project based on Google's AlphaZero engine. It was spearheaded by Gary Linscott, a developer for the Stockfish chess engine, and adapted from the Leela Zero Go engine. Like Leela Zero and AlphaGo Zero, early iterations of Leela Chess Zero started with no intrinsic chess-specific knowledge other than the basic rules of the game. It learned how to play chess through reinforcement learning from repeated self-play, using a distributed computing network coordinated at the Leela Chess Zero website. However, as of November 2024 most models used by the engine are trained through supervised learning on data generated by previous reinforcement learning runs. As of June 2024, Leela Chess Zero has played over 2.5 billion games against itself, playing around 1 million games every day, and is capable of play at a level that is comparable with Stockfish, the leading conventional chess program.
|
Leela Chess Zero : The Leela Chess Zero project was first announced on TalkChess.com on January 9, 2018, as an open-source, self-learning chess engine attempting to recreate the success of AlphaZero. Within the first few months of training, Leela Chess Zero had already reached the Grandmaster level, surpassing the strength of early releases of Rybka, Stockfish, and Komodo, despite evaluating orders of magnitude fewer positions due to the size of the deep neural network it uses as its evaluation function. In December 2018, the AlphaZero team published a paper in Science magazine revealing previously undisclosed details of the architecture and training parameters used for AlphaZero. These changes were soon incorporated into Leela Chess Zero and increased both its strength and training efficiency. Work on Leela Chess Zero has informed the AobaZero project for shogi. The engine has been rewritten and carefully iterated upon since its inception, and since 2019 has run on multiple backends, allowing it to run on both CPU and GPU. The engine can be configured to use different weights, including even different architectures. This same mechanism of substitutable weights can also be used for alternative chess rules, such as for the Fischer Random Chess variant, which was done in 2019.
|
Leela Chess Zero : Like AlphaZero, Leela Chess Zero employs neural networks which output both a policy vector, a distribution over subsequent moves used to guide search, and a position evaluation. These neural networks are designed to run on GPU, unlike traditional engines. It originally used residual neural networks, but in 2022 switched to using a transformer-based architecture designed by Daniel Monroe and Philip Chalmers. These models represent a chessboard as a sequence of 64 tokens and apply a trunk consisting of a stack of Post-LN encoder layers, outputting a sequence of 64 encoded tokens which is used to generate a position evaluation and a distribution over subsequent moves. They use a custom domain-specific position encoding called smolgen to improve the self-attention layer. As of November 2024, the models used by the engine are significantly larger and more efficient than the residual network used by AlphaZero, reportedly achieving grandmaster-level strength at one position evaluation per move. These models are able to detect and exploit positional features like trapped pieces and fortresses to outmaneuver traditional engines, giving Leela a unique playstyle. There is also evidence that they are able to perform look-ahead.
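The 64-token view of the board described above can be sketched as follows. The piece-to-id mapping and board format here are invented for illustration; Lc0's real input encoding is richer (move history, castling rights, and so on) and each token is an embedding vector rather than a bare integer.

```python
# Minimal sketch of a 64-token board encoding for a transformer trunk.
# Piece codes are illustrative, not Lc0's real input format.
PIECES = ".PNBRQKpnbrqk"          # empty square + white/black pieces

def board_to_tokens(board_rows):
    """Flatten an 8x8 board (list of 8 strings) into 64 integer token ids."""
    assert len(board_rows) == 8 and all(len(r) == 8 for r in board_rows)
    return [PIECES.index(sq) for row in board_rows for sq in row]

start = [
    "rnbqkbnr",
    "pppppppp",
    "........",
    "........",
    "........",
    "........",
    "PPPPPPPP",
    "RNBQKBNR",
]
tokens = board_to_tokens(start)   # 64 ids, one per square
```

In the architecture described in the text, a stack of encoder layers would map these 64 per-square embeddings to 64 encoded tokens, from which the policy distribution and the position evaluation are computed.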
|
Leela Chess Zero : Like AlphaZero, Leela Chess Zero learns through reinforcement learning, continually training on data generated through self-play. However, unlike AlphaZero, Leela Chess Zero decentralizes its data generation through distributed computing, with volunteers generating self-play data on local hardware which is fed to the reinforcement learning algorithm. To contribute training games, volunteers download the latest non-release-candidate (non-rc) version of the engine and the client. The client connects to the Leela Chess Zero server, iteratively receives the latest neural network version, and produces self-play games which are sent back to the server and used to train the network. To run the Leela Chess Zero engine, two components are needed: the engine binary, which performs search, and a network, which evaluates positions. The client, which is used to contribute training data to the project, is not needed for this purpose. Older networks can also be downloaded and used by placing them in the folder with the Lc0 binary.
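The contribution loop described above (fetch the latest network, generate self-play games, upload them for training) can be mocked in a few lines. Everything here, class names, fields, and game strings, is invented for illustration; the real client speaks HTTP to the project's server and runs the actual engine for self-play.

```python
# Illustrative mock of the volunteer-client loop, not the real protocol.
class MockServer:
    def __init__(self):
        self.network_version = 1
        self.training_games = []

    def latest_network(self):
        return self.network_version

    def upload(self, games):
        self.training_games.extend(games)
        # In the real project, accumulated self-play games are used to
        # train the next network; here we just bump the version number
        # once enough games have arrived.
        if len(self.training_games) >= 2:
            self.network_version += 1

def client_iteration(server):
    net = server.latest_network()                             # fetch network
    games = [f"selfplay-net{net}-game{i}" for i in range(2)]  # self-play
    server.upload(games)                                      # send results

server = MockServer()
client_iteration(server)
```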
|
Leela Chess Zero : In Season 15 of the Top Chess Engine Championship, the engine AllieStein competed alongside Leela. AllieStein is a combination of two different spinoffs from Leela: Allie, which uses the same neural network as Leela but has a unique search algorithm for exploring different lines of play, and Stein, a network trained using supervised learning on existing game data from games between other engines. While neither of these projects was admitted to TCEC separately due to their similarity to Leela, the combination of Allie's search algorithm with the Stein network, called AllieStein, was deemed unique enough to warrant its inclusion in the competition. In early 2021, the LcZero blog announced Ceres, a rewrite of the engine in C# that introduced several algorithmic improvements. The engine has performed competitively in tournaments, achieving third place in the TCEC Swiss 7 and fourth place in the TCEC Cup 4. In 2024, the CeresTrain framework was announced to support training deep neural networks for chess in PyTorch.
|
Leela Chess Zero : In April 2018, Leela Chess Zero became the first engine using a deep neural network to enter the Top Chess Engine Championship (TCEC), during Season 12 in the lowest division, Division 4. Out of 28 games, it won one, drew two, and lost the remainder; its sole victory came from a position in which its opponent, Scorpio 2.82, crashed in three moves. However, it improved quickly. In July 2018, Leela placed seventh out of eight competitors at the 2018 World Computer Chess Championship. In August 2018, it won division 4 of TCEC season 13 with a record of 14 wins, 12 draws, and 2 losses. In Division 3, Leela scored 16/28 points, finishing third behind Ethereal, which scored 22.5/28 points, and Arasan on tiebreak. By September 2018, Leela had become competitive with the strongest engines in the world. In the 2018 Chess.com Computer Chess Championship (CCCC), Leela placed fifth out of 24 entrants. The top eight engines advanced to round 2, where Leela placed fourth. Leela then won the 30-game match against Komodo to secure third place in the tournament. Leela participated in the "TCEC Cup", an event in which engines from different TCEC divisions can play matches against one another. Leela defeated higher-division engines Laser, Ethereal and Fire before finally being eliminated by Stockfish in the semi-finals. In October and November 2018, Leela participated in the Chess.com Computer Chess Championship Blitz Battle, finishing third behind Stockfish and Komodo. In December 2018, Leela participated in Season 14 of the Top Chess Engine Championship. Leela dominated divisions 3, 2, and 1, easily finishing first in all of them. In the premier division, Stockfish dominated while Houdini, Komodo and Leela competed for second place. It came down to a final-round game where Leela needed to hold Stockfish to a draw with black to finish second ahead of Komodo. Leela managed this and therefore met Stockfish in the superfinal. 
In a back and forth match, first Stockfish and then Leela took three game leads before Stockfish won by the narrow margin of 50.5–49.5. In February 2019, Leela scored its first major tournament win when it defeated Houdini in the final of the second TCEC cup. Leela did not lose a game the entire tournament. In April 2019, Leela won the Chess.com Computer Chess Championship 7: Blitz Bonanza, becoming the first neural-network project to take the title. In the season 15 of the Top Chess Engine Championship (May 2019), Leela defended its TCEC Cup title, this time defeating Stockfish with a score of 5.5–4.5 (+2 =7 −1) in the final after Stockfish blundered a seven-man tablebase draw. Leela also won the Superfinal for the first time, scoring 53.5–46.5 (+14 −7 =79) versus Stockfish, including winning as both white and black in the same predetermined opening in games 61 and 62. Season 16 of TCEC saw Leela finish in third place in premier division, missing qualification for the Superfinal to Stockfish and the new deep neural network engine AllieStein. Leela was the only engine not to suffer any losses in the Premier division, and defeated Stockfish in one of the six games they played. However, Leela only managed to score nine wins, while AllieStein and Stockfish both scored 14 wins. This inability to defeat weaker engines led to Leela finishing third, half a point behind AllieStein and a point behind Stockfish. In the fourth TCEC Cup, Leela was seeded first as the defending champion, which placed it on the opposite half of the brackets as AllieStein and Stockfish. Leela was able to qualify for the finals, where it faced Stockfish. After seven draws, Stockfish won the eighth game to win the match. In Season 17 of TCEC, held in January–April 2020, Leela regained the championship by defeating Stockfish 52.5–47.5, scoring a remarkable six wins in the final ten games, including winning as both white and black in the same predetermined opening in games 95 and 96. 
It qualified for the Superfinal again in Season 18, but this time was defeated by Stockfish 53.5–46.5. In the TCEC Cup 6 final, Leela lost to AllieStein, finishing second. Season 19 of TCEC saw Leela qualify for the Superfinal again. This time it played against a new Stockfish version with support for NNUE, a shallow neural network–based evaluation function used primarily for the leaf nodes of the search tree. Stockfish NNUE defeated Leela convincingly with a final score of 54.5–45.5 (+18 −9 =73). Since then, Leela has repeatedly qualified for the Superfinal, only to lose every time to Stockfish: +14 −8 =78 in Season 20, +19 −7 =74 in Season 21, +27 −10 =63 in Season 23, +20 −16 =64 in Season 24, +27 −23 =50 in Season 25, +31 −17 =52 in Season 26, and +35 −18 =47 in Season 27. Since the introduction of NNUE to Stockfish, Leela has scored victories at the TCEC Swiss 6 and 7 and the TCEC Cup 11, and is usually a close second behind Stockfish in major tournaments.
|
Leela Chess Zero : Leela vs Stockfish, CCCC bonus games, 1–0 Leela beats the world champion Stockfish engine despite a one-pawn handicap. Stockfish vs Leela Chess Zero – TCEC S15 Superfinal – Game 61 Leela completely outplays Stockfish with black pieces in the Trompovsky attack. Leela's eval went from 0.1 to −1.2 in one move, and Stockfish's eval did not go negative until 15 moves later. Stockfish vs Leela, CCC 21 Rapid Semifinals — Game 163 Leela traps Stockfish's rook, allowing it to win as black. Stockfish vs Leela, CCC 23 Rapid Finals – Game 189 Leela constructs a fortress as black, allowing it to make a draw out of a lost position. Stockfish was not able to detect the fortress, with its eval staying above +5 for over 100 moves.
|
Leela Chess Zero : Official website Leela Chess Zero on GitHub Neural network training client Engine Neural nets Chessprogramming wiki on Leela Chess Zero
|
Leela Zero : Leela Zero is a free and open-source computer Go program released on 25 October 2017. It is developed by Belgian programmer Gian-Carlo Pascutto, the author of chess engine Sjeng and Go engine Leela. Leela Zero's algorithm is based on DeepMind's 2017 paper about AlphaGo Zero. Unlike the original Leela, which has a lot of human knowledge and heuristics programmed into it, the program code in Leela Zero only knows the basic rules and nothing more. The knowledge that makes Leela Zero a strong player is contained in a neural network, which is trained based on the results of previous games that the program played. Leela Zero is trained by a distributed effort, which is coordinated at the Leela Zero website. Members of the community provide computing resources by running the client, which generates self-play games and submits them to the server. The self-play games are used to train newer networks. Generally, over 500 clients have connected to the server to contribute resources. The community has provided high quality code contributions as well.
|
Leela Zero : Leela Zero finished third at the BerryGenomics Cup World AI Go Tournament in Fuzhou, Fujian, China on 28 April 2018. The New Yorker at the end of 2018 characterized Leela and Leela Zero as "the world’s most successful open-source Go engines". In early 2018, another team branched Leela Chess Zero from the same code base, also to verify the methods in the AlphaZero paper as applied to the game of chess. AlphaZero's use of Google TPUs was replaced by a crowd-sourcing infrastructure and the ability to use graphics card GPUs via the OpenCL library. Even so, it was expected to take a year of crowd-sourced training to make up for the dozen hours that AlphaZero was allowed to train for its chess match in the paper. The distributed training server was shut down on 2021-02-15, marking the end of the Leela Zero project; the page now directs visitors to KataGo and SAI. Model sizes increased steadily over time: the first released model, with hash name d645af97 and size 1x8 (1 layer, 8 channels), was released on 2017-11-10 13:04, and the last, with hash name 0e9ea880 and size 40x256, was released on 2021-02-15 09:04.
|
Leela Zero : Leela Zero is an (almost) exact replication of AlphaGo Zero in both training process and architecture. The training process is Monte Carlo tree search with self-play, exactly as in AlphaGo Zero, and the architecture matches AlphaGo Zero's with one difference. Consider the last released model, 0e9ea880. It has 47 million parameters and the following architecture: The stem of the network takes as input an 18×19×19 tensor representation of the Go board. 8 channels are the positions of the current player's stones from the last eight time steps (1 if there is a stone, 0 otherwise; time steps before the beginning of the game are all 0). 8 channels are the positions of the other player's stones from the last eight time steps. 1 channel is all 1 if black is to move, and all 0 otherwise. 1 channel is all 1 if white is to move, and all 0 otherwise. (This last channel is not present in the original AlphaGo Zero.) The body is a ResNet with 40 residual blocks and 256 channels. There are two heads, a policy head and a value head. The policy head outputs a logit array of size 19 × 19 + 1, representing the logit of playing at each point plus the logit of passing. The value head outputs a number in the range (−1, +1), representing the expected outcome for the current player: −1 represents the current player losing, and +1 winning.
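The 18-channel input layout listed above can be sketched in plain Python. This follows the channel order given in the text; the function and variable names are invented for the sketch, and a real implementation would use a tensor library rather than nested lists.

```python
# Sketch of the 18x19x19 input tensor described above (illustrative only).
# history: list of (own_stones, opp_stones) tuples of (x, y) sets for the
# last 8 time steps, most recent first; missing steps count as empty boards.
def encode_input(history, black_to_move, size=19):
    planes = []
    # 8 planes: current player's stones over the last eight time steps
    for t in range(8):
        own = history[t][0] if t < len(history) else set()
        planes.append([[1.0 if (x, y) in own else 0.0
                        for x in range(size)] for y in range(size)])
    # 8 planes: opponent's stones over the last eight time steps
    for t in range(8):
        opp = history[t][1] if t < len(history) else set()
        planes.append([[1.0 if (x, y) in opp else 0.0
                        for x in range(size)] for y in range(size)])
    # 1 plane of all ones if black is to move, then 1 plane for white
    planes.append([[1.0 if black_to_move else 0.0] * size for _ in range(size)])
    planes.append([[0.0 if black_to_move else 1.0] * size for _ in range(size)])
    return planes  # 18 x 19 x 19

# One black stone at (3, 3), one white stone at (15, 15), black to move.
tensor = encode_input([({(3, 3)}, {(15, 15)})], black_to_move=True)
```

The ResNet body would consume this tensor, and the two heads would produce the 19 × 19 + 1 policy logits and the scalar value in (−1, +1).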
|
Leela Zero : Official website Leela Zero on GitHub Leela Zero on Sensei's Library Play Leela Zero on ZBaduk
|
Mittens (chess) : Mittens is a chess engine developed by Chess.com. It was released on January 1, 2023, alongside four other engines, all of them given cat-related names. The engine became a viral sensation in the chess community due to exposure through content made by chess streamers and a social media marketing campaign, later contributing to record levels of traffic to the Chess.com website and causing issues with database scalability. Mittens was given a rating of one point by Chess.com, although it was evidently stronger than that. Various chess masters played matches against the engine, with players such as Hikaru Nakamura and Levy Rozman drawing and losing their games respectively. A month after its release, Mittens was removed from the website on February 1, as expected through Chess.com's monthly bot cycles. In December 2023, Mittens was brought back in a group of Chess.com's most popular bots of 2023. In January 2024, Mittens was removed again.
|
Mittens (chess) : Mittens was released on January 1, 2023, as part of a New Year event on Chess.com. It was one of five engines released, all with names related to cats. The other engines were named Scaredy Cat, rated 800; Angry Cat, rated 1000; Mr. Grumpers, rated 1200; and Catspurrov (a pun on Garry Kasparov), rated 1400. As part of the announcement, a picture of each engine was accompanied by a short description of its character. The description given for Mittens suggested that the engine was hiding something, reading: "Mittens likes chess… But how good is she?" Of the five engines released, Mittens was by far the most popular. In December 2023, Chess.com re-released Mittens as part of a "best of 2023" group of chess bots made to showcase their most popular bots of the year.
|
Mittens (chess) : Mittens was conceptualized by Chess.com employee Will Whalen. Appearing as a kitten, Mittens trash-talked its opponents with a selection of voice lines; these included quotes from J. Robert Oppenheimer, Vincent van Gogh and Friedrich Nietzsche, as well as the 1967 film Le Samouraï. The engine's "personality" was devised by a writing team headed by Sean Becker, and Marija Casic provided the engine's graphics. Chess.com did not disclose any information about the software running the engine; it may be based on Chess.com's Komodo Dragon 3 engine. Mittens' strategy was to slowly grind down an opponent, a tactic likened to the playing style of Anatoly Karpov. Becker stated that the design team believed it would be "way more demoralizing and funny" for the engine to play this way. According to Hikaru Nakamura, Mittens sometimes missed the best move (or winning positions).
|
Mittens (chess) : On Chess.com, Mittens had a rating of one point. However, the engine's playing style and tactics showed that it was far stronger: Mittens was able to beat or draw against many top human players. In an interview with CNN Business, Whalen stated that the idea behind giving Mittens a rating of one was to surprise its opponents, giving it the upper hand psychologically. Estimates of Mittens' true rating range from an Elo of 3200 to 3500, based on its ability to beat other engines of around that level. An upper bound on the engine's rating was found after Levy Rozman made Mittens play against Stockfish 15, a 3700-rated engine; Mittens lost both games the engines played. The range of Mittens' possible ratings was summarized by Dot Esports, which stated: "It seems like she's around the 3200–3500 rating range (in Chess.com terms, where the best human players, like Magnus Carlsen and Hikaru Nakamura, sport a 3000–3100 rating in the faster formats), as evidenced by her victories over the site's otherwise strongest, 3200-rated bots, and her defeat to Stockfish 15, which is currently rated around 3700."
|
Mittens (chess) : Against human players, Mittens won over 99 percent of the millions of games it played. Chess players such as Hikaru Nakamura, Benjamin Bok, Levy Rozman and Eric Rosen struggled against Mittens; while Rozman and Rosen both lost against the engine, Nakamura and Bok were both able to make a draw. In particular, Nakamura's game against the engine lasted 166 moves; he was playing as White. Bok, Benjamin Finegold and Rozman later went on to win against Mittens, the latter with engine assistance from Stockfish. Magnus Carlsen publicly refused to play the engine, calling it a "transparent marketing trick" and "a soulless computer". Against other chess engines, Mittens participated in the Chess.com Computer Chess Championship as a side act. In the competition, Mittens played 150 games against an engine named after the film M3GAN and won overall with a score of 81.5 to 68.5. This equated to 54 percent of the games played. During the event, an estimate of Mittens' rating was made at 3515 points.
|
Mittens (chess) : Mittens went viral in the chess community due to its concept and design: according to an announcement by Chess.com, a combined total of 120 million games were played against the cat engines over the course of January, with around 40 million played against Mittens. The engine's popularity was helped by the social media exposure created by Chess.com, which included an official Twitter account set up to promote the engine. Chess streamers like Rozman and Nakamura helped cultivate this by creating content around the engine. A video by Nakamura entitled "Mittens the chess bot will make you quit chess" gained over 3.5 million views on YouTube. On January 11, Chess.com reported issues with database scalability due to record levels of traffic: 40 percent more games had been played on Chess.com in January 2023 than in any other month since the website's release. According to The Wall Street Journal, the popularity spike exceeded the similar surge following the release of Netflix's The Queen's Gambit. The popularity of Mittens was cited by Chess.com as a reason for this instability. The problems continued throughout January; Chess.com stated that it would have to upgrade its servers and invest more in cloud computing to solve the problems caused by the website's popularity surge. On February 1, 2023, Mittens and the other cat engines were removed from the computer section of Chess.com. They were replaced with five new engines themed around artificial intelligence. A tweet was posted on Mittens's Twitter account after the engine's removal, reading "This is just the beginning. Goodbye for now."
|
Mittens (chess) : Mittens on Twitter Hamilton College interview with Will Whalen
|
MuZero : MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules. Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its performance in Go (setting a new world record), and improved on the state of the art in mastering a suite of 57 Atari games (the Arcade Learning Environment), a visually complex domain. MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases. The trained algorithm used the same convolutional and residual architecture as AlphaZero, but with 20 percent fewer computation steps per node in the search tree. MuZero's capacity to plan and learn effectively without explicit rules makes it a notable advance in model-based reinforcement learning.
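The approach can be made concrete with a toy sketch of MuZero's three learned functions: a representation function h (observation to hidden state), a dynamics function g (state and action to next state and predicted reward), and a prediction function f (state to policy and value). The hash-based stubs below are illustrative stand-ins only, not the deep residual networks MuZero actually learns:

```python
# Toy stand-ins for MuZero's three learned functions. Real MuZero uses
# deep residual networks; these stubs only show the data flow.
def representation(observation):        # h: observation -> hidden state
    return hash(tuple(observation)) % 1000

def dynamics(state, action):            # g: (state, action) -> (state', reward)
    next_state = (state * 31 + action) % 1000
    reward = 0.0                        # predicted by the model, not given by rules
    return next_state, reward

def prediction(state):                  # f: state -> (policy, value)
    policy = [0.25, 0.25, 0.25, 0.25]   # uniform over 4 toy actions
    value = 0.0
    return policy, value

def unroll(observation, actions):
    """Plan entirely inside the learned model: after the first encoding,
    the environment (and its rules) is never consulted again."""
    state = representation(observation)
    trajectory = []
    for a in actions:
        state, reward = dynamics(state, a)
        policy, value = prediction(state)
        trajectory.append((state, reward, value))
    return trajectory
```

This is the key difference from AlphaZero: the search tree is built over hidden states produced by g rather than over real game positions produced by a rules engine.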
|
MuZero : On November 19, 2019, the DeepMind team released a preprint introducing MuZero. In DeepMind's words, "MuZero really is discovering for itself how to build a model and understand it just from first principles."
|
MuZero : MuZero used 16 third-generation tensor processing units (TPUs) for training and 1000 TPUs for self-play for board games, with 800 simulations per step; and 8 TPUs for training and 32 TPUs for self-play for Atari games, with 50 simulations per step. AlphaZero used 64 second-generation TPUs for training and 5000 first-generation TPUs for self-play. As TPU design has improved (third-generation chips are individually twice as powerful as second-generation chips, with further advances in bandwidth and networking across chips in a pod), these are comparable training setups. R2D2, the previous state of the art on the Atari benchmark, was trained for 5 days through 2 million training steps.
|
MuZero : MuZero was viewed as a significant advancement over AlphaZero, and a generalizable step forward in unsupervised learning techniques. The work was seen as advancing understanding of how to compose systems from smaller components, a systems-level development more than a pure machine-learning development. While only pseudocode was released by the development team, Werner Duvaud produced an open source implementation based on it. MuZero has been used as a reference implementation in other work, for instance as a way to generate model-based behavior. In late 2021, a more efficient variant of MuZero was proposed, named EfficientZero. It "achieves 194.3 percent mean human performance and 109.0 percent median performance on the Atari 100k benchmark with only two hours of real-time game experience". In early 2022, a variant of MuZero was proposed to play stochastic games (for example 2048, backgammon), called Stochastic MuZero, which uses afterstate dynamics and chance codes to account for the stochastic nature of the environment when training the dynamics network.
|
MuZero : General game playing Unsupervised learning
|
MuZero : Initial MuZero preprint Open source implementations
|
Object co-segmentation : In computer vision, object co-segmentation is a special case of image segmentation, which is defined as jointly segmenting semantically similar objects in multiple images or video frames.
|
Object co-segmentation : It is often challenging to extract segmentation masks of a target/object from a noisy collection of images or video frames, which involves object discovery coupled with segmentation. A noisy collection implies that the object/target is present sporadically in a set of images or the object/target disappears intermittently throughout the video of interest. Early methods typically involve mid-level representations such as object proposals.
|
Object co-segmentation : A joint object discovery and co-segmentation method based on coupled dynamic Markov networks has been proposed recently, claiming significant improvements in robustness against irrelevant/noisy video frames. Unlike previous efforts, which conveniently assume the consistent presence of the target objects throughout the input video, this coupled dynamic Markov network-based algorithm simultaneously carries out both the detection and segmentation tasks with two respective Markov networks jointly updated via belief propagation. Specifically, the Markov network responsible for segmentation is initialized with superpixels and provides information for its Markov counterpart responsible for the object detection task. Conversely, the Markov network responsible for detection builds the object proposal graph with inputs including the spatio-temporal segmentation tubes.
|
Object co-segmentation : Graph cut optimization is a popular tool in computer vision, especially in earlier image segmentation applications. As an extension of regular graph cuts, multi-level hypergraph cut has been proposed to account for more complex high-order correspondences among video groups, beyond typical pairwise correlations. With such a hypergraph extension, multiple modalities of correspondence, including low-level appearance, saliency, coherent motion and high-level features such as object regions, can be seamlessly incorporated into the hyperedge computation. In addition, as a core advantage over co-occurrence-based approaches, the hypergraph implicitly retains more complex correspondences among its vertices, with the hyperedge weights conveniently computed by eigenvalue decomposition of Laplacian matrices.
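The Laplacian construction mentioned above can be sketched with NumPy. This follows the standard normalized hypergraph Laplacian of Zhou et al. (an assumed formulation; the work summarized here may differ in detail), where H is the vertex-by-hyperedge incidence matrix and w the hyperedge weights:

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    deg_v = H @ w                        # vertex degrees: sum of incident edge weights
    deg_e = H.sum(axis=0)                # hyperedge degrees: vertices per edge
    Dv_is = np.diag(1.0 / np.sqrt(deg_v))
    theta = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / deg_e) @ H.T @ Dv_is
    return np.eye(H.shape[0]) - theta

# Two hyperedges over four vertices: {0, 1, 2} and {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
L = hypergraph_laplacian(H, np.array([1.0, 1.0]))
eigvals, eigvecs = np.linalg.eigh(L)     # spectrum used for partitioning
```

The smallest eigenvalue of a connected hypergraph's Laplacian is zero, and the eigenvectors associated with the small eigenvalues drive the spectral partitioning of vertices into co-segmented groups.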
|
Object co-segmentation : In action localization applications, object co-segmentation is also implemented as the segment-tube spatio-temporal detector. Inspired by recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), Le et al. present a new spatio-temporal action localization detector, Segment-tube, which consists of sequences of per-frame segmentation masks. The Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-frame segmentation masks instead of bounding boxes, offering superior spatial accuracy to tubelets. This is achieved by alternating iterative optimization between temporal action localization and spatial action segmentation. The proposed Segment-tube detector is illustrated in the flowchart on the right. The sample input is an untrimmed video containing all frames in a pair figure-skating video, with only a portion of these frames belonging to a relevant category (e.g., the DeathSpirals). Initialized with saliency-based image segmentation on individual frames, the method first performs a temporal action localization step with a cascaded 3D CNN and LSTM, and pinpoints the starting frame and the ending frame of a target action with a coarse-to-fine strategy. Subsequently, the Segment-tube detector refines per-frame spatial segmentation with graph cut by focusing on the relevant frames identified by the temporal action localization step. The optimization alternates between the temporal action localization and spatial action segmentation in an iterative manner. Upon practical convergence, the final spatio-temporal action localization results are obtained in the format of a sequence of per-frame segmentation masks (bottom row in the flowchart) with precise starting/ending frames.
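The alternating optimization described above reduces to a simple control loop. The sketch below is purely schematic: `localize_action` and `segment_frames` are toy stand-ins for the paper's cascaded 3D CNN + LSTM temporal step and graph-cut spatial step, and frames are modeled as lists of per-pixel saliency values:

```python
# Toy stand-ins so the skeleton runs; the real steps are a cascaded
# 3D CNN + LSTM (temporal localization) and graph cut (spatial segmentation).
def initial_saliency_mask(frame):
    return [v > 0.5 for v in frame]

def localize_action(frames, masks):
    """Temporal step: pinpoint the start/end frames of the action."""
    active = [i for i, m in enumerate(masks) if any(m)]
    return (active[0], active[-1] + 1) if active else (0, len(frames))

def segment_frames(frames, start, end):
    """Spatial step: refine masks, focusing only on the localized frames."""
    return [[v > 0.5 if start <= i < end else False for v in f]
            for i, f in enumerate(frames)]

def segment_tube(frames, max_iters=10):
    """Alternate temporal localization and spatial segmentation to convergence."""
    masks = [initial_saliency_mask(f) for f in frames]
    start, end = 0, len(frames)
    for _ in range(max_iters):
        new_start, new_end = localize_action(frames, masks)      # temporal
        new_masks = segment_frames(frames, new_start, new_end)   # spatial
        if (new_start, new_end) == (start, end):                 # practical convergence
            break
        start, end, masks = new_start, new_end, new_masks
    return start, end, masks
```

Each pass tightens the temporal window using the current masks and then re-segments only within that window, which is the coupling that gives Segment-tube both precise start/end frames and per-frame masks.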
|
Object co-segmentation : Image segmentation Object detection Video content analysis Image analysis Digital image processing Activity recognition Computer vision Convolutional neural network Long short-term memory == References ==
|
OpenVINO : OpenVINO is an open-source software toolkit for optimizing and deploying deep learning models. It enables programmers to develop scalable and efficient AI solutions with relatively few lines of code. It supports several popular model formats and categories, such as large language models, computer vision, and generative AI. Actively developed by Intel, it prioritizes high-performance inference on Intel hardware but also supports ARM/ARM64 processors and encourages contributors to add new devices to the portfolio. Written in C++, it offers C/C++, Python, and Node.js (early preview) APIs. OpenVINO is cross-platform and free to use under the Apache License 2.0.
|
OpenVINO : The simplest OpenVINO usage involves obtaining a model and running it as is. Yet for the best results, a more complete workflow is suggested: obtain a model in one of the supported frameworks; convert the model to OpenVINO IR using the OpenVINO Converter tool; optimize the model using the training-time or post-training options provided by OpenVINO's NNCF; and execute inference using OpenVINO Runtime, specifying one of several inference modes.
|
OpenVINO : OpenVINO IR is the default format used to run inference. It is saved as a set of two files, *.bin and *.xml, containing weights and topology, respectively. It is obtained by converting a model from one of the supported frameworks, using the application's API or a dedicated converter. Models of the supported formats may also be used for inference directly, without prior conversion to OpenVINO IR. Such an approach is more convenient but offers fewer optimization options and lower performance, since the conversion is performed automatically before inference. Some pre-converted models can be found in the Hugging Face repository. The supported model formats are: PyTorch TensorFlow TensorFlow Lite ONNX (including formats that may be serialized to ONNX) PaddlePaddle JAX/Flax
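The topology half of an IR pair can be illustrated with a hand-written, heavily simplified *.xml sketch (a real model.xml generated by the converter carries many more attributes, an <edges> section wiring layer ports together, and a companion *.bin holding the weights). Parsing it with Python's standard library shows the layer structure:

```python
import xml.etree.ElementTree as ET

# Schematic, hand-written sketch of an IR topology file; illustrative only.
IR_XML = """
<net name="toy_model" version="11">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="fc" type="MatMul"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
</net>
"""

net = ET.fromstring(IR_XML)
# List each layer's name and operation type from the topology file.
layers = [(l.get("name"), l.get("type")) for l in net.iter("layer")]
```

The separation of topology (*.xml) from weights (*.bin) is what lets tools inspect or edit a network's structure without touching the (often much larger) weight data.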
|
OpenVINO : OpenVINO runs on Windows, Linux and macOS.
|
OpenVINO : Comparison of deep learning software == References ==
|
OpenVX : OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. It is designed by the Khronos Group to facilitate portable, optimized and power-efficient processing of methods for vision algorithms. It is aimed at embedded and real-time programs within computer vision and related scenarios. It uses a connected graph representation of operations.
|
OpenVX : OpenVX specifies a higher level of abstraction for programming computer vision use cases than compute frameworks such as OpenCL. This high level makes programming easier, while the underlying execution remains efficient across different computing architectures, all behind a consistent and portable vision acceleration API. OpenVX is based on a connected graph of vision nodes that execute the preferred chain of operations. It uses an opaque memory model, allowing image data to be moved between host (CPU) memory and accelerator memory, such as GPU memory. As a result, an OpenVX implementation can optimize execution through various techniques, such as acceleration on various processing units or dedicated hardware. This architecture lets applications programmed in OpenVX run on systems with different power and performance characteristics, including battery-sensitive, vision-enabled wearable displays. OpenVX is complementary to the open source vision library OpenCV. In some applications, OpenVX offers better-optimized graph management than OpenCV.
|
OpenVX : The OpenVX 1.0 specification was released in October 2014. An OpenVX sample implementation was released in December 2014. The OpenVX 1.1 specification was released on May 2, 2016. OpenVX 1.2 was released on May 1, 2017. An updated OpenVX adopters program and the OpenVX 1.2 conformance test suite were released on November 21, 2017. OpenVX 1.2.1 was released on November 27, 2018. OpenVX 1.3 was released on October 22, 2019.
|
OpenVX : AMD MIVisionX - for AMD's CPUs and GPUs. Cadence - for Cadence Design Systems's Tensilica Vision DSPs. Imagination - for Imagination Technologies's PowerVR GPUs Synopsys - for Synopsys' DesignWare EV Vision Processors Texas Instruments’ OpenVX (TIOVX) - for Texas Instruments’ Jacinto™ ADAS SoCs. NVIDIA VisionWorks - for CUDA-capable Nvidia GPUs and SoCs. OpenVINO - for Intel's CPUs, GPUs, VPUs, and FPGAs.
|
OpenVX : Official website for OpenVX OpenVX Specification Registry OpenVX Sample Implementation OpenVX Sample Applications OpenVX Tutorial Material
|
Portable Format for Analytics : The Portable Format for Analytics (PFA) is a JSON-based predictive model interchange format conceived and developed by Jim Pivarski. PFA provides a way for analytic applications to describe and exchange predictive models produced by analytics and machine learning algorithms. It supports common models such as logistic regression and decision trees. Version 0.8 was published in 2015. Subsequent versions have been developed by the Data Mining Group. As a predictive model interchange format developed by the Data Mining Group, PFA is complementary to the DMG's XML-based standard called the Predictive Model Markup Language or PMML.
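Because PFA documents are plain JSON, a complete (if trivial) scoring engine can be written down directly. The engine below, in the style of the introductory add-100 example in the PFA documentation, is built and serialized with Python's standard library; inside the action, the bare string "input" is a symbol reference to the engine's input value:

```python
import json

# A minimal, complete PFA scoring engine: takes a double, returns the
# double plus 100. Expressions are JSON objects whose key names the
# function being called and whose value lists the arguments.
pfa_doc = {
    "input": "double",
    "output": "double",
    "action": [{"+": ["input", 100]}],
}

serialized = json.dumps(pfa_doc)
```

Any PFA-conformant host (Hadrian, Titus, and so on, described below) can load such a document and execute it, which is the portability the format is named for.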
|
Portable Format for Analytics : The Data Mining Group is a consortium managed by the Center for Computational Science Research, Inc., a nonprofit founded in 2008.
|
Portable Format for Analytics : Two example scoring engines, shown in YAML form in the DMG documentation, illustrate PFA's expression language: one reverses an input array of doubles, and the other implements bubble sort. Each declares its input and output types as arrays of doubles and builds its action from "let" bindings, "while" loops, "set" assignments, and (for the sort) an "if" comparison that swaps adjacent elements.
|
Portable Format for Analytics : Hadrian (Java/Scala/JVM) - Hadrian is a complete implementation of PFA in Scala, which can be accessed through any JVM language, principally Java. It focuses on model deployment, so it is flexible (can run in restricted environments) and fast. Titus (Python 2.x) - Titus is a complete, independent implementation of PFA in pure Python. It focuses on model development, so it includes model producers and PFA manipulation tools in addition to runtime execution. Currently, it works for Python 2. Titus 2 (Python 3.x) - Titus 2 is a fork of Titus which supports PFA implementation for Python 3. Aurelius (R) - Aurelius is a toolkit for generating PFA in the R programming language. It focuses on porting models to PFA from their R equivalents. To validate or execute scoring engines, Aurelius sends them to Titus through rPython (so both must be installed). Antinous (Model development in Jython) - Antinous is a model-producer plugin for Hadrian that allows Jython code to be executed anywhere a PFA scoring engine would go. It also has a library of model producing algorithms.
|
Portable Format for Analytics : Official website pfa on GitHub PFA 0.8.1 Specification PFA Document Structure Python Multi-User Rest-Api Server for Deploying Portable Format For Analytics Titus 2 : Portable Format for Analytics (PFA) implementation for Python 3
|
Predictive Model Markup Language : The Predictive Model Markup Language (PMML) is an XML-based predictive model interchange format conceived by Robert Lee Grossman, then the director of the National Center for Data Mining at the University of Illinois at Chicago. PMML provides a way for analytic applications to describe and exchange predictive models produced by data mining and machine learning algorithms. It supports common models such as logistic regression and feedforward neural networks. Version 0.9 was published in 1998. Subsequent versions have been developed by the Data Mining Group. Since PMML is an XML-based standard, the specification comes in the form of an XML schema. PMML itself is a mature standard, with over 30 organizations having announced products supporting PMML.
|
Predictive Model Markup Language : A PMML file can be described by the following components: Header: contains general information about the PMML document, such as copyright information for the model, its description, and information about the application used to generate the model such as name and version. It also contains an attribute for a timestamp which can be used to specify the date of model creation. Data Dictionary: contains definitions for all the possible fields used by the model. It is here that a field is defined as continuous, categorical, or ordinal (attribute optype). Depending on this definition, the appropriate value ranges are then defined as well as the data type (such as string or double). Data Transformations: transformations allow for the mapping of user data into a more desirable form to be used by the mining model. PMML defines several kinds of simple data transformations. Normalization: map values to numbers; the input can be continuous or discrete. Discretization: map continuous values to discrete values. Value mapping: map discrete values to discrete values. Functions (custom and built-in): derive a value by applying a function to one or more parameters. Aggregation: used to summarize or collect groups of values. Model: contains the definition of the data mining model. For example, a multi-layered feedforward neural network is represented in PMML by a "NeuralNetwork" element which contains attributes such as: Model Name (attribute modelName) Function Name (attribute functionName) Algorithm Name (attribute algorithmName) Activation Function (attribute activationFunction) Number of Layers (attribute numberOfLayers) This information is then followed by three kinds of neural layers which specify the architecture of the neural network model being represented in the PMML document. These elements are NeuralInputs, NeuralLayer, and NeuralOutputs. 
Besides neural networks, PMML allows for the representation of many other types of models including support vector machines, association rules, Naive Bayes classifiers, clustering models, text models, decision trees, and different regression models. Mining Schema: a list of all fields used in the model. This can be a subset of the fields as defined in the data dictionary. It contains specific information about each field, such as: Name (attribute name): must refer to a field in the data dictionary Usage type (attribute usageType): defines the way a field is to be used in the model. Typical values are: active, predicted, and supplementary. Predicted fields are those whose values are predicted by the model. Outlier Treatment (attribute outliers): defines the outlier treatment to be used. In PMML, outliers can be treated as missing values, as extreme values (based on the definition of high and low values for a particular field), or as is. Missing Value Replacement Policy (attribute missingValueReplacement): if this attribute is specified then a missing value is automatically replaced by the given value. Missing Value Treatment (attribute missingValueTreatment): indicates how the missing value replacement was derived (e.g. as value, mean or median). Targets: allows for post-processing of the predicted value, in the form of scaling if the output of the model is continuous. Targets can also be used for classification tasks. In this case, the attribute priorProbability specifies a default probability for the corresponding target category. It is used if the prediction logic itself did not produce a result. This can happen, e.g., if an input value is missing and there is no other method for treating missing values. Output: this element can be used to name all the desired output fields expected from the model. 
These are features of the predicted field and so are typically the predicted value itself, the probability, cluster affinity (for clustering models), standard error, etc. PMML 4.1 extended Output to allow for generic post-processing of model outputs. In PMML 4.1, all the built-in and custom functions that were originally available only for pre-processing became available for post-processing too.
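The component layout above can be made concrete by assembling a schematic PMML skeleton with Python's standard library. Element and attribute names below follow the PMML schema, but the document is deliberately simplified and is not meant to validate against the full XSD:

```python
import xml.etree.ElementTree as ET

# Schematic PMML skeleton showing the components described above:
# Header, Data Dictionary, a model element, and its Mining Schema.
pmml = ET.Element("PMML", version="4.4")
header = ET.SubElement(pmml, "Header", description="toy example")
ET.SubElement(header, "Application", name="hand-written", version="1.0")

dd = ET.SubElement(pmml, "DataDictionary", numberOfFields="2")
ET.SubElement(dd, "DataField", name="x", optype="continuous", dataType="double")
ET.SubElement(dd, "DataField", name="y", optype="continuous", dataType="double")

model = ET.SubElement(pmml, "RegressionModel",
                      modelName="toy", functionName="regression")
schema = ET.SubElement(model, "MiningSchema")
ET.SubElement(schema, "MiningField", name="x", usageType="active")
ET.SubElement(schema, "MiningField", name="y", usageType="predicted")

xml_text = ET.tostring(pmml, encoding="unicode")
```

Note how the Mining Schema refers back, by name, to fields declared in the Data Dictionary; a conforming consumer rejects a document whose mining fields are undeclared.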
|
Predictive Model Markup Language : PMML 4.0 was released on June 16, 2009. Examples of new features included: Improved Pre-Processing Capabilities: Additions to built-in functions include a range of Boolean operations and an If-Then-Else function. Time Series Models: New exponential smoothing models, plus placeholders for ARIMA, Seasonal Trend Decomposition, and Spectral density estimation, which were to be supported in the near future. Model Explanation: Saving of evaluation and model performance measures to the PMML file itself. Multiple Models: Capabilities for model composition, ensembles, and segmentation (e.g., combining of regression and decision trees). Extensions of Existing Elements: Addition of multi-class classification for Support Vector Machines, improved representation for Association Rules, and the addition of Cox Regression Models. PMML 4.1 was released on December 31, 2011. New features included: New model elements for representing Scorecards, k-Nearest Neighbors (KNN) and Baseline Models. Simplification of multiple models. In PMML 4.1, the same element is used to represent model segmentation, ensemble, and chaining. Overall definition of field scope and field names. A new attribute that identifies for each model element whether the model is ready for production deployment. Enhanced post-processing capabilities (via the Output element). PMML 4.2 was released on February 28, 2014. New features include: Transformations: New elements for implementing text mining New built-in functions for implementing regular expressions: matches, concat, and replace Simplified outputs for post-processing Enhancements to Scorecard and Naive Bayes model elements PMML 4.3 was released on August 23, 2016. New features include: New Model Types: Gaussian Process Bayesian Network New built-in functions Usage clarifications Documentation improvements Version 4.4 was released in November 2019.
|
Predictive Model Markup Language : The Data Mining Group is a consortium managed by the Center for Computational Science Research, Inc., a nonprofit founded in 2008. The Data Mining Group also developed a standard called Portable Format for Analytics, or PFA, which is complementary to PMML.
|
Predictive Model Markup Language : Data Pre-processing in PMML and ADAPA - A Primer Video of Alex Guazzelli's PMML presentation for the ACM Data Mining Group (hosted by LinkedIn) PMML 3.2 Specification PMML 4.0 Specification PMML 4.1 Specification PMML 4.2.1 Specification PMML 4.4 Specification Representing predictive solutions in PMML: Move from raw data to predictions - Article published on the IBM developerWorks website. Predictive analytics in healthcare: The importance of open standards - Article published on the IBM developerWorks website.
|
Quantum Artificial Intelligence Lab : The Quantum Artificial Intelligence Lab (also called the Quantum AI Lab or QuAIL) is a joint initiative of NASA, Universities Space Research Association, and Google (specifically, Google Research) whose goal is to pioneer research on how quantum computing might help with machine learning and other difficult computer science problems. The lab is hosted at NASA's Ames Research Center.
|
Quantum Artificial Intelligence Lab : The Quantum AI Lab was announced by Google Research in a blog post on May 16, 2013. At the time of launch, the Lab was using the most advanced commercially available quantum computer, D-Wave Two from D-Wave Systems. On October 10, 2013, Google released a short film describing the current state of the Quantum AI Lab. On October 18, 2013, Google announced that it had incorporated quantum physics into Minecraft. In January 2014, Google reported results comparing the performance of the D-Wave Two in the lab with that of classical computers. The results were ambiguous and provoked heated discussion on the Internet. On September 2, 2014, it was announced that the Quantum AI Lab, in partnership with UC Santa Barbara, would be launching an initiative to create quantum information processors based on superconducting electronics. On October 23, 2019, the Quantum AI Lab announced in a paper that it had achieved quantum supremacy.
|
Quantum Artificial Intelligence Lab : On December 9, 2024, Google introduced Willow, describing it as a "state-of-the-art quantum chip". Google claims that the chip takes just five minutes to solve a problem that would take the world's fastest supercomputers ten septillion years, a span longer than the age of the universe. However, experts say Willow is, for now, a largely experimental device; a quantum computer powerful enough to solve a wide range of real-world problems is still years, and billions of dollars, away.
|
Quantum Artificial Intelligence Lab : Artificial intelligence Glossary of artificial intelligence Google Brain Google X
|
Quantum Artificial Intelligence Lab : Official website (NASA) Official website (USRA) Official website (Google) Official website (Google Quantum AI) Wayback Machine archive
|
Revoscalepy : revoscalepy is a machine learning package in Python created by Microsoft. It is available as part of Machine Learning Services in Microsoft SQL Server 2017 and Machine Learning Server 9.2.0 and later. The package contains functions for creating linear models, logistic regression, random forests, decision trees, and boosted decision trees, in addition to some summary functions for inspecting data. Other machine learning algorithms, such as neural networks, are provided in microsoftml, a separate package that is the Python version of MicrosoftML. revoscalepy also contains functions designed to run machine learning algorithms in different compute contexts, including SQL Server, Apache Spark, and Hadoop. In June 2021, Microsoft announced that it would open-source the revoscalepy and RevoScaleR packages, making them freely available under the MIT License.
|
Revoscalepy : Microsoft Machine Learning Services
|
Revoscalepy : Samples for using revoscalepy and microsoftml
|
RevoScaleR : RevoScaleR is a machine learning package in R created by Microsoft. It is available as part of Machine Learning Server, Microsoft R Client, and Machine Learning Services in Microsoft SQL Server 2016. The package contains functions for creating linear models, logistic regression, random forests, decision trees, boosted decision trees, and K-means clustering, in addition to summary functions for inspecting and visualizing data. It has a Python counterpart called revoscalepy. Another closely related package is MicrosoftML, which contains machine learning algorithms that RevoScaleR lacks, such as neural networks and SVM. In June 2021, Microsoft announced it would open-source the RevoScaleR and revoscalepy packages, making them freely available under the MIT License.
|
RevoScaleR : Many R packages are designed to analyze data that fits in the memory of the machine and usually do not make use of parallel processing. RevoScaleR was designed to address these limitations. Its functions are organized around three main abstractions that let users process amounts of data too large to fit in memory and exploit parallel resources to speed up the analysis.
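The out-of-memory strategy described above amounts to accumulating statistics one chunk at a time, so the full dataset never has to be resident in memory. Below is a minimal stdlib sketch of that idea (written in Python for brevity, since RevoScaleR itself is an R package; the chunk source and function names are illustrative, not RevoScaleR's API):

```python
# Stdlib sketch of chunk-wise (external-memory) computation: a mean is
# accumulated over chunks, so only one chunk is in memory at a time.

def chunked_mean(chunks):
    """Mean over an iterable of chunks (lists of numbers)."""
    total, count = 0.0, 0
    for chunk in chunks:  # each chunk could come from disk or a database
        total += sum(chunk)
        count += len(chunk)
    return total / count

def stream(data, chunk_size):
    """Yield data in fixed-size chunks, simulating out-of-core reads."""
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

if __name__ == "__main__":
    data = list(range(1, 101))            # 1..100
    print(chunked_mean(stream(data, 7)))  # → 50.5
```

The same pattern generalizes to any statistic with a decomposable update (sums, counts, cross-products), which is why chunk-wise algorithms can also fit linear models and trees without loading the whole dataset.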
|
RevoScaleR : The package is mostly meant to be used with a SQL Server instance or another remote machine. To fully leverage the abstractions it uses to process large datasets, one needs a remote server and a non-Express edition of the package. It cannot be installed simply by running install.packages("RevoScaleR") as most open-source R packages can; it is available only through Microsoft R Client (a distribution of R for data science), Microsoft Machine Learning Server (stand-alone, with no SQL Server attached), or Microsoft Machine Learning Services (a SQL Server service). However, one can still use the analytics functions in the free Express version of the package.
|
RevoScaleR : Microsoft Machine Learning Services
|
RevoScaleR : Samples for using revoscalepy and microsoftml
|
Sense Networks : Sense Networks is a New York City-based company focused on applications that analyze big data from mobile phones, carrier networks, and taxicabs, particularly by using machine learning technology to make sense of large amounts of location (latitude/longitude) data. In 2009, Sense was named one of "The 25 Most Intriguing Startups in the World" by Bloomberg Businessweek and was called "The Next Google" on the cover of Newsweek. In 2014, Sense Networks was acquired by YP, "the local search and advertising company owned by Cerberus Capital Management and AT&T." It was subsequently sold to Verve in 2017.
|
Sense Networks : Sense Networks was founded by Greg Skibiski in February 2006 (2003?) near his home in Northampton, Massachusetts. After establishing an office in NoHo, New York City, near Silicon Alley, Skibiski recruited Alex Pentland, Director of Human Dynamics Research and former Academic Head of the MIT Media Lab; Tony Jebara, Associate Professor and Head of the Machine Learning Laboratory at Columbia University; and Christine Lemke, all of whom later became co-founders. Sense Networks' investors include Intel Capital, Javelin Venture Partners, and Kenan Altunis. Skibiski was pushed out by lead investor Intel Capital in November 2009, following the company's B round of financing; the same week, the company won the Emerging Communications Conference "Company to Watch" Award. The company holds three published patent applications for analyzing sensor data streams: System and Method of Performing Location Analytics (US 20090307263), Comparing Spatial-Temporal Trails in Location Analytics (US 20100079336), and Anomaly Detection in Sensor Analytics (US 20100082301). In 2014 the company was acquired by YP (Yellow Pages), a marketing company owned by AT&T and Cerberus Capital Management.
|
Sense Networks : Citysense, a consumer application that shows hotspots of human activity in real time from mobile phone location and taxicab GPS data, was named by ReadWriteWeb (in The New York Times) one of the "Top 10 Internet of Things Products of 2009". Cabsense, a consumer application that shows the best place to catch a New York City taxicab based on GPS data from the vehicles, was launched in March 2010. The Macrosense platform lets mobile application providers and mobile phone carriers analyze billions of customer location data points for predictive analytics in advertising and churn-management applications.
|
Sense Networks : The company allows users to opt out of its service through its website, and users may monitor their profile through the application. The company does not collect identifiable data such as phone numbers or names; instead, it uses data received from cellphones to construct anonymous profiles of consumers, which may then be sold to third parties. The company's privacy and data-ownership policies are based on The New Deal on Data, as advocated by Alex "Sandy" Pentland, head of the Human Dynamics group at MIT.
|
Sense Networks : Geosocial Networking Location-based service Location awareness
|