AlphaDev : Understanding DeepMind's AlphaDev Breakthrough in Optimizing Sorting Algorithms
|
AlphaGo : AlphaGo is a computer program that plays the board game Go. It was developed by the London-based DeepMind Technologies, an acquired subsidiary of Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master. After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero, which learns without being taught the rules. AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves, guided by knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) through extensive training, both from human and computer play. A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration. In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap. Although it lost to Lee Sedol in the fourth game, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. The lead-up and the challenge match with Lee Sedol were documented in a documentary film, also titled AlphaGo, directed by Greg Kohs. The win by AlphaGo was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016. At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number-one-ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas. The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successor AlphaZero was perceived as the world's top player in Go by the end of the 2010s.
|
AlphaGo : Go is considered much more difficult for computers to win than other games such as chess, because its strategic and aesthetic nature makes it hard to directly construct an evaluation function, and its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search. Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, the strongest Go programs using artificial intelligence techniques only reached about amateur 5-dan level, and still could not beat a professional Go player without a handicap. In 2012, the software program Zen, running on a four PC cluster, beat Masaki Takemiya (9p) twice at five- and four-stone handicaps. In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap. According to DeepMind's David Silver, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning can compete at Go. AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen, AlphaGo running on a single computer won all but one. In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.
|
AlphaGo : An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode, with two seconds of thinking time given to each move; higher Elo ratings were achieved in configurations given more time per move. In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol. At the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in the Summit was AlphaGo Master, and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was in turn three stones stronger than AlphaGo Lee.
|
AlphaGo : As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network", both implemented using deep neural network technology. A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks. The networks are convolutional neural networks with 12 layers, trained by reinforcement learning. The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play. To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%.
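The interplay described above (a policy network proposing moves, a value network evaluating positions, and a tree search arbitrating between them, with a resignation check) can be illustrated with a short sketch. This is a minimal illustration in the spirit of the published descriptions, not DeepMind's implementation: the Game, policy_net, and value_net interfaces are assumed, and the constant C_PUCT is illustrative (the 20% resignation threshold follows the figure quoted above).

import math

# Minimal sketch of policy/value-guided Monte Carlo tree search in the
# style described above. Not DeepMind's code: Game, policy_net, and
# value_net are assumed interfaces, and C_PUCT is an illustrative value.
C_PUCT = 1.5              # exploration constant (assumed value)
RESIGN_THRESHOLD = 0.20   # resign below 20% estimated win probability

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):                  # mean value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    # PUCT rule: argmax_a Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
    total = sum(child.visits for child in node.children.values())
    return max(node.children.items(),
               key=lambda kv: kv[1].q() + C_PUCT * kv[1].prior
                              * math.sqrt(total) / (1 + kv[1].visits))

def simulate(game, root, policy_net, value_net):
    # One simulation: descend the tree, expand the leaf with policy-network
    # priors, evaluate it with the value network, and back the value up.
    # (A real implementation would search on a copy of the game state.)
    node, path = root, [root]
    while node.children:
        move, node = select_child(node)
        game.play(move)
        path.append(node)
    for move, p in policy_net(game.state()):   # expand the leaf
        node.children[move] = Node(prior=p)
    value = value_net(game.state())            # in (-1, +1), current player
    for n in reversed(path):                   # back up, alternating sign
        n.visits += 1
        n.value_sum += value
        value = -value

def choose_move(root):
    # After many simulations: resign if the estimated win probability has
    # fallen below the threshold, otherwise play the most-visited move.
    win_prob = (root.q() + 1) / 2              # map (-1, +1) to (0, 1)
    if win_prob < RESIGN_THRESHOLD:
        return "resign"
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]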
|
AlphaGo : Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative". AlphaGo's playing style strongly favours a greater probability of winning by fewer points over a lesser probability of winning by more points. Its strategy of maximising its probability of winning is distinct from the human tendency to maximise territorial gains, and explains some of its odd-looking moves. It plays many opening moves that have never or seldom been made by humans. It likes to use shoulder hits, especially if the opponent is over-concentrated.
|
AlphaGo : Facebook has also been working on its own Go-playing system, Darkforest, likewise based on combining machine learning and Monte Carlo tree search. Although a strong player against other computer Go programs, as of early 2016 it had not yet defeated a professional human player. Darkforest has lost to CrazyStone and Zen, and is estimated to be of similar strength to those programs. DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2–1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan. A 2018 paper in Nature cited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules. Systems consisting of Monte Carlo tree search guided by neural networks have since been explored for a wide array of applications.
|
AlphaGo : AlphaGo Master (white) v. Tang Weixing (31 December 2016), AlphaGo won by resignation. White 36 was widely praised.
|
AlphaGo : The documentary film AlphaGo raised hopes that Lee Sedol and Fan Hui would benefit from their experience of playing AlphaGo, but as of May 2018 their ratings were little changed; Lee Sedol was ranked 11th in the world, and Fan Hui 545th. On 19 November 2019, Lee announced his retirement from professional play, arguing that he could never be the top overall player of Go due to the increasing dominance of AI, referring to such systems as "an entity that cannot be defeated".
|
AlphaGo Zero : AlphaGo Zero is a version of DeepMind's Go software AlphaGo. AlphaGo's team published an article in Nature in October 2017 introducing AlphaGo Zero, a version created without using data from human games, and stronger than any previous version. By playing games against itself, AlphaGo Zero: surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0; reached the level of AlphaGo Master in 21 days; and exceeded all previous versions in 40 days. Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills, as expert data is "often expensive, unreliable, or simply unavailable." Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge". Furthermore, AlphaGo Zero performed better than standard deep reinforcement learning models (such as Deep Q-Network implementations) due to its integration of Monte Carlo tree search. David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to have generalized AI algorithms by removing the need to learn from humans. Google later developed AlphaZero, a generalized version of AlphaGo Zero that could play chess and Shōgi in addition to Go. In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee on an Elo scale. AlphaZero also defeated a top chess program (Stockfish) and a top Shōgi program (Elmo).
|
AlphaGo Zero : The network in AlphaGo Zero is a ResNet with two heads. The stem of the network takes as input a 17×19×19 tensor representation of the Go board: 8 channels encode the positions of the current player's stones over the last eight time steps (1 if there is a stone, 0 otherwise; time steps before the beginning of the game are all 0); 8 channels encode the positions of the other player's stones over the last eight time steps; and 1 channel is all 1 if black is to move, and all 0 otherwise. The body is a ResNet with either 20 or 40 residual blocks and 256 channels. There are two heads, a policy head and a value head. The policy head outputs a logit array of size 19 × 19 + 1, representing the logit of making a move at each of the points plus the logit of passing. The value head outputs a number in the range (−1, +1), representing the expected score for the current player: −1 represents the current player losing, and +1 winning.
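The shapes described here map directly onto a two-headed network. Below is a minimal sketch assuming PyTorch, with the residual-block count reduced to four for brevity (the paper uses 20 or 40); the class and variable names are ours, not DeepMind's.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # One 256-channel residual block (3x3 convolutions, skip connection).
    def __init__(self, ch=256):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return torch.relu(x + y)

class TwoHeadedNet(nn.Module):
    # Stem consumes the 17x19x19 board tensor described above; the body is
    # a stack of residual blocks (20 or 40 in the paper, 4 here for
    # brevity); the two heads produce move logits and a position value.
    def __init__(self, blocks=4, ch=256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(17, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU())
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(blocks)])
        # Policy head: 19*19 + 1 logits (board points plus the pass move).
        self.policy = nn.Sequential(
            nn.Conv2d(ch, 2, 1), nn.Flatten(),
            nn.Linear(2 * 19 * 19, 19 * 19 + 1))
        # Value head: one number squashed into (-1, +1) by tanh.
        self.value = nn.Sequential(
            nn.Conv2d(ch, 1, 1), nn.Flatten(),
            nn.Linear(19 * 19, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh())

    def forward(self, board):                    # board: (batch, 17, 19, 19)
        x = self.body(self.stem(board))
        return self.policy(x), self.value(x)

net = TwoHeadedNet()
logits, value = net(torch.zeros(1, 17, 19, 19))  # shapes (1, 362), (1, 1)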
|
AlphaGo Zero : AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only four TPUs were used for inference. The neural network initially knew nothing about Go beyond the rules. Unlike earlier versions of AlphaGo, Zero only perceived the board's stones, rather than having some rare human-programmed edge cases to help recognize unusual Go board positions. The AI engaged in reinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome. In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession. It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level. Training cost 3×10²³ FLOPs, ten times that of AlphaZero. For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run. DeepMind submitted its initial findings in a paper to Nature in April 2017, which was then published in October 2017.
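The self-play loop just described can be summarized schematically, assuming the two-headed network from the previous sketch; self_play_game and sample_batch are placeholder helpers, and the loss (value regression plus policy cross-entropy) follows the published description only at a sketch level.

import torch

# Schematic AlphaGo Zero-style reinforcement-learning loop: generate games
# by self-play with the current network, then take gradient steps on
# (state, search policy, outcome) examples. The helper functions are
# assumed placeholders, not DeepMind's pipeline.
def train(net, optimizer, replay_buffer, iterations):
    for step in range(iterations):
        # 1. Self-play: MCTS guided by `net` picks every move; record each
        #    position, the search's visit distribution pi, and outcome z.
        for state, pi, z in self_play_game(net):
            replay_buffer.append((state, pi, z))

        # 2. Train: the value head regresses toward z (+1/-1), and the
        #    policy head is pushed toward the search probabilities pi.
        states, pis, zs = sample_batch(replay_buffer)
        logits, values = net(states)
        value_loss = ((values.squeeze(1) - zs) ** 2).mean()
        policy_loss = -(pis * torch.log_softmax(logits, dim=1)).sum(1).mean()
        loss = value_loss + policy_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()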
|
AlphaGo Zero : The hardware cost for a single AlphaGo Zero system in 2017, including the four TPUs, has been quoted as around $25 million.
|
AlphaGo Zero : According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such as protein folding (see AlphaFold) or accurately simulating chemical reactions. AlphaGo's techniques are probably less useful in domains that are difficult to simulate, such as learning how to drive a car. DeepMind stated in October 2017 that it had already started active work on attempting to use AlphaGo Zero technology for protein folding, and stated it would soon publish new findings.
|
AlphaGo Zero : AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo. Oren Etzioni of the Allen Institute for Artificial Intelligence called AlphaGo Zero "a very impressive technical result" in "both their ability to do it—and their ability to train the system in 40 days, on four TPUs". The Guardian called it a "major breakthrough for artificial intelligence", citing Eleni Vasilaki of Sheffield University and Tom Mitchell of Carnegie Mellon University, who called it an impressive feat and an "outstanding engineering accomplishment" respectively. Mark Pesce of the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory". Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains". In response to the reports, South Korean Go professional Lee Sedol said, "The previous version of AlphaGo wasn’t perfect, and I believe that’s why AlphaGo Zero was made." On the potential for AlphaGo's development, Lee said he would have to wait and see, but added that it would affect young Go players. Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out of AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo's playing style. "At first, it was hard to understand and I almost felt like I was playing against an alien. However, having had a great amount of experience, I’ve become used to it," Mok said. "We are now past the point where we debate the gap between the capability of AlphaGo and humans. It’s now between computers." Mok has reportedly already begun analyzing the playing style of AlphaGo Zero along with players from the national team. "Though having watched only a few matches, we received the impression that AlphaGo Zero plays more like a human than its predecessors," Mok said. Chinese Go professional Ke Jie commented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest. Humans seem redundant in front of its self-improvement."
|
AlphaGo Zero : On 5 December 2017, the DeepMind team released a preprint on arXiv introducing AlphaZero, a program that generalizes AlphaGo Zero's approach and which achieved within 24 hours a superhuman level of play in chess, shogi, and Go, defeating world-champion programs Stockfish, Elmo, and the 3-day version of AlphaGo Zero in each case. AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go. Differences between AZ and AGZ include: AZ has hard-coded rules for setting search hyperparameters; the neural network is updated continually; and chess (unlike Go) can end in a tie, so AZ can take into account the possibility of a tie game. An open source program, Leela Zero, based on the ideas from the AlphaGo papers, is available. It uses a GPU instead of the TPUs that recent versions of AlphaGo rely on.
|
AlphaZero : AlphaZero is a computer program developed by artificial intelligence research company DeepMind to master the games of chess, shogi and Go. This algorithm uses an approach similar to AlphaGo Zero. On December 5, 2017, the DeepMind team released a preprint paper introducing AlphaZero, which within 24 hours of training achieved a superhuman level of play in all three games, defeating the world-champion programs Stockfish (chess), Elmo (shogi), and the three-day version of AlphaGo Zero (Go). In each case it made use of custom tensor processing units (TPUs) that the Google programs were optimized to use. AlphaZero was trained solely via self-play using 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train the neural networks, all in parallel, with no access to opening books or endgame tables. After four hours of training, DeepMind estimated AlphaZero was playing chess at a higher Elo rating than Stockfish 8; after nine hours of training, the algorithm defeated Stockfish 8 in a time-controlled 100-game tournament (28 wins, 0 losses, and 72 draws). The trained algorithm played on a single machine with four TPUs. DeepMind's paper on AlphaZero was published in the journal Science on 7 December 2018. While the actual AlphaZero program has not been released to the public, the algorithm described in the paper has been implemented in publicly available software. In 2019, DeepMind published a new paper detailing MuZero, a new algorithm able to generalize AlphaZero's work, playing both Atari and board games without knowledge of the rules or representations of the game.
|
AlphaZero : AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go. Differences between AZ and AGZ include: AZ has hard-coded rules for setting search hyperparameters; the neural network is updated continually; AZ doesn't use symmetries, unlike AGZ; and chess or shogi can end in a draw, unlike Go, so AlphaZero takes into account the possibility of a drawn game.
|
AlphaZero : In terms of Monte Carlo tree search, AlphaZero searches just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for Elmo. AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations.
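The selectivity comes from the move-selection rule used during search. As described in the AlphaZero family of papers, each simulation descends the tree by choosing, at state s, the action

a_t = \arg\max_a \left[ Q(s,a) + c_{\mathrm{puct}} \, P(s,a) \, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} \right]

where Q(s,a) is the mean evaluation of a move so far, P(s,a) is the network's prior probability for it, and N counts visits. The prior term concentrates simulations on moves the network already considers promising, which is why far fewer position evaluations are needed than in brute-force engines (the exact constants and refinements vary between the papers).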
|
AlphaZero : AlphaZero was trained by playing against itself many times, using 5,000 first-generation TPUs to generate the games and 64 second-generation TPUs to train the neural networks. Training took several days, totaling about 41 TPU-years, and cost about 3×10²² FLOPs. In parallel, the in-training AlphaZero was periodically matched against its benchmark (Stockfish, Elmo, or AlphaGo Zero) in brief one-second-per-move games to determine how well the training was progressing. DeepMind judged that AlphaZero's performance exceeded the benchmark after around four hours of training for Stockfish, two hours for Elmo, and eight hours for AlphaGo Zero.
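As a rough consistency check of these figures (our arithmetic, not from the paper):

# Back-of-the-envelope check of the training figures quoted above.
tpus = 5000 + 64                # game-generation TPUs + training TPUs
tpu_years = 41
days = tpu_years * 365 / tpus   # wall-clock days if all ran in parallel
print(f"{days:.1f} days")       # ~3.0 days, consistent with "several days"

The 3×10²² FLOPs figure is also one tenth of the 3×10²³ quoted for AlphaGo Zero earlier, matching that section's "ten times" comparison.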
|
AlphaZero : DeepMind addressed many of the criticisms in their final version of the paper, published in December 2018 in Science. They further clarified that AlphaZero was not running on a supercomputer; it was trained using 5,000 tensor processing units (TPUs), but only ran on four TPUs and a 44-core CPU in its matches.
|
Cortica : Headquartered in Tel Aviv, Cortica utilizes unsupervised learning methods to recognize and analyze digital images and video. The technology developed by the Cortica team is based on research into the function of the human brain.
|
Cortica : Cortica was founded in 2007 by Igal Raichelgauz, Karina Odinaev and Yehoshua Zeevi. Together, the founders developed the company's core technology while at Technion – Israel Institute of Technology. By combining discoveries in neuroscience with developments in computer programming, the team created technology able to interpret large amounts of visual data with increased accuracy. This technology, called Image2Text, is based on the founders' work in digitally replicating cortical neural networks' ability to identify complex patterns within massive quantities of ambiguous and noisy data. Cortica's offerings have applications in the automotive, media, smart-city and medical industries. Industry experts suggest that the self-driving automotive industry alone will be worth upwards of $7 trillion, while each connected car is expected to generate 4,000 GB of data per day. Beyond that, industry analysts expect the proliferation of surveillance cameras to continue, leading to an expected 2,500 petabytes of data generated daily by new surveillance cameras. Cortica operates in these high-scale industries. The company employs professionals from many domains, including AI researchers as well as veterans of intelligence units within the Israel Defense Forces.
|
Cortica : In 2006, founders Raichelgauz, Odinaev, and Zeevi presented their findings at the 28th IEEE EMBS Annual International Conference in New York in a paper titled "Natural Signal Classification by Neural Cliques and Phase-Locked Attractors". That same year, the team also published "Cliques in Neural Ensembles as Perception Carriers". CB Insights has identified Cortica as the number one patent holder among AI companies. Cortica is researching and developing a machine-learning driving system that can identify objects and pedestrians. Relatedly, Elon Musk was rumored to be partnering with Cortica for his electric car company, Tesla; however, Tesla denied this, stating that Musk did not discuss a collaboration with the artificial intelligence firm Cortica.
|
Cortica : Cortica raised $7 million in its Series A funding round, announced in August 2012. Investors included Horizons Ventures (the investment firm of Hong Kong billionaire Li Ka-shing) and Ynon Kreiz, the former chairman and CEO of the Endemol Group. In May 2013, it was announced that Cortica had raised $1.5 million from Russian firm Mail.ru Group. It later transpired that this was part of Cortica's Series B funding round of $6.4 million, announced in June 2013. The round was led by Horizons Ventures, with participation from Mail.ru Group and other angel investors. In its fourth funding round, Cortica raised $20 million, bringing total investment to $38 million. According to a report in TheMarker, Israel's leading economic daily, the fourth round was led by a strategic Chinese investor who will likely help the company expand into the Asian market.
|
Cortica : GigaOm listed Cortica as one of the top deep learning startups in a November 2013 article surveying the field, along with AlchemyAPI, Ersatz, and Semantria. Business Insider ranked Cortica as one of the coolest tech companies in Israel. CB Insights has identified Cortica as the top patent-holding AI company. In 2017, several leading automotive media outlets covered the launch of Cortica's automotive business unit.
|
Darkforest : Darkforest is a computer Go program developed by Meta Platforms, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search (MCTS). The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With this update, the system is known as Darkfmcts3. Darkforest is of similar strength to programs like CrazyStone and Zen. It was tested against a professional human player at the 2016 UEC cup. Google's AlphaGo program won against a professional player in October 2015 using a similar combination of techniques. Darkforest is named after Liu Cixin's science fiction novel The Dark Forest.
|
Darkforest : Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go’s high branching factor makes traditional search techniques ineffective, even on cutting-edge hardware, and Go’s evaluation function could change drastically with one stone change. However, by using a Deep Convolutional Neural Network designed for long-term predictions, Darkforest has been able to substantially improve the win rate for bots over more traditional Monte Carlo Tree Search based approaches.
|
Darkforest : Darkforest uses a neural network to sort through the roughly 10^100 possible board positions and find the most powerful next move. However, neural networks alone cannot match the level of good amateur players or the best search-based Go engines, so Darkfores2 combines the neural network approach with a search-based engine. A database of 250,000 real Go games was used in the development of Darkforest, with 220,000 used as a training set and the rest used to test the neural network's ability to predict the next moves played in the real games. This allows Darkforest to accurately evaluate the global state of the board, but local tactics were still poor. Search-based engines have poor global evaluation, but are good at local tactics. Combining these two approaches is difficult because search-based engines work much faster than neural networks, a problem solved in Darkfores2 by running the processes in parallel with frequent communication between the two.
|
Darkforest : The family of Darkforest computer Go programs is based on convolutional neural networks. The most recent advances, in Darkfmcts3, combine convolutional neural networks with more traditional Monte Carlo tree search. Darkfmcts3 is the most advanced version of Darkforest, combining Facebook's most advanced convolutional neural network architecture from Darkfores2 with a Monte Carlo tree search. Darkfmcts3 relies on a convolutional neural network that predicts the next k moves based on the current state of play. It treats the board as a 19×19 image with multiple channels. Each channel represents a different aspect of board information, depending on the style of play. For standard and extended play, there are 21 and 25 different channels, respectively. In standard play, each player's liberties are represented as six binary channels or planes: the respective plane is true if the player has one, two, or three or more liberties available. Ko (i.e. an illegal move) is represented as one binary plane. Stone placement for each opponent and empty board positions are represented as three binary planes, and the duration since a stone was placed is represented as real numbers on two planes, one for each player. Lastly, the opponent's rank is represented by nine binary planes: if all are true, the player is at 9d level; if eight are true, 8d level; and so forth. Extended play additionally considers the border (a binary plane that is true at the border), a position mask (a real number at each position reflecting distance from the board center, i.e. exp(−0.5 × distance²)), and each player's territory (binary, based on which player a location is closer to). Darkfmcts3 uses a 12-layer full convolutional network with a width of 384 nodes, without weight sharing or pooling. Each convolutional layer is followed by a rectified linear unit, a popular activation function for deep neural networks. A key innovation of Darkfmcts3 compared to previous approaches is that it uses only one softmax function to predict the next move, which reduces the overall number of parameters. Darkfmcts3 was trained on 300 randomly selected games from an empirical dataset representing different game stages, with the learning rate determined by vanilla stochastic gradient descent. Darkfmcts3 synchronously couples a convolutional neural network with a Monte Carlo tree search. Because the convolutional neural network is computationally taxing, the Monte Carlo tree search focuses computation on the more likely game-play trajectories. By running the neural network synchronously with the Monte Carlo tree search, it is possible to guarantee that each node is expanded by the moves predicted by the neural network.
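The plane layout described above can be made concrete with a short sketch. This illustrates the standard 21-channel encoding only; the plane ordering and the game interface (stone_at, liberties, is_ko, moves_since_played) are our assumptions, not Facebook's code.

import numpy as np

def encode_board(game, opponent_rank):
    # Sketch of a Darkforest-style 21 x 19 x 19 input for standard play.
    planes = np.zeros((21, 19, 19), dtype=np.float32)
    for x in range(19):
        for y in range(19):
            stone = game.stone_at(x, y)            # 'us', 'them', or None
            if stone is not None:
                libs = min(game.liberties(x, y), 3)    # 1, 2, or 3+
                base = 0 if stone == 'us' else 3       # 6 liberty planes
                planes[base + libs - 1, x, y] = 1.0
                # 2 real-valued planes: time since the stone was played
                age_plane = 10 if stone == 'us' else 11
                planes[age_plane, x, y] = game.moves_since_played(x, y)
            if game.is_ko(x, y):                   # 1 ko (illegal move) plane
                planes[6, x, y] = 1.0
            # 3 binary planes: our stones, their stones, empty points
            planes[7, x, y] = float(stone == 'us')
            planes[8, x, y] = float(stone == 'them')
            planes[9, x, y] = float(stone is None)
    # 9 binary rank planes: all true for 9d, eight true for 8d, and so on
    for r in range(opponent_rank):
        planes[12 + r, :, :] = 1.0
    return planes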
|
Darkforest : Darkfores2 beats Darkforest, its neural-network-only predecessor, around 90% of the time, and Pachi, one of the best search-based engines, around 95% of the time. Darkforest itself holds a 1–2d level. Darkfores2 achieves a stable 3d level on the KGS Go Server as a ranked bot. With the added Monte Carlo tree search, Darkfmcts3 with 5,000 rollouts beats Pachi with 10,000 rollouts in all 250 games; with 75,000 rollouts it achieves a stable 5d level on the KGS server, on par with state-of-the-art Go AIs (e.g., Zen, DolBaram, CrazyStone); and with 110,000 rollouts, it won third place in the January KGS Go Tournament.
|
DARPA LAGR Program : The Learning Applied to Ground Vehicles (LAGR) program, which ran from 2004 until 2008, had the goal of accelerating progress in autonomous, perception-based, off-road navigation in robotic unmanned ground vehicles (UGVs). LAGR was funded by DARPA, a research agency of the United States Department of Defense.
|
DARPA LAGR Program : While mobile robots had been in existence since the 1960s (e.g. Shakey), progress in creating robots that could navigate on their own, outdoors, off-road, on irregular, obstacle-rich terrain had been slow. In fact, no clear metrics were in place to measure progress. A baseline understanding of off-road capabilities began to emerge with the DARPA PerceptOR program, in which independent research teams fielded robotic vehicles in unrehearsed Government tests that measured average speed and the number of required operator interventions over a fixed course with widely spaced waypoints. These tests exposed the extreme challenges of off-road navigation. While the PerceptOR vehicles were equipped with sensors and algorithms that were state-of-the-art for the beginning of the 21st century, the limited range of their perception technology caused them to become trapped in natural cul-de-sacs. Furthermore, their reliance on pre-scripted behaviors did not allow them to adapt to unexpected circumstances. The overall result was that, except on essentially open terrain with minimal obstacles or along dirt roads, the PerceptOR vehicles were unable to navigate without numerous, repeated operator interventions. The LAGR program was designed to build on the methodology started in PerceptOR while seeking to overcome the technical challenges exposed by the PerceptOR tests.
|
DARPA LAGR Program : The principal goal of LAGR was to accelerate progress in off-road navigation of UGVs. Additional, synergistic goals included (1) establishing a benchmarking methodology for measuring the progress of autonomous robots operating in unstructured environments, (2) advancing machine vision and thus enabling long-range perception, and (3) increasing the number of institutions and individuals able to contribute to forefront UGV research.
|
DARPA LAGR Program : The LAGR program was designed to focus on developing new science for robot perception and control rather than on new hardware. Thus, it was decided to create a fleet of identical, relatively simple robots that would be supplied to the LAGR researchers, who were members of competitive teams, freeing them to concentrate on algorithm development. The teams were each given two robots of the standard design. They developed new software on these robots, and then sent the code to a government test team that then tested that code on Government robots at various test courses. These courses were located throughout the US and were not previously known to the teams. In this way, the code from all teams could be tested in essentially identical circumstances. After an initial startup period, the code development/test cycle was repeated about once every month. The standard robot was designed and built by the Carnegie Mellon University National Robotics Engineering Center (CMU NREC). The vehicles’ computers were preloaded with a modular “Baseline” perception and navigation system that was essentially the same system that CMU NREC had created for the PerceptOR program and was considered to represent the state-of-the-art at the inception of LAGR. The modular nature of the Baseline system allowed the researchers to replace parts of the Baseline code with their own modules and still have a complete working system without having to create an entire navigation system from scratch. Thus, for example, they were able to compare the performance of their own obstacle detection module with that of the Baseline code, while holding everything else fixed. The Baseline code also served as a fixed reference – in any environment and at any time in the program, teams’ code could be compared to the Baseline code. This rapid cycle gave the Government team and the performer teams quick feedback and allowed the Government team to design test courses that challenged the performers in specific perception tasks and whose difficulty was likely to challenge, but not overwhelm, the performers’ current capabilities. Teams were not required to submit new code for every test, but usually did. Despite this leeway, some teams found the rapid test cycle distracting to their long term progress and would have preferred a longer interval between tests.
|
DARPA LAGR Program : Eight teams were selected as performers in Phase I, the first 18 months of LAGR. The teams were from Applied Perception (Principal Investigator [PI] Mark Ollis), Georgia Tech (PI Tucker Balch), Jet Propulsion Laboratory (PI Larry Matthies), Net-Scale Technologies (PI Urs Muller), NIST (PI James Albus), Stanford University (PI Sebastian Thrun), SRI International (PI Robert Bolles), and University of Pennsylvania (PI Daniel Lee). The Stanford team resigned at the end of Phase I to focus its efforts on the DARPA Grand Challenge; it was replaced by a team from the University of Colorado, Boulder (PI Greg Grudic). Also in Phase II, the NIST team suspended its participation in the competition and instead concentrated on assembling the best software elements from each team into a single system. Roger Bostelman became PI of that effort.
|
DARPA LAGR Program : The LAGR vehicle, which was about the size of a supermarket shopping cart, was designed to be simple to control. (A companion DARPA program, Learning Locomotion, addressed complex motor control.) It was battery powered and had two independently driven wheelchair motors in the front and two caster wheels in the rear. When the front wheels were rotated in the same direction, the robot drove forward or in reverse; when they were driven in opposite directions, the robot turned. The roughly $30,000 cost of the LAGR vehicle meant that a fleet could be built and distributed to a number of teams, expanding the field of researchers who had traditionally participated in DARPA robotics programs. The vehicle's top speed of about 3 miles per hour and relatively modest weight of about 100 kg meant that it posed a much-reduced safety hazard compared to vehicles used in previous unmanned-ground-vehicle programs, further reducing the budget required for each team to manage its robot. Nevertheless, the LAGR vehicles were sophisticated machines. Their sensor suite included two pairs of stereo cameras, an accelerometer, a bumper sensor, wheel encoders, and a GPS. The vehicle also had three computers that were user-programmable.
|
DARPA LAGR Program : A cornerstone of the program was the incorporation of learned behaviors in the robots. In addition, the program used passive optical systems to accomplish long-range scene analysis. The difficulty of testing UGV navigation in unstructured, off-road environments made accurate, objective measurement of progress a challenging task. While no absolute measure of performance had been defined in LAGR, the relative comparison of a team's code to the Baseline code on a given course demonstrated whether progress was being made in that environment. By the conclusion of the program, testing showed that many of the performers had attained leaps in performance. In particular, average autonomous speeds were increased by a factor of 3, and useful visual perception was extended to ranges as far as 100 meters. While LAGR did succeed in extending the useful range of visual perception, this was primarily done by either pixel- or patch-based color or texture analysis. Object recognition was not directly addressed. Even though the LAGR vehicle had a WAAS GPS, its position was never determined down to the width of the vehicle, so it was hard for the systems to re-use obstacle maps of areas the robots had previously traversed, since the GPS continually drifted. The drift was especially severe under a forest canopy. A few teams developed visual odometry algorithms that essentially eliminated this drift. LAGR also had the goal of expanding the number of performers and removing the need for large system integration, so that valuable technology nuggets created by small teams could be recognized and then adopted by the larger community. Some teams developed rapid methods for learning with a human teacher: a human could radio-control (RC) operate the robot and give signals specifying "safe" and "non-safe" areas, and the robot could quickly adapt and navigate with the same policy. This was demonstrated when the robot was taught to be aggressive in driving over dead weeds while avoiding bushes, or alternatively taught to be timid and only drive on mowed paths. LAGR was managed in tandem with the DARPA Unmanned Ground Combat Vehicle – PerceptOR Integration (UPI) program. UPI combined advanced perception with a vehicle of extreme mobility. The best stereo algorithms and the visual odometry from LAGR were ported to UPI. In addition, interactions between the LAGR PIs and the UPI team resulted in the incorporation of adaptive technology into the UPI codebase, with a resultant improvement in performance of the UPI "Crusher" robots.
|
DARPA LAGR Program : LAGR was administered under the DARPA Information Processing Technology Office. Larry Jackel conceived the program and was the program manager from 2004 to 2007. Eric Krotkov, Michael Perschbacher, and James Pippine contributed to LAGR conception and management. Charles Sullivan played a major role in LAGR testing. Tom Wagner was the program manager from mid-2007 to the program's conclusion in early 2008.
|
Diffbot : Diffbot is a developer of machine learning and computer vision algorithms and public APIs for extracting data from web pages (web scraping) to create a knowledge base. The company has gained interest for its application of computer vision technology to web pages, wherein it visually parses a web page for important elements and returns them in a structured format. In 2015, Diffbot announced it was working on its version of an automated "Knowledge Graph" by crawling the web and using its automatic web page extraction to build a large database of structured web data. In 2019, Diffbot released its Knowledge Graph, which has since grown to include over two billion entities (corporations, people, articles, products, discussions, and more) and ten trillion "facts". The company's products allow software developers to analyze web home pages and article pages, and extract the "important information" while ignoring elements deemed not core to the primary content. In August 2012, the company released its Page Classifier API, which automatically categorizes web pages into specific "page types". As part of this, Diffbot analyzed 750,000 web pages shared on the social media service Twitter and revealed that photos, followed by articles and videos, are the predominant web media shared on the social network. In September 2020, the company released a Natural Language Processing API for automatically building knowledge graphs from text. The company raised $2 million in funding in May 2012 from investors including Andy Bechtolsheim and Sky Dayton. Diffbot's customers include Adobe, AOL, Cisco, DuckDuckGo, eBay, Instapaper, Microsoft, Onswipe and Springpad.
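A typical call to an extraction API of this kind looks like the following. This is a hypothetical sketch: the endpoint, parameter names, and response fields are illustrative placeholders, not Diffbot's documented interface.

import requests

# Hypothetical article-extraction request; the endpoint and fields are
# placeholders, not Diffbot's real API. Consult vendor documentation.
API_URL = "https://api.example-extractor.com/v3/article"
params = {
    "token": "YOUR_API_TOKEN",                   # account credential
    "url": "https://example.com/some-article",   # page to analyze
}
resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()
data = resp.json()
# The service returns the "important information" as structured fields
# (e.g. title, author, body text), ignoring navigation and page chrome.
print(data.get("title"), data.get("author"))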
|
Direct3D : Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing for hardware acceleration of the entire 3D rendering pipeline or even only partial acceleration. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, and programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, enabling the use of 2D and 3D graphics in interactive media titles. Direct3D contains many commands for 3D computer graphics rendering; however, since version 8, Direct3D has superseded the DirectDraw framework and also taken responsibility for the rendering of 2D graphics. Microsoft strives to continually update Direct3D to support the latest technology available on 3D graphics cards. Direct3D offers full vertex software emulation but no pixel software emulation for features not available in hardware. For example, if software programmed using Direct3D requires pixel shaders and the video card on the user's computer does not support that feature, Direct3D will not emulate it, although it will compute and render the polygons and textures of the 3D models, albeit at a usually degraded quality and performance compared to the hardware equivalent. The API does include a Reference Rasterizer (or REF device), which emulates a generic graphics card in software, although it is too slow for most real-time 3D applications and is typically only used for debugging. A newer real-time software rasterizer, WARP, designed to emulate the complete feature set of Direct3D 10.1, is included with Windows 7 and Windows Vista Service Pack 2 with the Platform Update; its performance is said to be on par with lower-end 3D cards on multi-core CPUs. As part of DirectX, Direct3D is available for Windows 95 and above, and is the base for the vector graphics API on the different versions of Xbox console systems. The Wine compatibility layer, a free software reimplementation of several Windows APIs, includes an implementation of Direct3D. Direct3D's main competitor is Khronos' OpenGL and its follow-on Vulkan. Fahrenheit was an attempt by Microsoft and SGI to unify OpenGL and Direct3D in the 1990s, but it was eventually cancelled.
|
Direct3D : Direct3D 6.0 – Multitexturing
Direct3D 7.0 – Hardware Transformation, Clipping and Lighting (TCL/T&L), DXVA 1.0
Direct3D 8.0 – Pixel Shader 1.0/1.1 & Vertex Shader 1.0/1.1
Direct3D 8.1 – Pixel Shader 1.2/1.3/1.4
Direct3D 9.0 – Shader Model 2.0 (Pixel Shader 2.0 & Vertex Shader 2.0)
Direct3D 9.0a – Shader Model 2.0a (Pixel Shader 2.0a & Vertex Shader 2.0a)
Direct3D 9.0b – Pixel Shader 2.0b, H.264
Direct3D 9.0c – last version supported for Windows 98/ME (early releases) and for Windows 2000/XP (all releases); Shader Model 3.0 (Pixel Shader 3.0 & Vertex Shader 3.0)
Direct3D 9.0L – Windows Vista only; Direct3D 9.0c, Shader Model 3.0, Windows Graphics Foundation 1.0, GPGPU
Direct3D 10.0 – Windows Vista/Windows 7; Shader Model 4.0, Windows Graphics Foundation 2.0, DXVA 2.0
Direct3D 10.1 – Windows Vista SP1/Windows 7; Shader Model 4.1, Windows Graphics Foundation 2.1, DXVA 2.1
Direct3D 11.0 – Windows Vista SP2/Windows 7; Shader Model 5.0, Tessellation, Multithreaded rendering, Compute shaders, implemented by hardware and software running Direct3D 9/10/10.1
Direct3D 11.1 – Windows 8 (partially supported on Windows 7 SP1 also); Stereoscopic 3D Rendering, H.265
Direct3D 11.2 – Windows 8.1; Tiled resources
Direct3D 11.3 – Windows 10
Direct3D 12.0 – Windows 10; low-level rendering API, Shader Model 5.1 and 6.0
Direct3D 12.1 – Windows 10; DirectX Raytracing
Direct3D 12.2 – Windows 10; DirectX 12 Ultimate
|
Direct3D : In 1992, Servan Keondjian, Doug Rabson and Kate Seekings started a company named RenderMorphics, which developed a 3D graphics API named Reality Lab, which was used in medical imaging and CAD software. Two versions of this API were released. Microsoft bought RenderMorphics in February 1995, bringing its staff on board to implement a 3D graphics engine for Windows 95. The first version of Direct3D shipped in DirectX 2.0 (June 2, 1996) and DirectX 3.0 (September 26, 1996). Direct3D initially implemented an "immediate mode" 3D API and layered upon it a "retained mode" 3D API. Both types of API were already offered with the second release of Reality Lab before Direct3D was released. Like other DirectX APIs, such as DirectDraw, both were based on COM. The retained mode API was a scene graph API that attained little adoption. Game developers clamored for more direct control of the hardware's activities than the Direct3D retained mode could provide. Only two games that sold a significant volume, Lego Island and Lego Rock Raiders, were based on the Direct3D retained mode, so Microsoft did not update the retained mode API after DirectX 3.0. For DirectX 2.0 and 3.0, the Direct3D immediate mode used an "execute buffer" programming model that Microsoft hoped hardware vendors would support directly. Execute buffers were intended to be allocated in hardware memory and parsed by the hardware to perform the 3D rendering. They were considered extremely awkward to program at the time, however, hindering adoption of the new API and prompting calls for Microsoft to adopt OpenGL as the official 3D rendering API for games as well as workstation applications. (see OpenGL vs. Direct3D) Rather than adopt OpenGL as a gaming API, Microsoft chose to continue improving Direct3D, not only to be competitive with OpenGL, but to compete more effectively with other proprietary APIs such as 3dfx's Glide. From the beginning, the immediate mode also supported Talisman's tiled rendering with the BeginScene/EndScene methods of the IDirect3DDevice interface.
|
Direct3D : No substantive changes were planned to Direct3D for DirectX 4.0, which was scheduled to ship in late 1996 and then cancelled.
|
Direct3D : In December 1996, a team in Redmond took over development of the Direct3D Immediate Mode, while the London-based RenderMorphics team continued work on the Retained Mode. The Redmond team added the DrawPrimitive API that eliminated the need for applications to construct execute buffers, making Direct3D more closely resemble other immediate mode rendering APIs such as Glide and OpenGL. The first beta of DrawPrimitive shipped in February 1997, and the final version shipped with DirectX 5.0 in August 1997. Besides introducing an easier-to-use immediate mode API, DirectX 5.0 added the SetRenderTarget method that enabled Direct3D devices to write their graphical output to a variety of DirectDraw surfaces.
|
Direct3D : DirectX 6.0 (released in August, 1998) introduced numerous features to cover contemporary hardware (such as multitexture and stencil buffers) as well as optimized geometry pipelines for x87, SSE and 3DNow! and optional texture management to simplify programming. Direct3D 6.0 also included support for features that had been licensed by Microsoft from specific hardware vendors for inclusion in the API, in exchange for the time-to-market advantage to the licensing vendor. S3 texture compression support was one such feature, renamed as DXTC for purposes of inclusion in the API. Another was TriTech's proprietary bump mapping technique. Microsoft included these features in DirectX, then added them to the requirements needed for drivers to get a Windows logo to encourage broad adoption of the features in other vendors' hardware. A minor update to DirectX 6.0 came in the February, 1999 DirectX 6.1 update. Besides adding DirectMusic support for the first time, this release improved support for Intel Pentium III 3D extensions. A confidential memo sent in 1997 shows Microsoft planning to announce full support for Talisman in DirectX 6.0, but the API ended up being cancelled (See the Microsoft Talisman page for details).
|
Direct3D : DirectX 7.0 (released in September, 1999) introduced the .dds texture format and support for transform and lighting hardware acceleration (first available on PC hardware with Nvidia's GeForce 256), as well as the ability to allocate vertex buffers in hardware memory. Hardware vertex buffers represent the first substantive improvement over OpenGL in DirectX history. Direct3D 7.0 also augmented DirectX support for multitexturing hardware, and represents the pinnacle of fixed-function multitexture pipeline features: although powerful, it was so complicated to program that a new programming model was needed to expose the shading capabilities of graphics hardware. Direct3D 7.0 also introduced DXVA features.
|
Direct3D : DirectX 8.0 (released in November, 2000) introduced programmability in the form of vertex and pixel shaders, enabling developers to write code without worrying about superfluous hardware state. The complexity of the shader programs depended on the complexity of the task, and the display driver compiled those shaders to instructions that could be understood by the hardware. Direct3D 8.0 and its programmable shading capabilities were the first major departure from an OpenGL-style fixed-function architecture, where drawing is controlled by a complicated state machine. Direct3D 8.0 also eliminated DirectDraw as a separate API. Direct3D subsumed all remaining DirectDraw API calls still needed for application development, such as Present(), the function used to display rendering results. Direct3D was not considered to be user friendly, but as of DirectX version 8.1, many usability problems were resolved. Direct3D 8 contained many powerful 3D graphics features, such as vertex shaders, pixel shaders, fog, bump mapping and texture mapping.
|
Direct3D : Direct3D 9.0 (released in December 2002) added a new version of the High Level Shader Language, support for floating-point texture formats, Multiple Render Targets (MRT), Multiple-Element Textures, texture lookups in the vertex shader, and stencil buffer techniques.
|
Direct3D : Windows Vista includes a major update to the Direct3D API. Originally called WGF 2.0 (Windows Graphics Foundation 2.0), then DirectX 10 and DirectX Next, Direct3D 10 features an updated shader model 4.0 and optional interruptibility for shader programs. In this model shaders still consist of fixed stages as in previous versions, but all stages support a nearly unified interface, as well as a unified access paradigm for resources such as textures and shader constants. The language itself has been extended to be more expressive, including integer operations, a greatly increased instruction count, and more C-like language constructs. In addition to the previously available vertex and pixel shader stages, the API includes a geometry shader stage that breaks the old model of one vertex in/one vertex out, to allow geometry to be generated from within a shader, thus allowing for complex geometry to be generated entirely by the graphics hardware. Windows XP and earlier are not supported by DirectX 10.0 and above. Furthermore, Direct3D 10 dropped support for the retained mode API which had been a part of Direct3D since the beginning, making Windows Vista incompatible with 3D games that had used the retained mode API as their rendering engine. Unlike prior versions of the API, Direct3D 10 no longer uses "capability bits" (or "caps") to indicate which features are supported on a given graphics device. Instead, it defines a minimum standard of hardware capabilities which must be supported for a display system to be "Direct3D 10 compatible". This is a significant departure, with the goal of streamlining application code by removing capability-checking code and special cases based on the presence or absence of specific capabilities. Because Direct3D 10 hardware was comparatively rare after the initial release of Windows Vista and because of the massive install base of non-Direct3D 10 compatible graphics cards, the first Direct3D 10-compatible games still provide Direct3D 9 render paths. Examples of such titles are games originally written for Direct3D 9 and ported to Direct3D 10 after their release, such as Company of Heroes, or games originally developed for Direct3D 9 with a Direct3D 10 path retrofitted later during their development, such as Hellgate: London or Crysis. The DirectX 10 SDK became available in February 2007.
|
Direct3D : Direct3D 11 was released as part of Windows 7. It was presented at Gamefest 2008 on July 22, 2008 and demonstrated at the Nvision 08 technical conference on August 26, 2008. The Direct3D 11 Technical Preview was included in the November 2008 release of the DirectX SDK. AMD previewed working DirectX 11 hardware at Computex on June 3, 2009, running some DirectX 11 SDK samples. The Direct3D 11 runtime is able to run on Direct3D 9 and 10.x-class hardware and drivers using the concept of "feature levels", expanding on the functionality first introduced in the Direct3D 10.1 runtime. Feature levels allow developers to unify the rendering pipeline under the Direct3D 11 API and make use of API improvements such as better resource management and multithreading even on entry-level cards, though advanced features such as new shader models and rendering stages will only be exposed on up-level hardware. There are three "10 Level 9" profiles which encapsulate various capabilities of popular DirectX 9.0a cards, and Direct3D 10, 10.1, and 11 each have a separate feature level; each upper level is a strict superset of a lower level. Tessellation was earlier considered for Direct3D 10, but was later abandoned. GPUs such as the Radeon R600 feature a tessellation engine that can be used with Direct3D 9/10/10.1 and OpenGL, but it is not compatible with Direct3D 11 (according to Microsoft). Older graphics hardware such as the Radeon 8xxx and GeForce 3/4 had support for another form of tessellation (RT patches, N patches), but those technologies never saw substantial use. As such, their support was dropped from newer hardware. Microsoft has also hinted at other features such as order-independent transparency, which was never exposed by the Direct3D API but was supported almost transparently by early Direct3D hardware such as Videologic's PowerVR line of chips.
|
Direct3D : Direct3D 12 allows a lower level of hardware abstraction than earlier versions, enabling future applications to significantly improve multithreaded scaling and decrease CPU utilization. This is achieved by better matching the Direct3D abstraction layer with the underlying hardware, through new features such as Indirect Drawing, descriptor tables, concise pipeline state objects, and draw call bundles. Reducing driver overhead is the main attraction of Direct3D 12, similarly to AMD's Mantle. In the words of its lead developer Max McMullen, the main goal of Direct3D 12 is to achieve "console-level efficiency" and improved CPU parallelism. Although Nvidia has announced broad support for Direct3D 12, they were also somewhat reserved about the universal appeal of the new API, noting that while game engine developers may be enthusiastic about directly managing GPU resources from their application code, "a lot of [other] folks wouldn't" be happy to have to do that. Some new hardware features are also in Direct3D 12, including Shader Model 5.1, Volume Tiled Resources (Tier 2), Shader Specified Stencil Reference Value, Typed UAV Load, Conservative Rasterization (Tier 1), better collision and culling with Conservative Rasterization, Rasterizer Ordered Views (ROVs), Standard Swizzles, Default Texture Mapping, Swap Chains, swizzled resources and compressed resources, additional blend modes, and programmable blend and efficient order-independent transparency (OIT) with pixel-ordered UAV. Pipeline state objects (PSOs) have evolved from Direct3D 11, and the new concise pipeline states mean that the process has been simplified. DirectX 11 offered flexibility in how its states could be altered, to the detriment of performance. Simplifying the process and unifying the pipelines (e.g. pixel shader states) leads to a more streamlined process, significantly reducing the overheads and allowing the graphics card to draw more calls for each frame. Once created, the PSO is immutable. Root signatures introduce configurations to link command lists to resources required by shaders. They define the layout of resources that shaders will use, and specify what resources will be bound to the pipeline. A graphics command list has both a graphics and a compute root signature, while a compute command list has only a compute root signature. These root signatures are completely independent of each other. While the root signature lays out the types of data for shaders to use, it does not define or map the actual memory or data. Root parameters are one type of entry in a root signature. The actual values of the root parameters that are modified at runtime are called root arguments; this is the data that the shaders read. Within Direct3D 11, commands are sent from the CPU to the GPU one by one, and the GPU works through these commands sequentially. This means that commands are bottlenecked by the speed at which the CPU can send them in a linear fashion. Within DirectX 12 these commands are sent as command lists, containing all the required information within a single package. The GPU is then capable of computing and executing a command list in one single process, without having to wait on any additional information from the CPU. Within these command lists are bundles. Where previously commands were just taken, used, and then forgotten by the GPU, bundles can be reused. This decreases the workload of the GPU and means repeated assets can be used much faster.
While resource binding is fairly convenient for developers in Direct3D 11, its inefficiency means that several modern hardware capabilities are drastically underused. When a game engine needed resources in Direct3D 11, it had to bind the data anew every time, resulting in repeated processes and unnecessary work. In Direct3D 12, descriptor heaps and descriptor tables allow developers to allocate the most frequently used resources in tables, which the GPU can quickly and easily access. This can contribute to better performance than Direct3D 11 on equivalent hardware, but it also entails more work for the developer. Dynamic heaps are also a feature of Direct3D 12. Direct3D 12 features explicit multi-adapter support, allowing explicit control of systems with multiple GPUs; such configurations can be built with graphics adapters from the same hardware vendor or from different vendors. Experimental support of Direct3D 12 for Windows 7 SP1 was released by Microsoft in 2019 via a dedicated NuGet package. Direct3D 12 version 1607 – With the Windows 10 Anniversary Update (version 1607), released on August 2, 2016, the Direct3D 12 runtime was updated to support constructs for explicit multithreading and inter-process communication, allowing developers to take advantage of modern massively parallel GPUs. Other features include updated root signatures version 1.1, as well as support for the HDR10 format and variable refresh rates. Direct3D 12 version 1703 – With the Windows 10 Creators Update (version 1703), released on April 11, 2017, the Direct3D 12 runtime was updated to support Shader Model 6.0 and DXIL; Shader Model 6.0 requires the Windows 10 Anniversary Update (version 1607) and WDDM 2.1. New graphical features are Depth Bounds Testing and Programmable MSAA. Direct3D 12 version 1709 – Direct3D in the Windows 10 Fall Creators Update (version 1709), released on October 17, 2017, includes improved debugging. Direct3D 12 version 1809 – The Windows 10 October 2018 Update (version 1809) brings support for DirectX Raytracing, so GPUs can benefit from its API. Direct3D 12 version 1903 – The Windows 10 May 2019 Update (version 1903) brings support for DirectML and NPUs; DirectML can support both compute shaders and tensor shaders. Direct3D 12 version 2004 – The Windows 10 May 2020 Update (version 2004) brings support for DirectX 12 Ultimate, Mesh and Amplification Shaders, Sampler Feedback, as well as DirectX Raytracing Tier 1.1 and memory allocation improvements. Direct3D 12 version 21H2 – Windows 10 version 21H2 and Windows 11 version 21H2 bring support for DirectStorage.
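As a hedged illustration of the descriptor heaps and tables described above, the C++ sketch below creates a shader-visible Direct3D 12 descriptor heap and shows how a command list points a root-signature descriptor table into it; the device, command list, and heap size are assumptions for this sketch, and error handling is omitted.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Allocate a shader-visible heap with room for 256 CBV/SRV/UAV descriptors,
// so frequently used resources can be described once and reused every frame.
ComPtr<ID3D12DescriptorHeap> CreateResourceHeap(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = 256;  // assumed size for this sketch
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
    return heap;
}

// At draw time, bind the heap once and point a descriptor table into it.
void BindHeap(ID3D12GraphicsCommandList* list, ID3D12DescriptorHeap* heap)
{
    ID3D12DescriptorHeap* heaps[] = { heap };
    list->SetDescriptorHeaps(1, heaps);
    list->SetGraphicsRootDescriptorTable(
        0, heap->GetGPUDescriptorHandleForHeapStart());
}
```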
|
Direct3D : Direct3D is a Microsoft DirectX API subsystem component. The aim of Direct3D is to abstract the communication between a graphics application and the graphics hardware drivers. It is presented as a thin abstraction layer at a level comparable to GDI. Direct3D contains numerous features that GDI lacks. Direct3D is an immediate mode graphics API. It provides a low-level interface to every video card 3D function (transformations, clipping, lighting, materials, textures, depth buffering and so on). It once had a higher-level retained mode component, now officially discontinued. Direct3D immediate mode presents three main abstractions: devices, resources and swap chains. Devices are responsible for rendering the 3D scene. They provide an interface with different rendering capabilities. For example, the mono device provides black-and-white rendering, while the RGB device renders in color. There are four types of devices: HAL (hardware abstraction layer) device: for devices supporting hardware acceleration. Reference device: simulates new functions not yet available in hardware; the Direct3D SDK must be installed to use this device type. Null reference device: does nothing; this device is used when the SDK is not installed and a reference device is requested. Pluggable software device: performs software rendering; this device type was introduced with DirectX 9.0. Every device contains at least one swap chain. A swap chain is made up of one or more back buffer surfaces; rendering occurs in the back buffer. Moreover, devices contain a collection of resources, the specific data used during rendering. Each resource has four attributes: Type: determines the kind of resource: surface, volume, texture, cube texture, volume texture, surface texture, index buffer or vertex buffer. Pool: describes how the resource is managed by the runtime and where it is stored. In the default pool the resource exists only in device memory. Resources in the managed pool are stored in system memory and sent to the device when required. Resources in the system memory pool only exist in system memory. Finally, the scratch pool is basically the same as the system memory pool, but resources are not bound by hardware restrictions. Format: describes the layout of the resource data in memory. For example, the D3DFMT_R8G8B8 format value means a 24-bit color depth (8 bits for red, 8 bits for green and 8 bits for blue). Usage: describes, with a collection of flag bits, how the resource will be used by the application. These flags dictate which resources are used in dynamic or static access patterns; static resource values do not change after being loaded, whereas dynamic resource values may be modified. Direct3D implements two display modes: Fullscreen mode: the Direct3D application generates all of the graphical output for a display device. In this mode Direct3D automatically captures Alt-Tab and sets/restores screen resolution and pixel format without programmer intervention. This also poses problems for debugging due to the 'exclusive cooperative mode'. Windowed mode: the result is shown inside the area of a window. Direct3D communicates with GDI to generate the graphical output in the display. Windowed mode can have the same level of performance as fullscreen, depending on driver support.
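A minimal C++ sketch of these abstractions, using the Direct3D 9 API to create a HAL device with a single swap chain in windowed mode; the window handle is assumed to exist, and error handling is omitted.

```cpp
#include <d3d9.h>

// Create a hardware-accelerated (HAL) device rendering to an existing window.
IDirect3DDevice9* CreateHalDevice(HWND hwnd)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;                     // windowed display mode
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;  // how back buffers are presented
    pp.BackBufferFormat = D3DFMT_UNKNOWN;   // match the current desktop format
    pp.BackBufferCount = 1;                 // one back buffer in the swap chain

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT,
                      D3DDEVTYPE_HAL,       // HAL device: hardware acceleration
                      hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING,
                      &pp, &device);
    return device;
}
```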
|
Direct3D : The Microsoft Direct3D 11 API defines a process to convert a group of vertices, textures, buffers, and state into an image on the screen. This process is described as a rendering pipeline with several distinct stages. The different stages of the Direct3D 11 pipeline are: Input-Assembler: Reads in vertex data from an application-supplied vertex buffer and feeds it down the pipeline. Vertex Shader: Performs operations on a single vertex at a time, such as transformations, skinning, or lighting. Hull-Shader: Performs operations on sets of patch control points, and generates additional data known as patch constants. Tessellator: Subdivides geometry to create higher-order representations of the hull. Domain-Shader: Performs operations on vertices output by the tessellation stage, in much the same way as a vertex shader. Geometry Shader: Processes entire primitives such as triangles, points, or lines. Given a primitive, this stage discards it, or generates one or more new primitives. Stream-Output: Can write out the previous stage's results to memory. This is useful to recirculate data back into the pipeline. Rasterizer: Converts primitives into pixels, feeding these pixels into the pixel shader. The Rasterizer may also perform other tasks such as clipping what is not visible, or interpolating vertex data into per-pixel data. Pixel Shader: Determines the final pixel color to be written to the render target and can also calculate a depth value to be written to the depth buffer. Output-Merger: Merges various types of output data (pixel shader values, alpha blending, depth/stencil...) to build the final result. The shader stages are fully programmable: the application provides a shader program that describes the exact operations to be completed for that stage. Many stages are optional and can be disabled altogether.
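The C++ sketch below shows how an application drives these stages through the Direct3D 11 API: it configures the input assembler, binds the two mandatory programmable stages, and leaves the optional ones disabled. The shader, layout, and buffer objects are assumed to have been created elsewhere, and error handling is omitted.

```cpp
#include <d3d11.h>

void DrawTriangle(ID3D11DeviceContext* ctx,
                  ID3D11InputLayout* layout,
                  ID3D11Buffer* vertexBuffer,
                  ID3D11VertexShader* vs,
                  ID3D11PixelShader* ps)
{
    // Input-Assembler: where vertex data enters the pipeline.
    UINT stride = 3 * sizeof(float), offset = 0;  // assumed vertex layout
    ctx->IASetInputLayout(layout);
    ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // Mandatory programmable stages.
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);

    // Optional stages are disabled by binding nullptr.
    ctx->HSSetShader(nullptr, nullptr, 0);
    ctx->DSSetShader(nullptr, nullptr, 0);
    ctx->GSSetShader(nullptr, nullptr, 0);

    ctx->Draw(3, 0);  // three vertices flow through the pipeline
}
```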
|
Direct3D : In Direct3D 5 to 9, when new versions of the API introduced support for new hardware capabilities, most of them were optional – each graphics vendor maintained its own set of supported features in addition to the basic required functionality. Support for individual features had to be determined using "capability bits" or "caps", making cross-vendor graphics programming a complex task. Direct3D 10 introduced a much simplified set of mandatory hardware requirements based on the most popular Direct3D 9 capabilities which all supporting graphics cards had to adhere to, with only a few optional capabilities for supported texture formats and operations. Direct3D 10.1 added a few new mandatory hardware requirements, and to remain compatible with 10.0 hardware and drivers, these features were encapsulated in two sets called "feature levels", with the 10.1 level forming a superset of the 10.0 level. As Direct3D 11.0, 11.1 and 12 added support for new hardware, new mandatory capabilities were further grouped in upper feature levels. Direct3D 11 also introduced "10level9", a subset of the Direct3D 10 API with three feature levels encapsulating various Direct3D 9 cards with WDDM drivers, and Direct3D 11.1 re-introduced a few optional features for all levels, which were expanded in Direct3D 11.2 and later versions. This approach allows developers to unify the rendering pipeline and use a single version of the API on both newer and older hardware, taking advantage of performance and usability improvements in the newer runtime. New feature levels are introduced with updated versions of the API and typically encapsulate major mandatory features (Direct3D 11.0, 12), a few minor features (Direct3D 10.1, 11.1), or a common set of previously optional features (Direct3D 11.0 "10 level 9"). Each upper level is a strict superset of a lower level, with only a few new or previously optional features moving to the core functionality on an upper level. More advanced features in a major revision of the Direct3D API, such as new shader models and rendering stages, are only exposed on up-level hardware. Separate capabilities exist to indicate support for specific texture operations and resource formats; these are specified for each texture format using a combination of capability flags. Feature levels use an underscore as a delimiter (e.g. "12_1"), while API/runtime versions use a dot (e.g. "Direct3D 11.4").
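In code, feature levels appear as an ordered array passed at device creation, and the runtime reports back the highest level the adapter supports. A minimal C++ sketch (error handling omitted) that accepts anything from full Direct3D 11 hardware down to the "10 level 9" profiles:

```cpp
#include <d3d11.h>

ID3D11Device* CreateDeviceAnyLevel()
{
    // Ordered from most to least capable; the runtime picks the first match.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0,  // full Direct3D 11 hardware
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,   // "10 level 9" profiles for Direct3D 9 cards
        D3D_FEATURE_LEVEL_9_2,
        D3D_FEATURE_LEVEL_9_1,
    };

    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL achieved = {};
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
                      &device, &achieved, nullptr);
    // 'achieved' reports what the hardware supports; advanced features
    // (new shader models, rendering stages) must be gated on it at runtime.
    return device;
}
```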
|
Direct3D : The WDDM driver model in Windows Vista and later supports an arbitrarily large number of execution contexts (or threads) in hardware or in software. Windows XP only supported multitasked access to Direct3D, where separate applications could execute in different windows and be hardware accelerated, but the OS had limited control over what the GPU could do and the driver could switch execution threads arbitrarily. The ability to execute the runtime in a multi-threaded mode was introduced with the Direct3D 11 runtime. Each execution context is presented with a resource view of the GPU. Execution contexts are protected from each other; however, a rogue or badly written app can take control of the execution in the user-mode driver and could potentially access data from another process within GPU memory by sending modified commands. Though protected from access by another app, a well-written app still needs to protect itself against failures and device loss caused by other applications. The OS manages the threads all by itself, allowing the hardware to switch from one thread to the other when appropriate, and also handles memory management and paging (to system memory and to disk) via integrated OS-kernel memory management. Finer-grained context switching, i.e. being able to switch two execution threads at the shader-instruction level instead of the single-command level or even a batch of commands, was introduced in WDDM/DXGI 1.2, which shipped with Windows 8. This overcomes a potential scheduling problem when an application has a very long-running command or batch of commands that would otherwise have to be terminated by the OS watchdog timer. WDDM 2.0 and DirectX 12 have been reengineered to allow fully multithreaded draw calls. This was achieved by making all resources immutable (i.e. read-only), serializing the rendering states and using draw call bundles. This avoids complex resource management in the kernel-mode driver, making possible multiple reentrant calls to the user-mode driver via concurrent execution contexts supplied by separate rendering threads in the same application.
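Direct3D 11's multithreaded runtime, mentioned above, is exposed to applications through deferred contexts: worker threads record commands, and the results are played back on the immediate context. A hedged C++ sketch of the record-then-execute pattern (an initialized device and immediate context are assumed, and error handling is omitted):

```cpp
#include <d3d11.h>

void RecordOnWorkerAndExecute(ID3D11Device* device,
                              ID3D11DeviceContext* immediate)
{
    // Each worker thread would own one deferred context.
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... record state changes and draw calls on 'deferred' here ...

    // Close the recording into a command list object.
    ID3D11CommandList* commands = nullptr;
    deferred->FinishCommandList(FALSE, &commands);

    // Play it back on the immediate context, the single submission point.
    immediate->ExecuteCommandList(commands, TRUE);

    commands->Release();
    deferred->Release();
}
```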
|
Direct3D : Direct3D Mobile is derived from Direct3D but has a smaller memory footprint. Windows CE provides Direct3D Mobile support.
|
Direct3D : The following alternative implementations of the Direct3D API exist. They are useful for non-Windows platforms and for hardware without support for some DX versions: WineD3D – The Wine open source project has working implementations of the Direct3D APIs via translation to OpenGL. Wine's implementation can also be run on Windows under certain conditions. vkd3d – vkd3d is an open source 3D graphics library built on top of Vulkan which allows Direct3D 12 applications to run on top of Vulkan. It is primarily used by the Wine project, and is now included with Valve's Proton project bundled with Steam on Linux. DXVK – An open source Vulkan-based translation layer for Direct3D 8/9/10/11 which allows running 3D applications on Linux using Wine. It is used by Proton/Steam for Linux. DXVK is able to run a large number of modern Windows games under Linux. D9VK – An obsolete fork of DXVK for adding Direct3D 9 support, included with Steam/Proton on Linux. On December 16, 2019, D9VK was merged into DXVK. D8VK – An obsolete fork of DXVK for adding Direct3D 8 support on Linux. It was merged into DXVK with version 2.4, which was released on July 10, 2024. Gallium Nine – Gallium Nine makes it possible to run Direct3D 9 applications on Linux natively, i.e. without any call translation, which allows for near-native speed. It depends on Wine and Mesa.
|
Direct3D : List of 3D rendering APIs List of 3D graphics libraries High-Level Shader Language Shader DirectX – collection of APIs in which Direct3D is implemented DirectDraw 3D computer graphics
|
Direct3D : DirectX website MSDN: DirectX Graphics and Gaming DirectX 10: The Future of PC Gaming, technical article discussing the new features of DirectX 10 and their impact on computer games
|
Dr.Fill : Dr.Fill is a computer program that solves American-style crossword puzzles. It was developed by Matt Ginsberg and described by Ginsberg in an article in the Journal of Artificial Intelligence Research. Ginsberg claims in that article that Dr.Fill is among the top fifty crossword solvers in the world.
|
Dr.Fill : Dr.Fill participated in the 2012 American Crossword Puzzle Tournament, finishing 141st of approximately 650 entrants with a total score of just over 10,000 points. The appearance led to a variety of descriptions of Dr.Fill in the popular press, including The Economist, the San Francisco Chronicle and Gizmodo. A description of Dr.Fill appeared on the front page of the March 17, 2012 New York Times. Dr.Fill's score in 2013 improved to 10,550, which would have earned it 92nd place. Videos of the program solving the problems from the tournament are available on YouTube. The score in 2014 improved further to 10,790, which would have tied for 67th place. A video of the program solving the first six puzzles from that tournament, together with a talk given by Ginsberg describing its performance, can be found on YouTube. Dr.Fill has largely continued to improve since the 2014 event. In 2015, it scored 10,920 points and finished in 55th place. In 2016, it scored 11,205 points and finished in 41st place. In 2017, it scored 11,795 and finished in 11th place. In 2018, it scored 10,740 points, dropping to 78th place. Dr.Fill returned to form in 2019, once again scoring 11,795 and finishing in 14th place. The 2020 ACPT was cancelled due to COVID-19, and Dr.Fill participated as a non-competitor in the Boswords tournament instead. The program outperformed the humans, scoring 11,218 points (fast solves with a total of one mistake) while the best-scoring human scored 10,994 points (slower solves but no mistakes). The 2021 ACPT was held virtually, again due to COVID-19. The Dr.Fill effort was joined by the Berkeley NLP Group, creating a hybrid system named the Berkeley Crossword Solver, and Dr.Fill topped the overall standings, scoring 12,825 points, with Erik Agard, the highest-scoring human, at 12,810 points. The official tournament title went to Tyler Hinman (12,760 points), who completed the championship puzzle perfectly in three minutes; Dr.Fill also completed that puzzle perfectly, but in 49 seconds. After Dr.Fill's winning performance, Ginsberg announced on August 8, 2021, that both he and Dr.Fill would be retiring from crosswords.
|
Dr.Fill : As described by Ginsberg, Dr.Fill works by converting a crossword to a weighted constraint satisfaction problem and then attempting to maximize the probability that the fill is correct. Probabilities for individual words or phrases in the puzzle are computed using relatively simple statistical techniques based on features such as previous appearances of the clue, number of Google hits for the fill, and so on. In doing this, Dr.Fill is attempting to solve a problem similar to that tackled by the Jeopardy!-playing program Watson; Dr.Fill runs on a laptop instead of a supercomputer and Ginsberg remarks that Watson is far more effective than Dr.Fill at solving this portion of the problem. Instead of computational horsepower, Dr.Fill relies on the constraints provided by crossing words to refine its answers. A variety of techniques from artificial intelligence are applied to attempt to find the most likely fill. These include a small amount of lookahead, limited discrepancy search, and postprocessing. Ginsberg remarks that postprocessing was chosen over branch and bound because the two techniques are mutually incompatible and postprocessing was found to be more effective in this domain.
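Schematically (an illustrative formulation, not Ginsberg's exact notation), the optimization Dr.Fill attempts can be written as choosing a word for every slot so as to maximize the product of per-word probabilities, subject to the crossing constraints:

```latex
\max_{f}\; \prod_{s \in S} P\bigl(f(s) \mid \mathrm{clue}(s)\bigr)
\quad\text{subject to}\quad
f(a)_i = f(d)_j
\;\text{whenever the $i$-th square of across slot $a$
is the $j$-th square of down slot $d$,}
```

where \(S\) is the set of slots and \(f\) assigns a candidate word to each slot. The search techniques listed above (lookahead, limited discrepancy search, postprocessing) explore this space without enumerating every possible fill.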
|
Dr.Fill : Berkeley Crossword Solver on GitHub Dr.Fill source code (except training data) on GitHub
|
DREAM Challenges : DREAM Challenges (Dialogue for Reverse Engineering Assessment and Methods) is a non-profit initiative for advancing biomedical and systems biology research via crowd-sourced competitions. Started in 2006, DREAM Challenges collaborates with Sage Bionetworks to run its competitions on the Synapse platform. More than 60 DREAM challenges have been conducted over a span of more than 15 years.
|
DREAM Challenges : DREAM Challenges were founded in 2006 by Gustavo Stolovizky from IBM Research and Andrea Califano from Columbia University. The current chair of the DREAM organization is Paul Boutros of the University of California. The organization further includes emeritus chairs Justin Guinney and Gustavo Stolovizky, and multiple DREAM directors. Individual challenges focus on tackling a specific biomedical research question, typically narrowed down to a specific disease. A prominent disease focus has been oncology, with multiple past challenges focused on breast cancer, acute myeloid leukemia, prostate cancer and similar diseases. The data involved in an individual challenge reflects the disease context; while cancer challenges typically involve data such as mutations in the human genome, gene expression and gene networks in transcriptomics, and large-scale proteomics, newer challenges have shifted towards single-cell sequencing technologies as well as emerging gut microbiome research questions, reflecting trends in the wider research community. The motivation for DREAM Challenges is that by crowd-sourcing data analysis to a larger audience via competitions, better models and insights are gained than if the analysis were conducted by a single entity. Past competitions have been published in scientific venues such as the flagship journals of the Nature Portfolio and PLOS publishing groups. Results of DREAM challenges are announced via web platforms, and the top-performing participants are invited to present their results in the annual RECOMB/ISCB Conferences with RSG/DREAM organized by the ISCB. While DREAM Challenges have emphasized open science and data, "model to data" approaches have been adopted to mitigate issues arising from highly sensitive data such as genomics in patient cohorts. In such challenges participants submit their models via containers such as Docker or Singularity. This allows the original data to remain confidential, as the containers are run by the organizers on the confidential data. This differs from the more traditional open data model, where participants submit predictions directly based on the provided open data.
|
DREAM Challenges : A DREAM challenge comprises a core DREAM/Sage Bionetworks organizing group as well as an extended scientific expert group, whose members may have contributed to the conception and creation of the challenge or provided key data. Additionally, new DREAM challenges may be proposed by the wider research community. Pharmaceutical companies or other private entities may also be involved in DREAM challenges, for example by providing data.
|
DREAM Challenges : Timelines for key stages (such as introduction webinars, model submission deadlines, and final deadline for participation) are provided in advance. After the winners are announced, organizers start collaborating with the top performing participants to conduct post hoc analyses for a publication describing key findings from the competition. Challenges may be split into sub-challenges, each addressing a different subtopic within the research question. For example, regarding cancer treatment efficacy predictions, these may be separate predictions for progression-free survival, overall survival, best overall response according to RECIST, or exact time until event (progression or death).
|
DREAM Challenges : During DREAM challenges, participants typically build models on provided data, and submit predictions or models that are then validated on held-out data by the organizers. While DREAM challenges avoid leaking validation data to participants, mid-challenge submission leaderboards are typically available to help participants evaluate their performance on a sub-sampled or scrambled dataset. DREAM challenges are free for participants. During the open phase anybody can register via Synapse to participate either individually or as a team. A person may only register once and may not use any aliases. Certain conditions disqualify an individual from participating, for example: The person has privileged access to the data for the particular challenge, thus giving them an unfair advantage. The person has been caught or is under suspicion of cheating in or abusing previous DREAM Challenges. The person is a minor (under age 18 or the age of majority in their jurisdiction of residence); this may be alleviated via parental consent.
|
DREAM Challenges : List of crowdsourcing projects Critical Assessment of Function Annotation (CAFA) Critical Assessment of Genome Interpretation (CAGI) Critical Assessment of Prediction of Interactions (CAPRI) Critical Assessment of protein Structure Prediction (CASP) Kaggle
|
Google Brain : Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources. It created tools such as TensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects, and aimed to create research opportunities in machine learning and natural language processing. It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.
|
Google Brain : The Google Brain project began in 2011 as a part-time research collaboration between Google fellow Jeff Dean and Google researcher Greg Corrado. Google Brain started as a Google X project and became so successful that it was graduated back to Google: Astro Teller has said that Google Brain paid for the entire cost of Google X. In June 2012, The New York Times reported that a cluster of 16,000 processors in 1,000 computers dedicated to mimicking some aspects of human brain activity had successfully trained itself to recognize a cat based on 10 million digital images taken from YouTube videos. The story was also covered by National Public Radio. In March 2013, Google hired Geoffrey Hinton, a leading researcher in the deep learning field, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google. In April 2023, Google Brain merged with Google sister company DeepMind to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI.
|
Google Brain : Google Brain was initially established by Google Fellow Jeff Dean and visiting Stanford professor Andrew Ng. In 2014, the team included Jeff Dean, Quoc Le, Ilya Sutskever, Alex Krizhevsky, Samy Bengio, and Vincent Vanhoucke. In 2017, team members included Anelia Angelova, Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Chris Olah, Salih Edneer, Benoit Steiner, Vincent Vanhoucke, Vijay Vasudevan, and Fernanda Viegas. Chris Lattner, who created Apple's programming language Swift and then ran Tesla's autonomy team for six months, joined Google Brain's team in August 2017. Lattner left the team in January 2020 and joined SiFive. As of 2021, Google Brain was led by Jeff Dean, Geoffrey Hinton, and Zoubin Ghahramani. Other members include Katherine Heller, Pi-Chuan Chang, Ian Simon, Jean-Philippe Vert, Nevena Lazic, Anelia Angelova, Lukasz Kaiser, Carrie Jun Cai, Eric Breck, Ruoming Pang, Carlos Riquelme, Hugo Larochelle, and David Ha. Samy Bengio left the team in April 2021, and Zoubin Ghahramani took on his responsibilities. Google Research includes Google Brain and is based in Mountain View, California. It also has satellite groups in Accra, Amsterdam, Atlanta, Beijing, Berlin, Cambridge (Massachusetts), Israel, Los Angeles, London, Montreal, Munich, New York City, Paris, Pittsburgh, Princeton, San Francisco, Seattle, Tokyo, Toronto, and Zürich.
|
Google Brain : Google Brain has received coverage in Wired, NPR, and Big Think. These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications.
|
Google Brain : Artificial intelligence art Glossary of artificial intelligence List of artificial intelligence projects Noosphere Quantum Artificial Intelligence Lab – run by Google in collaboration with NASA and Universities Space Research Association
|
Google Nest : Google Nest is a line of smart home products including smart speakers, smart displays, streaming devices, thermostats, smoke detectors, routers and security systems including smart doorbells, cameras and smart locks. The Nest brand name was originally owned by Nest Labs, co-founded by former Apple engineers Tony Fadell and Matt Rogers in 2010. Its flagship product, which was the company's first offering, is the Nest Learning Thermostat, introduced in 2011. The product is programmable, self-learning, sensor-driven, and Wi-Fi-enabled: features that are often found in other Nest products. It was followed by the Nest Protect smoke and carbon monoxide detectors in October 2013. After its acquisition of Dropcam in 2014, the company introduced its Nest Cam branding of security cameras beginning in June 2015. The company quickly expanded to more than 130 employees by the end of 2012. Google acquired Nest Labs for US$3.2 billion in January 2014, when the company employed 280 people. As of late 2015, Nest employed more than 1,100 people and had added a primary engineering center in Seattle. After Google reorganized itself under the holding company Alphabet Inc., Nest operated independently of Google from 2015 to 2018. However, in 2018, Nest was merged into Google's home-devices unit led by Rishi Chandra, effectively ceasing to exist as a separate business. In July 2018, it was announced that all Google Home electronics products would henceforth be marketed under the brand Google Nest.
|
Google Nest : Works with Nest was a program that allowed third-party devices, such as virtual assistants, to communicate with Nest products, along with many third-party home automation platforms. Additionally, many smart device manufacturers had direct integration with the Nest platform, including Whirlpool, GE Appliances, and Myfox. On May 7, 2019, it was announced that Works with Nest would be discontinued effective August 31, 2019. Users were directed to migrate to Google accounts and Google Assistant integration instead; doing so removes the ability to use Works with Nest. Google stated that this change was for security and privacy reasons: since third-party devices may only integrate with the Nest ecosystem via Google Assistant, they would be heavily restricted in the amount of personal data and device access available to them. Google stated that it would give "a small number of thoroughly vetted partners" access to additional data. The change faced criticism for potentially resulting in a loss of functionality: vendors such as Lutron and SimpliSafe announced that their products' integration with the Nest platform (which allowed them to be tied to the thermostat's home and away modes) would be affected by this change, while Google explicitly named IFTTT as a service that could not be integrated due to the amount of access it would need to operate. The Verge estimated that affected devices would also include Philips Hue, Logitech Harmony, Lutron lights, August Home, and Belkin Wemo switches. Furthermore, The Verge argued that this change created a closed platform and would lead to fragmentation of the smart home market by potentially blocking integration with products that directly compete with those of Google. On May 16, 2019, Google clarified its deprecation plans for Works with Nest: existing integrations would not be disabled after August 31, but users would no longer be able to add new ones, and the service would only receive maintenance updates going forward. Google also stated that it was working on replicating Nest platform functions as part of Assistant, such as integrating Nest's Home/Away triggers into the "Routines" system, and maintaining integration between Nest and Amazon Alexa.
|
Google Nest : In February 2012, Honeywell filed a lawsuit claiming that some of its patents had been infringed by Nest. In April 2012, Nest stated that it believed none of the allegedly infringed patents had actually been violated. Honeywell claimed that Nest infringed on patents pertaining to remotely controlling a thermostat, power-stealing thermostats, and thermostats designed around a circular, interactive design, similar to the Honeywell T87. However, Honeywell held patents that were almost identical to patents that had expired in 2004. Nest took the stance that it would see the matter through to patent court, as it suspected Honeywell of trying to harass it, litigiously and financially, out of business. On May 14, 2013, Allure Energy also filed a lawsuit, alleging infringement of a patent on an "Auto-adaptable energy management apparatus". First Alert sued Nest in 2014 with regard to voice alert functionality and a design trait of the Nest Protect, despite the fact that the world's first talking smoke and carbon monoxide alarm was actually released by Kidde. In 2016, Nest announced that the devices of Revolv customers would be bricked on May 15, as it was shutting down the necessary cloud software. Karl Bode and Emmanuel Malberg of Vice News compared the move to a remote deletion of purchased Xbox Fitness content by Microsoft. The Federal Trade Commission entered into an investigation of the matter. In May 2016, an employee filed an unfair labor practice charge with the National Labor Relations Board against Nest and Google. In the charge, the employee alleged that he was terminated for posting information about Tony Fadell's poor leadership to a private Facebook page consisting of current and former employees. The charge also alleged that Nest and Google had engaged in unlawful surveillance and unlawful interrogation of employees in order to prevent them from discussing the work environment at Nest.
|
Google Nest : Per the terms of service, Google will provide law enforcement with Nest data "If we reasonably believe that we can prevent someone from dying or from suffering serious physical harm. For example, in the case of bomb threats, school shootings, kidnappings, suicide prevention, and missing person cases." In certain situations, this may be done without a warrant.
|
Google Nest : Internet of things Machine learning Android Things X10 ecobee
|
Google Nest : Media related to Google Nest at Wikimedia Commons Official website
|
Intel RealSense : Intel RealSense Technology, formerly known as Intel Perceptual Computing, is a product range of depth and tracking technologies designed to give machines and devices depth perception capabilities. The technologies, owned by Intel, are used in autonomous drones, robots, AR/VR, and smart home devices, among many other broad market products. The RealSense products consist of vision processors, depth and tracking modules, and depth cameras, supported by an open source, cross-platform SDK in an attempt to simplify camera support for third-party software developers, system integrators, ODMs and OEMs.
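For a sense of what the SDK looks like in practice, the following minimal C++ program uses the open source librealsense2 SDK to read one depth value from an attached camera with default settings; a connected RealSense depth camera is assumed, and error handling is omitted.

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;  // manages streaming from the device
    pipe.start();        // start with the default depth configuration

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Distance, in meters, at the center pixel of the depth image.
    float meters = depth.get_distance(depth.get_width() / 2,
                                      depth.get_height() / 2);
    std::cout << "Distance to center: " << meters << " m\n";
    return 0;
}
```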
|
Intel RealSense : Intel began producing hardware and software that utilized depth tracking, gestures, facial recognition, eye tracking, and other technologies under the branding Perceptual Computing in 2013. According to Intel, much of its research into the technologies focused on "sensory inputs that make [computers] more human like". It initially hoped to begin including 3D cameras that could support Perceptual Computing, as opposed to traditional 2D cameras, by late 2014. In 2013, Intel ran a competition among seven teams to create software highlighting the capabilities of its Perceptual Computing technology, entitled "Intel Ultimate Coder Challenge: Going Perceptual". In 2014, Intel rebranded its Perceptual Computing line of technology as Intel RealSense. The Intel RealSense Group supports multiple depth and tracking technologies, including coded light depth, stereo depth and positional tracking. To address the lack of applications built on the RealSense platform and to promote the platform among software developers, in 2014 Intel organized the "Intel RealSense App Challenge", whose winners were awarded large sums of money.
|
Intel RealSense : Previous generations of Intel RealSense depth cameras (F200, R200 and SR300) were implemented in multiple laptop and tablet computers by Asus, HP, Dell, Lenovo, and Acer. Additionally, Razer and Creative offered consumer-ready standalone webcams with the Intel RealSense camera built into the design: the Razer Stargazer and the Creative BlasterX Senz3D.
|
Intel RealSense : In an early preview article in 2015, PC World's Mark Hachman concluded that RealSense is an enabling technology that will be largely defined by the software that takes advantage of its features. He noted that, at the time the article was written, the technology was new and no such software yet existed.
|
Intel RealSense : Camera 3D uses Intel RealSense (D400 series) and Microsoft Kinect sensors to create holographic memories, 3D models and Facebook 3D photos.
|
Intel RealSense : Specifications: Intel RealSense Depth Camera D415, D435 and D455 Specifications: Intel RealSense Vision Processor D4 Series (not available separately, as these are just the bare PCB vision processor boards, used only as the basis for the RealSense Depth Camera series) Specifications: Intel Stereo Depth Module SKUs (not available separately, as these are just the bare PCB depth sensor modules, used only as the basis for the RealSense Depth Camera series)
|
Intel RealSense : Creative Labs Kinect OpenCV Project Tango
|
Intel RealSense : Official website Intel RealSense SDK developer documentation Intel RealSense Product Family D400 Series Datasheet (revision 009, June 2020)
|
IRCF360 : Infrared Control Freak 360 (IRCF360) is a 360-degree proximity and motion sensing device developed by ROBOTmaker. The sensor is in beta developer release as a low-cost, software-configurable sensor for use in research, technical and hobby projects.
|
IRCF360 : The 360-degree sensor was originally designed as a short-range micro robot proximity sensor, mainly intended for swarm robotics, ant robotics, swarm intelligence, autonomous quadcopter, drone and UAV applications, and multi-robot simulations, e.g. the Jasmine Project, where 360-degree proximity sensing is required to avoid collisions with other robots and to provide simple IR inter-robot communication. To overcome certain limitations of infrared (IR) proximity sensing (e.g. detection of dark surfaces), the sensing module includes ambient light sensing and basic tactile sensing during forward movement sensing/probing, providing photovore and photophobe robot swarm behaviours and characteristics. The Sensorium Project was started with the aim of broadening the sensor's audience beyond its typical robot sensor usage. To demonstrate the sensor's functionality, open source Java-based integrated development environments (IDEs) such as Arduino and Processing are used, as illustrated by the sketch below.
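The sketch below is a purely hypothetical Arduino (C++) illustration of the kind of demo described above: it relays bytes from a serial-attached sensor to the PC, where a Processing sketch could plot them. The second serial port, wiring, and protocol are assumptions for illustration only and do not reflect the IRCF360's actual interface.

```cpp
// Hypothetical illustration only; not the IRCF360's documented protocol.
// Assumes a board with a second hardware serial port (e.g. an Arduino Mega).
void setup() {
  Serial.begin(9600);    // link to the PC / Processing sketch
  Serial1.begin(9600);   // assumed link to the proximity sensor
}

void loop() {
  // Relay whatever proximity data the sensor emits, byte for byte.
  while (Serial1.available() > 0) {
    Serial.write(Serial1.read());
  }
}
```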
|
IRCF360 : Official Websites Dean Camera development of USB interface for Arduino Details of the Sensorium and 360 degree sensor development
|