Sense Networks : Sense Networks website CabSense website CitySense website Archived 2010-08-20 at the Wayback Machine
|
Simultaneous localization and mapping : Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, there are several algorithms known to solve it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality. SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots and even inside the human body.
|
Simultaneous localization and mapping : Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute $P(m_{t+1}, x_{t+1} \mid o_{1:t+1}, u_{1:t})$. Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$: $P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) \, P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z$. Similarly, the map can be updated sequentially by $P(m_t \mid x_t, o_{1:t}, u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t, m_{t-1}, o_t, u_{1:t}) \, P(m_{t-1}, x_t \mid o_{1:t-1}, m_{t-1}, u_{1:t})$. Like many inference problems, the solutions to inferring the two variables together can be found, to a local optimum solution, by alternating updates of the two beliefs in a form of an expectation–maximization algorithm.
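The location update above can be illustrated with a minimal sketch: one sequential Bayesian localization step on a discrete 1-D grid, assuming a fixed, known map. Function and variable names here are illustrative, not from any SLAM library.

```python
# One update of P(x_t | o_{1:t}, u_{1:t}, m) on a discrete grid:
# predict with the transition model P(x_t | x_{t-1}), then weight by
# the observation likelihood P(o_t | x_t, m) and normalize by Z.

def localize_step(prior, motion_model, likelihood):
    n = len(prior)
    # Prediction: sum over x_{t-1} of P(x_t | x_{t-1}) * P(x_{t-1} | ...)
    predicted = [
        sum(motion_model[prev][cur] * prior[prev] for prev in range(n))
        for cur in range(n)
    ]
    # Correction: multiply by P(o_t | x_t) and normalize.
    unnormalized = [likelihood[cur] * predicted[cur] for cur in range(n)]
    z = sum(unnormalized)
    return [p / z for p in unnormalized]

# Uniform prior over a 3-cell wrap-around world; the robot tends to move
# one cell to the right, and the sensor strongly suggests cell 2.
prior = [1 / 3, 1 / 3, 1 / 3]
move_right = [[0.1, 0.9, 0.0],
              [0.0, 0.1, 0.9],
              [0.9, 0.0, 0.1]]
obs = [0.1, 0.1, 0.8]
posterior = localize_step(prior, move_right, obs)  # ≈ [0.1, 0.1, 0.8]
```

Full SLAM additionally alternates this step with the map update, since the map itself is uncertain.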
|
Simultaneous localization and mapping : Statistical techniques used to approximate the above equations include Kalman filters and particle filters (the algorithm behind Monte Carlo localization). They provide an estimate of the posterior probability distribution for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using covariance intersection are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications. Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty. Set-membership techniques are mainly based on interval constraint propagation. They provide a set which encloses the pose of the robot and a set approximation of the map. Bundle adjustment, and more generally maximum a posteriori (MAP) estimation, is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity; it is used in commercialized SLAM systems such as Google's ARCore, which replaced Google's prior augmented reality platform Tango (formerly Project Tango). MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability. New SLAM algorithms remain an active research area, and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
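The Kalman-filter family mentioned above reduces, in the scalar case, to a short predict/update cycle. The following is a minimal 1-D sketch with illustrative names, not the full EKF-SLAM formulation (which tracks a joint state of pose and landmarks).

```python
# Minimal 1-D Kalman filter step: predict with control u (process noise q),
# then correct with measurement z (measurement noise r).

def kalman_step(mean, var, u, q, z, r):
    # Predict: shift the mean by the control, inflate the uncertainty.
    mean_pred = mean + u
    var_pred = var + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = var_pred / (var_pred + r)
    mean_new = mean_pred + k * (z - mean_pred)
    var_new = (1 - k) * var_pred
    return mean_new, var_new

# Start at 0 with unit variance, command a move of +1, then observe 1.2.
m, v = kalman_step(mean=0.0, var=1.0, u=1.0, q=0.5, z=1.2, r=0.5)
```

Because the predicted and measured uncertainties are equal-footed here (gain 0.75), the estimate lands closer to the measurement, and the variance shrinks below either source alone.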
|
Simultaneous localization and mapping : Various SLAM algorithms are implemented in the open-source software Robot Operating System (ROS) libraries, often used together with the Point Cloud Library for 3D maps or visual features from OpenCV.
|
Simultaneous localization and mapping : A seminal work in SLAM is the research of Smith and Cheeseman on the representation and estimation of spatial uncertainty in 1986. Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s, which showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined within the paper "Localization of Autonomous Guided Vehicles", which first appeared in ISR in 1995. The self-driving cars STANLEY and JUNIOR, led by Sebastian Thrun, won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners and virtual reality headsets such as the Meta Quest 2 and PICO 4 for markerless inside-out tracking.
|
Simultaneous localization and mapping : Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox with a clear overview of SLAM. SLAM For Dummies (A Tutorial Approach to Simultaneous Localization and Mapping). Andrew Davison research page at the Department of Computing, Imperial College London about SLAM using vision. openslam.org A good collection of open source code and explanations of SLAM. Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping Vehicle moving in 1D, 2D and 3D. FootSLAM research page at German Aerospace Center (DLR) including the related Wi-Fi SLAM and PlaceSLAM approaches. SLAM lecture Online SLAM lecture based on Python.
|
Stockfish (chess) : Stockfish is a free and open-source chess engine, available for various desktop and mobile platforms. It can be used in chess software through the Universal Chess Interface. Stockfish has been one of the strongest chess engines in the world for several years; it has won all main events of the Top Chess Engine Championship (TCEC) and the Chess.com Computer Chess Championship (CCC) since 2020 and, as of March 2025, is the strongest CPU chess engine in the world with an estimated Elo rating of 3642, in a time control of 40/15 (15 minutes to make 40 moves), according to CCRL. The Stockfish engine was developed by Tord Romstad, Marco Costalba, and Joona Kiiski, and was derived from Glaurung, an open-source engine by Tord Romstad released in 2004. It is now being developed and maintained by the Stockfish community. Stockfish historically used only a classical hand-crafted function to evaluate board positions, but with the introduction of the efficiently updatable neural network (NNUE) in August 2020, it adopted a hybrid evaluation system that primarily used the neural network and occasionally relied on the hand-crafted evaluation. In July 2023, Stockfish removed the hand-crafted evaluation and transitioned to a fully neural network-based approach.
|
Stockfish (chess) : Stockfish uses a tree-search algorithm based on alpha–beta search with several hand-designed heuristics, and since Stockfish 12 (2020) uses an efficiently updatable neural network as its evaluation function. It represents positions using bitboards. Stockfish supports Chess960, a feature it inherited from Glaurung. Support for Syzygy tablebases, previously available in a fork maintained by Ronald de Man, was integrated into Stockfish in 2014. In 2018 support for the 7-man Syzygy was added, shortly after the tablebase was made available. Stockfish supports up to 1024 CPU threads in multiprocessor systems, with a maximum transposition table size of 32 TB. Stockfish has been a very popular engine on various platforms. On desktop, it is the default chess engine bundled with the Internet Chess Club interface programs BlitzIn and Dasher. On mobile, it has been bundled with the Stockfish app, SmallFish and Droidfish. Other Stockfish-compatible graphical user interfaces (GUIs) include Fritz, Arena, Stockfish for Mac, and PyChess. Stockfish can be compiled to WebAssembly or JavaScript, allowing it to run in the browser. Both Chess.com and Lichess provide Stockfish in this form in addition to a server-side program. Release versions and development versions are available as C++ source code and as precompiled versions for Microsoft Windows, macOS, Linux 32-bit/64-bit and Android.
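The alpha–beta search mentioned above can be shown with a minimal sketch over an explicit game tree. This illustrates only the core pruning idea; Stockfish layers many hand-designed heuristics, bitboard move generation, and the NNUE evaluation on top. The tree format (a leaf is a score, an internal node is a list of children) is invented for this example.

```python
# Alpha-beta pruning: stop exploring a node once it is provably worse
# than an alternative already available to the other player.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff
                break
        return value

# Root is a maximizing node over three minimizing nodes.
tree = [[3, 5], [2, 9], [0, 1]]
best = alphabeta(tree, float("-inf"), float("inf"), True)  # → 3
```

In a real engine the leaves are not stored scores but positions scored by the evaluation function, and move ordering determines how much the cutoffs actually prune.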
|
Stockfish (chess) : The program originated from Glaurung, an open-source chess engine created by Tord Romstad and first released in 2004. Four years later, Marco Costalba forked the project, naming it Stockfish because it was "produced in Norway and cooked in Italy" (Romstad is Norwegian and Costalba is Italian). The first version, Stockfish 1.0, was released in November 2008. For a while, new ideas and code changes were transferred between the two programs in both directions, until Romstad decided to discontinue Glaurung in favor of Stockfish, which was the stronger engine at the time. The last Glaurung version (2.2) was released in December 2008. Around 2011, Romstad decided to abandon his involvement with Stockfish in order to spend more time on his new iOS chess app. On 18 June 2014 Marco Costalba announced that he had "decided to step down as Stockfish maintainer" and asked that the community create a fork of the current version and continue its development. An official repository, managed by a volunteer group of core Stockfish developers, was created soon after and currently manages the development of the project.
|
Stockfish (chess) : YaneuraOu, a strong shogi engine and the origin of NNUE. Speaks USI, a variant of UCI for shogi. Fairy Stockfish, a version modified to play fairy chess. Runs with regional variants (chess, shogi, makruk, etc.) as well as other variants like antichess. Lichess Stockfish, a version for playing variants without fairy pieces. Crystal, which seeks to address common issues with chess engines such as positional or tactical blindness due to over reductions or over pruning, draw blindness due to the move horizon and displayed principal variation reliability. Brainfish, which contains a reduced version of Cerebellum, a chess opening library. BrainLearn, a derivative of Brainfish but with a persisted learning algorithm. ShashChess, a derivative with the goal to apply Alexander Shashin theory from the book Best Play: a New Method for Discovering the Strongest Move. Pikafish, a free, open source, and strong UCI Xiangqi engine derived from Stockfish that analyzes xiangqi positions and computes the optimal moves. Houdini 6, a Stockfish derivative that did not comply with the terms of the GPL license. Fat Fritz 2, a Stockfish derivative that did not comply with the terms of the GPL license.
|
Stockfish (chess) : Interview with Tord Romstad (Norway), Joona Kiiski (Finland) and Marco Costalba (Italy), programmers of Stockfish
|
Stockfish (chess) : Official website Official code repository on GitHub WebAssembly port of Stockfish Development versions built for Linux and Windows Developers forum Stockfish Testing Framework
|
TasteDive : TasteDive (formerly named TasteKid) is an entertainment recommendation engine for films, TV shows, music, video games, books, people, places, and brands. It also has elements of a social media site; it allows users to connect with "tastebuds", people with like-minded interests.
|
TasteDive : TasteDive was founded in 2008 as TasteKid by brothers Andrei Oghina and Felix Oghina. In 2019, it was acquired by Qloo, headquartered in New York City, a company described as having "built for developers and enterprises what TasteDive has built for individuals".
|
TasteDive : When a user types in the title of a film or TV show, the site's algorithm provides a list of similar content. It provides recommendations for TV shows to watch based on films liked by the user, and vice versa. It also provides recommendations for music, video games, and books, and includes film and TV trailers and music videos. An account is free and is not required to receive recommendations, but recommendations are more accurate for those with an account. The more a user explores the site, the more the site learns about the user's preferences and the better the results become. The site also has a social media aspect where one can see activity and gain recommendations from other users, how many others in the community like or dislike any recommendation, and how popular their tastes are within the TasteDive community. The main competitors of TasteDive are Taste App, Trakt.tv and Tastoid.
|
TasteDive : Rating site Recommender system == References ==
|
Tractable (company) : Tractable is a technology company specializing in the development of Artificial Intelligence (AI) to assess damage to property and vehicles. The AI allows users to appraise damage digitally.
|
Tractable (company) : Tractable's technology uses computer vision and deep learning to automate the appraisal of visual damage in accident and disaster recovery, for example to a vehicle. Drivers can be directed to use the application by their insurer after an accident, with the aim of settling their claim more quickly. The AI evaluates the damage from images, and therefore doesn't assess what isn't visible (such as, for example, interior damage to a vehicle or property).
|
Tractable (company) : Alexandre Dalyac and Razvan Ranca founded Tractable in 2014, and Adrien Cohen joined as co-founder in 2015. The company employs more than 300 staff members, largely in the United Kingdom. Tractable was named one of the 100 leading AI companies in the world in 2020 and 2021 by CB Insights. It won the Best Technology Award in the 2020 British Insurance Awards. In June 2021, Tractable announced a venture round that valued the company at $1 billion. Tractable was the UK's 100th billion-dollar tech company, or unicorn. In July 2023, the company received a $65 million investment from SoftBank Group, through its Vision Fund 2. == References ==
|
Cristóbal Valenzuela : Cristóbal Valenzuela is a Chilean-born technologist, software developer, and CEO of Runway. In 2018, Valenzuela co-founded the AI research company Runway in New York City with Anastasis Germanidis and Alejandro Matamala.
|
Cristóbal Valenzuela : Valenzuela graduated from Adolfo Ibáñez University (AIU), a private research university in Chile. There, Valenzuela obtained a bachelor's degree in economics and business management, along with a master's degree in arts in design in 2012. In 2018, he graduated with a media arts degree from the ITP program at NYU's Tisch School of the Arts.
|
Cristóbal Valenzuela : One of Valenzuela's first jobs was as a teaching and research assistant at the Adolfo Ibáñez University School of Design, and later an adjunct professor in the same department. In 2018, he became a researcher at NYU's Tisch School of the Arts ITP program, where he worked with Daniel Shiffman. He contributes to open-source software projects, including ml5.js, an open-source machine learning software. He co-founded Runway with two colleagues from ITP, Anastasis Germanidis, and Alejandro Matamala. The goal of Runway is to create new tools for human imagination using generative AI. In recent years, Valenzuela's work has been sponsored by Google and the Processing Foundation and his projects have been exhibited throughout Latin America and the US, including the Santiago Museum of Contemporary Art, Lollapalooza, NYC Media Lab, New Latin Wave, Inter-American Development Bank, Stanford University and New York University. In September 2023, Valenzuela was named as one of the TIME 100 Most Influential People in AI (TIME100 AI). == References ==
|
Vicarious (company) : Vicarious was an artificial intelligence company based in the San Francisco Bay Area, California. It used the theorized computational principles of the brain to attempt to build software that can think and learn like a human. Vicarious described its technology as "a turnkey robotics solution integrator using artificial intelligence to automate tasks too complex and versatile for traditional automations". Alphabet Inc. acquired the company in 2022 for an undisclosed amount.
|
Vicarious (company) : The company was founded in 2010 by D. Scott Phoenix and Dileep George. Before co-founding Vicarious, Phoenix was Entrepreneur in Residence at Founders Fund and CEO of Frogmetrics, a touchscreen analytics company he co-founded through the Y Combinator incubator program. Previously, George was Chief Technology Officer at Numenta, a company he co-founded with Jeff Hawkins and Donna Dubinsky while completing his PhD at Stanford University.
|
Vicarious (company) : The company launched in February 2011 with funding from Founders Fund, Dustin Moskovitz, Adam D’Angelo (former Facebook CTO and co-founder of Quora), Felicis Ventures, and Palantir co-founder Joe Lonsdale. In August 2012, in its Series A round of funding, it raised an additional $15 million. The round was led by Good Ventures; Founders Fund, Open Field Capital and Zarco Investment Group also participated. The company received $40 million in its Series B round of funding. The round was led by individuals including Mark Zuckerberg, Elon Musk, and others. An additional undisclosed amount was later contributed by Amazon.com CEO Jeff Bezos, Yahoo! co-founder Jerry Yang, Skype co-founder Janus Friis and Salesforce.com CEO Marc Benioff.
|
Vicarious (company) : Vicarious developed machine learning software based on the computational principles of the human brain. One such piece of software is a vision system known as the Recursive Cortical Network (RCN), a generative graphical visual perception system that interprets the contents of photographs and videos in a manner similar to humans. The system is powered by a balanced approach that takes sensory data, mathematics, and biological plausibility into consideration. On October 22, 2013, Vicarious announced that its model was reliably able to solve modern CAPTCHAs, with character recognition rates of 90% or better when trained on one style. However, Luis von Ahn, a pioneer of early CAPTCHA and founder of reCAPTCHA, expressed skepticism, stating: "It's hard for me to be impressed since I see these every few months." He pointed out that 50 similar claims to that of Vicarious had been made since 2003. Vicarious later published its findings in the peer-reviewed journal Science. Vicarious indicated that its AI was not specifically designed to complete CAPTCHAs, and that its success at the task is a product of its advanced vision system. Because Vicarious's algorithms are based on insights from the human brain, the system is also able to recognize photographs, videos, and other visual data.
|
Vicarious (company) : Artificial intelligence Glossary of artificial intelligence
|
Visual temporal attention : Visual temporal attention is a special case of visual attention that involves directing attention to specific instants in time. Similar to its spatial counterpart, visual spatial attention, these attention modules have been widely implemented in video analytics in computer vision to provide enhanced performance and human-interpretable explanations of deep learning models. Just as visual spatial attention mechanisms allow human and computer vision systems to focus on semantically more substantial regions in space, visual temporal attention modules enable machine learning algorithms to emphasize critical video frames in video analytics tasks, such as human action recognition. In convolutional neural network-based systems, the prioritization introduced by the attention mechanism is regularly implemented as a linear weighting layer with parameters determined by labeled training data.
|
Visual temporal attention : Recent video segmentation algorithms often exploit both spatial and temporal attention mechanisms. Research in human action recognition has accelerated significantly since the introduction of powerful tools such as convolutional neural networks (CNNs). However, effective methods for incorporating temporal information into CNNs are still being actively explored. Motivated by the popular recurrent attention models in natural language processing, the Attention-aware Temporal Weighted CNN (ATW CNN) was proposed for video, embedding a visual attention model into a temporal weighted multi-stream CNN. This attention model is implemented as temporal weighting, and it effectively boosts the recognition performance of video representations. In addition, each stream in the ATW CNN framework is capable of end-to-end training, with both network parameters and temporal weights optimized by stochastic gradient descent (SGD) with back-propagation. Experimental results show that the ATW CNN attention mechanism contributes substantially to the performance gains by focusing on the more discriminative snippets and more relevant video segments.
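The temporal weighting described above can be sketched as a softmax weighting over per-frame scores that pools frame features into a single clip-level representation. Shapes and names here are illustrative; in ATW CNN the weights are learned by SGD rather than hand-set.

```python
import math

# Combine per-frame feature vectors into one clip-level vector,
# weighting each frame by softmax(scores).
def temporal_attention(frame_features, scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]          # attention over time steps
    dim = len(frame_features[0])
    pooled = [
        sum(w * f[d] for w, f in zip(weights, frame_features))
        for d in range(dim)
    ]
    return pooled, weights

# Three frames with 2-D features; the second frame gets the highest
# attention score, so it dominates the pooled representation.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled, weights = temporal_attention(features, [0.0, 2.0, 0.0])
```

In a trained system the scores themselves would be produced by a small learned layer over the frame features, so that discriminative frames receive higher weight automatically.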
|
Visual temporal attention : Seibold VC, Balke J and Rolke B (2023): Temporal attention. Front. Cognit. 2:1168320. doi: 10.3389/fcogn.2023.1168320.
|
Visual temporal attention : Attention Visual spatial attention Action Recognition Video content analysis Convolutional neural network Computer vision == References ==
|