Dataset columns:
- query: string (lengths 1 to 13.4k)
- pos: string (lengths 1 to 61k)
- neg: string (lengths 1 to 63.9k)
- query_lang: string (147 classes)
- __index_level_0__: int64 (0 to 3.11M)
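A minimal sketch of loading and inspecting data with this schema, assuming the dump corresponds to a Hugging Face-style dataset with the columns listed above; the dataset identifier below is a hypothetical placeholder, not the actual name of this collection.

from datasets import load_dataset

# "org/retrieval-triplets" is a hypothetical placeholder for this dataset's identifier.
ds = load_dataset("org/retrieval-triplets", split="train")

# Each row pairs a query with one positive (pos) and one negative (neg) passage,
# plus a language tag (query_lang) and the original row index (__index_level_0__).
for row in ds.select(range(3)):
    print(row["query_lang"], "|", row["query"][:80])
    print("  pos:", row["pos"][:80])
    print("  neg:", row["neg"][:80])

The rows below are shown as flat text in the order query, pos, neg, query_lang, __index_level_0__.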
why was affirmative action necessary
Because some minority students who get into a top school with the help of affirmative action might actually be better served by attending a less elite institution to which they could gain admission with less of a boost or no boost at all.
Affirmative action programs have resulted in doubling or tripling the number of minority applications to colleges or universities, and have made colleges and universities more representative of their surrounding community.
eng_Latn
20,000
why was the EEOC created
Equal Employment Opportunity Commission (EEOC), U.S. agency created in 1964 to end discrimination based on race, color, religion, sex, or national origin in employment and to promote programs to make equal employment opportunity a reality.
On July 2, 1964, the Civil Rights Act was passed. Among its goals was the elimination of discrimination in the workplace through the creation of the Equal Employment Opportunity Commission (EEOC). Title VII of the Civil Rights Act of 1964 became the most famous aspect of the new legislation, prohibiting discrimination based on race, color, national origin, sex, religion, and retaliation.
eng_Latn
20,001
what was the result of the brown v. board of education supreme court decision?
The Result Of Brown V. Board Of Education Of Topeka. On May 17, 1954, the U.S. Supreme Court handed down a unanimous decision, ruling in Brown v. Board of Education of Topeka that racial segregation in public educational facilities was unconstitutional.
The U.S. Supreme Court decision in Brown v. Board of Education (1954) is one of the most pivotal opinions ever rendered by that body. This landmark decision highlights the U.S. Supreme Court’s role in effecting changes in national and social policy.
eng_Latn
20,002
ada through employer definition
Americans with Disabilities Act of 1990. The Americans with Disabilities Act of 1990 (42 U.S.C. § 12101) is a civil rights law that prohibits discrimination based on disability. It affords similar protections against discrimination to Americans with disabilities as the Civil Rights Act of 1964, which made discrimination based on race, religion, sex, national origin, and other characteristics illegal.
Americans With Disabilities Act - ADA. DEFINITION of 'Americans With Disabilities Act - ADA'. Legislation passed in 1990 that prohibits discrimination against people with disabilities. Under this Act, discrimination against a disabled person is illegal in employment, transportation, public accommodations, communications and government activities.
eng_Latn
20,003
civil rights act title iv
Whether or not Title IV of the 1964 Civil Rights Act can be a means for the establishment of equal educational opportunity in the nation's public schools remains academic; presently, it is simply an instrument of the Nixon Administration's evolving policy on desegregation.
Title III of the Civil Rights Act of 1964: Outlawed state and municipal governments from barring access to public facilities based on an individual’s religion, gender, race, or ethnicity. The provisions of the Civil Rights Act include: Public accommodations may not discriminate against or segregate individuals based on race, ethnicity, or gender, public accommodations being any establishments that lease, rent or sell goods and provide services.
eng_Latn
20,004
what amendment played a role in brown v board
After making its way through the District Courts, the Brown case went to the Supreme Court. In 1954, sixty years after Plessy v. Ferguson, the Supreme Court ruled unanimously in Brown v. Board of Education that “separate but equal” was unconstitutional under the Equal Protection Clause of the Fourteenth Amendment.
In the case of Brown v. Board of Education of Topeka (1954) the U.S. Supreme Court held that. a. ethnic minorities have no rights to equal treatment by the government. b. public school segregation of races violates the equal protection clause of the Fourteenth Amendment. c. the national government does not have the power to force any type of action on local school boards.
eng_Latn
20,005
define title ix?
Overview of Title IX of the Education Amendments of 1972. On June 23, 1972, the President signed Title IX of the Education Amendments of 1972, 20 U.S.C. § 1681 et seq., into law. Title IX is a comprehensive federal law that prohibits discrimination on the basis of sex in any federally funded education program or activity. The principal objective of Title IX is to avoid the use of federal money to support sex discrimination in education programs and to provide individual citizens effective protection against those practices.
History of Title IX. I EXercise My Rights is a public service, informational campaign designed to educate the public about Title IX. Title IX is a law passed in 1972 that requires gender equity for boys and girls in every educational program that receives federal funding. Many people have never heard of Title IX. Most people who know about Title IX think it applies only to sports, but athletics is only one of 10 key areas addressed by the law.
eng_Latn
20,006
what is the intention of affirmative action
Affirmative action in the United States. From Wikipedia, the free encyclopedia. Affirmative action in the United States is a set of laws, policies, guidelines, and administrative practices intended to end and correct the effects of a specific form of discrimination.
Affirmative action as a practice was upheld by the Supreme Court's decision in Grutter v. Bollinger in 2003. Affirmative action policies were developed in order to correct decades of discrimination stemming from the Reconstruction Era by granting disadvantaged minorities opportunities. Some policies adopted as affirmative action, such as racial quotas or gender quotas for collegiate admission, have been criticized as a form of reverse discrimination, and such implementation of affirmative action has been ruled unconstitutional by the majority opinion of Gratz v. Bollinger.
eng_Latn
20,007
what is sweatt vs painter
Sweatt v. Painter. Sweatt v. Painter, 339 U.S. 629 (1950), was a U.S. Supreme Court case that successfully challenged the separate but equal doctrine of racial segregation established by the 1896 case Plessy v. Ferguson. The case was influential in the landmark case of Brown v. Board of Education four years later. The case involved a black man, Heman Marion Sweatt, who was refused admission to the School of Law of the University of Texas, whose president was Theophilus Painter, on the grounds that the Texas State Constitution prohibited integrated education.
Great topic! Here's my try; Painterly is about Paint and Painting...something peculiar and special to this media. When the paint is applied with confidence and decision and in such a way that it is not second guessing nor re doing...
eng_Latn
20,008
which equal opportunity act prohibits sex based wage discrimination
A separate law, the Equal Pay Act (EPA), specifically prohibits sex discrimination in wages. Women facing wage discrimination can bring a lawsuit under the EPA, Title VII, or both; the laws impose slightly different requirements on employers and offer different procedures to employees.
The Act prohibits discrimination based on race, color, religion, sex or national origin. Sex includes pregnancy, childbirth or related medical conditions. It makes it illegal for employers to discriminate in relation to hiring, discharging, compensating, or providing the terms, conditions, and privileges of employment. The main body of employment discrimination laws consists of federal and state statutes. The United States Constitution and some state constitutions provide additional protection when the employer is a governmental body or the government has taken significant steps to foster the discriminatory practice of the employer.
eng_Latn
20,009
what does the trafficking victims protection reauthorization act do
The law supplements the Trafficking Victims Protection Reauthorization Act of 2005, which amended the Trafficking Victims Protection Act of 2000. The aim of these laws is to prevent people from becoming victims of human trafficking and to protect women and children who are often the targets of human traffickers.
The High Intensity Drug Trafficking Areas (HIDTA) program, created by Congress with the Anti-Drug Abuse Act of 1988, provides assistance to Federal, state, local, and tribal law enforcement agencies operating in areas determined to be critical drug-trafficking regions of the United States.
eng_Latn
20,010
what year was title vii amended to prohibited sexual discrimination
Title IX of the Civil Rights Act of 1964 should not be confused with Title IX of the Education Amendments Act of 1972, which prohibits sex discrimination in federally funded education programs and activities.
Know Your Rights: Title IX Prohibits Sexual Harassment and Sexual Violence Where You Go to School. Title IX of the Education Amendments of 1972 (“Title IX”), 20 U.S.C. §1681 et seq., is a Federal civil rights law that prohibits discrimination on the basis of sex in education programs and activities. All public and
eng_Latn
20,011
what happened during the equal opportunity act
The U.S. Equal Employment Opportunity Commission (“EEOC”) enforces federal laws prohibiting workplace discrimination. The EEOC was created by the Civil Rights Act of 1964. Today, the EEOC enforces federal anti-discrimination statutes, and provides oversight and coordination of all federal equal opportunity regulations, policies, and practices. The Civil Rights movement of the early 1960s peaked in the spring and summer of 1963.
Equal Employment Opportunity Commission (EEOC) Legislation covered by the EEOC include laws which prohibit discrimination, provide for equal pay, Title VII of the Civil Rights Act of 1964 (Title VII) which prohibits employment discrimination based on race, color, religion, sex, or national origin.
eng_Latn
20,012
what is the aclu political ideology
American Civil Liberties Union. The American Civil Liberties Union (ACLU) is a nonpartisan, non-profit organization whose stated mission is to defend and preserve the individual rights and liberties guaranteed to every person in this country by the Constitution and laws of the United States.. It works through litigation and lobbying.
The ACU Political Action Committee is the political arm of the American Conservative Union. In addition, the ACU Political Action Committee endorsed and made contributions to scores of conservative candidates for governor, senator, and key House candidates.
eng_Latn
20,013
what is the equal opportunity policy
Equal employment opportunity is a government policy that requires that employers do not discriminate against employees and job applicants based upon certain characteristics, such as age, race, color, creed, sex, religion, and disability.
The Equal Opportunity Act (1995) impacts on the selection of people to work in a business (advertising and interviewing processes in particular) and the activities undertaken once in employment. The Equal Employment Opportunity Act of 1972 (Public Law 92-261) instituted the federal Equal Employment Opportunity program, which is designed to ensure fair treatment to all … segments of society without regard to race, religion, color, national origin, or sex. That's very nice EXCEPT that wasn't the name of it.
eng_Latn
20,014
what year did miscegenation become unconstitutional
In November 2000, Alabama became the last state to overturn a law banning interracial marriage. The one-time home of George Wallace and Martin Luther King Jr. had held onto the provision for 33 years after the Supreme Court declared anti-miscegenation laws unconstitutional.
The US Supreme Court declared segregation on city buses unconstitutional on November 13, 1956. The case Browder v. Gayle, (1956) challenged the state of Alabama and city of Montgomery's segregation policy on intrastate bus travel that resulted in the 1955-56 Montgomery bus boycott. The US Supreme Court first declared segregation in public education unconstitutional in 1954, in the consolidated cases heard under the caption Brown v. Board of Education (1954) and its companion case, Bolling v. Sharpe, (1954).
eng_Latn
20,015
which acts prohibit discrimination and retaliation?
The Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973, which prohibit discrimination against qualified persons with disabilities, as well as other federal and state laws pertaining to individuals with disabilities.
Religious Discrimination and Accommodation. Title VII of the Civil Rights Act of 1964 (Title VII) prohibits federal agencies from discriminating against employees or applicants for employment because of their religious beliefs in hiring, firing and other terms and conditions of employment.
eng_Latn
20,016
when was colorado anti discrimination law made
On May 6, 2013, Colorado Governor John Hickenlooper signed into law the Job Protection and Civil Rights Enforcement Act Of 2013 (Act), which amends the Colorado Anti-Discrimination Act (CADA), the state law prohibiting employment discrimination because of disability, race, creed, color, sex, sexual orientation, religion, age, national origin, or ...
27-30, enacted April 9, 1866, was the first United States federal law to define US citizenship and affirm that all citizens are equally protected by the law. It was mainly intended to protect the civil rights of Africans born in or brought to America, in the wake of the American Civil War. This legislation was enacted by Congress in 1865 but vetoed by President Andrew Johnson. In April 1866 Congress again passed the bill. Although Johnson again vetoed it, a two-thirds majority in each house overcame the veto and the bill therefore became law. Section 1981 (the original Civil Rights Act of 1866) was the first major anti-discrimination employment statute. This act prohibited employment discrimination based on race and color. This Act has been interpreted by the Supreme Court to protect all ethnic groups.
eng_Latn
20,017
what is overt lending discrimination
overt discrimination. An obvious way of forbidding a particular type of person from performing an activity or job.
I have been working in IT for 20 years and it is exactly these micro-inequalities that make women leave when they cannot take it any more. The problem is not overt discrimination, it is the continuous struggle of putting up with the little things that makes people give up and leave.
eng_Latn
20,018
what states that require schools to teach cursive
THE MOVEMENT TO HAVE TEACHING CURSIVE RESTORED. States that adopted Common Core aren't precluded from deviating from the standards. But in the world of education, where classroom time is limited and performance stakes are high, optional offerings tend to get sidelined in favor of what's required. That's why at least seven states — California, Idaho, Indiana, Kansas, Massachusetts, North Carolina and Utah — have moved to keep the cursive requirement. Legislation passed in North Carolina and elsewhere couples cursive with memorization of multiplication tables as twin back to basics mandates.
Florida's law, passed in 1994, also requires that its public schools teach women's history, Latino history, and the Holocaust. New Jersey, Illinois and New York have each created a commission to review how public schools in the state are teaching Black history and make recommendations on how to improve the curriculum.
eng_Latn
20,019
is creed a federally protected class
Creed discrimination involves treating an applicant or employee less favorably because of his or her beliefs. Creed is defined as “any statement or system of belief, principles, or opinions.”. Creed is protected at the state and university level. This class is protected by North Carolina General Statute 126-16, which requires that all state departments and agencies give equal opportunity for employment and compensation without regard to creed to all qualified individuals.
A group of people who share such an identified characteristic is collectively known as a protected class. To avoid fair housing violations and costly liability, landlords need to know what a protected class is, as well as understand which protected classes are included under the FHA. For example, rejecting an applicant because he's from South America is illegal because the FHA bans discrimination based on national origin.
eng_Latn
20,020
what does title ix require
Title IX applies to institutions that receive federal financial assistance from the U.S. Department of Education, including state and local educational agencies. These agencies include approximately 16,500 local school districts, 7,000 postsecondary institutions, as well as charter schools, for-profit schools, libraries, and museums.
THE ASSISTANT SECRETARY. Questions and Answers on Title IX and Sexual Violence. Title IX of the Education Amendments of 1972 (“Title IX”) is a federal civil rights law that prohibits discrimination on the basis of sex in federally funded education programs and activities. All public.
eng_Latn
20,021
does being in college exempt people from the draft
Randomization failure: the 1969 draft lottery. During the early part of the Vietnam war, males could be exempt from serving in the military (and being sent to war) by attending college. Eventually this practice was ruled unfair (to people who couldn't afford college), so the college exemption was eliminated.
‘The GAO last month criticized the draft as too unspecific.’ ‘The reasons for these unspecific effects remain unclear.’ ‘The school ‘death threats’, some left for individual pupils named in the letters, and others aimed at unspecific pupils, ended before the culprit could be caught.’
eng_Latn
20,022
what is a schedule a letter of disability
Step 2: Verify the Applicant is Schedule A Eligible. Verify that the applicant is, in fact, Schedule A eligible. Since Schedule A is an affirmative action program for persons with disabilities, applicants must provide proof of their disability. Any federal, state, District of Columbia, or US territory agency that issues or provides disability benefits.
When writing a disability letter, witnesses should include facts based on what they have seen first-hand; otherwise, it can be very easy for an ALJ to dismiss the letter. The witness should not say, for instance, that the claimant's doctor says the applicant shouldn't stand for more than one hour.
eng_Latn
20,023
transgender law
However, laws on the books don’t always translate into actual fair treatment. Another important step for governments to take is to issue guidance or rules about what the law means, such as by stating that transgender people have the right to use sex-specific facilities that match who they are. If you are working to pass a state or local non-discrimination law or policy, NCTE may be able to help.
The state Department of Elementary and Secondary Education released the guidelines on Friday, following passage of a Massachusetts law that took effect in July barring discrimination of transgender students in public schools.
eng_Latn
20,024
benefits of college amnesty policy
Appendix IV. Amnesty Policy. Student health and safety are of primary concern at the College. As such, in cases of significant intoxication as a result of alcohol or other substances, the College encourages individuals to seek medical assistance for themselves or others. Student(s) actively assisting the intoxicated student.
The goal of the City University of New York is to offer a comprehensive benefits package that will meet both the present and future needs of our employees and their families. The types of benefits managed through CUNY’s University Benefits Office include health, welfare, retirement and other programs. Depending upon your role at CUNY, you are eligible for a specific benefit program.
eng_Latn
20,025
What are the best Historically Black Law schools in the U.S.?
This site had a good list. Good luck. http://www.forfutureblacklawstudents.com/find_school.html
hi I know this has nothing to do with yr question sorry to waste yr time... But I just wanted to thank you, you replied to my question I chose you for best answer cause you really inspired me to not give a fuc# bout any1 so thanks if you would like to talk, my email is: [email protected] Take Care thank you again!! God Bless!!
eng_Latn
20,026
what law schools have early decision
Students interested in attending Notre Dame Law School may apply via either Early or Regular Decision. Early Decision is a binding process intended for applicants who have researched their law school options and think NDLS is their top choice. Applying to NDLS via Early Decision allows applicants to express their special interest in attending the Law School.
Early decision is an application that is submitted early, generally by November 1 or November 15 at most colleges. Under this type of application, a student signs a statement saying that if the college admits the student the student agrees that they will attend the college and withdraw any other applications.
eng_Latn
20,027
was obama a senator in illinois
In 1996, Obama was elected to the Illinois State Senate from the south side neighborhood of Hyde Park, in Chicago's 13th District. An element of controversy surrounded the election, due to Obama's legal challenges to the petition signatures of all 4 opponents in the race, resulting in their subsequent disqualification.
The email also claims that “Barack Obama was NOT a Constitutional Law Professor at the University of Chicago.” That’s technically true. As we wrote back in 2008, Obama’s formal title was “senior lecturer,” but the University of Chicago Law School says he “served as a professor” and was “regarded as” a professor.
eng_Latn
20,028
Undecidability in Epistemic Planning
Improving Performance of Multiagent Cooperation Using Epistemic Planning
Tuberculosis deaths are predictable and preventable: Comprehensive assessment and clinical care is the key
eng_Latn
20,029
Optimizing Spatial and Temporal Reuse in Wireless Networks by Decentralized Partially Observable Markov Decision Processes
Policy Iteration for Decentralized Control of Markov Decision Processes
Evidence against a role for platelet-derived molecules in liver regeneration after partial hepatectomy in humans
eng_Latn
20,030
A Logical Measure of Progress for Planning (Technical Report)
Teleo-Reactive Programs for Agent Control
An utter refutation of the ‘Fundamental Theorem of the HapMap’
eng_Latn
20,031
Single-Agent Policy Tree Search With Guarantees
Efficient selectivity and backup operators in Monte-Carlo tree search
Opinion gathering using a multi-agent systems approach to policy selection
eng_Latn
20,032
We view incremental experiential learning in intelligent software agents as progressive agent self-adaptation. When an agent produces an incorrect behavior, then it may reflect on, and thus diagnose and repair, the reasoning and knowledge that produced the incorrect behavior. In particular, we focus on the self-diagnosis and self-repair of an agent's domain knowledge. The core issue that this article addresses is: what kind of metaknowledge may enable the agent to diagnose faults in its domain knowledge? To address this question, we propose a representation that explicitly encodes metaknowledge in the form of Empirical Verification Procedures (EVPs). In the proposed knowledge representation, an EVP may be associated with each concept within the agent's domain knowledge. Each EVP explicitly semantically grounds the associated concept in the agent's perception, and can thus be used as a test to determine the validity of knowledge of that concept during diagnosis. We present the empirical evaluation of a system, Augur, that makes use of EVP metaknowledge to adapt its own domain knowledge in the context of a particular subclass of classification problem called Compositional Classification.
We present a pilot study focused on creating flexible Hierarchical Task Networks that can leverage Reinforcement Learning to repair and adapt incomplete plans in the simulated rich domain of Minecraft. This paper presents an early evaluation of our algorithm using simulation for adaptive agents planning in a dynamic world. Our algorithm uses an hierarchical planner and can theoretically be used for any type of “bot”. The main aim of our study is to create flexible knowledge-based planners for robots, which can leverage exploration and guide learning more efficiently by imparting structure using domain knowledge. Results from simulations indicate that a combined approach using both HTN and RL is more flexible than HTN alone and more efficient than RL alone.
Among the tasks involved in building a Bayesian network, obtaining the required probabilities is generally considered the most daunting. Available data collections are often too small to allow for estimating reliable probabilities. Most domain experts, on the other hand, consider assessing the numbers to be quite demanding. Qualitative probabilistic knowledge, however, is provided more easily by experts. We propose a method for obtaining probabilities, that uses qualitative expert knowledge to constrain the probabilities learned from a small data collection. A dedicated elicitation technique is designed to support the acquisition of the qualitative knowledge required for this purpose. We demonstrate the application of our method by quantifying part of a network in the field of classical swine fever.
eng_Latn
20,033
Dynamic allocation policies for the finite horizon one armed bandit problem
On Sequential Designs for Maximizing the Sum of $n$ Observations
An instrumental variable approach finds no associated harm or benefit from early dialysis initiation in the United States
eng_Latn
20,034
An optimal inspection and replacement policy under incomplete state information
Partially Observable Markov Decision Processes With Reward Information: Basic Ideas and Models
A Universal Optimal Consumption Rate for an Insider
eng_Latn
20,035
In reinforcement learning, Return, which is the weighted accumulated future rewards, and Value, which is the expected return, serve as the objective that guides the learning of the policy. In classic RL, return is defined as the exponentially discounted sum of future rewards. One key insight is that there could be many feasible ways to define the form of the return function (and thus the value), from which the same optimal policy can be derived, yet these different forms might render dramatically different speeds of learning this policy. In this paper, we research how to modify the form of the return function to enhance the learning towards the optimal policy. We propose to use a general mathematical form for return function, and employ meta-learning to learn the optimal return function in an end-to-end manner. We test our methods on a specially designed maze environment and several Atari games, and our experimental results clearly indicate the advantages of automatically learning optimal return functions in reinforcement learning.
From the Publisher: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
20,036
Large-scale cooperation underpins the evolution of ecosystems and the human society, and the collective behaviors by self-organization of multi-agent systems are the key for understanding. As artificial intelligence (AI) prevails in almost all branches of science, it would be of great interest to see what new insights of collective behaviors could be obtained from a multi-agent AI system. Here, we introduce a typical reinforcement learning (RL) algorithm—Q-learning into evolutionary game dynamics, where agents pursue optimal action on the basis of the introspectiveness rather than the outward manner such as the birth–death or imitation processes in the traditional evolutionary game (EG). We investigate the cooperation prevalence numerically for a general \(2\times 2\) game setting. We find that the cooperation prevalence in the multi-agent AI is unexpectedly of equal level as in the traditional EG in most cases. However, in the snowdrift games with RL, we reveal that explosive cooperation appears in the form of periodic oscillation, and we study the impact of the payoff structure on its emergence. Finally, we show that the periodic oscillation can also be observed in some other EGs with the RL algorithm, such as the rock–paper–scissors game. Our results offer a reference point to understand the emergence of cooperation and oscillatory behaviors in nature and society from AI’s perspective.
Given that the assumption of perfect rationality is rarely met in the real world, we explore a graded notion of rationality in socioecological systems of networked actors. We parametrize an actor's rationality via their place in a social network and quantify system rationality via the average Jensen-Shannon divergence between the games' Nash and logit quantal response equilibria. Previous work has argued that scale-free topologies maximize a system's overall rationality in this setup. Here we show that while, for certain games, it is true that increasing degree heterogeneity of complex networks enhances rationality, rationality-optimal configurations are not scale-free. For the Prisoner's Dilemma and Stag Hunt games, we provide analytic arguments complemented by numerical optimization experiments to demonstrate that core-periphery networks composed of a few dominant hub nodes surrounded by a periphery of very low degree nodes give strikingly smaller overall deviations from rationality than scale-free networks. Similarly, for the Battle of the Sexes and the Matching Pennies games, we find that the optimal network structure is also a core-periphery graph but with a smaller difference in the average degrees of the core and the periphery. These results provide insight on the interplay between the topological structure of socioecological systems and their collective cognitive behavior, with potential applications to understanding wealth inequality and the structural features of the network of global corporate control.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
20,037
Formation control of multi-agent systems has been an important task in the fields of automatic control and robotics. The aim of this paper is to develop a deep learning based formation control strategy for the multi-agent systems by using the backpropagation algorithm. Specifically, the deep learning network can be treated as the feedback controller, thus the multi-agent system can use the network output as its input to achieve the formation control. The algorithm has been tested on a multirobot system to verify the effectiveness of the proposed method.
Distributed cooperative control of multi-agent systems has been one of the most active research topics in the fields of automatic control and robotics. This paper provides a survey on recent advances in distributed cooperative control under a sampled-data setting, with special emphasis on the published results since 2011. First, some typical sampling mechanisms related to this topic, such as uniform sampling, nonuniform sampling, random sampling, and event-triggered sampling, are summarized in both asynchronous and synchronous paradigms. Then, based on different coordinated tasks, recent results on distributed sampled-data cooperative control of multi-agent systems are categorized into four classes, i.e., sampled-data leaderless consensus, sampled-data leader-following consensus, sampled-data containment control, and sampled-data formation control. For each class, some explicit research lines are identified according to various sampling mechanisms. In particular, depending on definitions of event triggering conditions, some representative event-triggered sampling mechanisms are sorted out and discussed in detail. Finally, several challenging issues for future research are proposed.
Action learning and multi-rater feedback are today among the most widely used interventions for leadership development. Despite their popularity, the authors believe that both have been poorly deployed. For example, while grounded in real company issues, action-learning formats often fail to provide the multiple learning experiences necessary to develop complex knowledge. Inadequate opportunities to reflect on learning, poor facilitation and a failure to follow up on project outcomes seriously hamper this intervention's potential to develop leadership talent. Similar shortcomings apply to the deployment of multi-rater feedback. For example, when its use is stretched and different purposes, such as performance measurement, are coupled with it, or when its quantitative aspects are emphasised and the qualitative ones neglected, or when it is conceptualised as a single event rather than as an enduring system, the actual capabilities of multi-rater feedback are limited. Both interventions require far more atte...
eng_Latn
20,038
This paper first presents an overall view for dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control, zero-sum 2-player games (H-infinity control) and so on are normally solved for off-line by solving associated matrix equations such as the Riccati equation. However, using that approach, players cannot change their objectives online in real time without calling for a completely new off-line solution for the new strategies. Therefore, in this paper we give a method for learning optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This allows for truly dynamical team decisions where objective functions can change in real time and the system dynamics can be time-varying.
This paper investigates multi-agent cooperative path planning with obstacle avoidance based on game theory and multi-agent reinforcement learning algorithm. It aims to extend the traditional single agent Q-learning algorithm to multi-agent systems by using the cooperative game framework. This framework takes into account the selection of joint actions at joint states for multi-agent cooperative path planning with obstacle avoidance. First, a cooperative game model is presented for agents to achieve cooperative path planning with obstacle avoidance in complicated environment. Second, a multi-agent Q-learning algorithm in continuous state space is proposed for solving Nash equilibrium, where the local minimum problem is well resolved. Finally, a numerical example is conducted to verify the effectiveness of the proposed approach.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
20,039
Options Trading Strategy And Risk Management
Thank you very much for reading options trading strategy and risk management. Maybe you have knowledge that, people have look hundreds times for their chosen novels like this options trading strategy and risk management, but end up in malicious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they juggled with some harmful virus inside their computer. options trading strategy and risk management is available in our book collection an online access to it is set as public so you can download it instantly. Our digital library hosts in multiple locations, allowing you to get the most less latency time to download any of our books like this one. Kindly say, the options trading strategy and risk management is universally compatible with any devices to read.
This paper investigates the use of reinforcement learning for the navigation of an over-actuated marine platform in unknown environments. The proposed approach uses an online least-squared policy iteration scheme for value function approximation in order to estimate optimal policy. We evaluate our approach in a simulation platform and report some initial results concerning its performance on estimating optimal navigation policies to unknown environments under different environmental disturbances. The results are promising.
kor_Hang
20,040
Task Planning in “Block World” with Deep Reinforcement Learning
At the moment, reinforcement learning has advanced significantly with the discovery of new techniques and instruments for training. This paper is devoted to the application of convolutional and recurrent neural networks to the task of planning with reinforcement learning. The aim of the work is to check whether neural networks are fit for this problem. During the experiments in a block environment, the task was to move blocks to obtain the final arrangement, which was the target. A significant part of the problem is connected with determining the reward function and how the results depend on the reward's calculation. The current results show that, without modifying the initial problem into more straightforward ones, neural networks didn't demonstrate a stable learning process. In the paper, a modified reward function with sub-targets and Euclidean reward calculation was used for more precise reward determination. Results have shown that none of the tested architectures were able to achieve the goal.
In this paper, we develop a dynamic programming algorithm for the scenario-tree-based stochastic uncapacitated lot-sizing problem with random lead times. Our algorithm runs in O(N^2) time, where N is the input size of the scenario tree, and improves the recently developed algorithm that runs in O(N^3) time.
eng_Latn
20,041
Multiple IgM autoantibodies in a non-human primate.
Abstract Macroglobulin rheumatoid factors have been detected in the sera of howler monkeys (A. caraya). These IgM antibodies cross-reacted with human and rabbit IgG. Additional IgM auto-antibodies were identified in the same sera together with an abnormal protein component. Maintenance in captivity was associated with a disappearance of both the autoantibodies and the abnormal serum protein. These serologic findings and IgM autoantibodies were likely due to a pigmentary deposit in the liver. This disorder previously detected was considered to be secondary to intense stimulation of the reticuloendothelial system by an exogenous (infectious?) environmental agent.
Due to the need for balancing prioritized corporate users and offering open access to guest users, enterprise small cell networks face unique user association challenges when hybrid access is adopted. In this paper, we model the dynamic user association problem as a finite Markov Decision Process (MDP), and maximize the long-term system throughput by redistributing controllable corporate users among base stations while simultaneously learning the behavior of uncontrollable external guest user dynamics. Depending on the level of knowledge regarding the uncontrollable guest users, we develop both complete MDP- based policies and simplified variations that require much less learning. System simulation results are provided to compare and characterize the performance advantages of the proposed algorithms. Specifically, we observe that the GU-average policy achieves the best balance between overall performance and the learning effort.
eng_Latn
20,042
Temporal Difference Learning of Position Evaluation in the Game of Go
The game of Go has a high branching factor that defeats the tree search approach used in computer chess, and long-range spatiotemporal interactions that make position evaluation extremely difficult. Development of conventional Go programs is hampered by their knowledge-intensive nature. We demonstrate a viable alternative by training networks to evaluate Go positions via temporal difference (TD) learning. Our approach is based on network architectures that reflect the spatial organization of both input and reinforcement signals on the Go board, and training protocols that provide exposure to competent (though unlabelled) play. These techniques yield far better performance than undifferentiated networks trained by selfplay alone. A network with less than 500 weights learned within 3,000 games of 9×9 Go a position evaluation function that enables a primitive one-ply search to defeat a commercial Go program at a low playing level.
AbstractThis article considers the understandings of space and place amongst a group of disaffected students within an institution that had been in a state of flux over a number of years. The artic...
eng_Latn
20,043
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments
Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
The problem of stability for nonlinear neural networks is addressed in this paper. By means of the Lyapunov function of Lurie type, new classes of stability conditions for general neural network models are presented. The stability analysis here is global in the space of neuronal activations. An illustrated example is given.
eng_Latn
20,044
FoRex Trading Using Supervised Machine Learning
Autonomous FOREX Trading Agents
Why I have abandoned robot-assisted transaxillary thyroid surgery
eng_Latn
20,045
Control of an agent in the multi-goal environment with homeostasis-based neural network
Abstract Here we present the model of bio-inspired neuron, and synaptic plasticity, incorporating cellular homeostasis. Network of such neurons is used for multi-goal oriented control task. It was showed that such a model provides adaptive and robust behavior for the controlled agent.
Through the analysis of incomplete information static game model and the dynamic tripartite sequential game model on public budget benefit subjects, the authors find out that there are some deficiencies in the single dimension of the stakeholders. Therefore, we should understand the real effect of the NPC budget supervision dialectically in order to avoid making mistakes. Besides, we should dialectically understand the discretion of government financial departments, and introduce incentive and punishment warning mechanisms to improve budget performance. What's more, implementation conditions of performance budget should be regarded objectively, and the internal market competition mechanism among capital users should be introduced.
eng_Latn
20,046
Evolutionary-Based Heuristic Generators for Checkers and Give-Away Checkers
Reinforcement Learning: An Introduction
Measuring nicotine dependence: A review of the Fagerstrom Tolerance Questionnaire
eng_Latn
20,047
Scaling Reinforcement Learning toward RoboCup Soccer
Reinforcement Learning: An Introduction
Diffusion Independent Semi-Bandit Influence Maximization
eng_Latn
20,048
The Autographa californica nuclear polyhedrosis virus AcNPV induces functional maturation of human monocyte-derived dendritic cells.
The initiation of an adaptive immune response is critically dependent on the activation of dendritic cells (DCs). Therefore, vaccination strategies targeting DCs have to ensure a proper presentation of the immunogen as well as an activation of DCs to accomplish their full maturation. Viral vectors can achieve gene delivery and a subsequent presentation of the expressed immunogen, however, the immunization efficiency may be hampered by an inhibition of DC activation. Here we report that the insect born Autographa californica nuclear polyhedrosis virus (AcNPV), which is already used for genetic immunization, is able to activate human monocyte-derived DCs. This activation induces the production of tumor necrosis factor alpha (TNF-alpha), an up-regulation of the surface molecules CD83, CD80, CD86, HLA-DR and HLA-I and increases the T cell stimulatory capacity of DCs. Thus, AcNPV represents a promising vector for vaccine trials.
Due to the need for balancing prioritized corporate users and offering open access to guest users, enterprise small cell networks face unique user association challenges when hybrid access is adopted. In this paper, we model the dynamic user association problem as a finite Markov Decision Process (MDP), and maximize the long-term system throughput by redistributing controllable corporate users among base stations while simultaneously learning the behavior of uncontrollable external guest user dynamics. Depending on the level of knowledge regarding the uncontrollable guest users, we develop both complete MDP- based policies and simplified variations that require much less learning. System simulation results are provided to compare and characterize the performance advantages of the proposed algorithms. Specifically, we observe that the GU-average policy achieves the best balance between overall performance and the learning effort.
eng_Latn
20,049
Effect of chest physiotherapy on the removal of mucus in patients with cystic fibrosis
We studied the effectiveness of some of the components of a physiotherapy regimen on the removal of mucus from the lungs of 6 subjects with cystic fibrosis. On 5 randomized study days, after inhalation of a 99mTc-human serum albumin aerosol to label primarily the large airways, the removal of lung radioactivity was measured during 40 min of (a) spontaneous cough while at rest (control), (b) postural drainage, (c) postural drainage plus mechanical percussion, (d) combined maneuvers (postural drainage, deep breathing with vibrations, and percussion) administered by a physiotherapist, (e) directed vigorous cough. Measurements continued for an additional 2 h of quiet rest. Compared with the control day, all forms of intervention significantly improved the removal of mucus: cough (p < 0.005), physiotherapy maneuvers (0.005 ⩽ p < 0.01), postural drainage (p < 0.05), and postural drainage plus percussion (p < 0.01). However, there was no significant difference between regimented cough alone and therapist-adminis...
In applying reinforcement learning to continuous space problems, discretization or redefinition of the learning space can be a promising approach. Several methods and algorithms have been introduced to learning agents to respond to this problem. In our previous study, we introduced an FCCM clustering technique into Q-learning (called QL-FCCM) and its transfer learning in the Markov process. Since we could not respond to complicated environments like a non-Markov process, in this study, we propose a method in which an agent updates his Q-table by changing the trade-off ratio, Q-learning and QL-FCCM, based on the damping ratio. We conducted numerical experiments of the single pendulum standing problem and our model resulted in a smooth learning process.
eng_Latn
20,050
Modeling the immune system response: an application to leishmaniasis.
In this paper, we present a mathematical model of the immune response to parasites. The model is a type of predator-prey system in which the parasite serves as the prey and the immune response as the predator. The model idealizes the entire immune response as a single entity although it is comprised of several aspects. Parasite density is captured using logistic growth while the immune response is modeled as a combination of two components, activation by parasite density and an autocatalytic reinforcement process. Analysis of the equilibria of the model demonstrate bifurcations between parasites and immune response arising from the autocatalytic response component. The analysis also points to the steady states associated with disease resolution or persistence in leishmaniasis. Numerical predictions of the model when applied to different cases of Leishmania mexicana are in very close agreement with experimental observations.
This paper investigates the use of reinforcement learning for the navigation of an over-actuated marine platform in unknown environments. The proposed approach uses an online least-squared policy iteration scheme for value function approximation in order to estimate optimal policy. We evaluate our approach in a simulation platform and report some initial results concerning its performance on estimating optimal navigation policies to unknown environments under different environmental disturbances. The results are promising.
eng_Latn
20,051
Analysis of strategy of enterprise operation on liabilities
The traditional view is that only when the beneficial rate of capital in an enterprise is higher than the bank interest rate can the operation on liabilities achieve benefits. In fact, operation on liabilities has various motives, either for short-term interest or long-term interest, but as long as it conforms with the need to develop the enterprise's strategy, operation on liabilities is feasible.
This paper investigates the use of reinforcement learning for the navigation of an over-actuated marine platform in unknown environments. The proposed approach uses an online least-squared policy iteration scheme for value function approximation in order to estimate optimal policy. We evaluate our approach in a simulation platform and report some initial results concerning its performance on estimating optimal navigation policies to unknown environments under different environmental disturbances. The results are promising.
eng_Latn
20,052
A Strategy for Service Delivery Systems
Servitization requires a strategy formulation for the alignment of service expectation, organization structure, and product condition monitoring.
This paper investigates the use of reinforcement learning for the navigation of an over-actuated marine platform in unknown environments. The proposed approach uses an online least-squared policy iteration scheme for value function approximation in order to estimate optimal policy. We evaluate our approach in a simulation platform and report some initial results concerning its performance on estimating optimal navigation policies to unknown environments under different environmental disturbances. The results are promising.
eng_Latn
20,053
Facebook has created an artificial intelligence system that is "getting close" to beating the best human players at the Chinese board game Go, Mark Zuckerberg has revealed.
The social network's founder added that the work was being done close to his desk, signalling the importance he is giving to the task. One expert said the challenge could result in far-reaching benefits. It is the second time Mr Zuckerberg has highlighted work on AI this month. Facebook is far from the only tech firm to have used computers to play Go - a game with trillions of possible moves. Microsoft's research division began developing AI software to tackle the issue in 2004, and ended up releasing an Xbox video game six years later that made use of its techniques. Google's AI chief Demis Hassabis has also indicated that his DeepMind team is working on the game. Go is thought to have first been played more than 2,500 years ago in ancient China. Two people take turns to place black or white stones on to a grid, with the goal being to dominate the board by surrounding the opponent's pieces. Once placed, the stones cannot be moved unless they are surrounded and captured by the other person's pieces. It has been estimated that there are 10 to the power of 700 (10 multiplied by itself 699 times) possible ways a Go game could be played. By contrast, chess - a game at which AIs can already play at grandmaster level - has about 10 to the power of 60 possible scenarios. "Scientists have been trying to teach computers to win at Go for 20 years," wrote Mr Zuckerberg on his Facebook page. "We're getting close, and in the past six months we've built an AI that can make moves in as fast as 0.1 secs and still be as good as previous systems that took years to build. "Our AI combines a search-based approach that models every possible move as the game progresses along with a pattern matching system built by our computer vision team. "The researcher who works on this, Yuandong Tian, sits about 20ft from my desk. I love having our AI team right near me so I can learn from what they're working on." Facebook's Go AI system is codenamed Darkforest, according to a paper submitted in November by Mr Tian to the International Conference on Learning Representations. He wrote that it had achieved a "stable" five dan level in the game, representing an advanced amateur but below the "professional levels". However, he acknowledged the software still had flaws. "Sometimes the bot plays tenuki ("move elsewhere") pointlessly when a tight local battle is needed," Mr Tian wrote. "When the bot is losing, it shows the typical behaviour of MCTS [a machine learning technique known as Monte Carlo Tree Search], that plays bad moves and loses more. We will improve these in the future." Facebook is currently trying to use other AI systems to answer questions and carry out tasks on its Messenger chat platform. Mr Zuckerberg has also made public his desire to build a "simple AI" to power his home and help him at work this year. One independent researcher said the firm's work on Go could have knock-on benefits. "Go is vastly harder than chess using any of the standard techniques for game-playing, and for some time it's been the case that people have regarded that you would need something fundamentally new in order to crack it," explained Dr Sean Holden from the University of Cambridge's computer laboratory. "Playing games like this is essentially a search problem. The AI has to search for a sequence of actions that will get you from the start of the game to a winning position. "And that general search problem is potentially usable in all manner of different AI scenarios. 
"Because, what AI essentially comes down to is that if you have a robot and you want it to achieve a task, you want it to find a sequence of moves from where it is to where you want it to be."
The program - called Vital - will vote on whether to invest in a specific company or not. The firm it will be working for - Deep Knowledge Ventures - focuses on drugs for age-related diseases. It said that Vital would make its recommendations by sifting through large amounts of data. The algorithm looks at a range of data when making decisions - including financial information, clinical trials for particular drugs, intellectual property owned by the firm and previous funding. "On first sight, it looks like a futuristic idea but on reflection it is really a little bit of publicity hype," said Prof Noel Sharkey of the University of Sheffield. "A lot of companies use large data search to access what is happening on the market, then the board or trusted workers can decide on the advice. "With financial markets, algorithms are delegated with decisions. The idea of the algorithm voting is a gimmick. It is not different from the algorithm making a suggestion and the board voting on it." According to Deep Knowledge Ventures, Vital has already approved two investment decisions. The software was developed by UK-based Aging Analytics.
eng_Latn
20,054
Learning Parameterized Skills
Natural Actor-Critic Algorithms
A hybrid fuzzy clustering approach for fertile and unfertile analysis
eng_Latn
20,055
Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling
Dominant Resource Fairness : Fair Allocation of Multiple Resource Types
Neuro-dynamic programming: an overview
eng_Latn
20,056
Enhancing Network Performance in Distributed Cognitive Radio Networks using Single-Agent and Multi-Agent Reinforcement Learning
Cognitive radio: making software radios more personal
Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming
eng_Latn
20,057
Ant colonies for the travelling salesman problem
Ant-Q: A Reinforcement Learning Approach to the Traveling Salesman Problem
An Analogue Approach to the Travelling Salesman Problem Using an Elastic Net Method
eng_Latn
20,058
Scheduling of plug-in electric vehicle battery charging with price prediction
Residential Demand Response Using Reinforcement Learning
From Cybersecurity to Collaborative Resiliency
eng_Latn
20,059
Mathematical modelling of zika virus in Brazil
Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission
Cellular-Connected UAVs over 5G: Deep Reinforcement Learning for Interference Management
eng_Latn
20,060
Routing Strategies for Wireless Sensor Networks
Energy-efficient communication protocol for wireless microsensor networks
Meta-Gradient Reinforcement Learning
eng_Latn
20,061
A hierarchical maze navigation algorithm with Reinforcement Learning and mapping
Temporal Abstraction in Reinforcement Learning
TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play
eng_Latn
20,062
Bandit-based planning and learning in continuous-action Markov decision processes
Issues in Using Function Approximation for Reinforcement Learning
ALVINN: An Autonomous Land Vehicle in a Neural Network
eng_Latn
20,063
A hierarchical maze navigation algorithm with Reinforcement Learning and mapping
Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition
Optimal Consumption and Investment Strategies with Stochastic Interest Rates
eng_Latn
20,064
A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
Preparation, characterization, and potential application of chitosan, chitosan derivatives, and chitosan metal nanoparticles in pharmaceutical drug delivery
eng_Latn
20,065
Integrated Networking, Caching, and Computing for Connected Vehicles: A Deep Reinforcement Learning Approach
Human-level control through deep reinforcement learning
Cellular architecture and key technologies for 5G wireless communication networks
eng_Latn
20,066
Learning the variance of the reward-to-go
Actor-Critic Algorithms
Plasmodium falciparum parasitaemia and clinical malaria among school children living in a high transmission setting in western Kenya
eng_Latn
20,067
Reinforcement Learning-Based Plug-in Electric Vehicle Charging With Forecasted Price
Scheduling of plug-in electric vehicle battery charging with price prediction
Residential Demand Response Using Reinforcement Learning
eng_Latn
20,068
Anytime State-Based Solution Methods for Decision Processes with non-Markovian Rewards
Decision-Theoretic Planning: Structural Assumptions and Computational Leverage
Liquid handling, lidocaine and epinephrine in liposuction. The properly form
eng_Latn
20,069
Planning and Acting in Partially Observable Stochastic Domains
Incremental Self-Improvement For Life-Time Multi-Agent Reinforcement Learning
Designing focused crawler based on improved genetic algorithm
eng_Latn
20,070
Almost sure stability of networked control systems under exponentially bounded bursts of dropouts
Distributed Subgradient Methods for Multi-Agent Optimization
Positive academic emotions moderate the relationship between self-regulation and academic achievement
eng_Latn
20,071
A Comparison of Human and Agent Reinforcement Learning in Partially Observable Domains
Dopamine: generalization and bonuses
Millimeter wave systems for airports and short-range aviation communications: A survey of the current channel models at mmWave frequencies
eng_Latn
20,072
Coordinating multi-agent reinforcement learning with limited communication
Coordinated Reinforcement Learning
Promising 5.0-16.0 GHz CMOS-Based Oscillators with Tuned LC Tank
eng_Latn
20,073
Reinforcement Learning for Electric Power System Decision and Control: Past Considerations and Perspectives
Hierarchical Decision Making In Electricity Grid Management
Complementary and synergistic therapeutic effects of compounds found in Kampo medicine: analysis of daikenchuto
eng_Latn
20,074
Automatic Curriculum Graph Generation for Reinforcement Learning Agents
Artificial intelligence: A modern approach
An application of genetic algorithm in optimizing Jeepney operations along Taft Avenue, Manila
eng_Latn
20,075
Novel Approach to Non-Invasive Blood Glucose Monitoring Based on Transmittance and Refraction of Visible Laser Light
Noninvasive Blood Glucose Measurement
Hierarchical Reinforcement Learning for Multi-agent MOBA Game
eng_Latn
20,076
A Tutorial on Reinforcement Learning Techniques
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
Learning from delayed rewards
eng_Latn
20,077
Reinforcement learning for adaptive network routing
Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach
A method for solving differential equations of fractional order
eng_Latn
20,078
A Case Study in Hybrid Multi-threading and Hierarchical Reinforcement Learning Approach for Cooperative Multi-agent Systems
Coordinated Reinforcement Learning
Design and Experimentation of WPT Charger for Electric City Car
eng_Latn
20,079
Predicting Tactical Solutions to Operational Planning Problems under Imperfect Information
Pointer Networks
Reparative inflammation takes charge of tissue regeneration
eng_Latn
20,080
Reinforcement learning based routing in wireless mesh networks
Reinforcement Learning: An Introduction
Articulation and Clarification of the Dendritic Cell Algorithm
eng_Latn
20,081
Reinforcement Learning for Problems with Hidden State
Learning from delayed rewards
The Value of a New Filler Material in Corrective and Cosmetic Surgery: DermaLive and DermaDeep
eng_Latn
20,082
Adaptive Shooting for Bots in First Person Shooter Games Using Reinforcement Learning
A Neurally Controlled Computer Game Avatar With Humanlike Behavior
Predicting Survival in Pulmonary Arterial Hypertension Insights From the Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension Disease Management (REVEAL)
eng_Latn
20,083
ExIt-OOS: Towards Learning from Planning in Imperfect Information Games
Thinking Fast and Slow with Deep Learning and Tree Search
Novel Antenna Concept for Compact Millimeter-Wave Automotive Radar Sensors
eng_Latn
20,084
Reinforcement Learning of Heuristic EV Fleet Charging in a Day-Ahead Electricity Market
REINFORCEMENT LEARNING: AN INTRODUCTION by Richard S. Sutton and Andrew G. Barto, Adaptive Computation and Machine Learning series, MIT Press (Bradford Book), Cambridge, Mass., 1998, xviii + 322 pp, ISBN 0-262-19398-1, (hardback, £31.95).
Business Process Analysis and Optimization: Beyond Reengineering
eng_Latn
20,085
Piecewise Linear Value Function Approximation for Factored MDPs
Input Generalization in Delayed Reinforcement Learning: An Algorithm and Performance Comparisons
Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming
eng_Latn
20,086
Spectrum Management of Cognitive Radio Using Multi-agent Reinforcement Learning
Learning from delayed rewards
Classification of Skin Lesion by Interference of Segmentation and Convolotion Neural Network
eng_Latn
20,087
A reinforcement learning extension to the Almgren-Chriss framework for optimal trade execution
Recent advances in hierarchical reinforcement learning
The role of teachers in implementing curriculum changes
eng_Latn
20,088
Use of GIS and agent-based modeling to simulate the spread of influenza
Understanding individual human mobility patterns
Meta-Reinforcement Learning of Structured Exploration Strategies
eng_Latn
20,089
Multi-armed bandit algorithms and empirical evaluation
The Sample Complexity of Exploration in the Multi-Armed Bandit Problem
Robust Control for Power Sharing in Microgrids With Low-Inertia Wind and PV Generators
eng_Latn
20,090
Robot team learning enhancement using Human Advice
Planning, learning and coordination in multiagent decision processes
Bilateral testicular self-castration due to cannabis abuse: a case report
eng_Latn
20,091
The Rationality and Irrationality of Financing Green Start-Ups
The market for 'lemons': quality uncertainty and the market mechanism
Experience-based model predictive control using reinforcement learning ∗
eng_Latn
20,092
Deep Reinforcement Learning Using Neurophysiological Signatures of Interest
Deep Reinforcement Learning with Double Q-learning
Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching
eng_Latn
20,093
On-Line Fingerprint Verification
Introduction to Algorithms
A Distributional Perspective on Reinforcement Learning
eng_Latn
20,094
Simulation-Based Optimization of Markov Reward Processes
Learning from delayed rewards
Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming
kor_Hang
20,095
Learning to Search with MCTSnets
Mastering the game of Go without human knowledge
Exogenous carbon monoxide suppresses Escherichia coli vitality and improves survival in an Escherichia coli-induced murine sepsis model
eng_Latn
20,096
Reinforcement Learning in Energy Trading Game Among Smart Microgrids
Dynamic pricing for smart grid with reinforcement learning
Static magnetic field therapy: dosimetry considerations.
eng_Latn
20,097
Random cell association and void probability in poisson-distributed cellular networks
A Tractable Approach to Coverage and Rate in Cellular Networks
Floyd-Warshall Reinforcement Learning: Learning from Past Experiences to Reach New Goals
eng_Latn
20,098
What is ‘multi’ in multi-agent learning?
Multiagent Reinforcement Learning in the Iterated Prisoner's Dilemma
Lifetime prevalence of mental disorders in U.S. adolescents: results from the National Comorbidity Survey Replication-Adolescent Supplement (NCS-A)
eng_Latn
20,099