query (string, 1–13.4k chars) | pos (string, 1–61k chars) | neg (string, 1–63.9k chars) | query_lang (string, 147 classes) | __index_level_0__ (int64, 0–3.11M) |
---|---|---|---|---|
Does the whole brain pick up on any changes in neural activity by means of reading any changes to the EM field caused by a localized activity? | Is it that the whole brain pick up on any changes in neural activity by means of reading any changes to the EM field caused by a localized activity? | Is dark matter as a sea of massive photons which are what waves in wave-particle duality more correct than the notion of WIMPs? | eng_Latn | 10,700 |
Is it actually that the entire brain intercepts changes to the EM field and so simultaneously intercepts any localized neural activities? | Does the whole brain pick up on any changes in neural activity by means of reading any changes to the EM field caused by a localized activity? | Can ELF (extremely low frequency) radio waves travel in universe (even in plasma zones) and farther until reaching the farthest zones in universe? | eng_Latn | 10,701 |
Does the entire brain pick up on any changes in neural activity by means of reading the change to the EM field caused by a localized activity? | Is it actually that the entire brain intercepts changes to the EM field and so simultaneously intercepts any localized neural activities? | How the first eye on a creature evolved and why we are capable to see lights of limited wavelengths (400-700nm) only? | eng_Latn | 10,702 |
Scientists Study Barn Owls To Understand Why People With ADHD Struggle To Focus | Kids with ADHD are easily distracted. Barn owls are not. So a team at Johns Hopkins University in Baltimore is studying these highly focused predatory birds in an effort to understand the brain circuits that control attention. The team's long-term goal is to figure out what goes wrong in the brains of people with attention problems, including attention deficit hyperactivity disorder. "We think we have the beginnings of an answer," says Shreesh Mysore, an assistant professor who oversees the owl lab at Hopkins. The answer, he says, appears to involve an ancient brain area with special cells that tell us what to ignore. Mysore explains his hypothesis from one of the owl rooms in his basement lab. He has a distraught bird perched on his forearm. And as he talks, he tries to soothe the animal. The owl screeches, flaps and digs its talons into the elbow-length leather glove that Mysore wears for protection. He covers the bird's eyes with his free hand and hugs the animal to his chest. The owl, no longer able to focus on the movements of his human visitors, goes quiet. When it comes to paying attention, barn owls have a lot in common with people, Mysore says. "Essentially, a brain decides at any instant: What is the most important piece of information for behavior or survival?" he says. "And that is the piece of information that gets attended to, that drives behavior." For a hungry owl, important information might be the sound of a wood mouse scampering through the grass. For a human parent, it might be the cry of a baby in the next room. In either case, hearing the sound causes a distinct response in the brain. "When we pay attention to something, we're not just focusing on the thing that we want to pay attention to," Mysore says. "We're also ignoring all the other information in the world." "The question is, how," he says. "How does the brain actually help you ignore stuff that's not important for you?" Mysore believes answering that question could help people whose brains are vulnerable to distractions. That includes not only people with ADHD, he says, but also many with autism, schizophrenia and even Parkinson's disease. "Pretty much name a psychiatric disorder, and there is some kind of attentional deficit associated with it," Mysore says. The problem is that scientists still don't know much about the brain's system for suppressing distractions. There's no simple way to study it in a human brain, Mysore says, but owl brains offer a good substitute. The birds have a predator's ability to focus, as well as keen eyesight and hearing. They also have a brain organized in a way that's easy to study. Because owls have eyes that are fixed in their sockets, the birds must swivel their head to look around. That makes it straightforward for the researchers to tell what they're paying attention to. So Mysore's lab is doing experiments in which an owl must decide whether to focus on something it hears or something it sees. For example, an owl might be listening to bursts of noise coming through special earphones while a computer monitor shows an object approaching quickly. That sets up a competition between these stimuli in the midbrain, an ancient part of the brainstem that can be found in animals ranging from reptiles to people. "And when we're presenting these stimuli, we're measuring activity in key areas in the midbrain to try and figure out how that stimulus competition is actually being implemented or carried out by neurons in the brain," he says. Several years ago, Mysore and Eric Knudsen, a professor at Stanford, identified a system in the owl midbrain that appears to control which stimulus to ignore. Now, Mysore's lab is trying to understand exactly how that system works. "One of the coolest things has been the identification of a particular group of neurons in the midbrain that we think are the ones controlling distractor suppression," he says. In other words, these seem to be the precise neurons that tell a brain when to start ignoring sights and sounds that aren't important at that moment. That could be critical to understanding why people with attention disorders have so much trouble ignoring distractions, Mysore says. The lab's next challenge is to show that mice, and people, also have these special neurons. If they do, Mysore says, it could provide a new target for treatments aimed at a wide range of disorders that affect attention. | By @brentbetit: Elegy for Brian. Your soul bulleting into the sky. The rifle's sharp report. A blue heron I saw flying at night outlined against a quicksilver moon. MICHEL MARTIN, host: And next, Muses and Metaphor. (Soundbite of music) MARTIN: All this month we've been hearing your poetic tweets on this program. It's part of our celebration of National Poetry Month. These are poems of no more than 140 characters sent through Twitter. They've been coming in from everywhere - from Atlanta, Georgia, from Dallas, Texas, from Palo Alto, California. Today, a tweet from Brent Betit, a fifth generation Vermonter who still lives in the house he was born in within the small southern Vermont town of Whitingham. Brent Betit is a founder of Landmark College. That's designed for students with learning disabilities and there he serves as executive vice president and provost. Now, remember, these are short - only 140 characters each. So listen up. Mr. BRENT BETIT (Founder, Landmark College): This is Brent Betit and this is my tweet. Elegy for Brian. (Reading) Your soul bulleting into the sky. The rifle's sharp report. A blue heron I saw flying at night outlined against a quicksilver moon. MARTIN: Now, remember, these are tweets and they are short, so we're going to play it again. Mr. BETIT: (Reading) Your soul bulleting into the sky. The rifle's sharp report. A blue heron I saw flying at night outlined against a quicksilver moon. MARTIN: That's a poetic tweet by Brent Betit. If you'd like to listen to some of our previous tweets, you can do so by going to the TELL ME MORE website. Go to NPR.org and click on the Programs menu to find TELL ME MORE. | eng_Latn | 10,703 |
A Musical Brain May Help Us Understand Language And Appreciate Tchaikovsky | What sounds like music to us may just be noise to a macaque monkey. That's because a monkey's brain appears to lack critical circuits that are highly sensitive to a sound's pitch, a team reported Monday in the journal Nature Neuroscience. The finding suggests that humans may have developed brain areas that are sensitive to pitch and tone in order to process the sounds associated with speech and music. "The macaque monkey doesn't have the hardware," says Bevil Conway, an investigator at the National Institutes of Health. "The question in my mind is, what are the monkeys hearing when they listen to Tchaikovsky's Fifth Symphony?" The study began with a bet between Conway and Sam Norman-Haignere, who was a graduate student at the time. Norman-Haignere, who is now a postdoctoral researcher at Columbia University, was part of a team that found evidence that the human brain responds to a sound's pitch. "I was like, well if you see that and it's a robust finding you see in humans, we'll see it in monkeys," Conway says. But Norman-Haignere thought monkey brains might be different. "Honestly, I wasn't sure," Norman-Haignere says. "I mean that's usually a sign of a good experiment, you know, when you don't know what the outcome is." So the two scientists and several colleagues used a special type of MRI to monitor the brains of six people and five macaque monkeys as they listened to a range of sounds through headphones. Some of the sounds were more like music, where changes in pitch are obvious. Other sounds were more like noise. And Conway says it didn't take long to realize he'd lost his bet. "In humans you see this beautiful organization, pitch bias, and it's clear as day," Conway says. In monkeys, he says, "we see nothing." That surprised Conway because his own research had shown that the two species are nearly identical when it comes to processing visual information. "When I look at something, I'm pretty sure that the monkey is seeing the same thing that I'm seeing," he says. "But here in the auditory domain it seems fundamentally different." The study didn't try to explain why sounds would be processed differently in a human brain. But one possibility involves our exposure to speech and music. "Both speech and music are highly complex structured sounds," Norman-Haignere says, "and it's totally plausible that the brain has developed regions that are highly tuned to those structures." That tuning could be the result of "something in our genetic code that causes those regions to develop the way they are and to be located where they are," Norman-Haignere says. Or, he says, it could be that these brain regions develop as children listen to music and speech. Regardless, subtle changes in pitch and tone seem to be critical when people want to convey emotion," Conway says. "You can know whether or not I'm angry or sad or questioning or confused, and you can get almost all of that meaning just from the tone," he says. | <em>Day to Day</em> music critic Christian Bordal examines the recent surge of patriotic themes in country music, and how it reflects the cultural divisions in the United States. | eng_Latn | 10,704 |
Individual members of enveloping structures are known by what terms? | The flower may consist only of these parts, as in willow, where each flower comprises only a few stamens or two carpels. Usually, other structures are present and serve to protect the sporophylls and to form an envelope attractive to pollinators. The individual members of these surrounding structures are known as sepals and petals (or tepals in flowers such as Magnolia where sepals and petals are not distinguishable from each other). The outer series (calyx of sepals) is usually green and leaf-like, and functions to protect the rest of the flower, especially the bud. The inner series (corolla of petals) is, in general, white or brightly colored, and is more delicate in structure. It functions to attract insect or bird pollinators. Attraction is effected by color, scent, and nectar, which may be secreted in some part of the flower. The characteristics that attract pollinators account for the popularity of flowers and flowering plants among humans. | The model also shows all the memory stores as being a single unit whereas research into this shows differently. For example, short-term memory can be broken up into different units such as visual information and acoustic information. In a study by Zlonoga and Gerber (1986), patient 'KF' demonstrated certain deviations from the Atkinson–Shiffrin model. Patient KF was brain damaged, displaying difficulties regarding short-term memory. Recognition of sounds such as spoken numbers, letters, words and easily identifiable noises (such as doorbells and cats meowing) were all impacted. Interestingly, visual short-term memory was unaffected, suggesting a dichotomy between visual and audial memory. | eng_Latn | 10,705 |
Who allegedly hit a home run to the Center? | On October 1, 1932, in game three of the World Series between the Cubs and the New York Yankees, Babe Ruth allegedly stepped to the plate, pointed his finger to Wrigley Field's center field bleachers and hit a long home run to center. There is speculation as to whether the "facts" surrounding the story are true or not, but nevertheless Ruth did help the Yankees secure a World Series win that year and the home run accounted for his 15th and last home run in the post season before he retired in 1935. | Temporal theories offer an alternative that appeals to the temporal structure of action potentials, mostly the phase-locking and mode-locking of action potentials to frequencies in a stimulus. The precise way this temporal structure helps code for pitch at higher levels is still debated, but the processing seems to be based on an autocorrelation of action potentials in the auditory nerve. However, it has long been noted that a neural mechanism that may accomplish a delay—a necessary operation of a true autocorrelation—has not been found. At least one model shows that a temporal delay is unnecessary to produce an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters; however, earlier work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding pitch percept, and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch. To be a more complete model, autocorrelation must therefore apply to signals that represent the output of the cochlea, as via auditory-nerve interspike-interval histograms. Some theories of pitch perception hold that pitch has inherent octave ambiguities, and therefore is best decomposed into a pitch chroma, a periodic value around the octave, like the note names in western music—and a pitch height, which may be ambiguous, that indicates the octave the pitch is in. | eng_Latn | 10,706 |
What are some examples of units that short-term memory can be categorized in to? | The model also shows all the memory stores as being a single unit whereas research into this shows differently. For example, short-term memory can be broken up into different units such as visual information and acoustic information. In a study by Zlonoga and Gerber (1986), patient 'KF' demonstrated certain deviations from the Atkinson–Shiffrin model. Patient KF was brain damaged, displaying difficulties regarding short-term memory. Recognition of sounds such as spoken numbers, letters, words and easily identifiable noises (such as doorbells and cats meowing) were all impacted. Interestingly, visual short-term memory was unaffected, suggesting a dichotomy between visual and audial memory. | Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals, while depressants can have the opposite effect. The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this. Such chemicals will either excite or inhibit the firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speed up time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slow down time). | eng_Latn | 10,707 |
What is Setwin otherwise known as? | The most commonly used forms of medium distance transport in Hyderabad include government owned services such as light railways and buses, as well as privately operated taxis and auto rickshaws. Bus services operate from the Mahatma Gandhi Bus Station in the city centre and carry over 130 million passengers daily across the entire network.:76 Hyderabad's light rail transportation system, the Multi-Modal Transport System (MMTS), is a three line suburban rail service used by over 160,000 passengers daily. Complementing these government services are minibus routes operated by Setwin (Society for Employment Promotion & Training in Twin Cities). Intercity rail services also operate from Hyderabad; the main, and largest, station is Secunderabad Railway Station, which serves as Indian Railways' South Central Railway zone headquarters and a hub for both buses and MMTS light rail services connecting Secunderabad and Hyderabad. Other major railway stations in Hyderabad are Hyderabad Deccan Station, Kachiguda Railway Station, Begumpet Railway Station, Malkajgiri Railway Station and Lingampally Railway Station. The Hyderabad Metro, a new rapid transit system, is to be added to the existing public transport infrastructure and is scheduled to operate three lines by 2015. | The model also shows all the memory stores as being a single unit whereas research into this shows differently. For example, short-term memory can be broken up into different units such as visual information and acoustic information. In a study by Zlonoga and Gerber (1986), patient 'KF' demonstrated certain deviations from the Atkinson–Shiffrin model. Patient KF was brain damaged, displaying difficulties regarding short-term memory. Recognition of sounds such as spoken numbers, letters, words and easily identifiable noises (such as doorbells and cats meowing) were all impacted. Interestingly, visual short-term memory was unaffected, suggesting a dichotomy between visual and audial memory. | eng_Latn | 10,708 |
What type of brain waves are seen in mammals during sleep? | As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology. | Bird migration routes have been studied by a variety of techniques including the oldest, marking. Swans have been marked with a nick on the beak since about 1560 in England. Scientific ringing was pioneered by Hans Christian Cornelius Mortensen in 1899. Other techniques include radar and satellite tracking. | eng_Latn | 10,709 |
What changes can be linked to learning and memory? | Brain areas involved in the neuroanatomy of memory such as the hippocampus, the amygdala, the striatum, or the mammillary bodies are thought to be involved in specific types of memory. For example, the hippocampus is believed to be involved in spatial learning and declarative learning, while the amygdala is thought to be involved in emotional memory. Damage to certain areas in patients and animal models and subsequent memory deficits is a primary source of information. However, rather than implicating a specific area, it could be that damage to adjacent areas, or to a pathway traveling through the area is actually responsible for the observed deficit. Further, it is not sufficient to describe memory, and its counterpart, learning, as solely dependent on specific brain regions. Learning and memory are attributed to changes in neuronal synapses, thought to be mediated by long-term potentiation and long-term depression. | During the 1990s, several research papers and popular books wrote on what came to be called the "Mozart effect": an observed temporary, small elevation of scores on certain tests as a result of listening to Mozart's works. The approach has been popularized in a book by Don Campbell, and is based on an experiment published in Nature suggesting that listening to Mozart temporarily boosted students' IQ by 8 to 9 points. This popularized version of the theory was expressed succinctly by the New York Times music columnist Alex Ross: "researchers... have determined that listening to Mozart actually makes you smarter." Promoters marketed CDs claimed to induce the effect. Florida passed a law requiring toddlers in state-run schools to listen to classical music every day, and in 1998 the governor of Georgia budgeted $105,000 per year to provide every child born in Georgia with a tape or CD of classical music. One of the co-authors of the original studies of the Mozart effect commented "I don't think it can hurt. I'm all for exposing children to wonderful cultural experiences. But I do think the money could be better spent on music education programs." | eng_Latn | 10,710 |
What helped cause the most life like sound? | The lateral cut NAB curve was remarkably similar to the NBC Orthacoustic curve that evolved from practices within the National Broadcasting Company since the mid-1930s. Empirically, and not by any formula, it was learned that the bass end of the audio spectrum below 100 Hz could be boosted somewhat to override system hum and turntable rumble noises. Likewise at the treble end beginning at 1,000 Hz, if audio frequencies were boosted by 16 dB at 10,000 Hz the delicate sibilant sounds of speech and high overtones of musical instruments could survive the noise level of cellulose acetate, lacquer/aluminum, and vinyl disc media. When the record was played back using a complementary inverse curve, signal-to-noise ratio was improved and the programming sounded more lifelike. | An example of this theory in action would be as follows: An emotion-evoking stimulus (snake) triggers a pattern of physiological response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This theory is supported by experiments in which by manipulating the bodily state induces a desired emotional state. Some people may believe that emotions give rise to emotion-specific actions: e.g. "I'm crying because I'm sad," or "I ran away because I was scared." The issue with the James–Lange theory is that of causation (bodily states causing emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued and is still quite prevalent today in biofeedback studies and embodiment theory). | eng_Latn | 10,711 |
What vocal technology did Kanye pick up for his next set of artistic endeavors? | West's life took a different direction when his mother, Donda West, died of complications from cosmetic surgery involving abdominoplasty and breast reduction in November 2007. Months later, West and fiancée Alexis Phifer ended their engagement and their long-term intermittent relationship, which had begun in 2002. The events profoundly affected West, who set off for his 2008 Glow in the Dark Tour shortly thereafter. Purportedly because his emotions could not be conveyed through rapping, West decided to sing using the voice audio processor Auto-Tune, which would become a central part of his next effort. West had previously experimented with the technology on his debut album The College Dropout for the background vocals of "Jesus Walks" and "Never Let Me Down." Recorded mostly in Honolulu, Hawaii in three weeks, West announced his fourth album, 808s & Heartbreak, at the 2008 MTV Video Music Awards, where he performed its lead single, "Love Lockdown". Music audiences were taken aback by the uncharacteristic production style and the presence of Auto-Tune, which typified the pre-release response to the record. | Temporal theories offer an alternative that appeals to the temporal structure of action potentials, mostly the phase-locking and mode-locking of action potentials to frequencies in a stimulus. The precise way this temporal structure helps code for pitch at higher levels is still debated, but the processing seems to be based on an autocorrelation of action potentials in the auditory nerve. However, it has long been noted that a neural mechanism that may accomplish a delay—a necessary operation of a true autocorrelation—has not been found. At least one model shows that a temporal delay is unnecessary to produce an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters; however, earlier work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding pitch percept, and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch. To be a more complete model, autocorrelation must therefore apply to signals that represent the output of the cochlea, as via auditory-nerve interspike-interval histograms. Some theories of pitch perception hold that pitch has inherent octave ambiguities, and therefore is best decomposed into a pitch chroma, a periodic value around the octave, like the note names in western music—and a pitch height, which may be ambiguous, that indicates the octave the pitch is in. | eng_Latn | 10,712 |
What are the names of the groups of FETs | FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode. Also, both devices operate in the depletion mode, they both have a high input impedance, and they both conduct current under the control of an input voltage. | The model also shows all the memory stores as being a single unit whereas research into this shows differently. For example, short-term memory can be broken up into different units such as visual information and acoustic information. In a study by Zlonoga and Gerber (1986), patient 'KF' demonstrated certain deviations from the Atkinson–Shiffrin model. Patient KF was brain damaged, displaying difficulties regarding short-term memory. Recognition of sounds such as spoken numbers, letters, words and easily identifiable noises (such as doorbells and cats meowing) were all impacted. Interestingly, visual short-term memory was unaffected, suggesting a dichotomy between visual and audial memory. | eng_Latn | 10,713 |
Im realitivly new to physics with tons of psuedo questions. Just wondering if you could give me any info on :\n1.Do Brains have any effect with/on/to the electro magnectic and vice versa? | The elcrtomagnetic spectrum is used as a tool to determine what frequencies are high or lower than others. \nThe anserw to your question I guess could be yes, because the brain does send signals in waves, so it is on the spectrum somewhere. | Munna -circuit share more of a hero-sidekick kind of relation if u really see cos Munna is a bhai & Circuit is in his gang. Its a different thing tht they are best of friends & Circuit can fulfil all of Munna' s demands.Theyre very colorful characters so we find them more appealing.But if u see Jai-Veeru they re common ppl, we totally relate to them. Veeru is the bumbling 'Romeo' & Jai is the reticent but sarcastic brooder. Inspite of their differences theyre best of friends & one dies 4the other. For me Dosti ho tho Jai-Veeru jaisi | eng_Latn | 10,714 |
Top-down modulation: bridging selective attention and working memory | Control of goal-directed and stimulus-driven attention in the brain | Attention and the detection of signals. | eng_Latn | 10,715 |
Pain sensitivity and tactile spatial acuity are altered in healthy musicians as in chronic pain patients | Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise | Magnetoreception in animals | eng_Latn | 10,716 |
(n, k, p)-Gray code for image systems | Digital Image Processing | being together in time : musical experience and the mirror neuron . | eng_Latn | 10,717 |
Music acquisition: effects of enculturation and formal training on development | Specific long-term memory traces in primary auditory cortex | Capacitive angular-position sensor with electrically floating conductive rotor and measurement redundancy | eng_Latn | 10,718 |
Reading and reading disturbance | meta - analysis of the functional neuroanatomy of single - word reading : method and validation . | Hearing and saying The functional neuro-anatomy of auditory word processing | eng_Latn | 10,719 |
Examining neural plasticity and cognitive benefit through the unique lens of musical training | brain structures differ between musicians and non - musician . | Using Word Familiarities and Word Associations to Measure Corpus Representativeness | eng_Latn | 10,720 |
Being in the zone: Flow state and the underlying neural dynamics in video game playing | Event-related EEG/MEG synchronization and desynchronization: basic principles | Singular Spectrum Analysis for Time Series | eng_Latn | 10,721 |
Reversible sketches: enabling monitoring and analysis over high-speed data streams | Approximate Frequency Counts over Data Streams | Subdivisions of auditory cortex and processing streams in primates. | eng_Latn | 10,722 |
Motor torque based vehicle stability control for four-wheel-drive electric vehicle | On the vehicle stability control for electric vehicle based on control allocation | An Oscillatory Hierarchy Controlling Neuronal Excitability and Stimulus Processing in the Auditory Cortex | eng_Latn | 10,723 |
Symbols Among the Neurons: Details of a Connectionist Inference Architecture | RECOVERING INTRINSIC SCENE CHARACTERISTICS FROM IMAGES | Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis | eng_Latn | 10,724 |
Dissociation of Neural Representation of Intensity and Affective Valuation in Human Gustation | intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion . | Sounds Wilde. Phonetically Extended Embeddings for Author-Stylized Poetry Generation | eng_Latn | 10,725 |
A Context Encoder For Audio Inpainting | A Simple Weight Decay Can Improve Generalization | Optimal Brain Damage | eng_Latn | 10,726 |
High frequency neurons determine effective connectivity in neuronal networks | Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding | Binaural beat technology in humans: a pilot study to assess psychologic and physiologic effects. | eng_Latn | 10,727 |
The Neuroscience Literacy of Trainee Teachers | neuroplasticity : changes in grey matter induced by training . | Fast Component-Based QR Code Detection in Arbitrarily Acquired Images | eng_Latn | 10,728 |
Identifying salient sounds using dual-task experiments | Modeling the role of salience in the allocation of overt visual attention | A Feature-Integration Theory of Attention | eng_Latn | 10,729 |
Granger Causality: Basic Theory and Application to Neuroscience | Frequency decomposition of conditional Granger causality and application to multivariate neural field potential data | Penetration of Titanium Dioxide Microparticles in a Sunscreen Formulation into the Horny Layer and the Follicular Orifice | eng_Latn | 10,730 |
Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms | Adaptation to tempo changes in sensorimotor synchronization: effects of intention, attention, and awareness. | a goal - oriented neural conversation model . | eng_Latn | 10,731 |
The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology | Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions | human brain language areas identified by functional magnetic resonance imaging . | eng_Latn | 10,732 |
Neural correlates of flow using auditory evoked potential suppression | nonparametric permutation tests for functional neuroimaging : a primer with examples . | analysis of fmri time - series revisited — again . | eng_Latn | 10,733 |
Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. | Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity | Rate Coding Versus Temporal Order Coding: What the Retinal Ganglion Cells Tell the Visual Cortex | eng_Latn | 10,734 |
Deep Reinforcement Learning that Matters | Empirical Evaluation of Rectified Activations in Convolutional Network | Children with ASD show links between aberrant sound processing, social symptoms, and atypical auditory interhemispheric and thalamocortical functional connectivity | eng_Latn | 10,735 |
The Generality of Working Memory Capacity: A Latent-Variable Approach to Verbal and Visuospatial Memory Span and Reasoning. | The episodic buffer : a new component of working memory ? | A Broadband Planar Monopulse Antenna Array of C-Band | eng_Latn | 10,736 |
Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit | Competitive mechanisms subserve attention in macaque areas V2 and V4 | comparison of parametric representations for monosyllabic word recognition in continuously spoken se . | eng_Latn | 10,737 |
hippocampal volume predicts fluid intelligence in musically trained people . | Transfer of Training between Music and Speech: Common Processing, Attention, and Memory | brain structures differ between musicians and non - musician . | eng_Latn | 10,738 |
Mu suppression as an index of sensorimotor contributions to speech processing: Evidence from continuous EEG signals | Standardized low-resolution brain electromagnetic tomography (sLORETA) : technical details | Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition | eng_Latn | 10,739 |
Effect of Dual-tasking on Visual and Auditory Simple Reaction Times. | Comparison between Auditory and Visual Simple Reaction Times | Computer Communication Within Industrial Distributed Environment—a Survey | eng_Latn | 10,740 |
Effect of Dual-tasking on Visual and Auditory Simple Reaction Times. | Comparison between Auditory and Visual Simple Reaction Times | Sprint starts and the minimum auditory reaction time | eng_Latn | 10,741 |
Phenomenology of mutual interference of FMCW and PMCW automotive radars | PMCW waveform and MIMO technique for a 79 GHz CMOS automotive radar | Development of coherent neuronal activity patterns in mammalian cortical networks: Common principles and local hetereogeneity | eng_Latn | 10,742 |
Language and other complex behaviors: Unifying characteristics, computational models, neural mechanisms | bridging language with the rest of cognition : computational , algorithmic and neurobiological issues and methods . | Cerebral functional connectivity periodically (de)synchronizes with anatomical constraints | eng_Latn | 10,743 |
Words in Context: The Effects of Length, Frequency, and Predictability on Brain Responses During Natural Reading | SWIFT: A Dynamical Model of Saccade Generation During Reading. | adaptation in natural and artificial systems . | eng_Latn | 10,744 |
The architecture of cognitive control in the human prefrontal cortex | An Information Theoretic Approach To Neural Computing | Tracing the Dynamic Changes in Perceived Tonal Organization in a Spatial Representation of Musical Keys | eng_Latn | 10,745 |
Cortical midline structures and the self | Mind Reading : Neural Mechanisms of Theory of Mind and Self-Perspective | Deep D-bar: Real time Electrical Impedance Tomography Imaging with Deep Neural Networks | eng_Latn | 10,746 |
Aesthetic package design: A behavioral, neural, and psychological investigation | Brain correlates of aesthetic judgment of beauty | Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR | eng_Latn | 10,747 |
Order patterns recurrence plots in the analysis of ERP data | Towards a neural basis of auditory sentence processing | Experimental switching frequency limits of 15 kV SiC N-IGBT module | eng_Latn | 10,748 |
Individual Differences in Executive Functions Are Almost Entirely Genetic in Origin | Computational perspectives on dopamine function in prefrontal cortex | CAN(Controller Area Network) Bus Communication System Based on Matlab/Simulink | eng_Latn | 10,749 |
Common frequency pattern for music preference identification using frontal EEG | Frontal EEG asymmetry as a moderator and mediator of emotion | New oxidative decomposition mechanism of estradiol through the structural characterization of a minute impurity and its degradants. | eng_Latn | 10,750 |
Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. | Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity | The CXXC motif is more than a redox rheostat. | eng_Latn | 10,751 |
A positive level shifter for high speed symmetric switching in flash memories | A new level-up shifter for high speed and wide range interface in ultra deep sub-micron | Measuring verbal and non-verbal communication in aphasia: Reliability, validity, and sensitivity to change of the scenario test | eng_Latn | 10,752 |
The pallial basal ganglia pathway modulates the behaviorally driven gene expression of the motor pathway. | Auditory feedback in learning and maintenance of vocal behaviour | Jupiter: a toolkit for interactive large model visualization | eng_Latn | 10,753 |
Musical Training as a Framework for Brain Plasticity: Behavior, Function, and Structure | The cortical organization of speech processing | High-gain differential CMOS transimpedance amplifier with on-chip buried double junction photodiode | eng_Latn | 10,754 |
Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG | Mechanisms Underlying Selective Neuronal Tracking of Attended Speech at a “Cocktail Party” | The cortical organization of speech processing | eng_Latn | 10,755 |
One Century of Brain Mapping Using Brodmann Areas* | The Enigmatic temporal pole : a review of findings on social and emotional processing | A FAST AND LOW SETTLING ERROR CONTINUOUS-TIME COMMON-MODE FEEDBACK CIRCUIT BASED ON DIFFERENTIAL DIFFERENCE AMPLIFIER | eng_Latn | 10,756 |
Emotions, Arousal, and Frontal Alpha Rhythm Asymmetry During Beethoven’s 5th Symphony | Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions | INTERNAL HEAT TRANSFER AUGMENTATION IN A CHANNEL USING AN ALTERNATE SET OF POROUS CAVITY-BLOCK OBSTACLES | eng_Latn | 10,757 |
ERP Features and EEG Dynamics: An ICA Perspective | Spike Timing-Dependent Plasticity of Neural Circuits | Noise, neural codes and cortical organization | kor_Hang | 10,758 |
Joint tracking and segmentation of multiple targets | Discrete-continuous optimization for multi-target tracking | Auditory brainstem response morphology and analysis in very preterm neonatal intensive care unit infants | eng_Latn | 10,759 |
Isolating Sources of Disentanglement in VAEs | Information Dropout: Learning Optimal Representations Through Noisy Computation | A Model of Electrically Stimulated Auditory Nerve Fiber Responses with Peripheral and Central Sites of Spike Generation | eng_Latn | 10,760 |
Hallam: The power of music: Its impact on the intellectual, social and personal development of children and young people | Effects of Music Training on the Child's Brain and Cognitive Development | Numerical calculation of symmetric capacity of Rayleigh fading channel with BPSK/QPSK | eng_Latn
Neglected Time: Impaired Temporal Perception of Multisecond Intervals in Unilateral Neglect | Directed attention and perception of temporal order | Feature selection using genetic algorithms for premature ventricular contraction classification | eng_Latn | 10,762 |
Supporting Wicked Problems with Procedural Decision Support Systems | A Behavioral Model of Rational Choice | Heschl gyrus and its included primary auditory cortex: Structural MRI studies in healthy and diseased subjects | eng_Latn | 10,763 |
Audio Matters in Visual Attention | Cognitive determinants of fixation location during picture viewing. | Python in neuroscience | eng_Latn | 10,764 |
Hearing Silences: Human Auditory Processing Relies on Preactivation of Sound-Specific Brain Activity Patterns | A fully automated correction method of EOG artifacts in EEG recordings | noncalcemic actions of vitamin d receptor ligands . | eng_Latn | 10,765 |
Executive Functions Predict the Success of Top-Soccer Players | Individual Differences in Executive Functions Are Almost Entirely Genetic in Origin | Adaptive impulse response modeling for interactive sound propagation | eng_Latn | 10,766 |
Neural activity in the human brain relating to uncertainty and arousal during anticipation | Statistical parametric maps in functional imaging: a general linear approach | CARLOC: Precise Positioning of Automobiles | eng_Latn | 10,767 |
Real-time decompression and visualization of animated volume data | Orthonormal bases of compactly supported wavelets | The cognitive neuroscience of creativity | eng_Latn | 10,768 |
Does the entire brain pick up on any changes in neural activity by means of reading any change to the EM field caused by a localized activity? | Is it that the whole brain pick up on any changes in neural activity by means of reading any changes to the EM field caused by a localized activity? | How our brain works when We think of random stuff? | eng_Latn | 10,769 |
Does the entire brain pick up on any changes in neural activity by means of reading any change to the EM field caused by a localized activity? | Does the entire brain pick up on any changes in neural activity by means of reading the change to the EM field caused by a localized activity? | Does a brain use more energy when it is concentrating on something? | eng_Latn | 10,770 |
Mu suppression as an index of sensorimotor contributions to speech processing: Evidence from continuous EEG signals | The cortical organization of speech processing | understanding the solid - state forms of fenofibrate - - a spectroscopic and computational study . | eng_Latn | 10,771 |
DEVELOPMENT OF A MECHATRONIC BLIND STICK | An experimental system for auditory image representations | Eager decision tree | yue_Hant | 10,772 |
brain structures differ between musicians and non - musician . | Increased auditory cortical representation in musicians. | The Hippocampus as a Cognitive Map | eng_Latn | 10,773 |
Influence diagram of physiological and environmental factors affecting heart rate variability: an extended literature overview | Brain correlates of autonomic modulation: Combining heart rate variability with fMRI | A simple bijection for the regions of the Shi arrangement of hyperplanes | eng_Latn | 10,774 |
Does the entire brain pick up on any changes in neural activity by means of reading the change to the EM field caused by a localized activity? | Does the whole brain pick up on any changes in neural activity by means of reading any changes to the EM field caused by a localized activity? | How does listening to music while reading affect the brain? | eng_Latn | 10,775 |
Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2-7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important "active" role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time.. Title: Variations in the quality and sustainability of long-term glycaemic control with continuous subcutaneous insulin infusion. | UNLABELLED Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually. SIGNIFICANCE STATEMENT Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information about what the speaker is saying, but also, importantly, when the speaker is saying it. Studying how the brain uses this timing relationship to combine information from continuous auditory and visual speech has traditionally been methodologically difficult. Here we introduce a new approach for doing this using relatively inexpensive and noninvasive scalp recordings. Specifically, we show that the brain's representation of auditory speech is enhanced when the accompanying visual speech signal shares the same timing. Furthermore, we show that this enhancement is most pronounced at a time scale that corresponds to mean syllable length.. Title: Interneuronal DISC1 regulates NRG1-ErbB4 signalling and excitatory-inhibitory synapse formation in the mature cortex. | Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.. Title: The Natural Statistics of Audiovisual Speech | eng_Latn
what is an attention hearing | Selective auditory attention or selective hearing is a type of selective attention and involves the auditory system of the nervous system. | Therapeutic Listening is a comprehensive, multi-faceted sound-based approach that involves much more than just the ears. Like other sensory systems, the auditory system does not work in isolation. Neurologically it is connected to all levels of brain function and as a result it has a vast range of influence. | eng_Latn | 10,777 |
what is the definition of sensory memory in psychology | Initially proposed in 1968 by Atkinson and Shiffrin, this theory outlines three separate stages of memory: sensory memory, short-term memory, and long-term memory. Sensory Memory Sensory memory is the earliest stage of memory. During this stage, sensory information from the environment is stored for a very brief period of time, generally for no longer than a half-second for visual information and 3 or 4 seconds for auditory information. | Lesson Summary. Echoic memory is the sub-type of sensory memory related exclusively to the receipt of auditory information from the environment. Sounds enter the ear and are translated into neurological signals. These signals are available for a brief period of time, typically 3-4 seconds.
what is the term for the ability to detect sound and pinpoint the direction from which it is emanating? | Sound localization refers to a listener's ability to identify the location or origin of a detected sound in direction and distance. It may also refer to the methods in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space (see binaural recording, wave field synthesis). | The intelligence involved in this ability to recognize tone, rhythm, timbre, and pitch is musical intelligence. With this type of intelligence, people are able to detect, generate, reproduce, and contemplate music as clearly exhibited by attuned listeners, musicians, composers, vocalists, and conductors. | eng_Latn | 10,779 |
what is efferent listening | 4. Building Efferent Listening -- comprehending information -- Efferent means to receive, attend, and comprehend with the goal of obtaining new information and learning. To teach this type of listening, students need to develop the abilities to take notes, recognize sequence, and formulate questions. Levels of Listening Ability need development in students: 1. Receiving -- hearing sound -- If a child is not listening when spoken to, the first thing a teacher must do is be sure the child can hear. | It can be acquired as a result of damage sustained to the hearing apparatus, or inner ear. There is speculation that the efferent portion of the auditory nerve (olivocochlear bundle) has been affected (efferent meaning fibers that originate in the brain which serve to regulate hearing). | eng_Latn | 10,780 |
definition of event related potential | An event-related potential (ERP) is the measured brain response that is the direct result of a specific sensory, cognitive, or motor event. Further reading. 1 Steven J. Luck: An Introduction to the Event-Related Potential Technique, Second edition. 2 Todd C. Handy: Event-Related Potentials: A Methods Handbook. 3 Luck, S.J., and Kappenman, E.S., ed. 4 Monica Fabiani, Gabriele Gratton, and Kara D. Federmeier: Event-Related Brain Potentials: Methods, Theory, and Applications. 5 John 6 ... Za | event-related potential (ERP). Etymology: L, evenire, to happen, relatus, carry back, potentia, power. a type of brain wave that is associated with a response to a specific stimulus, such as a particular wave pattern observed when a patient hears a clicking sound. See also evoked potential. Aroma helps to preserve information processing resources of the brain ... by Watanabe, S.; Hara, K.; Ohta, K.; Iino, H.; Miyajima, M.; Matsuda, A.; Hara, M.; Maehara, T.; Matsuu / Journal of the Australian Traditional-Medicine Society.
echoic memory definition | Echoic memory is one of the sensory memory registers; a component of sensory memory (SM) that is specific to retaining auditory information. The sensory memory for sounds that people have just perceived is the form of echoic memory. This particular sensory store is capable of storing large amounts of auditory information that is only retained for a short period of time (3-4 seconds). This echoic sound resonates in the mind and is replayed for this brief amount of time shortly after the presentation of auditory stimuli. | Echoic memory is one of the sensory memory registers; a component of sensory memory (SM) that is specific to retaining auditory information. The sensory memory for sounds that people have just perceived is the form of echoic memory. This particular sensory store is capable of storing large amounts of auditory information that is only retained for a short period of time (3-4 seconds). This echoic sound resonates in the mind and is replayed for this brief amount of time shortly after the presentation of auditory stimuli.
what is auditory discrimination | Auditory discrimination refers to the brain 's ability to organize and make sense of language sounds. Children who have difficulties with this might have trouble understanding and developing language skills because their brains either misinterpret language sounds, or process them too slowly. Children with auditory disabilities may fall behind classmates in learning how to read. Children who have difficulties with auditory discrimination may also have trouble reading. Problems relating to auditory discrimination are usually related to the brain, rather than the ears. | The definition of the Committee of UK Medical Professionals Steering the UK Auditory Processing Disorder Research Program is as follows: APD results from impaired neural function and is characterised by poor recognition, discrimination, separation, grouping, localisation, or ordering of speech sounds. It does not solely result from a deficit in general attention, language or other cognitive processes..
Mapping luminance into brightness: identification of compression mechanisms | A remarkable property of the human visual system is that it manages to compress a 10-decade luminance range into a brightness range of about 3 decades. This paper deals with the question as to the compression mechanisms which the visual system uses for the luminance-brightness mapping. Data of two psychophysical experiments concerning solitary images are presented. The first experiment involves variation of the illumination level; in a second experiment the luminance range is expanded. The results indicate that three compression mechanisms are used, termed brightness constancy, contrast compression, and local adjustment of the environment. This so-called local adjustment, which is not reported in the literature as far as we know, asserts itself if the environment is less luminous in comparison with some central field and leads to a drop in the brightness of the direct surroundings. Furthermore, typical issues regarding sets of related images are worked out qualitatively in a demonstration. | Discussed in this paper is the parameter estimation problem of complex LFM signals based on cyclic-correlation transform under multipath conditions. First, we analyzed the cyclic statistics of complex LFM signals and constructed the estimator combining signal separation technology for cycle frequency domain with autocorrelated cyclic correlation amplitude; then, we analyzed the performances of the estimator and deduced the expression of error variance of chirp-rate estimate; and finally we worked out the characteristics of the estimator through computer simulation. The result is in agreement with the analytic results. 2. Signal model: Let the observed signal x(t) be modeled as x(t) = y(t) + n(t) = Σ_{i=1}^{M} a_i s(t − τ_i) + n(t)
Suppression of smooth pursuit eye movements induced by electrical stimulation of the monkey frontal eye field | This study was performed to characterize the properties of the suppression of smooth pursuit eye movement induced by electrical stimulation of the frontal eye field (FEF) in trained monkeys. At the stimulation sites tested, we first determined the threshold for generating electrically evoked saccades (Esacs). We then examined the suppressive effects of stimulation on smooth pursuit at intensities that were below the threshold for eliciting Esacs. We observed that FEF stimulation induced a clear deceleration of pursuit at pursuit initiation and also during the maintenance of pursuit at subthreshold intensities. The suppression of pursuit occurred even in the absence of catch-up saccades during pursuit, indicating that suppression influenced pursuit per se. We mapped the FEF area that was associated with the suppressive effect of stimulation on pursuit. In a wide area in the FEF, suppressive effects were observed for ipsiversive, but not contraversive, pursuit. In contrast, we observed the bilateral suppres... | Through analysis of Qingshan Hydroelectric Power Station Lightning stroke cause, this paper puts forward the suitable lightning electromagnetic impulse protection design principle for the hydroelectric power station main control room specific protection measures against the surge voltage isopotential connection. Testing indicates that the protection is quite effective.
Brain 'closes eyes' to hear music | Our brains can turn down our ability to see to help them listen even harder to music and complex sounds, say experts. | Taiwanese electronics manufacturer BenQ has sent letters of apology to those who contacted the company over its use of the wreckage of the World Trade Center in a recent ad for its MusiQ line of MP3 players, and pulled the ad. | eng_Latn | 10,786 |
Mechanisms of Human Auditory Localization | Abstract : Three experiments were completed. The first was a methodological study of the relationship between two different methods of measuring auditory thresholds and their possible relationships to supra-threshold tasks. The second experiment replicated, with improved methods, an earlier experiment which measured the accuracy of auditory localization with and without pinnae. The findings were striking and conclusive: in the absence of head movement, pinnae, even someone else's pinnae, appear to be necessary for accurate auditory localization. The third experiment compared the effectiveness of different stimuli in an auditory localization task. It was expected that increasing the informational content of the stimulus would improve localization performance. However, the findings were inconclusive. | This paper presents BUT system submitted to NIST 2008 SRE. It includes two subsystems based on Joint Factor Analysis (JFA) GMM/UBM and one based on SVM-GMM. The systems were developed on NIST SRE 2006 data, and the results are presented on NIST SRE 2008 evaluation data. We concentrate on the influence of side information in the calibration. Index Terms: speaker recognition, joint factor analysis, NIST SRE 2008.
A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. | Mechanisms for Allocating Auditory Attention: An Auditory Saliency Map | How do we know the minds of others? Domain-specificity, simulation, and enactive social cognition | eng_Latn | 10,788 |
What equates the energy operator to the full energy of a particle or a system? | In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. | In Scherer's components processing model of emotion, five crucial elements of emotion are said to exist. From the component processing perspective, emotion experience is said to require that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate but interacting systems, the component processing model provides a sequence of events that effectively describes the coordination involved during an emotional episode.
When using GLM to fit data, how do we select which family the data belongs to How does GLM source code analyze data, and when we do not know what kind of data exactly belongs to, how to select the best family to fit and get the best result | How to decide which glm family to use? I have fish density data that I am trying to compare between several different collection techniques, the data has lots of zeros, and the histogram looks vaguely appropriate for a poisson distribution except that, as densities, it is not integer data. I am relatively new to GLMs and have spent the last several days looking online for how to tell which distribution to use but have failed utterly in finding any resources that help make this decision. A sample histogram of the data looks like the following: I have no idea how to go about deciding on the appropriate family to use for the GLM. If anyone has any advice or could give me a resource I should check out, that would be fantastic. | If I upload my brain into a computer is it still me? I think the answer is yes but I know a lot of people disagree. So, I would like to ask these people when exactly does it stop being me. Let's say I want to upload my brain into a computer using the following procedure: I have each of my neurons replaced by electronic neurons. By electronic neurons I mean chips with the necessary hardware and software to perform the exact same function that the neuron being replaced was performing. This means that given the same inputs that the biological neuron was given it would produce the same outputs. Then I connect that electronic neuron to all the neurons that former biological neuron was connected to. At the end of this process is it still me? If not when did it stop being me? When one neuron was replaced? When 1,000 were replaced? When 1,000,000 were replaced? (assuming it is still me) Then, I divide these electronic neurons into groups and for each group I perform the following: Write software to simulate the group of electrical neurons in my computer such that given the same inputs that go into the group of electronic neurons in my brain, it will produce the same outputs. Replace each group of neurons with another chip that communicates with my computer. Each of the chips will communicate to the corresponding software module in order to determine how to transform its inputs into the necessary outputs. Connect all of the inputs and outputs of the group of electronic neurons to that new chip. At the end of this process is it still me? If not when did it stop being me? When one group was replaced? When 1,000 were replaced? When 1,000,000 were replaced? (assuming it is still me) Now, what I have is a bunch of these new chips in my head, connected to each other and for each of these chips I have a small neural network software module in my computer. So, now for each pair of chip/software module I do this: Transfer all connections going into and out of the chip into the computer version (i.e. connect the software module to the other software modules that represent the other chips that this chip is connected to) Disconnect the chip from the other chips and remove it At the end of this process is it still me? If not when did it stop being me? When one of the new chips was replaced? When 1,000 were replaced? When 1,000,000 were replaced? If it is still me then I have successfully moved my brain into a computer without ceasing being myself. So, again the question is if at the end it is not me anymore when exactly does it stop being me and why does it stop being me? EDIT: This question assumes that our universe and everything in it (including ourselves) follows the (testable) laws of physics. As it stands it also assumes that neuroscience is mostly right, although the question could also be considered without that requirement if instead of neural simulations we were to implement physics simulation of the brain. Of course this is a lot less practical. However, if your view is that our minds have a supernatural component then the question does not make any sense. It was also pointed out that this is related to the Ship of Theseus, i.e. is something still the same if all its parts have been changed. With regards to that I think it is useful to consider that most of the cells in our body (not including brain cells) are regularly replaced anyway and some people even get entire organs replaced. Yet, we don't say that they stopped being themselves. Note, however, that if you are assuming a supernatural component of the mind that this explanation is not really adequate or relevant. | eng_Latn
How powerful is our brain? | How powerful is the brain? | Why do we experience deja vu? | eng_Latn | 10,791 |
what stage of sleep are k complex seen | Stage N2 sleep is scored when either one or more K complexes is noted (and unassociated with an arousal) or one or more trains of sleep spindles appear. Stage N2. Note K complexes unassociated with arousal and trains of sleep spindles. | NREM sleep can be broken down into three distinct stages: N1, N2, and N3. In the progression from stage N1 to N3, brain waves become slower and more synchronized, and the eyes remain still. In stage N3, the deepest stage of NREM, EEGs reveal high-amplitude (large), low-frequency (slow) waves and spindles. | eng_Latn | 10,792 |
difinition for echolocation | The location of objects by reflected sound, in particular that used by animals such as dolphins and bats. Example sentences. 1 Many odontocetes can navigate by echolocation, producing sound waves using a complex system of nasal sacs and passages, and using the echoes to navigate. | Echolocation, a physiological process for locating distant or invisible objects (such as prey) by means of sound waves reflected back to the emitter (such as a bat) by the objects. Echolocation is used for orientation, obstacle avoidance, food procurement, and social interactions. | eng_Latn | 10,793 |
I am the lead guitarist in a symphonic metal band called Yet Still I Remain. We have been together about a year and a half we have recorded a demo tape. Our style is hard rock and metal like Seether and Metallica with more symphonic things like evanescense and Cradle of Filth. However, we're wonderig, in a world obsessed with rap and R'n'B is there a place for a metal band? | There is always a place for something new! If you came out with a CD i'd buy it! I'm always into something new! I love country and Rock, Your style (evenescense) I really like that kind of stuff! Not so much Rap it gives me a headache sometimes! If you've come out with a demo tape that is pretty cool but if you feel your good and you get good reviews then go for it! Try it! Nothing says that you can't try! You tried to make a band and you have one now don't you? Good Luck and hope you make it! I'll be sure to get your CD =) Just fill me in when it comes out if I don't see your music videos lol =) | Synesthesia (also spelled synæsthesia or synaesthesia, plural synesthesiae) -- from the Greek syn- meaning union and aesthesis meaning sensation -- is a neurological condition in which two or more bodily senses are coupled. In a form of synesthesia known as grapheme → color synesthesia, letters or numbers may be perceived as inherently colored, while in ordinal linguistic personification, numbers, days of the week and months of the year evoke personalities.\n\nIt takes several forms\n\n\nGrapheme → color synesthesia\nMusic → color synesthesia\nNumber form synesthesia\nPersonification\nLexical → gustatory synesthesia\n\nread more --> http://en.wikipedia.org/wiki/Synaesthesia | eng_Latn | 10,794 |
Reconstructing What Makes Us Tick | Newswise — WASHINGTON, D.C., April 24, 2018 -- Cardiac arrhythmia results when the usual symphony of electric pulses that keep the heart’s muscles in sync becomes chaotic. Although symptoms are often barely noticeable, arrhythmia leads to hundreds of thousands of deaths from unexpected, sudden cardiac arrest in the United States each year. A major issue that limits modeling to predict such events is that it is impossible to measure and monitor all the hundreds of variables that come together to make our hearts tick.
A pair of researchers at the Max Planck Institute for Dynamics and Self-Organization developed an algorithm that uses artificial intelligence in new ways to accurately model the electrical excitations in heart muscle. Their work, appearing in Chaos, from AIP Publishing, draws on partial differential equations describing excitable media and a technique called echo state networks (ESNs) to cross-predict variables about chaotic electrical wave propagations in cardiac tissue.
“In this case, you have to try to get this information about those quantities that you can’t measure from quantities that you can measure,” said Ulrich Parlitz, an author on the paper and a scientist at the Biomedical Physics Research Group at Max Planck Institute for Dynamics and Self-Organization. “This is a well-known but challenging problem, for which we provided a novel solution employing machine learning methods.”
Because machine learning techniques have become more powerful, certain neural networks, such as ESNs, can represent dynamical systems and develop a memory of events over time, which can help understand how arrhythmic electrical signals fall out of sync.
The model that the researchers developed fills in these gaps with a dynamical observer. After training the algorithm on a data set generated by a physical model, Parlitz and his partner, Roland Zimmermann, fed a new time series of the measured quantities to the ESN. This process allowed the observer to cross-predict state vectors. For example, if researchers know the voltage in a certain area of the heart at a point in time, they can reconstruct the flow of calcium currents.
The team verified their approach with data generated by the Barkley and Bueno-Orovio-Cherry-Fenton models, which describe chaotic dynamics that occur in cardiac arrhythmias, even cross-predicting state vectors with noise present. “This paper deals with cross-prediction, but ESNs can also be used for making predictions of future behavior,” Parlitz said.
Understanding the electrical properties of the heart is only one part of the picture. Parlitz said that he and his colleagues are looking to include ultrasound measurements of the heart’s internal mechanical dynamics. One day, the group hopes to combine different forms of measurements with models of a beating heart’s electrical and mechanical features to improve diagnosis and therapies of cardiac diseases. “We broke a big problem down into many smaller ones,” Parlitz said.
###
The article, "Observing spatio-temporal dynamics of excitable media using reservoir computing," is authored by Roland S. Zimmermann and Ulrich Parlitz. The article will appear in Chaos April 24, 2018 (DOI: 10.1063/1.5022276). After that date, it can be accessed at http://aip.scitation.org/doi/full/10.1063/1.5022276.
ABOUT THE JOURNAL
Chaos is devoted to increasing the understanding of nonlinear phenomena in all disciplines and describing their manifestations in a manner comprehensible to researchers from a broad spectrum of disciplines. See http://chaos.aip.org.
###
The time has come for many things: for peace, for climate action, for economic sanity, the list is long.
This week on Radio Ecoshock we thunder into another place humans don't like to go. The nasty truth is we are killing off the only known living companions we have in the universe, as our first guest says. The venerable biologist and head of the Stanford Center for Biodiversity Paul Erhlich joins us.
Paul is followed by Will Tuttle, author of "The World Peace Diet." Tuttle says you can't care about climate change and still eat meat, because about half of all global emissions are driven by the industrial slaughter of our fellow species. That hidden holocaust of animals is also eating into our minds, twisting itself back out as illness and violence.
Too much information? Don't worry, be happy with this week's "Climate Variety Hour... In just ten minutes." Get inspired with Bernie Sanders, climate humor from UK's Guardian newspaper, and bits from climate songs by people who can actually sing. | eng_Latn | 10,795 |
What type of test is used to tell that a brain is active even during sleep? | As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology. | Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs in several particulars from most of the many and varied versions now in circulation, most notably by concluding that extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained during an examination of Garfield, Bell "...proceeded to the Executive Mansion the next morning...to ascertain from the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. 
It was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires, with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect." In a footnote, Bell adds that "The death of President Garfield and the subsequent post-mortem examination, however, proved that the bullet was at too great a distance from the surface to have affected our apparatus." | eng_Latn | 10,796 |
Why is watching and doing something always different? | Why's doing something and watching always different? | How do we know that 1 second is 1 second? | eng_Latn | 10,797 |
Is there a verb for making (appreciative/agreement/attentive) sounds during conversation? Is there a verb for giving feedback to someone with non-word sounds during conversation, including not specifically using filler words, in order to indicate attentiveness, agreement or appreciation, etc.? I have searched for this word in all kinds of places and understood it to be a form of vocal (non-verbal) communication and as a subset of paralanguage. However, none of it helped finding a verb. | Confirmation that someone is listening to another person's speech When someone is telling you a very long and detailed story he usually wants to hear some "confirmations" (or response) that you are listening to his story. In Russian we often use something like "tak" (which has a meaning of "ok" and "well, proceed further"), "a-ha" or "uh-huh". What word serves the same purpose in English and American English? | Can a concentration spell be cast without actually concentrating on it for an "instant" effect? The book says: Concentration. Some spells require you to maintain concentration in order to keep their magic active. If you lose concentration, such a spell ends. So what happens if, for example, you are already concentrating on a spell, and cast Witch Bolt without concentrating. Would you get the initial arc of blue energy, or would it not even go off? I am asking if you can cast a spell that says concentration without concentrating for an immediate effect. This would be useful to avoid interrupting your existing concentration. In essence, if you are concentrating on thing A (doesn't have to be a spell since other things need concentration), can you cast a concentration spell getting an immediate effect without losing concentration on thing A? Since the definition of concentration is to keep the spell active (meaning it is already active) I would think that you would get the initial effect (if it had one) without the need to concentrate on it. | eng_Latn | 10,798 |
Safety of current with duration My question is about how dangerous a momentary amount of current is vs the duration. Like is there a reasonably consistent relationship between current duration through the body and relative danger? I.e. 1 amp for 1 second vs 0.01 amps for 100 seconds. I imagine that's a bit complex to determine, but otherwise is there a duration for which a normally deadly amperage would shock someone but not cause a heart attack? I read an article about research into a possible electric weapon that would shock someone on the scale of nanoseconds to stun them without risking killing them. | Is 20 watts of electricity dangerous? I have a few circumstances which invlove someone being shocked with 20 watts of electricity and whether it would be deadly. So here they are: Would 1000 volts at 20 miliamperes of AC (2MHz frequency) be dangerous or deadly? Would 1000 volts at 20 miliamperes of DC be dangerous or deadly? Would 10 Kilo volts at 2 miliamperes of AC (2MHz frequency) be dangerous or deadly? and Would 10 Kilo volts at 2 miliamperes of DC be dangerous or deadly? Thanks for everyones help. | Can you concentrate on a special ability while also concentrating on a spell? There exist several abilities that have the text similar to, this effect lasts while you maintain concentration (as if you were concentrating on a spell). One such example is the Bard College of Glamour: Mantle of Majesty ability. At 6th level, you gain the ability to cloak yourself in a fey magic that makes others want to serve you. As a bonus action, you cast command, without expending a spell slot, and you take on an appearance of unearthly beauty for 1 minute or until your concentration ends (as if you were concentrating on a spell). During this time, you can cast command as a bonus action on each of your turns, without expending a spell slot. Another example is the Cleric Trickery Domain: Channel Divinity: Invoke Duplicity ability. 
Starting at 2nd level, you can use your Channel Divinity to create an illusory duplicate of yourself. As an action, you create a perfect illusion of yourself that lasts for 1 minute, or until you lose your concentration (as if you were concentrating on a spell). While you are concentrating on maintaining such an ability would you also be able to maintain concentration on a spell that requires concentration? There does not seem to be any RAW that limits this however the limitation around concentrating on a second spell exists which may suggest that the intent is that a character cannot concentrate on 2 effects at the same time unless explicitly stated. | eng_Latn | 10,799 |