The usual action of Yang-Mills theory is given by the quadratic form of
curvatures of a principal G bundle defined on a four-dimensional manifold. A
non-linear generalization, known as the Born-Infeld action, has already been
given. In this paper we present another non-linear generalization on four
dimensional manifolds and call it a universal Yang-Mills action. The advantage
of our model is that the action splits {\bf automatically} into two parts
corresponding to the self-dual and anti-self-dual directions. Namely, we obtain
the self-dual and anti-self-dual equations automatically, without solving the
equations of motion as in the usual case. Our method may also be applicable to
the non-commutative Yang-Mills theories that have recently been widely studied.
|
I have used the Hipparcos Input Catalog, together with Kurucz model stellar
atmospheres, and information on the strength of the interstellar extinction, to
create a model of the expected intensity and spectral distribution of the local
interstellar ultraviolet radiation field, under various assumptions concerning
the albedo a of the interstellar grains. (This ultraviolet radiation field is
of particular interest because ultraviolet radiation can profoundly affect the
chemistry of the interstellar medium.) By
comparing my models with the observations, I am able to conclude that the
albedo a of the interstellar grains in the far ultraviolet is very low, perhaps
a = 0.1. I also advance arguments that my present determination of this albedo
is much more reliable than any of the many previous (and conflicting)
ultraviolet interstellar grain albedo determinations. Beyond this, I show that
the ultraviolet background radiation that is observed at high galactic
latitudes must be extragalactic in origin, as it cannot be backscatter of the
interstellar radiation field.
|
It is known that finding approximate optima of non-convex functions is
intractable. We give a simple proof to show that this problem is not even
computable.
|
A simple theory of electromechanical transduction for single-charge-carrier
double-layer electroactuators is developed, in which the ion distribution and
curvature are mutually coupled. The obtained expressions for the dependence of
curvature and charge accumulation on the applied voltage, as well as the
electroactuation dynamics, are compared with literature data. The mechanical
or sensor performance of such electroactuators appears to be determined by
just three cumulative parameters, with all of their constituents measurable,
permitting a scaling approach to their design.
|
In the present work, the Tensor-Train decomposition algorithm is applied to
reduce the memory footprint of a stochastic discrete velocity solver for
rarefied gas dynamics simulation. An energy-conserving modification to the
algorithm is proposed, along with an interleaved collision/convection routine
which allows for easy application of higher-order convection schemes. The
performance of the developed algorithm is analyzed for several 0- and
1-dimensional model problems in terms of solution error and reduction in
memory requirements.
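
For readers unfamiliar with the format, the following minimal TT-SVD sketch
(Python/NumPy, with an illustrative fixed maximum rank; it is not the solver
described above) shows how a dense multi-dimensional array, such as a
discretized velocity-space distribution, is compressed into a chain of
low-rank cores:

import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into Tensor-Train cores via sequential truncated SVDs."""
    shape = tensor.shape
    cores, rank_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank_prev, shape[k], r))     # k-th TT core
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * shape[k + 1], -1)
        rank_prev = r
    cores.append(mat.reshape(rank_prev, shape[-1], 1))             # last core
    return cores

# e.g. a four-dimensional 16^4 array is stored as four small cores:
cores = tt_svd(np.random.rand(16, 16, 16, 16), max_rank=8)

The memory footprint drops from the product of the mode sizes to roughly the
sum of r_{k-1} n_k r_k over the cores, which is the reduction exploited by the
solver above.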
|
We introduce techniques to analyze unitary operations in terms of quadratic
form expansions, a form similar to a sum over paths in the computational basis
when the phase contributed by each path is described by a quadratic form over
$\mathbb R$. We show how to relate such a form to an entangled resource akin to
that of the one-way measurement model of quantum computing. Using this, we
describe various conditions under which it is possible to efficiently implement
a unitary operation U, either when provided a quadratic form expansion for U as
input, or by finding a quadratic form expansion for U from other input data.
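
As a concrete single-qubit illustration (a textbook example, not one taken
from the paper), the Hadamard gate admits a quadratic form expansion over the
computational basis,

  H = \frac{1}{\sqrt{2}} \sum_{x,y \in \{0,1\}} e^{i\pi Q(x,y)}\, |y\rangle\langle x|,
  \qquad Q(x,y) = x\,y ,

so that each path x -> y contributes a phase \pi Q(x,y) determined by the
quadratic form Q.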
|
Partial measurements of relative position are a relatively common event
during the observation of visual binary stars. However, these observations are
typically discarded when estimating the orbit of a visual pair. In this article
we present a novel framework to characterize the orbits from a Bayesian
standpoint, including partial observations of relative position as an input for
the estimation of orbital parameters. Our aim is to formally incorporate the
information contained in those partial measurements in a systematic way into
the final inference. In the statistical literature, an imputation is defined as
the replacement of a missing quantity with a plausible value. To compute
posterior distributions of orbital parameters with partial observations, we
propose a technique based on Markov chain Monte Carlo with multiple imputation.
We present the methodology and test the algorithm with both synthetic and real
observations, studying the effect of incorporating partial measurements in the
parameter estimation. Our results suggest that the inclusion of partial
measurements into the characterization of visual binaries may lead to a
reduction in the uncertainty associated with each orbital element, in terms of a
decrease in dispersion measures (such as the interquartile range) of the
posterior distribution of relevant orbital parameters. The extent to which the
uncertainty decreases after the incorporation of new data (either complete or
partial) depends on how informative those newly-incorporated measurements are.
Quantifying the information contained in each measurement remains an open
issue.
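
The following toy sketch (Python/NumPy) illustrates the impute-then-update
alternation behind MCMC with imputation; the circular "orbit" with a single
unknown radius, the flat prior, and all step sizes are placeholders of our
own, not the sampler used in this article:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 20)
theta_true, sigma = 2.0, 0.1
x_obs = theta_true * np.cos(t) + rng.normal(0.0, sigma, t.size)
y_obs = theta_true * np.sin(t) + rng.normal(0.0, sigma, t.size)
missing = rng.random(t.size) < 0.3            # partial measurements: y unobserved

def log_lik(theta, x, y):
    resid = np.concatenate([x - theta * np.cos(t), y - theta * np.sin(t)])
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta, y_fill, samples = 1.0, y_obs.copy(), []
for _ in range(5000):
    # Imputation: draw the missing coordinate from its conditional given theta.
    y_fill[missing] = theta * np.sin(t[missing]) + rng.normal(0.0, sigma, missing.sum())
    # Metropolis step for theta on the completed data (flat prior assumed).
    prop = theta + rng.normal(0.0, 0.05)
    if np.log(rng.random()) < log_lik(prop, x_obs, y_fill) - log_lik(theta, x_obs, y_fill):
        theta = prop
    samples.append(theta)

The posterior samples of theta then reflect both the complete and the
partially observed epochs.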
|
The new generation of machine learning processors has evolved from
multi-core and parallel architectures that were designed to efficiently
implement matrix-vector-multiplications (MVMs). This is because at the
fundamental level, neural network and machine learning operations extensively
use MVM operations and hardware compilers exploit the inherent parallelism in
MVM operations to achieve hardware acceleration on GPUs and FPGAs. However,
many IoT and edge computing platforms require embedded ML devices close to the
network in order to compensate for communication cost and latency. Hence a
natural question to ask is whether MVM operations are even necessary to
implement ML algorithms and whether simpler hardware primitives can be used to
implement an ultra-energy-efficient ML processor/architecture. In this paper we
propose an alternate hardware-software codesign of ML and neural network
architectures where instead of using MVM operations and non-linear activation
functions, the architecture only uses simple addition and thresholding
operations to implement inference and learning. At the core of the proposed
approach is margin-propagation (MP) based computation that maps multiplications
into additions and additions into dynamic rectified-linear-unit (ReLU)
operations. This mapping results in a significant reduction in computational
and hence energy cost. In this paper, we show how the MP network formulation
can be applied to designing linear classifiers, shallow multi-layer
perceptrons, and support vector networks suitable for IoT platforms and tiny ML
applications. We show that these MP-based classifiers give results comparable
to those of their traditional counterparts on benchmark UCI datasets, with the
added advantage of reduced computational complexity, enabling an improvement
in energy efficiency.
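
One way to picture the margin-propagation primitive (using the common
reverse-water-filling formulation; the function and parameter names are ours,
not the authors' implementation) is as the threshold z that balances the total
margin against a hyperparameter gamma, so that log-sum-exp-like aggregation is
replaced by additions and thresholding:

import numpy as np

def margin_propagation(x, gamma, iters=60):
    """Solve sum_i max(0, x_i - z) = gamma for z by bisection (the sum is monotone in z)."""
    x = np.asarray(x, dtype=float)
    lo = x.min() - gamma          # here the constraint sum is >= gamma
    hi = x.max()                  # here the constraint sum is 0
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.maximum(x - z, 0.0).sum() > gamma:
            lo = z
        else:
            hi = z
    return 0.5 * (lo + hi)

# Approximate aggregation of scores using only subtraction, thresholding and addition:
z = margin_propagation([0.2, 1.3, -0.4, 0.9], gamma=1.0)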
|
A thorough analysis of wheel-terrain interaction is critical to ensure the
safe and efficient operation of space rovers on extraterrestrial surfaces like
the Moon or Mars. This paper presents an approach for developing and
experimentally validating a virtual wheel-terrain interaction model for the UAE
Rashid rover. The model aims to improve the fidelity and capability of current
simulation methods for space rovers and facilitate the design, evaluation, and
control of their locomotion systems. The proposed method considers various
factors, such as wheel grouser properties, wheel slippage, loose soil
properties, and interaction mechanics. The model accuracy was validated through
experiments on a test rig that simulated lunar soil conditions. Specifically,
a set of experiments was carried out to measure the response of a
grouser-equipped Rashid rover wheel to the lunar soil at slip ratios of 0,
0.25, 0.50, and 0.75. The obtained results demonstrate that the proposed
simulation method yields a more accurate and realistic representation of the
wheel-terrain interaction behavior and provides insight into the overall
performance of the rover.
|
The hard thermal loop (HTL) effective field theory of QED can be derived from
the classical limit of transport theory, corresponding to the leading term in a
gradient expansion of the quantum approach. In this paper, we show that power
corrections to the HTL effective Lagrangian of QED can also be obtained from
transport theory by including higher orders in such gradient expansion. The
gradient expansion is increasingly infrared (IR) divergent, but the correction
that we compute is IR finite. We employ dimensional regularization, and show
that this result comes after a cancellation of divergences between the vacuum
and medium contributions. While the transport framework is an effective field
theory of the long-distance physics of the plasma, we show that it correctly
reproduces the QED ultraviolet divergences associated with the photon
wave-function renormalization.
|
In this paper, we investigate whether the time-modulated array (TMA)-enabled
directional modulation (DM) communication system can be cracked, and the
answer is YES! We first demonstrate that the scrambled data received at the
eavesdropper can be descrambled by using a grid search to successfully find
the unique and actual mixing matrix generated by the TMA. Then, we propose
introducing symbol ambiguity into the TMA to defend against this grid-search
attack, and design two principles for the TMA mixing matrix, i.e., rank
deficiency and non-uniqueness of the ON-OFF switching pattern, that can be
used to construct the symbol ambiguity. Also, we present a feasible mechanism
to implement these two principles. Our proposed principles and mechanism not
only shed light on how to design more secure TMA DM systems in the future, but
are also validated to be effective by bit-error-rate measurements.
|
Neutron scattering measurements show the ferromagnetic XY pyrochlore Yb2Ti2O7
to display strong quasi-two dimensional (2D) spin correlations at low
temperature, which give way to long range order (LRO) under the application of
modest magnetic fields. Rods of scattering along < 111 > directions due to
these 2D spin correlations imply a magnetic decomposition of the cubic
pyrochlore system into decoupled kagome planes. A magnetic field of ~0.5 T
applied along the [1-10] direction induces a transition to a 3D LRO state
characterized by long-lived, dispersive spin waves. Our measurements map out a
complex low temperature-field phase diagram for this exotic pyrochlore magnet.
|
In this paper we discuss some geometrical and topological properties of the
full symmetric Toda system. We show by a direct inspection that the phase
transition diagram for the full symmetric Toda system in dimensions $n=3,4$
coincides with the Hasse diagram of the Bruhat order of symmetric groups $S_3$
and $S_4$. The method we use is based on the existence of a vast collection of
invariant subvarieties of the Toda flow in orthogonal groups. We show how one
can extend it to the case of general $n$. The resulting theorem identifies the
set of singular points of the $n$-dimensional Toda flow with the elements of
the permutation group $S_n$, so that two points are connected by a trajectory
if and only if the corresponding elements are Bruhat comparable. We also show
that the dimension of the submanifold spanned by the trajectories connecting
two singular points is equal to the length of the corresponding segment in the
Hasse diagram. This is equivalent to the fact that the full symmetric Toda
system is in fact a Morse-Smale system.
|
In this paper we introduce a novel method to conduct inference with models
defined through a continuous-time Markov process, and we apply these results to
a classical stochastic SIR model as a case study. Using the inverse-size
expansion of van Kampen we obtain approximations for first and second moments
for the state variables. These approximate moments are in turn matched to the
moments of an inputted generic discrete distribution aimed at generating an
approximate likelihood that is valid for both low-count and high-count data. We
conduct a full Bayesian inference to estimate epidemic parameters using
informative priors. Excellent estimates and predictions are obtained both in
a synthetic-data scenario and in two Dengue fever case studies.
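
As an illustration of the leading (deterministic) term of the van Kampen
expansion for the stochastic SIR model, the macroscopic mean-field ODEs can be
integrated as below; the second-moment (fluctuation) equations used to build
the approximate likelihood in the paper are omitted, and the parameter values
are placeholders:

import numpy as np
from scipy.integrate import solve_ivp

def sir_mean_field(t, y, beta, gamma, N):
    # First-moment equations dS/dt, dI/dt, dR/dt of the standard SIR model.
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

sol = solve_ivp(sir_mean_field, (0.0, 60.0), [990.0, 10.0, 0.0],
                args=(0.3, 0.1, 1000.0), dense_output=True)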
|
The effect of non-Hermiticity in band topology has sparked many discussions
on non-Hermitian topological physics. It has long been known that non-Hermitian
Hamiltonians can exhibit real energy spectra under the condition of parity-time
($PT$) symmetry -- commonly implemented with balanced loss and gain -- but only
when non-Hermiticity is relatively weak. Sufficiently strong non-Hermiticity,
on the other hand, will destroy the reality of energy spectra, a situation
known as spontaneous $PT$-symmetry breaking. Here, based on non-reciprocal
coupling, we show a systematic strategy to construct non-Hermitian topological
systems exhibiting bulk and boundary energy spectra that are always real,
regardless of weak or strong non-Hermiticity. Such nonreciprocal-coupling-based
non-Hermiticity can directly drive a topological phase transition and determine
the band topology, as demonstrated in a few non-Hermitian systems from 1D to
2D. Our work develops what is so far the only theory that can guarantee the
reality of energy spectra for non-Hermitian Hamiltonians, and offers a new
avenue to explore non-Hermitian topological physics.
|
This paper considers the single antenna, static Gaussian broadcast channel in
the finite blocklength regime. Second order achievable and converse rate
regions are presented. Both a global reliability requirement and per-user
reliability requirements are considered. The two-user case is analyzed in
detail, and generalizations to the $K$-user case are also discussed. The
largest second order achievable region presented here requires both
superposition and rate splitting in the code construction, as opposed to the
(infinite blocklength, first order) capacity region which does not require rate
splitting. Indeed, the finite blocklength penalty causes superposition alone to
under-perform other coding techniques in some parts of the region. In the
two-user case with per-user reliability requirements, the capacity achieving
superposition coding order (with the codeword of the user with the smallest SNR
as cloud center) does not necessarily give the largest second order region.
Instead, the message of the user with the smallest point-to-point second order
capacity should be encoded in the cloud center in order to obtain the largest
second order region for the proposed scheme.
|
The problem of choosing the optimal multipath components to be employed at a
minimum mean square error (MMSE) selective Rake receiver is considered for an
impulse radio ultra-wideband system. First, the optimal finger selection
problem is formulated as an integer programming problem with a non-convex
objective function. Then, the objective function is approximated by a convex
function and the integer programming problem is solved by means of constraint
relaxation techniques. The proposed algorithms are suboptimal due to the
approximate objective function and the constraint relaxation steps. However,
they perform better than the conventional finger selection algorithm, which is
suboptimal since it ignores the correlation between multipath components, and
they can get quite close to the optimal scheme that cannot be implemented in
practice due to its complexity. In addition to the convex relaxation
techniques, a genetic algorithm (GA) based approach is proposed, which does not
need any approximations or integer relaxations. This iterative algorithm is
based on the direct evaluation of the objective function, and can achieve
near-optimal performance with a reasonable number of iterations. Simulation
results are presented to compare the performance of the proposed finger
selection algorithms with that of the conventional and the optimal schemes.
|
The flex divisor of a primitively polarized K3 surface $(X,L)$ of degree
$L^2=2d$ is, generically, the locus of all points $x\in X$ for which there
exists a pencil $V\subset |L|$ whose base locus is $\{x\}$. We show that the
flex divisor lies in the linear system $|n_dL|$ where $n_d=(2d+1)C(d)^2$ and
$C(d)$ is the Catalan number. We also show that there is a well-defined notion
of flex divisor over the whole moduli space $F_{2d}$ of polarized K3 surfaces.
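
For reference, the multiplier n_d = (2d+1) C(d)^2 grows quickly with the
degree; a small helper (ours, purely illustrative) that evaluates it reads:

from math import comb

def catalan(d):
    return comb(2 * d, d) // (d + 1)

def flex_multiplier(d):
    # n_d = (2d + 1) * C(d)^2 as stated in the abstract
    return (2 * d + 1) * catalan(d) ** 2

print([flex_multiplier(d) for d in range(1, 6)])   # [3, 20, 175, 1764, 19404]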
|
A late epoch cosmic acceleration may be naturally entangled with cosmic
coincidence -- the observation that at the onset of acceleration the vacuum
energy density fraction nearly coincides with the matter density fraction. In
this Letter we show that this is indeed the case with the cosmology of a
Friedmann-Lema\^itre-Robertson-Walker (FLRW) 3-brane in a five-dimensional
anti-de Sitter spacetime. We derive the four-dimensional effective action on a
FLRW 3-brane, which helps define a general reduction formula, namely,
$M_P^{2}=\rho_{b}/|\Lambda_5|$, where $M_{P}$ is the effective Planck mass,
$\Lambda_5$ is the 5-dimensional cosmological constant, and $\rho_b$ is the sum
of the 3-brane tension $V$ and the matter density $\rho$. The behavior of the
background solution is consistent with the results based on the form of the 4D
effective potential. Although the range of variation in $\rho_{b}$ is strongly
constrained, the big bang nucleosynthesis bound on the time variation of the
renormalized Newton constant $G_N = (8\pi M_P^2)^{-1}$ is satisfied when the
ratio $V/\rho \gtrsim {O} (10^2)$ on cosmological scales. The same bound leads
to an effective equation of state close to -1 at late epochs in accordance with
current astrophysical and cosmological observations.
|
A strong magnetic field can make it advantageous to work in a coordinate
system aligned with dipolar field lines. This monograph collects the formulas
for some of the most frequently used expressions and operations in dipole
coordinates.
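
One common convention for such coordinates, assumed here and possibly
differing from the one adopted in the monograph, takes colatitude \theta and
radius r (in units of the planetary radius) and sets

  q = \frac{\cos\theta}{r^{2}}, \qquad p = \frac{r}{\sin^{2}\theta},

so that p labels a dipolar field line (the L-shell) while q varies along it.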
|
Depression detection from user-generated content on the internet has been a
long-lasting topic of interest in the research community, providing valuable
screening tools for psychologists. The ubiquitous use of social media platforms
lays out the perfect avenue for exploring mental health manifestations in posts
and interactions with other users. Current methods for depression detection
from social media mainly focus on text processing, and only a few also utilize
images posted by users. In this work, we propose a flexible time-enriched
multimodal transformer architecture for detecting depression from social media
posts, using pretrained models for extracting image and text embeddings. Our
model operates directly at the user-level, and we enrich it with the relative
time between posts by using time2vec positional embeddings. Moreover, we
propose another model variant, which can operate on randomly sampled and
unordered sets of posts to be more robust to dataset noise. We show that our
method, using EmoBERTa and CLIP embeddings, surpasses other methods on two
multimodal datasets, obtaining state-of-the-art results of 0.931 F1 score on a
popular multimodal Twitter dataset, and 0.902 F1 score on the only multimodal
Reddit dataset.
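
For concreteness, the time2vec embedding of a relative time tau follows the
standard definition of Kazemi et al. (one linear component plus periodic
components); in the sketch below the frequencies and phases are random
placeholders, whereas in the model they are learned:

import numpy as np

def time2vec(tau, omega, phi):
    """Return [omega_0*tau + phi_0, sin(omega_1*tau + phi_1), ..., sin(omega_{k-1}*tau + phi_{k-1})]."""
    out = omega * tau + phi
    out[1:] = np.sin(out[1:])
    return out

rng = np.random.default_rng(0)
k = 8
embedding = time2vec(3.5, rng.normal(size=k), rng.normal(size=k))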
|
A Quantum Point Contact (QPC) causes a one-dimensional constriction on the
spatial potential landscape of a two-dimensional electron system. By tuning the
voltage applied to a QPC at low temperatures, the resulting regular step-like
electron conductance quantization can show an additional kink near pinch-off
around 0.7(2$e^2$/h), called the 0.7-anomaly. In a recent publication, we
presented a combination of theoretical calculations and transport measurements
that leads to a detailed understanding of the microscopic origin of the
0.7-anomaly.
Functional Renormalization Group-based calculations were performed that
exhibit the 0.7-anomaly even when no symmetry-breaking external magnetic
fields are involved. According to the calculations, the electron spin
susceptibility is
enhanced within a QPC that is tuned in the region of the 0.7-anomaly. Moderate
externally applied magnetic fields impose a corresponding enhancement in the
spin magnetization. In principle, it should be possible to map out this spin
distribution optically by means of the Faraday rotation technique. Here we
report the initial steps of an experimental project aimed at realizing such
measurements. Simulations were performed on a specially pre-designed
semiconductor heterostructure. Based on the simulation results, a sample was
built and its basic transport and optical properties were investigated.
Finally, we introduce a sample gate design, suitable for combined transport and
optical studies.
|
In this paper we consider the following problem: When are the preduals of two
hyperfinite (=injective) factors $\mathcal{M}$ and $\mathcal{N}$ (on separable
Hilbert spaces) cb-isomorphic (i.e., isomorphic as operator spaces)? We show
that if $\mathcal{M}$ is semifinite and $\mathcal{N}$ is type III, then their
preduals are not cb-isomorphic.
Moreover, we construct a one-parameter family of hyperfinite type
III$_0$-factors with mutually non cb-isomorphic preduals, and we give a
characterization of those hyperfinite factors $\mathcal{M}$ whose preduals are
cb-isomorphic to the predual of the unique hyperfinite type III$_1$-factor. In
contrast, Christensen and Sinclair proved in 1989 that all infinite dimensional
hyperfinite factors with separable preduals are cb-isomorphic. More recently
Rosenthal, Sukochev and the first-named author proved that all hyperfinite type
III$_\lambda$-factors, where $0< \lambda\leq 1$, have cb-isomorphic preduals.
|
We demonstrate experimentally the guiding of cold and slow ND3 molecules
along a thin charged wire over a distance of ~0.34 m through an entire
molecular beam apparatus. Trajectory simulations confirm that both linear and
quadratic high-field-seeking Stark states can be efficiently guided from the
beam source up to the detector. A density enhancement of up to a factor of 7 is
reached for decelerated beams with velocities ranging down to ~50 m/s generated
by the rotating nozzle technique.
|
We review the status of a program, outlined and motivated in the
introduction, for the study of correspondences between spectral invariants of
partially hyperbolic flows on locally symmetric spaces and their quantizations.
Further we formulate a number of concrete problems which may be viewed as
possible further steps to be taken in order to complete the program.
|
Two-photon exchange contributions to elastic electron scattering are
reviewed. The apparent discrepancy in the extraction of elastic nucleon form
factors between unpolarized Rosenbluth and polarization transfer experiments is
discussed, as well as the understanding of this puzzle in terms of two-photon
exchange corrections. Calculations of such corrections both within partonic and
hadronic frameworks are reviewed. In view of recent spin-dependent electron
scattering data, the relation of the two-photon exchange process to the
hyperfine splitting in hydrogen is critically examined. The imaginary part of
the two-photon exchange amplitude as can be accessed from the beam normal spin
asymmetry in elastic electron-nucleon scattering is reviewed. Further
extensions and open issues in this field are outlined.
|
In this work, we consider a finitely determined map germ $f$ from
$(\mathbb{C}^2,0)$ to $(\mathbb{C}^3,0)$. We characterize the Whitney
equisingularity of an unfolding $F=(f_t,t)$ of $f$ through the constancy of a
single invariant in the source, namely the Milnor number of the curve
$W_t(f)=D(f_t)\cup f_t^{-1}(\gamma)$, where $D(f_t)$ denotes the double point
curve of $f_t$. This gives an answer to a question posed by Ruas in 1994.
|
In this work, we focus on semi-supervised learning for video action detection,
which utilizes both labeled and unlabeled data. We propose a simple end-to-end
consistency-based approach which effectively utilizes the unlabeled data.
Video action detection requires both action class prediction and
spatio-temporal localization of actions. Therefore, we investigate two types
of constraints: classification consistency and spatio-temporal consistency.
The presence of predominant background and static regions in a video makes it
challenging to utilize spatio-temporal consistency for action detection. To
address this, we propose two novel regularization constraints for
spatio-temporal consistency; 1) temporal coherency, and 2) gradient smoothness.
Both these aspects exploit the temporal continuity of action in videos and are
found to be effective for utilizing unlabeled videos for action detection. We
demonstrate the effectiveness of the proposed approach on two different action
detection benchmark datasets, UCF101-24 and JHMDB-21. In addition, we also show
the effectiveness of the proposed approach for video object segmentation on
Youtube-VOS, which demonstrates its generalization capability. The proposed
approach achieves competitive performance using merely 20% of the annotations
on UCF101-24 when compared with recent fully supervised methods. On UCF101-24,
it improves the score by +8.9% and +11% at 0.5 f-mAP and v-mAP, respectively,
compared to the supervised approach.
|
Recent data from Reticulum II (RetII) require the energy range of the
FermiLAT $\gamma$-excess to be $\sim$ $2-10$ GeV. We adjust our unified
nonthermal Dark Matter (DM) model to accommodate this. We have two extra
scalars beyond the Standard Model to also explain the 3.55 keV X-ray line. Now the
mass of the heavier of them has to be increased to lie around 250 GeV, while
that of the lighter one remains at 7.1 keV. This requires a new seed mechanism
for the $\gamma$-excess and new Boltzmann equations for the generation of the
DM relic density. All relevant data for RetII and the X-ray line can now be
fitted well, and consistency with other indirect limits is attained.
|
We study representation theory of the partially transposed permutation matrix
algebra, a matrix representation of the diagrammatic walled Brauer algebra.
This algebra plays a prominent role in mixed Schur-Weyl duality that appears in
various contexts in quantum information. Our main technical result is an
explicit formula for the action of the walled Brauer algebra generators in the
Gelfand-Tsetlin basis. It generalizes the well-known Gelfand-Tsetlin basis for
the symmetric group (also known as Young's orthogonal form or Young-Yamanouchi
basis).
We provide two applications of our result to quantum information. First, we
show how to simplify semidefinite optimization problems over
unitary-equivariant quantum channels by performing a symmetry reduction.
Second, we derive an efficient quantum circuit for implementing the optimal
port-based quantum teleportation protocol, exponentially improving the known
trivial construction. As a consequence, this also exponentially improves the
known lower bound for the amount of entanglement needed to implement unitaries
non-locally.
Both applications require a generalization of quantum Schur transform to
tensors of mixed unitary symmetry. We develop an efficient quantum circuit for
this mixed quantum Schur transform and provide a matrix product state
representation of its basis vectors. For constant local dimension, this yields
an efficient classical algorithm for computing any entry of the mixed quantum
Schur transform unitary.
|
Employing dynamical cluster quantum Monte Carlo calculations we show that the
single particle spectral weight A(k,w) of the one-band two-dimensional Hubbard
model displays a high energy kink in the quasiparticle dispersion followed by a
steep dispersion of a broad peak similar to recent ARPES results reported for
the cuprates. Based on the agreement between the Monte Carlo results and a
simple calculation which couples the quasiparticle to spin fluctuations, we
conclude that the kink and the broad spectral feature in the Hubbard model
spectra are due to scattering with damped high-energy spin fluctuations.
|
We have calculated electrical conductivity in the presence of a magnetic
field by using the Nambu-Jona-Lasinio model.
|
We draw the multimessenger picture of J1048+7143, a flat-spectrum radio
quasar known to show quasi-periodic oscillations in the $\gamma$-ray regime. We
generate the adaptively-binned Fermi Large Area Telescope light curve of this
source above 168 MeV and find three major $\gamma$-ray flares of the source,
each of which consists of two sharp sub-flares. Based on radio
interferometric imaging data taken with the Very Large Array, we find that the
kpc-scale jet is directed towards the west, while our analysis of $8.6$-GHz
very long baseline interferometry data, mostly taken with the Very Long
Baseline Array, reveals signatures of two pc-scale jets, one pointing towards
the east and one towards the south. We suggest that the misalignment of the kpc- and
pc-scale jets is a revealing signature of jet precession. We also analyze the
$5$-GHz total flux density curve of J1048+7143 taken with the Nanshan(Ur) and
RATAN-600 single dish radio telescopes and find two complete radio flares,
slightly lagging behind the $\gamma$-ray flares. We model the timing of
$\gamma$-ray flares as signature of the spin-orbit precession in a supermassive
black hole binary, and find that the binary could merge in the next $\sim
60-80$ years. We show that both the Pulsar Timing Arrays and the planned Laser
Interferometer Space Antenna lack sensitivity and frequency coverage to detect
the hypothetical supermassive black hole binary in J1048$+$7143. We argue that
the identification of sources similar to J1048+7143 plays a key role in
revealing periodic high-energy sources in the distant Universe.
|
Hyperdimensional computing (HDC) is an emerging computing paradigm that
exploits the distributed representation of input data in a hyperdimensional
space, whose dimensionality is typically between 1,000 and 10,000. The
hyperdimensional distributed representation enables energy-efficient,
low-latency, and noise-robust computations with low-precision and basic
arithmetic operations. In this study, we propose optical hyperdimensional
distributed representations based on laser speckles for adaptive, efficient,
and low-latency optical sensor processing. In the proposed approach, sensory
information is optically mapped into a hyperdimensional space with >250,000
dimensions, enabling HDC-based cognitive processing. We use this approach for
the processing of a soft-touch interface and a tactile sensor and demonstrate
that it achieves high accuracy in touch or tactile recognition while
significantly reducing the amount of training data and the computational
burden compared with previous machine-learning-based sensing approaches.
Furthermore, we show that this
approach enables adaptive recalibration to maintain high accuracy even under
different conditions.
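
A minimal sketch of the HDC-style readout implied above (Python/NumPy): once
sensory inputs are mapped to high-dimensional bipolar hypervectors, training
"bundles" them per class and inference is a nearest-prototype lookup. The
random projection below merely stands in for the optical speckle mapping and
is illustrative only:

import numpy as np

def encode(x, projection):
    return np.sign(projection @ x)                    # bipolar hypervector

def train_prototypes(X, y, projection, n_classes):
    protos = np.zeros((n_classes, projection.shape[0]))
    for xi, yi in zip(X, y):
        protos[yi] += encode(xi, projection)          # bundling by superposition
    return np.sign(protos)

def classify(x, protos, projection):
    return int(np.argmax(protos @ encode(x, projection)))   # highest similarity wins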
|
We prove H\"older continuity for solutions to the n-dimensional H-System
assuming logarithmic higher integrability of the solution.
|
We study localized modes in binary mixtures of Bose-Einstein condensates
embedded in one-dimensional optical lattices. We report a diversity of
asymmetric modes and investigate their dynamics. We concentrate on the cases
where one of the components is dominant, i.e. has a much larger number of atoms
than the other, and where both components have numbers of atoms of the same
order but different symmetries. In the first case we propose a method for
systematically obtaining the modes, considering the "small" component as
bifurcating from the continuum spectrum. A generalization of this approach,
combined with the use of the symmetry of the coupled Gross-Pitaevskii
equations, allows us to obtain breather modes, which are also presented.
|
We construct the Numerical Galaxy Catalog ($\nu$GC), based on a semi-analytic
model of galaxy formation combined with high-resolution N-body simulations in a
$\Lambda$-dominated flat cold dark matter ($\Lambda$CDM) cosmological model.
The model includes several essential ingredients for galaxy formation, such as
merging histories of dark halos directly taken from N-body simulations,
radiative gas cooling, star formation, heating by supernova explosions
(supernova feedback), mergers of galaxies, population synthesis, and extinction
by internal dust and intervening HI clouds. As the first paper in a series
using this model, we focus on basic photometric, structural and kinematical
properties of galaxies at present and high redshifts. Two sets of model
parameters are examined, strong and weak supernova feedback models, which are
in good agreement with observational luminosity functions of local galaxies
within the range of observational uncertainty. Both models agree well with many
observations such as cold gas mass-to-stellar luminosity ratios of spiral
galaxies, HI mass functions, galaxy sizes, faint galaxy number counts and
photometric redshift distributions in optical pass-bands, isophotal angular
sizes, and cosmic star formation rates. In particular, the strong supernova
feedback model is in much better agreement with near-infrared (K'-band) faint
galaxy number counts and redshift distribution than the weak feedback model and
our previous semi-analytic models based on the extended Press-Schechter
formalism. (Abridged)
|
The Autler-Townes effect due to near resonance transition between 4s-4p
states in potassium atoms is mapped out in the photo-electron-momentum
distribution and manifests itself as a splitting in the photo-electron kinetic
energy spectra. The energy splitting fits well with the calculated Rabi
frequency at low laser intensities and shows clear deviation at laser
intensities above 1.5x10^11 W/cm^2. An effective Rabi frequency formula that
includes the ionization process explains the observed results. Our results
reveal the possibility of tuning the effective coupling strength at the cost
of a reduction in the level populations.
|
Contextual bandit algorithms are applied in a wide range of domains, from
advertising to recommender systems, from clinical trials to education. In many
of these domains, malicious agents may have incentives to attack the bandit
algorithm to induce it to perform a desired behavior. For instance, an
unscrupulous ad publisher may try to increase their own revenue at the expense
of the advertisers; a seller may want to increase the exposure of their
products, or thwart a competitor's advertising campaign. In this paper, we
study several attack scenarios and show that a malicious agent can force a
linear contextual bandit algorithm to pull any desired arm $T - o(T)$ times
over a horizon of $T$ steps, while applying adversarial modifications to either
rewards or contexts that only grow logarithmically as $O(\log T)$. We also
investigate the case when a malicious agent is interested in affecting the
behavior of the bandit algorithm in a single context (e.g., a specific user).
We first provide sufficient conditions for the feasibility of the attack and we
then propose an efficient algorithm to perform the attack. We validate our
theoretical results through experiments performed on both synthetic and
real-world datasets.
|
The influence of spatial dimensionality and particle-antiparticle pair
production on the thermodynamic properties of the relativistic Fermi gas, at
finite chemical potential, is studied. Resembling a kind of phase transition,
qualitatively different behaviors of the thermodynamic susceptibilities, namely
the isothermal compressibility and the specific heat, are markedly observed in
different temperature regimes as a function of the system dimensionality and
of the rest mass of the particles. A minimum in the isothermal compressibility
marks a characteristic temperature, in the range of tenths of the Fermi
temperature, at which the system transits from a normal phase to a phase where
the gas compressibility grows as a power law of the temperature. Curiously, we
find that for a particle density of a few times the density of nuclear matter,
and rest masses of the order of 10 MeV, the minimum of the compressibility
occurs at approximately 170 MeV/k, which roughly estimates the critical
temperature of hot fermions such as those occurring in the quark-gluon plasma
phase transition.
|
The convergence of the iterative solutions of the transport equations of
cosmic muon and tau neutrinos propagating through Earth is studied and
analyzed. For achieving a fast convergence of the iterative solutions of the
coupled transport equations of nu_tau, nubar_tau and the associated tau^{\pm}
fluxes, a new semi-analytic input algorithm is presented where the peculiar
tau-decay contributions are implemented already in the initial zeroth order
input. Furthermore, the common single transport equation for muon neutrinos is
generalized by taking into account the contributions of secondary nu_mu and
nubar_mu fluxes due to the prompt tau-decay tau -> nu_mu initiated by the
associated tau flux. Differential and total nadir angle integrated upward-going
mu^- + mu^+ event rates are presented for underground neutrino telescopes and
compared with the muon rates initiated by the primary nu_mu, nu_tau and tau
fluxes.
|
We present a scheme for achieving macroscopic quantum superpositions in
optomechanical systems by using single-photon postselection and detection
with nested interferometers. This method relieves many of the challenges
associated with previous optical schemes for measuring macroscopic
superpositions, and only requires the devices to be in the weak coupling
regime. It requires only small improvements on currently achievable device
parameters, and allows observation of decoherence on a timescale unconstrained
by the system's optical decay time. Prospects for observing novel decoherence
mechanisms are discussed.
|
We consider an ordinary differential equation with a unique hyperbolic
attractor at the origin, to which we add a small random perturbation. It is
known that under general conditions, the solution of this stochastic
differential equation converges exponentially fast to an equilibrium
distribution. We show that the convergence occurs abruptly: in a time window of
small size compared to the natural time scale of the process, the distance to
equilibrium drops from its maximal possible value to near zero, and only after
this time window is the convergence exponentially fast. This is what is known
as the cut-off phenomenon in the context of Markov chains of increasing
complexity. In addition, we are able to give general conditions to decide
whether the distance to equilibrium converges in this time window to a
universal function, a fact known as profile cut-off.
|
Cross-document event coreference resolution (CDECR) involves clustering event
mentions across multiple documents that refer to the same real-world events.
Existing approaches utilize fine-tuning of small language models (SLMs) like
BERT to address the compatibility among the contexts of event mentions.
However, due to the complexity and diversity of contexts, these models are
prone to learning simple co-occurrences. Recently, large language models (LLMs)
like ChatGPT have demonstrated impressive contextual understanding, yet they
encounter challenges in adapting to specific information extraction (IE) tasks.
In this paper, we propose a collaborative approach for CDECR, leveraging the
capabilities of both a universally capable LLM and a task-specific SLM. The
collaborative strategy begins with the LLM accurately and comprehensively
summarizing events through prompting. Then, the SLM refines its learning of
event representations based on these insights during fine-tuning. Experimental
results demonstrate that our approach surpasses the performance of both the
large and small language models individually, forming a complementary
advantage. Across various datasets, our approach achieves state-of-the-art
performance, underscoring its effectiveness in diverse scenarios.
|
Majorana's stellar representation, which represents the evolution of a
quantum state by the trajectories of the Majorana stars on a Bloch sphere,
provides an intuitive way to study a physical system with a high-dimensional
projective Hilbert space. In this Letter, we study the Berry phase by means of
these stars and their loops on the Bloch sphere. It is shown that the Berry
phase of a general spin state can be expressed by an elegant formula involving
the solid angles of the Majorana star loops. Furthermore, these results can be
naturally applied to a general state of arbitrary dimension. To demonstrate
our theory, we study a two-mode interacting boson system. Finally, the relation
between the stars' correlations and quantum entanglement is discussed.
|
Using quenched chiral perturbation theory, we compute the long-distance
behaviour of two-point functions of flavour non-singlet axial and vector
currents in a finite volume, for small quark masses, and at a fixed gauge-field
topology. We also present the corresponding predictions for the unquenched
theory at fixed topology. These results can in principle be used to measure the
low-energy constants of the chiral Lagrangian, from lattice simulations in
volumes much smaller than one pion Compton wavelength. We show that quenching
has a dramatic effect on the vector correlator, which is argued to vanish to
all orders, while the axial correlator appears to be a robust observable only
moderately sensitive to quenching.
|
Natural climate solutions (NCS) are critical for mitigating climate change
through ecosystem-based carbon removal and emissions reductions. NCS
implementation can also generate biodiversity and human well-being co-benefits
and trade-offs ("NCS co-impacts"), but the volume of evidence on NCS co-impacts
has grown rapidly across disciplines, is poorly understood, and remains to be
systematically collated and synthesized. A global evidence map of NCS
co-impacts would overcome key barriers to NCS implementation by providing
relevant information on co-benefits and trade-offs where carbon mitigation
potential alone does not justify NCS projects. We employ large language models
to assess over two million articles, finding 257,266 relevant articles on NCS
co-impacts. We analyze this large and dispersed body of literature using
innovative machine learning methods to extract relevant data (e.g., study
location, species, and other key variables), and create a global evidence map
on NCS co-impacts. Evidence on NCS co-impacts has grown approximately ten-fold
in three decades, although some of the most abundant evidence is associated
with pathways that have less mitigation potential. We find that studies often
examine multiple NCS pathways, indicating natural NCS pathway complements, and
each NCS is often associated with two or more coimpacts. Finally, NCS
co-impacts evidence and priority areas for NCS are often mismatched--some
countries with high mitigation potential from NCS have few published studies on
the broader co-impacts of NCS implementation. Our work advances and makes
available novel methods and systematic and representative data of NCS
co-impacts studies, thus providing timely insights to inform NCS research and
action globally.
|
This paper discusses the phenomenon of spontaneous symmetry breaking in the
Schr\"odinger representation formulation of quantum field theory. The analysis
is presented for three-dimensional space-time abelian gauge theories with
either Maxwell, Maxwell-Chern-Simons, or pure Chern-Simons terms as the gauge
field contribution to the action, each of which leads to a different form of
mass generation for the gauge fields.
|
Language models as a service (LMaaS) enable users to accomplish tasks without
requiring specialized knowledge, simply by paying a service provider. However,
numerous providers offer massive large language model (LLM) services with
variations in latency, performance, and pricing. Consequently, constructing a
cost-saving LLM service invocation strategy with low-latency,
high-performance responses that meet specific task demands becomes a pressing
challenge. This paper provides a comprehensive overview of LLM service
invocation methods. Technically, we give a formal definition of the problem of
constructing an effective invocation strategy in LMaaS and present an LLM
service invocation framework. The framework classifies existing methods into
four different components, including input abstraction, semantic cache,
solution design, and output enhancement, which can be freely combined with
each other.
Finally, we emphasize the open challenges that have not yet been well addressed
in this task and shed light on future research.
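
As an illustration of the "semantic cache" component mentioned above, a cached
response can be reused whenever a new query embeds close enough to a previous
one, so that a paid service call is only made on a cache miss; the embedding
and LLM calls below are placeholder callables, not any particular provider's
API:

import numpy as np

class SemanticCache:
    def __init__(self, embed, call_llm, threshold=0.9):
        self.embed, self.call_llm, self.threshold = embed, call_llm, threshold
        self.keys, self.values = [], []

    def query(self, text):
        v = self.embed(text)
        v = v / np.linalg.norm(v)
        for k, response in zip(self.keys, self.values):
            if float(k @ v) >= self.threshold:        # cosine-similarity cache hit
                return response
        response = self.call_llm(text)                # cache miss: invoke the paid service
        self.keys.append(v)
        self.values.append(response)
        return response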
|
Recently, a relativistic gravitation theory has been proposed [J. D.
Bekenstein, Phys. Rev. D {\bf 70}, 083509 (2004)] that gives the Modified
Newtonian Dynamics (or MOND) in the weak acceleration regime. The theory is
based on three dynamic gravitational fields and succeeds in explaining a large
part of extragalactic and gravitational lensing phenomenology without invoking
dark matter. In this work we consider the strong gravity regime of TeVeS. We
study spherically symmetric, static and vacuum spacetimes relevant for a
non-rotating black hole or the exterior of a star. Two branches of solutions
are identified: in the first the vector field is aligned with the time
direction while in the second the vector field has a non-vanishing radial
component. We show that in the first branch of solutions the \beta and \gamma
PPN coefficients in TeVeS are identical to those of general relativity (GR)
while in the second the \beta PPN coefficient differs from unity violating
observational determinations of it (for the choice of the free function $F$ of
the theory made in Bekenstein's paper). For the first branch of solutions, we
derive analytic expressions for the physical metric and discuss their
implications. Applying these solutions to the case of black holes, it is shown
that they violate causality (since they allow for superluminal propagation of
metric, vector and scalar waves) in the vicinity of the event horizon and/or
that they are characterized by negative energy density carried by the fields.
|
The complete form of the amplitude of one closed string Ramond-Ramond (RR),
two fermionic strings and one scalar field in IIB superstring theory has been
computed in detail. Deriving $<V_{C}V_{\bar\psi}V_{\psi} V_{\phi}>$ by using
suitable gauge fixing, we discover some new vertices and their higher
derivative corrections. We investigate both infinite gauge and scalar
$u-$channel poles of this amplitude. In particular, by using the fact that the
kinetic term of fermion fields has no correction, employing Born-Infeld action,
the Wess-Zumino terms and their higher derivative corrections, we discover all
infinite $t,s-$channel fermion poles. The couplings between one RR and two
fermions and all their infinite higher derivative corrections have been
explored. In order to look for all infinite $(s+t+u)-$ channel scalar/gauge
poles for $p+2=n,p=n$ cases, we obtain the couplings between two fermions-two
scalars and two fermions, one scalar and one gauge field as well as all their
infinite higher derivative corrections in type IIB. Specifically, we make
various comments based on arXiv:1205.5079 in favor of the universality
conjecture for all-order higher derivative corrections (with or without the
low energy expansion) and the open/closed string relation that is responsible
for all superstring scattering amplitudes in IIA and IIB.
|
We argue that all the necessary ingredients for successful inflation are
present in the flat directions of the Minimally Supersymmetric Standard Model.
We show that out of many gauge invariant combinations of squarks, sleptons and
Higgses, there are two directions, ${\bf LLe}$, and ${\bf udd}$, which are
promising candidates for the inflaton. The model predicts more than $10^3$
e-foldings with an inflationary scale of $H_{\rm inf}\sim {\cal O}(1-10)$ GeV,
provides a tilted spectrum with an amplitude of $\delta_H\sim 10^{-5}$ and a
negligible tensor perturbation. The temperature of the thermalized plasma could
be as low as $T_{rh}\sim {\cal O}(1-10)$~TeV. Parts of the inflaton potential
can be determined independently of cosmology by future particle physics
experiments.
|
We associate cotangent models to a neighbourhood of a Liouville torus in
symplectic and Poisson manifolds focusing on a special class called
$b$-Poisson/$b$-symplectic manifolds. The semilocal equivalence with such
models uses the corresponding action-angle coordinate theorems in these
settings: the theorem of Liouville-Mineur-Arnold [A74] for symplectic manifolds
and an action-angle theorem for regular Liouville tori in Poisson manifolds
[LMV11]. Our models cover regular Liouville tori of Poisson manifolds but
also the Liouville tori on the singular locus of a $b$-Poisson
manifold. For this latter class of Poisson structures we define a twisted
cotangent model. The equivalence with this twisted cotangent model is given by
an action-angle theorem recently proved in [KMS16]. This viewpoint of cotangent
models provides a new machinery to construct examples of integrable systems,
which are especially valuable in the $b$-symplectic case where not many sources
of examples are known. At the end of the paper we introduce non-degenerate
singularities as lifted cotangent models on $b$-symplectic manifolds and
discuss some generalizations of these models to general Poisson manifolds.
|
We report measurement of trigonometric parallax of IRAS 05168+3634 with VERA.
The parallax is 0.532 +/- 0.053 mas, corresponding to a distance of
1.88+0.21/-0.17 kpc. This result is significantly smaller than the previous
distance estimate of 6 kpc based on kinematic distance. This drastic change in
the source distance revises not only physical parameters of IRAS 05168+3634,
but also its location of the source, placing it in the Perseus arm rather than
the Outer arm. We also measure proper motions of the source. A combination of
the distance and the proper motions with systemic velocity yields rotation
velocity ({\Theta}) of 227+9/-11 km s-1 at the source, assuming {\Theta}0 = 240
km s-1. Our result combined with previous VLBI results for six sources in the
Perseus arm indicates that the sources rotate systematically slower than the
Galactic rotation velocity at the LSR. In fact, the observed disk peculiar
motions averaged over the seven sources in the Perseus arm are (Umean, Vmean) =
(11 +/- 3, -17 +/- 3) km s-1, indicating that these seven sources are
systematically moving toward the Galactic center, and lag behind the Galactic
rotation.
|
The finite n-th polylogarithm li_n(z) in Z/p[z] is defined as the sum over k
from 1 to p-1 of z^k/k^n. We state and prove the following theorem. Let
Li_k:C_p to C_p be the p-adic polylogarithms defined by Coleman. Then a certain
linear combination F_n of products of polylogarithms and logarithms, with
coefficients which are independent of p, has the property that p^{1-n}
DF_n(z^p) reduces modulo p>n+1 to li_{n-1}(z) where D is the Cathelineau
operator z(1-z) d/dz. A slightly modified version of this theorem was
conjectured by Kontsevich. This theorem is used by Elbaz-Vincent and Gangl to
deduce functional equations of finite polylogarithms from those of complex
polylogarithms.
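
Concretely, the coefficients of li_n(z) = sum_{k=1}^{p-1} z^k/k^n over Z/p are
the modular inverses k^{-n} mod p, which the following helper (ours, purely
illustrative; Python 3.8+) computes:

def finite_polylog_coeffs(n, p):
    """Coefficients c_1, ..., c_{p-1} of li_n(z) in Z/p[z], where c_k = k^(-n) mod p."""
    return [pow(k, -n, p) for k in range(1, p)]

print(finite_polylog_coeffs(2, 7))   # coefficients of z, z^2, ..., z^6 for li_2 mod 7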
|
We consider fixed effects binary choice models with a fixed number of periods
$T$ and regressors without a large support. If the time-varying unobserved
terms are i.i.d. with known distribution $F$, \cite{chamberlain2010} shows that
the common slope parameter is point identified if and only if $F$ is logistic.
However, he only considers in his proof $T=2$. We show that the result does not
generalize to $T\geq 3$: the common slope parameter can be identified when $F$
belongs to a family including the logit distribution. Identification is based
on a conditional moment restriction. Under restrictions on the covariates,
these moment conditions lead to point identification of relative effects. If
$T=3$ and mild conditions hold, GMM estimators based on these conditional
moment restrictions reach the semiparametric efficiency bound. Finally, we
illustrate our method by revisiting Brender and Drazen (2008).
|
A graph $G=(V,E)$ is word-representable if there exists a word $w$ over the
alphabet $V$ such that letters $x$ and $y$ alternate in $w$ if and only if
$(x,y)\in E$. A triangular grid graph is a subgraph of a tiling of the plane
with equilateral triangles defined by a finite number of triangles, called
cells. A subdivision of a triangular grid graph is obtained by replacing some
of its cells with plane copies of the complete graph $K_4$.
Inspired by a recent elegant result of Akrobotu et al., who classified
word-representable triangulations of grid graphs related to convex polyominoes,
we characterize word-representable subdivisions of triangular grid graphs. A
key role in the characterization is played by smart orientations introduced by
us in this paper. As a corollary to our main result, we obtain that any
subdivision of boundary triangles in the Sierpi\'{n}ski gasket graph is
word-representable.
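
To make the alternation condition in the definition above concrete, the
following small helper (ours, purely illustrative) tests whether two letters
alternate in a word:

def alternate(w, x, y):
    """Return True iff the letters x and y alternate in the word w."""
    seq = [c for c in w if c in (x, y)]
    return all(a != b for a, b in zip(seq, seq[1:]))

print(alternate("abab", "a", "b"))   # True:  'a' and 'b' alternate, so the edge (a, b) is present
print(alternate("aabb", "a", "b"))   # False: no alternation, hence no edge (a, b)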
|
In this article, after recalling and discussing the conventional extremality,
local extremality, stationarity and approximate stationarity properties of
collections of sets and the corresponding (extended) extremal principle, we
focus on extensions of these properties and the corresponding dual conditions
with the goal to refine the main arguments used in this type of results,
clarify the relationships between different extensions and expand the
applicability of the generalised separability results. We introduce and study
new more universal concepts of relative extremality and stationarity and
formulate the relative extended extremal principle. Among other things, certain
stability of the relative approximate stationarity is proved. Some links are
established between the relative extremality and stationarity properties of
collections of sets and (the absence of) certain regularity, lower
semicontinuity and Lipschitz-like properties of set-valued mappings.
|
In this article, we investigate the asymptotic formation of consensus for
several classes of time-dependent cooperative graphon dynamics. After
motivating the use of this type of macroscopic models to describe multi-agent
systems, we adapt the classical notion of scrambling coefficient to this
setting and leverage it to establish sufficient conditions ensuring
exponential convergence to consensus with respect to the $L^{\infty}$-norm
topology. We then shift our attention to consensus formation expressed in terms
of the $L^2$-norm, and prove three different consensus results for symmetric,
balanced and strongly connected topologies, which involve a suitable
generalisation of the notion of algebraic connectivity to this
infinite-dimensional framework. We then show that, just as in the
finite-dimensional setting, the notion of algebraic connectivity that we
propose encodes information about the connectivity properties of the underlying
interaction topology. We finally use the corresponding results to shed some
light on the relation between $L^2$- and $L^{\infty}$-consensus formation, and
illustrate our contributions by a series of numerical simulations.
|
We propose a novel visual SLAM method that integrates text objects tightly by
treating them as semantic features via fully exploring their geometric and
semantic prior. The text object is modeled as a texture-rich planar patch whose
semantic meaning is extracted and updated on the fly for better data
association. With the full exploration of locally planar characteristics and
semantic meaning of text objects, the SLAM system becomes more accurate and
robust even under challenging conditions such as image blurring, large
viewpoint changes, and significant illumination variations (day and night). We
tested our method in various scenes with ground truth data. The results
show that integrating texture features leads to a superior SLAM system
that can match images across day and night. The reconstructed semantic 3D text
map could be useful for navigation and scene understanding in robotic and mixed
reality applications. Our project page: https://github.com/SJTU-ViSYS/TextSLAM .
|
Strutinsky's averaging (SA) method is applied to multiquark droplets to
systematically extract the smooth part of the exact quantal energy and thereby
the shell correction energies. It is shown within the bag model that the
semi-phenomenological density of states expression given up to curvature order
is almost equivalent to the SA method. A comparative study of the bag model and
the relativistic harmonic oscillator potential for quarks is done to
investigate the quark mass dependence of the finite-size effects. It is found
that there is an important difference between these two cases, which may be
related to the presence or absence of the net spin-orbit effect.
|
In these lecture notes we discuss recently conjectured instability of anti-de
Sitter space, resulting in gravitational collapse of a large class of
arbitrarily small initial perturbations. We uncover the technical details used
in the numerical study of spherically symmetric Einstein-massless scalar field
system with negative cosmological constant that led to the conjectured
instability.
|
The optical depth $\tau$ is the least well determined parameter in the
standard model of cosmology, and one whose precise value is important both for
understanding reionization and for inferring fundamental physics from
cosmological measurements. We forecast how well future epoch of reionization
experiments could constrain $\tau$ using a symmetries-based bias expansion
that highlights the special role played by anisotropies in the power spectrum
on large scales. Given a parametric model for the ionization evolution inspired
by the physical behavior of more detailed reionization simulations, we find
that future 21cm experiments could place tight constraints on the timing and
duration of reionization and hence constraints on $\tau$ that are competitive
with proposed, space-based CMB missions provided they can measure $k\approx
0.1\,h\,\text{Mpc}^{-1}$ with a clean foreground wedge across redshifts
spanning the most active periods of reionization, corresponding to ionization
fractions $0.2 \lesssim x \lesssim 0.8$. Significantly improving upon existing
CMB-based measurements with next-generation 21cm surveys would require
substantially longer observations ($\sim5$ years) than standard
$\mathcal{O}(1000 \,\,\text{hour})$ integration times. Precise measurements of
smaller scales will not improve constraints on $\tau$ until a better
understanding of the astrophysics of reionization is achieved. In the presence
of noise and foregrounds even future 21cm experiments will struggle to
constrain $\tau$ if the ionization evolution deviates significantly from simple
parametric forms.
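For readers unfamiliar with forecasting machinery, the generic Fisher-matrix sketch below illustrates how an error such as sigma(tau) is obtained from power-spectrum derivatives and band-power variances; the paper's bias expansion, foreground wedge, and noise model are not reproduced, and all names here are our own:

import numpy as np

def fisher_matrix(dP_dtheta, var_P):
    # dP_dtheta: shape (n_params, n_k), derivatives dP(k)/dtheta_a of the model power spectrum
    # var_P:     shape (n_k,), variance of each measured band power
    return dP_dtheta @ np.diag(1.0 / var_P) @ dP_dtheta.T

def marginalized_error(F, index):
    # 1-sigma error on parameter `index` after marginalizing over all others
    return np.sqrt(np.linalg.inv(F)[index, index])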
|
This paper shows how new C-numerical-range related structures may arise from
practical problems in quantum control--and vice versa, how an understanding of
these structures helps to tackle hot topics in quantum information.
We start out with an overview of the role of C-numerical ranges in current
research problems in quantum theory: the quantum mechanical task of maximising
the projection of a point on the unitary orbit of an initial state onto a
target state C relates to the C-numerical radius of A via maximising the trace
function |\tr \{C^\dagger UAU^\dagger\}|. In quantum control of n qubits one
may be interested (i) in having U\in SU(2^n) for the entire dynamics, or (ii)
in restricting the dynamics to {\em local} operations on each qubit, i.e. to
the n-fold tensor product SU(2)\otimes SU(2)\otimes ... \otimes SU(2).
Interestingly, the latter then leads to a novel entity, the {\em local}
C-numerical range W_{\rm loc}(C,A), whose intricate geometry is neither
star-shaped nor simply connected in contrast to the conventional C-numerical
range. This is shown in the accompanying paper (math-ph/0702005).
We present novel applications of the C-numerical range in quantum control
assisted by gradient flows on the local unitary group: (1) they serve as
powerful tools for deciding whether a quantum interaction can be inverted in
time (in a sense generalising Hahn's famous spin echo); (2) they allow for
optimising witnesses of quantum entanglement. We conclude by relating the
relative C-numerical range to problems of constrained quantum optimisation, for
which we also give Lagrange-type gradient flow algorithms.
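As a hedged numerical illustration (our own sketch, not code from this paper or its companion), the local C-numerical range W_loc(C,A) can be explored by sampling random local unitaries U in SU(2)^{\otimes n} and collecting the values tr(C^\dagger U A U^\dagger):

import numpy as np

def random_su2(rng):
    # Haar-random 2x2 unitary (QR with phase correction), rescaled to determinant 1
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.sqrt(np.linalg.det(q) + 0j)

def local_unitary(n, rng):
    u = np.array([[1.0 + 0j]])
    for _ in range(n):
        u = np.kron(u, random_su2(rng))
    return u

def sample_wloc(C, A, n, samples=2000, seed=0):
    # C, A are 2^n x 2^n matrices; returns complex points approximating W_loc(C, A)
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(samples):
        U = local_unitary(n, rng)
        pts.append(np.trace(C.conj().T @ U @ A @ U.conj().T))
    return np.array(pts)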
|
The Ekedahl-Oort type is a combinatorial invariant of a principally polarized
abelian variety $A$ defined over an algebraically closed field of
characteristic $p > 0$. It characterizes the $p$-torsion group scheme of $A$ up
to isomorphism. Equivalently, it characterizes (the mod $p$ reduction of) the
Dieudonn\'e module of $A$ or the de Rham cohomology of $A$ as modules under the
Frobenius and Verschiebung operators.
There are very few results about which Ekedahl-Oort types occur for Jacobians
of curves. In this paper, we consider the class of Hermitian curves, indexed by
a prime power $q=p^n$, which are supersingular curves well-known for their
exceptional arithmetic properties. We determine the Ekedahl-Oort types of the
Jacobians of all Hermitian curves. An interesting feature is that their
indecomposable factors are determined by the orbits of the
multiplication-by-two map on ${\mathbb Z}/(2^n+1)$, and thus do not depend on
$p$. This yields applications about the decomposition of the Jacobians of
Hermitian curves up to isomorphism.
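To make the combinatorial statement concrete, the short self-contained sketch below (an assumed illustration, not the authors' code) lists the orbits of the multiplication-by-two map x -> 2x on Z/(2^n + 1), which according to the abstract determine the indecomposable factors of the Ekedahl-Oort type:

def doubling_orbits(n):
    # Orbits of x -> 2x on Z/(2^n + 1); doubling is a bijection since 2^n + 1 is odd.
    m = 2 ** n + 1
    seen, orbits = set(), []
    for x in range(m):
        if x in seen:
            continue
        orbit, y = [], x
        while y not in orbit:
            orbit.append(y)
            seen.add(y)
            y = (2 * y) % m
        orbits.append(orbit)
    return orbits

# Example: n = 3 gives the orbits of Z/9 under doubling: [0], [1, 2, 4, 8, 7, 5], [3, 6].
print(doubling_orbits(3))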
|
We propose a slight modification of the properties of a spectral geometry a
la Connes, which allows for some of the algebraic relations to be satisfied
only modulo compact operators. On the equatorial Podles sphere we construct an
su_q(2)-equivariant Dirac operator and a real structure which satisfy these
modified properties.
|
In this paper, we introduce a novel approach for semantic description of
object features based on the prototypicality effects of the Prototype Theory.
Our prototype-based description model encodes and stores the semantic meaning
of an object, while describing its features using the semantic prototype
computed by CNN classification models. Our method uses semantic prototypes to
create discriminative descriptor signatures that describe an object
highlighting its most distinctive features within the category. Our experiments
show that: i) our descriptor preserves the semantic information used by the
CNN-models in classification tasks; ii) our distance metric can be used as the
object's typicality score; iii) our descriptor signatures are semantically
interpretable and enable the simulation of the prototypical organization of
objects within a category.
|
In spintronics, one of the long-standing questions is why the MgO-based
magnetic tunnel junction (MTJ) is almost the only option to achieve a large
tunnelling magnetoresistance (TMR) ratio at room temperature (RT), albeit not
as large as theoretically predicted. This study focuses on the development of
an almost strain-free MTJ using metastable bcc CoxMn100-x ferromagnetic films.
We have investigated the degree of crystallisation in MTJs consisting of
CoxMn100-x/MgO/CoxMn100-x (x = 66, 75, 83 and 86) in relation to their TMR
ratios. Cross-sectional high-resolution transmission electron microscopy
(HRTEM) reveals almost consistent lattice constants of these layers for 66 < x
< 83, which maintain large TMR ratios of 229% at RT, confirming the soft
nature of the CoxMn100-x layer with some dislocations at the MgO/Co75Mn25
interfaces. For x = 86, on the other hand, the TMR ratio is found to be reduced
to 142% at RT, which is partially attributed to the increased number of the
dislocations at the MgO/Co86Mn14 interfaces and amorphous grains identified in
the MgO barrier. Ab initio calculations confirm the stability against
crystalline deformation across a broad compositional range in CoMn,
demonstrating the advantage of a strain-free interface for achieving much
larger TMR ratios.
|
We study the statistics of the number of executed hops of adatoms at the
surface of films grown with the Clarke-Vvedensky (CV) model in simple cubic
lattices. The distributions of this number, $N$, are determined in films with
average thicknesses close to $50$ and $100$ monolayers for a broad range of
values of the diffusion-to-deposition ratio $R$ and of the probability
$\epsilon$ that lowers the diffusion coefficient for each lateral neighbor. The
mobility of subsurface atoms and the energy barriers for crossing step edges
are neglected. Simulations show that the adatoms execute uncorrelated diffusion
during the time in which they move on the film surface. In a low temperature
regime, typically with $R\epsilon\lesssim 1$, the attachment to lateral
neighbors is almost irreversible, the average number of hops scales as $\langle
N\rangle \sim R^{0.38\pm 0.01}$, and the distribution of that number decays
approximately as $\exp\left[-\left({N/\langle N\rangle}\right)^{0.80\pm
0.07}\right]$. Similar decay is observed in simulations of random walks in a
plane with randomly distributed absorbing traps and the estimated relation
between $\langle N\rangle$ and the density of terrace steps is similar to that
observed in the trapping problem, which provides a conceptual explanation of
that regime. As the temperature increases, $\langle N\rangle$ crosses over to
another regime when $R\epsilon^{3.0\pm 0.3}\sim 1$, which indicates high
mobility of all adatoms at terrace borders. The distributions $P\left(
N\right)$ change to simple exponential decays, due to the constant probability
for an adatom to become immobile after being covered by a newly deposited layer.
At higher temperatures, the surfaces become very smooth and $\langle N\rangle
\sim R\epsilon^{1.85\pm 0.15}$, which is explained by an analogy with
submonolayer growth.
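The trapping analogy mentioned above can be illustrated with a toy Monte Carlo experiment (our own sketch with arbitrary parameters, not the authors' simulation): a walker on a periodic square lattice hops until it lands on a randomly placed absorbing trap, and the histogram of hop counts can then be compared with a stretched-exponential decay of the form exp[-(N/<N>)^0.8]:

import numpy as np

def hops_before_trapping(trap_density=0.02, L=128, walkers=1000, seed=1):
    # Random walk with randomly distributed absorbing traps on a periodic LxL lattice.
    rng = np.random.default_rng(seed)
    traps = rng.random((L, L)) < trap_density
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    counts = []
    for _ in range(walkers):
        x, y = rng.integers(L, size=2)
        n = 0
        while not traps[x, y]:
            dx, dy = steps[rng.integers(4)]
            x, y, n = (x + dx) % L, (y + dy) % L, n + 1
        counts.append(n)
    return np.array(counts)  # histogram this and compare with an exp[-(N/<N>)^0.8]-type decay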
|
A group of intelligent agents that fulfill a set of tasks in parallel is first
represented by the tensor multiplication of the corresponding processes in a
linear logic game category. An optimal itinerary in the configuration space of
the group states is defined as a play with maximal total reward in the
category. Further novel aspects are that the reward is represented as a degree
of certainty (visibility) of an agent's goal, and that the system goals are
chosen by the greatest value corresponding to these processes in the system
goal lattice.
|
We explore element-wise convex combinations of two permutation-aligned neural
network parameter vectors $\Theta_A$ and $\Theta_B$ of size $d$. We conduct
extensive experiments by examining various distributions of such model
combinations parametrized by elements of the hypercube $[0,1]^{d}$ and its
vicinity. Our findings reveal that broad regions of the hypercube form surfaces
of low loss values, indicating that the notion of linear mode connectivity
extends to a more general phenomenon which we call mode combinability. We also
make several novel observations regarding linear mode connectivity and model
re-basin. We demonstrate a transitivity property: two models re-based to a
common third model are also linear mode connected, and a robustness property:
even with significant perturbations of the neuron matchings the resulting
combinations continue to form a working model. Moreover, we analyze the
functional and weight similarity of model combinations and show that such
combinations are non-vacuous in the sense that there are significant functional
differences between the resulting models.
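A minimal sketch of the basic operation studied above, the element-wise convex combination of two permutation-aligned parameter vectors with weights drawn from the hypercube (sizes and distributions here are illustrative assumptions only):

import numpy as np

def combine(theta_a, theta_b, lam):
    # lam has the same shape as the parameter vectors; lam = 0.5 everywhere
    # recovers the midpoint of ordinary linear interpolation.
    return lam * theta_a + (1.0 - lam) * theta_b

d = 10
rng = np.random.default_rng(0)
theta_a, theta_b = rng.normal(size=d), rng.normal(size=d)
lam = rng.uniform(0.0, 1.0, size=d)        # a random element of the hypercube [0, 1]^d
theta_mix = combine(theta_a, theta_b, lam) # evaluate the resulting model at these weights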
|
We establish a number of results that reveal a form of irreversibility
(distinguishing arbitrarily long from finite time) in 2d Euler flows, by virtue
of twisting of the particle trajectory map. Our main observation is that
twisting in Hamiltonian flows on annular domains, which can be quantified by
the differential winding of particles around the center of the annulus, is
stable to perturbations. In fact, it is possible to prove the stability of the
whole of the lifted dynamics to non-autonomous perturbations (though single
particle paths are generically unstable). These all-time stability facts are
used to establish a number of results related to the long-time behavior of
inviscid fluid flows. In particular, we show that, near general stable steady
states, (i) all Euler flows exhibit indefinite twisting and hence "age", (ii)
vorticity generically becomes filamented and exhibits wandering in $L^\infty$.
We also give examples of infinite time gradient growth for smooth solutions to
the SQG equation and of smooth vortex patch solutions to the Euler equation
that entangle and develop unbounded perimeter in infinite time.
|
Guided by the non-relativistic effective field theory of interactions between
Weakly Interacting Massive Particles (WIMPs) of spin 1/2 and nuclei we study
direct detection exclusion plots for an example of non-standard spin-dependent
interaction and compare it to the standard one. We analyze an extensive list of
15 existing experiments including the effects of momentum dependence and
isospin violation. In our analysis, we fixed the dark matter velocity
distribution to a Maxwellian.
|
Recently Peebles and Vilenkin proposed and quantitatively analyzed the
fascinating idea that a substantial fraction of the present cosmic energy
density could reside in the vacuum potential energy of the scalar field
responsible for inflation (quintessential inflation). Here we compute the
signature of this model in the cosmic microwave background polarization and
temperature anisotropies and in the large scale structure.
|
Humor is a central aspect of human communication that has not been solved for
artificial agents so far. Large language models (LLMs) are increasingly able to
capture implicit and contextual information. Especially, OpenAI's ChatGPT
recently gained immense public attention. The GPT3-based model almost seems to
communicate on a human level and can even tell jokes. Humor is an essential
component of human communication. But is ChatGPT really funny? We put ChatGPT's
sense of humor to the test. In a series of exploratory experiments around
jokes, i.e., generation, explanation, and detection, we seek to understand
ChatGPT's capability to grasp and reproduce human humor. Since the model itself
is not accessible, we applied prompt-based experiments. Our empirical evidence
indicates that the jokes are not hard-coded, yet they are also mostly not newly
generated by the model. Over 90% of 1008 generated jokes were the same 25
jokes. The system
accurately explains valid jokes but also comes up with fictional explanations
for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the
classification of jokes. ChatGPT has not solved computational humor yet but it
can be a big leap toward "funny" machines.
|
The goal of these notes is to give a short introduction to Fukaya categories
and some of their applications. The first half of the text is devoted to a
brief review of Lagrangian Floer (co)homology and product structures. Then we
introduce the Fukaya category (informally and without a lot of the necessary
technical detail), and briefly discuss algebraic concepts such as exact
triangles and generators. Finally, we mention wrapped Fukaya categories and
outline a few applications to symplectic topology, mirror symmetry and
low-dimensional topology. This text is based on a series of lectures given at a
Summer School on Contact and Symplectic Topology at Universit\'e de Nantes in
June 2011.
|
Given a sample covariance matrix, we examine the problem of maximizing the
variance explained by a linear combination of the input variables while
constraining the number of nonzero coefficients in this combination. This is
known as sparse principal component analysis and has a wide array of
applications in machine learning and engineering. We formulate a new
semidefinite relaxation to this problem and derive a greedy algorithm that
computes a full set of good solutions for all target numbers of nonzero
coefficients, with total complexity O(n^3), where n is the number of variables.
We then use the same relaxation to derive sufficient conditions for global
optimality of a solution, which can be tested in O(n^3) per pattern. We discuss
applications in subset selection and sparse recovery and show on artificial
examples and biological data that our algorithm does provide globally optimal
solutions in many cases.
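The naive greedy sketch below illustrates the idea of producing one candidate support per target cardinality by maximizing the leading eigenvalue of the corresponding principal submatrix of the sample covariance; it conveys the flavor of a full-path greedy method but is not the paper's O(n^3) implementation or its semidefinite relaxation:

import numpy as np

def greedy_sparse_pca(S, k_max):
    # S: sample covariance matrix; returns one (support, explained variance) pair
    # for every target number of nonzero coefficients from 1 to k_max.
    n = S.shape[0]
    support, path = [], []
    for _ in range(k_max):
        best_j, best_val = None, -np.inf
        for j in range(n):
            if j in support:
                continue
            idx = support + [j]
            val = np.linalg.eigvalsh(S[np.ix_(idx, idx)])[-1]  # largest eigenvalue
            if val > best_val:
                best_j, best_val = j, val
        support.append(best_j)
        path.append((list(support), best_val))
    return path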
|
We clarify the relations among different Fourier-based approaches to option
pricing, and improve the B-spline probability density projection method using
the sinh-acceleration technique. This allows us to efficiently separate the
control of different sources of errors better than the FFT-based realization
allows; in many cases, the CPU time decreases as well. We demonstrate the
improvement of the B-spline projection method through several numerical
experiments in option pricing, including European and barrier options, where
the SINH acceleration technique proves to be robust and accurate.
|
A kinetic inhomogeneous Boltzmann-type equation is proposed to model the
dynamics of the number of agents in a large market depending on the estimated
value of an asset and the rationality of the agents. The interaction rules take
into account the interplay of the agents with sources of public information,
herding phenomena, and irrationality of the individuals. In the formal grazing
collision limit, a nonlinear nonlocal Fokker-Planck equation with anisotropic
(or incomplete) diffusion is derived. The existence of global-in-time weak
solutions to the Fokker-Planck initial-boundary-value problem is proved.
Numerical experiments for the Boltzmann equation highlight the importance of
the reliability of public information in the formation of bubbles and crashes.
The use of Bollinger bands in the simulations shows how herding may lead to
strong trends with low volatility of the asset prices, but eventually also to
abrupt corrections.
|
In this paper, we first present a centralized traffic control model based on
the emerging dynamic path flows. This new model in essence views the whole
target network as one integral piece in which traffic propagates based on
traffic flow dynamics, vehicle paths, and traffic control. In light of this
centralized traffic control concept, most requirements for the existing traffic
control coordination will be dropped in the optimal traffic control operations,
such as the common cycle length or offsets. Instead, the optimal traffic
control strategy across intersections will be highly adaptive over time to
minimize the total travel time and it can also prevent any short-term traffic
congestion according to the space-time characteristics of vehicle path flows.
A mixed integer linear programming (MILP) formulation is then presented to
model the propagations of path flows given a centralized traffic control
strategy. Secondly, a new approximated Lagrangian decomposition framework is
presented. Solving the proposed MILP model with the traditional Lagrangian
decomposition approach requires balancing problem tractability after
decomposition against computing complexity. To address this issue, we propose
an approximate Lagrangian decomposition framework in which the target problem
is reformulated into an approximated problem by moving its complicating
attributes from the constraints into the objective function, thereby reducing
the computing complexity. The computing efficiency loss
of the objective function can be mitigated via multiple advanced computing
techniques. With the proposed approximated approach, the upper bound and tight
lower bound can be obtained via two customized dynamic network loading models.
In the end, one demonstrative and one real-world example are presented to
show the validity, robustness, and scalability of the new approach.
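The heavily simplified, generic sketch below conveys only the pricing idea behind Lagrangian-type decompositions discussed above: complicating constraints A x = b are moved into the objective with multipliers that are updated by subgradient steps. The traffic-specific MILP, path-flow dynamics, and dynamic network loading models are not reproduced, and solve_relaxed is a hypothetical placeholder for the decomposed subproblem solver:

import numpy as np

def subgradient_dual(solve_relaxed, A, b, iters=50, step0=1.0):
    # solve_relaxed(mu) must return (x, value) minimizing c'x + mu'(A x - b)
    # over the remaining "easy" constraints; value is then a valid lower bound.
    mu = np.zeros(len(b))
    best_lb = -np.inf
    for t in range(1, iters + 1):
        x, value = solve_relaxed(mu)
        best_lb = max(best_lb, value)
        mu = mu + (step0 / t) * (A @ x - b)  # subgradient step on the dual variables
    return best_lb, mu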
|
We prove that for $1<p\le q<\infty$, $qp\geq {p'}^2$ or $p'q'\geq q^2$,
$\frac{1}{p}+\frac{1}{p'}=\frac{1}{q}+\frac{1}{q'}=1$, $$\|\omega
P_\alpha(f)\|_{L^p(\mathcal{H},y^{\alpha+(2+\alpha)(\frac{q}{p}-1)}dxdy)}\le
C_{p,q,\alpha}[\omega]_{B_{p,q,\alpha}}^{(\frac{1}{p'}+\frac{1}{q})\max\{1,\frac{p'}{q}\}}\|\omega
f\|_{L^p(\mathcal{H},y^{\alpha}dxdy)}$$ where $P_\alpha$ is the weighted
Bergman projection of the upper-half plane $\mathcal{H}$, and
$$[\omega]_{B_{p,q,\alpha}}:=\sup_{I\subset
\mathbb{R}}\left(\frac{1}{|I|^{2+\alpha}}\int_{Q_I}\omega^{q}dV_\alpha\right)\left(\frac{1}{|I|^{2+\alpha}}\int_{Q_I}\omega^{-p'}dV_\alpha\right)^{\frac{q}{p'}},$$
with $Q_I=\{z=x+iy\in \mathbb{C}: x\in I, 0<y<|I|\}$.
|
Let $\mathbf{u} = (u_n)_{n \geq 0}$ be a Lucas sequence, that is, a sequence
of integers satisfying $u_0 = 0$, $u_1 = 1$, and $u_n = a_1 u_{n - 1} + a_2
u_{n - 2}$ for every integer $n \geq 2$, where $a_1$ and $a_2$ are fixed
nonzero integers. For each prime number $p$ with $p \nmid 2a_2D_{\mathbf{u}}$,
where $D_{\mathbf{u}} := a_1^2 + 4a_2$, let $\rho_{\mathbf{u}}(p)$ be the rank
of appearance of $p$ in $\mathbf{u}$, that is, the smallest positive integer
$k$ such that $p \mid u_k$. It is well known that $\rho_{\mathbf{u}}(p)$ exists
and that $p \equiv \big(D_{\mathbf{u}} \mid p \big) \pmod
{\rho_{\mathbf{u}}(p)}$, where $\big(D_{\mathbf{u}} \mid p \big)$ is the
Legendre symbol. Define the index of appearance of $p$ in $\mathbf{u}$ as
$\iota_{\mathbf{u}}(p) := \left(p - \big(D_{\mathbf{u}} \mid p \big)\right) /
\rho_{\mathbf{u}}(p)$. For each positive integer $t$ and for every $x > 0$, let
$\mathcal{P}_{\mathbf{u}}(t, x)$ be the set of prime numbers $p$ such that $p
\leq x$, $p \nmid 2a_2 D_{\mathbf{u}}$, and $\iota_{\mathbf{u}}(p) = t$. Under
the Generalized Riemann Hypothesis, and under some mild assumptions on
$\mathbf{u}$, we prove that \begin{equation*}
\#\mathcal{P}_{\mathbf{u}}(t, x) = A\, F_{\mathbf{u}}(t) \, G_{\mathbf{u}}(t)
\, \frac{x}{\log x} + O_{\mathbf{u}}\!\left(\frac{x}{(\log x)^2} + \frac{x \log
(2\log x)}{\varphi(t) (\log x)^2}\right) , \end{equation*} for all positive
integers $t$ and for all $x > t^3$, where $A$ is the Artin constant,
$F_{\mathbf{u}}(\cdot)$ is a multiplicative function, and
$G_{\mathbf{u}}(\cdot)$ is a periodic function (both these functions are
effectively computable in terms of $\mathbf{u}$). Furthermore, we provide some
explicit examples and numerical data.
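For concreteness, a minimal sketch (our own, valid when p does not divide 2 a_2 D_u, as assumed above) computing the rank of appearance rho_u(p) and the index of appearance iota_u(p); taking a_1 = a_2 = 1 recovers the Fibonacci numbers:

def rank_of_appearance(p, a1=1, a2=1):
    # Smallest k >= 1 with p | u_k, computing the Lucas sequence modulo p.
    u_prev, u_curr, k = 0, 1, 1
    while u_curr % p != 0:
        u_prev, u_curr = u_curr, (a1 * u_curr + a2 * u_prev) % p
        k += 1
    return k

def index_of_appearance(p, a1=1, a2=1):
    D = a1 * a1 + 4 * a2
    legendre = pow(D % p, (p - 1) // 2, p)            # Euler's criterion for (D | p)
    legendre = -1 if legendre == p - 1 else legendre
    return (p - legendre) // rank_of_appearance(p, a1, a2)

# Example (Fibonacci): rho(7) = 8 and iota(7) = (7 - (-1)) / 8 = 1.
print(rank_of_appearance(7), index_of_appearance(7))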
|
Rotation of the nucleus and rotation of the electronic cloud of the atom/ion
were considered. It was shown that these rotations are not practically
possible. Rotation of the cloud of delocalized electrons and ionic core of a
fullerene molecule and those of the ring of a nanotube were discussed. It was
shown that the rotation of the cloud of delocalized electrons of a fullerene
molecule is possible and it goes in a quantum way when temperature is
essentially lower than 40 K. Rotation of the ion core of a fullerene molecule
is possible in a classical way only. The same should be said about rotations in
the ring of a nanotube.
|
Effective communication is a crucial skill for healthcare providers since it
leads to better patient health and satisfaction and helps avoid malpractice claims. In
standard medical education, students' communication skills are trained with
role-playing and Standardized Patients (SPs), i.e., actors. However, SPs are
difficult to standardize and are very resource-intensive. Virtual Patients
(VPs) are interactive computer-based systems that represent a valuable
alternative to SPs. VPs are capable of portraying patients in realistic
clinical scenarios and engage learners in realistic conversations. Approaching
medical communication skill training with VPs has been an active research area
in the last ten years. As a result, the number of works in this field has grown
significantly. The objective of this work is to survey the recent literature,
assessing the state of the art of this technology with a specific focus on the
instructional and technical design of VP simulations. After having classified
and analysed the VPs selected for our research, we identified several areas
that require further investigation, and we drafted practical recommendations
for VP developers on design aspects that, based on our findings, are pivotal to
create novel and effective VP simulations or improve existing ones.
|
We discuss the implications of the recent measurement of the $B_s-\bar{B_s}$
oscillation frequency $\Delta M_s$ on the parameter space of R-parity violating
supersymmetry. For completeness, we also discuss the bounds coming from
leptonic, semileptonic, and nonleptonic B decay modes, and point out some
possibly interesting channels at LHC.
|
Sensory stimuli can be recognized more rapidly when they are expected. This
phenomenon depends on expectation affecting the cortical processing of sensory
information. However, virtually nothing is known on the mechanisms responsible
for the effects of expectation on sensory networks. Here, we report a novel
computational mechanism underlying the expectation-dependent acceleration of
coding observed in the gustatory cortex (GC) of alert rats. We use a recurrent
spiking network model with a clustered architecture capturing essential
features of cortical activity, including the metastable activity observed in GC
before and after gustatory stimulation. Relying both on network theory and
computer simulations, we propose that expectation exerts its function by
modulating the intrinsically generated dynamics preceding taste delivery. Our
model, whose predictions are confirmed in the experimental data, demonstrates
how the modulation of intrinsic metastable activity can shape sensory coding
and mediate cognitive processes such as the expectation of relevant events.
Altogether, these results provide a biologically plausible theory of
expectation and ascribe a new functional role to intrinsically generated,
metastable activity.
|
The branching fractions for the inclusive Cabibbo-favored ~K*0 and
Cabibbo-suppressed K*0 decays of D mesons are measured based on a data sample
of 33 pb-1 collected at and around the center-of-mass energy of 3.773 GeV with
the BES-II detector at the BEPC collider. The branching fractions for the
decays D+(0) -> ~K*0(892)X and D0 -> K*0(892)X are determined to be BF(D0 ->
~K*0X) = (8.7 +/- 4.0 +/- 1.2)%, BF(D+ -> ~K*0X) = (23.2 +/- 4.5 +/- 3.0)% and
BF(D0 -> K*0X) = (2.8 +/- 1.2 +/- 0.4)%. An upper limit on the branching
fraction at 90% C.L. for the decay D+ -> K*0(892)X is set to be BF(D+ -> K*0X)
< 6.6%.
|
The question of whether the Tsallis entropy is Lesche-stable is revisited. It
is argued that when physical averages are computed with the escort
probabilities, the correct application of the concept of Lesche-stability
requires use of the escort probabilities. As a consequence, as shown here, the
Tsallis entropy is unstable but the thermodynamic averages are stable. We
further show that Lesche stability as well as thermodynamic stability can be
obtained if the homogeneous entropy is used as the basis of the formulation of
non-extensive thermodynamics. In this approach, the escort distribution arises
naturally as a secondary structure.
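For the reader's convenience, the standard definitions behind the discussion above, written in LaTeX (q is the entropic index, {p_i} a probability distribution, and the escort distribution is the one used for physical averages):

S_q[p] \;=\; \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
P_i^{(q)} \;=\; \frac{p_i^{\,q}}{\sum_j p_j^{\,q}}, \qquad
\langle O \rangle_q \;=\; \sum_i P_i^{(q)} O_i .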
|
We report our results of the Monte Carlo Renormalization Group analysis in
two dimensional coupling space. The qualitative features of the RG flow are
described with a phenomenological RG equation. The dependence on the lattice
spacing for various actions provides the conditions to determine the parameters
entering the RG equation.
|
We deal with the problem of sparsity-based audio inpainting, i.e. filling in
the missing segments of audio. A consequence of the approaches based on
mathematical optimization is the insufficient amplitude of the signal in the
filled gaps. Remaining in the framework based on sparsity and convex
optimization, we propose improvements to audio inpainting, aiming at
compensating for such an energy loss. The new ideas are based on different
types of weighting, both in the coefficient and the time domains. We show that
our propositions improve the inpainting performance in terms of both the SNR
and ODG.
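One possible form of the weighted synthesis formulation alluded to above (a hedged illustration, not necessarily the authors' exact objective): with y the observed signal, D a time-frequency synthesis operator, M_R the restriction to the reliable (non-missing) samples, and w a vector of coefficient-domain weights,

\min_{z} \; \| w \odot z \|_1 \quad \text{subject to} \quad M_R(Dz) = M_R(y),

where time-domain weighting would instead rescale the reconstructed signal D z inside the data-fidelity term to counteract the energy loss in the filled gaps.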
|
We build a virtual agent for learning language in a 2D maze-like world. The
agent sees images of the surrounding environment, listens to a virtual teacher,
and takes actions to receive rewards. It interactively learns the teacher's
language from scratch based on two language use cases: sentence-directed
navigation and question answering. It learns simultaneously the visual
representations of the world, the language, and the action control. By
disentangling language grounding from other computational routines and sharing
a concept detection function between language grounding and prediction, the
agent reliably interpolates and extrapolates to interpret sentences that
contain new word combinations or new words missing from training sentences. The
new words are transferred from the answers of language prediction. Such a
language ability is trained and evaluated on a population of over 1.6 million
distinct sentences consisting of 119 object words, 8 color words, 9
spatial-relation words, and 50 grammatical words. The proposed model
significantly outperforms five comparison methods for interpreting zero-shot
sentences. In addition, we demonstrate human-interpretable intermediate outputs
of the model in the appendix.
|
This paper introduces Kernel-based Information Criterion (KIC) for model
selection in regression analysis. The novel kernel-based complexity measure in
KIC efficiently computes the interdependency between parameters of the model
using a variable-wise variance and yields selection of better, more robust
regressors. Experimental results show superior performance on both simulated
and real data sets compared to Leave-One-Out Cross-Validation (LOOCV),
kernel-based Information Complexity (ICOMP), and maximum log of marginal
likelihood in Gaussian Process Regression (GPR).
|
We present a study of optical Fe II emission in 302 AGNs selected from the
SDSS. We group the strongest Fe II multiplets into three groups according to
the lower term of the transition (b $^4$F, a $^6$S and a $^4$G terms). These
correspond approximately to the blue, central, and red part respectively of the
"iron shelf" around Hb. We calculate an Fe II template which takes into account
transitions into these three terms and an additional group of lines, based on a
reconstruction of the spectrum of I Zw 1. This Fe II template gives a more
precise fit of the Fe II lines in broad-line AGNs than other templates. We
extract Fe II, Ha, Hb, [O III] and [N II] emission parameters and investigate
correlations between them. We find that Fe II lines probably originate in an
Intermediate Line Region. We notice that the blue, red, and central parts of
the iron shelf have different relative intensities in different objects. Their
ratios depend on continuum luminosity, FWHM Hb, the velocity shift of Fe II,
and the Ha/Hb flux ratio. We examine the dependence of the well-known
anti-correlation between the equivalent widths of Fe II and [O III] on
continuum luminosity. We find that there is a Baldwin effect for [O III] but an
inverse Baldwin effect for the Fe II emission. The [O III]/Fe II ratio thus
decreases with L\lambda5100. Since the ratio is a major component of the
Boroson and Green eigenvector 1, this implies a connection between the Baldwin
effect and eigenvector 1, and could be connected with AGN evolution. We find
that spectra are different for Hb FWHMs greater and less than ~3000 km/s, and
that there are different correlation coefficients between the parameters.
|
Protoplanetary disks fragment due to gravitational instability when there is
enough mass for self-gravitation, described by the Toomre parameter, and when
heat can be lost at a rate comparable to the local dynamical timescale,
described by t_c=beta Omega^-1. Simulations of self-gravitating disks show that
the cooling parameter has a rough critical value at beta_crit=3. When below
beta_crit, gas overdensities will contract under their own gravity and fragment
into bound objects while otherwise maintaining a steady state of
gravitoturbulence. However, previous studies of the critical cooling parameter
have found dependence on simulation resolution, indicating that the simulation
of self-gravitating protoplanetary disks is not so straightforward. In
particular, the simplicity of the cooling timescale t_c prevents fragments from
being disrupted by pressure support as temperatures rise. We alter the cooling
law so that the cooling timescale is dependent on local surface density
fluctuations, a means of incorporating optical depth effects into the local
cooling of an object. For lower resolution simulations, this results in a lower
critical cooling parameter and a disk more stable to gravitational stresses,
suggesting that the formation of large gas giant planets in large, cool disks
is generally suppressed by more realistic cooling. At our highest resolution,
however, the model becomes unstable to fragmentation for cooling timescales up
to beta = 10.
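A toy check of the two fragmentation criteria summarized above (illustrative only; the surface-density-dependent cooling law introduced in the paper is not implemented here):

import numpy as np

def toomre_q(c_s, kappa, sigma, G=6.674e-11):
    # Toomre parameter Q = c_s * kappa / (pi * G * Sigma), in SI units here.
    return c_s * kappa / (np.pi * G * sigma)

def may_fragment(c_s, kappa, sigma, t_cool, omega, beta_crit=3.0):
    # Fragmentation roughly requires Q below about unity AND beta = t_cool * Omega
    # below the (resolution-dependent) critical value beta_crit.
    return toomre_q(c_s, kappa, sigma) < 1.0 and t_cool * omega < beta_crit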
|
Sound event detection with weakly labeled data is considered a multi-instance
learning problem, and the choice of pooling function is key to solving it. In
this paper, we propose a hierarchical pooling structure to improve the
performance of weakly labeled sound event detection systems. The proposed
pooling structure yields remarkable improvements for three types of pooling
function without adding any parameters. Moreover, our system
has achieved competitive performance on Task 4 of Detection and Classification
of Acoustic Scenes and Events (DCASE) 2017 Challenge using hierarchical pooling
structure.
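A hedged sketch of common multi-instance pooling functions and a two-level composition of them; the exact hierarchical structure used in the paper may differ, and the segment length below is an arbitrary choice:

import numpy as np

def max_pool(p):             # p: frame-level event probabilities, shape (T,)
    return p.max()

def average_pool(p):
    return p.mean()

def linear_softmax_pool(p):  # weights each frame by its own probability
    return (p * p).sum() / p.sum()

def hierarchical_pool(p, segment=10, pool_low=max_pool, pool_high=average_pool):
    # Pool within fixed-length segments first, then pool across the segments.
    segs = [p[i:i + segment] for i in range(0, len(p), segment)]
    return pool_high(np.array([pool_low(s) for s in segs]))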
|
We calculate the differences between reaction and interaction cross sections
in the collisions of relativistic light ions with $\text{A}<40$ in the
framework of Glauber theory. Although, in the optical approximation of Glauber
theory these differences are approximately 1% of the reaction cross sections or
less, they increase up to 3-4% when all scattering diagrams of Glauber theory
are included in calculation.
|
Sanctioned by its constitution, India is home to the world's most
comprehensive affirmative action program, where historically discriminated
groups are protected with vertical reservations implemented as "set asides,"
and other disadvantaged groups are protected with horizontal reservations
implemented as "minimum guarantees." A mechanism mandated by the Supreme Court
in 1995 suffers from important anomalies, triggering countless litigations in
India. Foretelling a recent reform correcting the flawed mechanism, we propose
the 2SMG mechanism that resolves all anomalies, and characterize it with
desiderata reflecting laws of India. Subsequently rediscovered with a high
court judgment and enforced in Gujarat, 2SMG is also endorsed by Saurav Yadav
v. State of UP (2020), in a Supreme Court ruling that rescinded the flawed
mechanism. While not explicitly enforced, 2SMG is indirectly enforced for an
important subclass of applications in India, because no other mechanism
satisfies the new mandates of the Supreme Court.
|
The ability to perceive how objects change over time is a crucial ingredient
in human intelligence. However, current benchmarks cannot faithfully reflect
the temporal understanding abilities of video-language models (VidLMs) due to
the existence of static visual shortcuts. To remedy this issue, we present
VITATECS, a diagnostic VIdeo-Text dAtaset for the evaluation of TEmporal
Concept underStanding. Specifically, we first introduce a fine-grained taxonomy
of temporal concepts in natural language in order to diagnose the capability of
VidLMs to comprehend different temporal aspects. Furthermore, to disentangle
the correlation between static and temporal information, we generate
counterfactual video descriptions that differ from the original one only in the
specified temporal aspect. We employ a semi-automatic data collection framework
using large language models and human-in-the-loop annotation to obtain
high-quality counterfactual descriptions efficiently. Evaluation of
representative video-language understanding models confirms their deficiency in
temporal understanding, revealing the need for greater emphasis on the temporal
elements in video-language research.
|
We analyze scheduling algorithms for multiuser communication systems with
users having multiple antennas and linear receivers. When there is no feedback
of channel information, we consider a common round robin scheduling algorithm,
and derive new exact and high signal-to-noise ratio (SNR) maximum sum-rate
results for the maximum ratio combining (MRC) and minimum mean squared error
(MMSE) receivers. We also present new analysis of MRC, zero forcing (ZF) and
MMSE receivers in the low SNR regime. When there are limited feedback
capabilities in the system, we consider a common practical scheduling scheme
based on signal-to-interference-and-noise ratio (SINR) feedback at the
transmitter. We derive new accurate approximations for the maximum sum-rate,
for the cases of MRC, ZF and MMSE receivers. We also derive maximum sum-rate
scaling laws, which reveal that the maximum sum-rate of all three linear
receivers converge to the same value for a large number of users, but at
different rates.
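As an illustrative Monte Carlo sketch (not the paper's closed-form analysis), the standard post-processing SINR of a linear MMSE receiver for an i.i.d. Rayleigh channel can be computed as below, from which per-user rates and scheduled sum rates may be averaged over channel realizations:

import numpy as np

def mmse_sinr(H, snr):
    # H: (n_rx, n_tx) channel matrix; returns the SINR of each transmitted stream
    # via SINR_k = 1 / [(I + snr * H^H H)^{-1}]_{kk} - 1.
    n_tx = H.shape[1]
    G = np.linalg.inv(np.eye(n_tx) + snr * H.conj().T @ H)
    return 1.0 / np.real(np.diag(G)) - 1.0

rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
rates = np.log2(1.0 + mmse_sinr(H, snr=10.0))  # per-stream rates in bit/s/Hz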
|
The theorem given in 'Equations of Hydro-and Thermodynamics of the Atmosphere
when Inertial Forces Are Small in Comparison with Gravity' (2018) is wrong,
since the solutions of the system of Navier-Stokes equations do not converge to
the solutions of the system of hydrostatic approximation equations, when the
vertical acceleration approaches zero. The main consequence is that the scales
given in the paper are not suitable for application of hydrostatic
approximation. The correct asymptotics should be given by the traditional
hydrostatic parameter H/L, where H and L are the vertical and horizontal scales
of motion. Also, the scale analysis of L.F. Richardson's equation for the
vertical velocity in the hydrostatic approximation is not correct.
|