Device-free wireless indoor localization is a key enabling technology for the Internet of Things (IoT). Fingerprint-based indoor localization techniques are a commonly used solution. This paper proposes a semi-supervised, generative adversarial network (GAN)-based device-free fingerprinting indoor localization system. The proposed system uses a small amount of labeled data and a large amount of unlabeled data (i.e., semi-supervised), thus considerably reducing the expensive data-labeling effort. Experimental results show that, compared to the state-of-the-art supervised scheme, the proposed semi-supervised system achieves comparable performance with an equal, sufficient amount of labeled data, and significantly superior performance with an equal, highly limited amount of labeled data. Moreover, the proposed semi-supervised system retains its performance over a broad range of amounts of labeled data. The interactions between the generator, discriminator, and classifier models of the proposed GAN-based system are visually examined and discussed. A mathematical description of the proposed system is also presented.
arxiv:2008.07111
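The abstract above leaves the model details to the paper itself. As a rough, hypothetical illustration of the common K+1-class formulation used in semi-supervised GAN classifiers (K real location classes plus one "fake" class for generated samples), not necessarily the authors' exact architecture:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits for K = 4 location classes plus one "fake" class (index K).
K = 4
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3])

p = softmax(logits)                          # distribution over K real classes + fake
p_real = p[:K].sum()                         # prob. the input is a real (measured) sample
p_fake = p[K]                                # prob. the input was produced by the generator
predicted_location = int(np.argmax(p[:K]))   # classifier decision among the real classes
```

Under this design, labeled data trains the K class outputs, while unlabeled and generated data only supervise the real-versus-fake split, which is how the labeling effort is reduced.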
Metastability is a phenomenon observed in stochastic systems that stay in a false equilibrium within a region of their state space until a sequence of rare events leads to an abrupt transition to a different region. This paper presents financial markets as metastable systems and shows that, under this assumption, financial time series evolve as hidden Markov models. In particular, we propose a theory that outlines an explicit causal relation between a financial market and the evolution of a financial time series. In the context of financial economics and causal factor investment, this theory introduces a paradigm shift, suggesting that fluctuations in investment performance are primarily driven by the market state rather than caused directly by other variables. While not incompatible with traditional causal inference, our approach addresses the non-stationary evolution of time series through changes in market states, enhancing risk assessment and enabling mitigation strategies.
arxiv:2310.13081
Eye-tracking applications that utilize the human gaze in video-understanding tasks have become increasingly important. To effectively automate the process of video analysis based on eye-tracking data, it is important to accurately replicate human gaze behavior. However, this task presents significant challenges due to the inherent complexity and ambiguity of human gaze patterns. In this work, we introduce a novel method for simulating human gaze behavior. Our approach uses a transformer-based reinforcement learning algorithm to train an agent that acts as a human observer, whose primary role is to watch videos and simulate human gaze behavior. We employed an eye-tracking dataset gathered from videos generated by the VirtualHome simulator, with a primary focus on activity recognition. Our experimental results demonstrate the effectiveness of our gaze-prediction method by highlighting its capability to replicate human gaze behavior and its applicability to downstream tasks where real human gaze is used as input.
arxiv:2404.07351
This editorial opens the special issues that the Journal of Statistical Physics has dedicated to the growing field of statistical-physics modeling of social dynamics. The issues include contributions from physicists and social scientists, with the goal of fostering better communication between these two communities.
arxiv:1304.1171
This is the first paper of a series that will examine the options for embedding supersymmetric orbifold GUTs into five-dimensional N = 2 Yang-Mills-Einstein supergravity theories (YMESGTs). In particular, we focus on the allowed couplings of charged hypermultiplets in the lowest-dimensional reps of the gauge groups SU(5), SO(10), and E(6). Our results are within the classification of homogeneous quaternionic scalar manifolds. In the minimal coupling of a generation of bulk matter hypermultiplets, supergravity requires the field content of an SO(10) scenario. In the minimal coupling of $n$ bulk generations of matter and Higgs hypermultiplets, supergravity requires the field content of an E(6) scenario. We also discuss the coupling of tensors and non-compact gaugings in 5D YMESGTs, which can serve as alternative ways to obtain four-dimensional Higgs sectors. Charged tensor couplings seem to be difficult to work with phenomenologically, since a U(1) gauge factor is always required when they are present, and it is not clear whether tensors can be put in unified multiplets with other fields, if this is desired. This seems to imply that tensor couplings in GUT scenarios may be better suited to higher-dimensional settings. The non-compact gaugings discussed here are simple and offer a novel unification scenario in which the supergravity and vector multiplets are connected by gauge transformations. The main points are summarized in tables and the conclusion. Although the discussion is in the spirit of a "bottom-up" approach, M-theory is taken as a motivating background.
arxiv:hep-ph/0501091
The complete one-loop QED initial-state, final-state, and initial-final-state interference corrections to the process $e^+e^- \to \pi^+\pi^-$ are presented. Analytic formulae are given for the virtual and for the real-photon corrections. The total cross section, the pion angular distribution, and the $\pi^+\pi^-$ invariant-mass distribution are investigated in the regime of experimentally realistic kinematical cuts. It is shown that, in addition to the full one-loop corrections, two-loop initial-state corrections and even the resummation of higher-order soft-photon logarithms can be necessary if at least per-cent accuracy is required. For the data analysis we focus on an inclusive treatment of all photons. The theoretical error concerning our treatment of radiative corrections is then estimated to be less than 2 per mille for both the measurement of the total cross section and the $\pi^+\pi^-$ invariant-mass distribution. In addition, we discuss the model uncertainty due to the pion substructure. Altogether, the precision of the theoretical prediction matches the requirements of low-energy $e^+e^-$ experiments like the ones going on at DAFNE or VEPP-2M.
arxiv:hep-ph/0107154
We present a semi-analytical method to investigate the systematic effects and statistical uncertainties of the calculated angular power spectrum when incomplete spherical maps are used. The computed power spectrum suffers in particular a loss of angular-frequency resolution, which can be written as $\delta_l \sim \pi/\gamma_{max}$, where $\gamma_{max}$ is the effective maximum extent of the partial spherical maps. We propose a correction algorithm to reduce systematic effects on the estimated $C_l$, as obtained from the partial-map projection on the spherical harmonic $Y_{lm}$ basis. We have derived near-optimal bands and weighting functions in $l$-space for power-spectrum calculation using small maps, and a correction algorithm for partially masked spherical maps that contain information on the angular correlations on all scales.
arxiv:0910.4623
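The resolution-loss estimate $\delta_l \sim \pi/\gamma_{max}$ above can be checked with a one-line computation (a worked example, not taken from the paper):

```python
import math

def delta_l(gamma_max_deg):
    """Angular-frequency resolution loss for a partial map of extent gamma_max (degrees)."""
    return math.pi / math.radians(gamma_max_deg)

# A 10-degree patch blurs the power spectrum over roughly 18 multipoles.
print(round(delta_l(10.0)))  # -> 18
```

The scaling mirrors a Fourier uncertainty relation: the smaller the sky patch, the coarser the achievable multipole resolution.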
We investigate the properties of the intracluster medium (ICM) that forms within N-body/hydrodynamical simulations of galaxy clusters in a $\Lambda$CDM cosmology. When radiative cooling and a simple model for galactic feedback are included, our clusters have X-ray luminosities and temperatures in good agreement with observed systems, demonstrating the required excess entropy in their cores. More generally, cooling and feedback increase the entropy of the ICM everywhere, albeit without significantly affecting the slope of the profile ($S \propto r$) at large radii. The temperature of the ICM is only modestly increased by these processes, with projected temperature profiles in reasonable agreement with the observations. Star/galaxy formation is still too efficient in our simulations, however, and so our gas mass fractions are around 60 per cent of the observed value at $r_{2500}$. Finally, we examine the reliability of using the hydrostatic-equilibrium equation to estimate cluster masses and find that it underpredicts the true mass of our clusters by up to 20 per cent, due to incomplete thermalisation of the gas. Feedback reduces this discrepancy, however, with estimates being accurate to within 10 per cent out to $r_{500}$.
arxiv:astro-ph/0407058
Nonholonomic models of automobiles are developed by utilizing tools of analytical mechanics, in particular the Appellian approach, which allows one to describe the vehicle dynamics with a minimum number of time-dependent state variables. The models are categorized based on how they represent the wheel-ground contact, whether they incorporate the longitudinal dynamics, and whether they consider the steering dynamics. It is demonstrated that the developed models can be used to design low-complexity controllers that enable automated vehicles to execute a large variety of maneuvers with high precision.
arxiv:2108.02230
Purpose: Cervical cancer is one of the primary causes of death in women. As with other diseases, it should be diagnosed early and treated according to the best medical advice to ensure that its effects are as minimal as possible. Pap smear images are one of the most constructive ways of identifying this type of cancer. This study proposes a cross-attention-based transformer approach for the reliable classification of cervical cancer in Pap smear images. Methods: We propose CerviFormer, a model that depends on transformers and thereby requires minimal architectural assumptions about the size of the input data. The model uses a cross-attention technique to repeatedly consolidate the input data into a compact latent transformer module, which enables it to manage very large-scale inputs. We evaluated our model on two publicly available Pap smear datasets. Results: For 3-state classification on the Sipakmed data, the model achieved an accuracy of 93.70%. For 2-state classification on the Herlev data, the model achieved an accuracy of 94.57%. Conclusion: Experimental results on two publicly accessible datasets demonstrate that the proposed method achieves competitive results when compared to contemporary approaches. The proposed method brings forth a comprehensive classification model to detect cervical cancer in Pap smear images. This may aid medical professionals in providing better cervical cancer treatment and, consequently, enhance the overall effectiveness of the entire testing process.
arxiv:2303.10222
Finding the optimal path for a robot moving from the start to the goal position through obstacles is still a challenging issue. This paper presents a novel path-planning method, named D-point trigonometric, based on the Q-learning algorithm for dynamic and uncertain environments in which all the obstacles and the target are moving. We define new state, action, and reward functions for Q-learning, by which the agent can find the best action in every state to reach the goal along the most appropriate path. The D-point approach minimizes the possible number of states. Moreover, experiments in Unity3D confirmed the high convergence speed, the high hit rate, and the low dependency on environmental parameters of the proposed method compared with a competing approach.
arxiv:1910.12020
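The abstract does not spell out the learning rule. For context, the textbook tabular Q-learning update that such a method builds on can be sketched as follows (the toy states, actions, and rewards here are illustrative, not the paper's D-point design):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])                      # greedy value of the next state
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Toy example: 2 states, 2 actions, all Q-values initialized to 0.
Q = [[0.0, 0.0], [0.0, 0.0]]
q_update(Q, s=0, a=1, r=1.0, s_next=1)   # a reward of 1 raises Q(0,1) toward the target
```

The paper's contribution lies in how states, actions, and rewards are defined (and how the state space is shrunk), not in altering this core update.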
The tomography of the polarized Sunyaev-Zel'dovich effect due to free electrons of galaxy clusters can be used to constrain the nature of dark energy, because CMB quadrupoles at different redshifts, as the polarization source, are sensitive to the integrated Sachs-Wolfe effect. Here we show that the low multipoles of the temperature and E-mode polarization anisotropies from the all-sky CMB can improve the constraint further through their correlation with the CMB quadrupoles viewed from the galaxy clusters. Using a Monte Carlo simulation, we find that low multipoles of the temperature and E-mode polarization anisotropies can potentially improve the constraint on the dark-energy equation-of-state parameter by $\sim 17$ per cent.
arxiv:2301.13676
In this paper we study the evolution of radiative fluxes, flux radii, and observable dust masses in protoplanetary discs, in order to understand how these depend on the angular-momentum budget and on the assumed heat sources. We use a model that includes the formation and viscous evolution of protoplanetary gas discs, together with the growth and radial drift of the dust component. We find that we are best able to match the observed fluxes and radii of Class 0/I discs when we assume (i) an initial total angular-momentum budget corresponding to a centrifugal radius of 40 au around solar-like stars, and (ii) inefficient viscous heating. Fluxes and radii of Class II discs appear consistent with disc models with angular-momentum budgets equivalent to centrifugal radii of both 40 au and 10 au for solar-like stars, and with models where viscous heating occurs at either full or reduced efficiency. During the first 0.5 Myr of their evolution, discs are generally optically thick at a wavelength of 1.3 mm. After this, however, discs are optically thin at mm wavelengths, supporting standard means of dust-mass estimation. Using a disc population synthesis model, we then show that the cumulative evolution of the observable dust masses agrees well with that observed in young star-forming clusters of different ages.
arxiv:2501.04411
A manifestly relativistically covariant form of the van der Pol oscillator in 1+1 dimensions is studied. We show that the driven relativistic equations, for which $x$ and $t$ are coupled, relax very quickly to a pair of identical decoupled equations, due to a rapid vanishing of the "angular momentum" (the boost in 1+1 dimensions). A similar effect occurs in the damped driven covariant Duffing oscillator previously treated. This effect is an example of entrainment, or synchronization (phase locking), of coupled chaotic systems. The Lyapunov exponents are calculated using the very efficient method of Habib and Ryne. We show a Poincaré map that demonstrates this effect and maintains remarkable stability in spite of the inevitable accumulation of computer error in the chaotic region. For our choice of parameters, the positive Lyapunov exponent is about 0.242, almost independently of the integration method.
arxiv:chao-dyn/9710010
Hypergraphs, encoding structured interactions among any number of system units, have recently proven a successful tool for describing many real-world biological and social networks. Here we propose a framework based on statistical inference to characterize the structural organization of hypergraphs. The method makes it possible to infer missing hyperedges of any size in a principled way, and to jointly detect overlapping communities in the presence of higher-order interactions. Furthermore, our model has an efficient numerical implementation, and it runs faster than dyadic algorithms on pairwise records projected from higher-order data. We apply our method to a variety of real-world systems, showing strong performance in hyperedge-prediction tasks, detecting communities well aligned with the information carried by interactions, and robustness against the addition of noisy hyperedges. Our approach illustrates the fundamental advantages of a hypergraph probabilistic model when modeling relational systems with higher-order interactions.
arxiv:2204.05646
Wave-current interaction (WCI) dynamics energizes and mixes the ocean thermocline by producing a combination of Langmuir circulation, internal waves, and turbulent shear flows, which interact over a wide range of time scales. Two complementary approaches exist for approximating different aspects of WCI dynamics: the generalized Lagrangian mean (GLM) approach and the Gent-McWilliams (GM) approach. Their complementarity is evident in their Kelvin circulation theorems. GLM introduces a wave pseudomomentum per unit mass into its Kelvin circulation integrand, while GM introduces an additional 'bolus velocity' to transport its Kelvin circulation loop. The GLM approach models Eulerian momentum, while the GM approach models Lagrangian transport. In principle, both GLM and GM are based on the Euler-Boussinesq (EB) equations for an incompressible, stratified, rotating flow. The differences in their Kelvin theorems arise from differences in how they model the flow map in the Lagrangian for the Hamilton variational principle underlying the EB equations. A recently developed approach for uncertainty quantification in fluid dynamics constrains fluid variational principles to require that Lagrangian trajectories undergo stochastic advection by Lie transport (SALT). Here we introduce stochastic closure strategies for quantifying uncertainty in WCI by adapting the SALT approach to both the GLM and GM approximations of the EB variational principle. In the GLM framework, we introduce a stochastic group velocity for the transport of wave properties, relative to the frame of motion of the Lagrangian-mean flow velocity, and a stochastic pressure contribution from the fluctuating kinetic energy. In the GM framework, we introduce a stochastic bolus velocity in addition to the mean drift velocity by imposing the SALT constraint in the GM variational principle.
arxiv:1905.01930
The mid-infrared (MIR) region is crucial for elucidating the unique biochemical signatures of microorganisms. MIR resonant structures have turned out to facilitate exceptional performance owing to the enhanced electric-field confinement in nano-sized apertures. However, the extension of this technique to bacteria sensing remains limited, primarily due to bacteria's micrometre size. This work is the first demonstration of an MIR resonant structure, a gold-coated micro-structured inverted-pyramid array on silicon exhibiting light-trapping capabilities, for bacteria detection across the entire MIR range. The electric-field localization within the micro-sized cavity of the inverted pyramid amplifies the light-matter interaction by harnessing surface plasmon polaritons, leading to improved detection sensitivity. The confinement of the electric field is further corroborated by electric-field simulations based on the finite element method. In particular, we observed notable enhancement in both the quantitative and qualitative detection of Escherichia coli and Staphylococcus aureus, even for bacterial cells at very low concentration, reflecting the efficacy of our detection method. Furthermore, the cost-effective micro-structured silicon is fabricated using a lithography-free, metal-assisted chemical etching method, with the capability of wafer-scale fabrication. Moreover, our device configuration demonstrates reusability and reproducibility, offering substantial benefits over conventional detection schemes. Consequently, this CMOS-technology-compatible biosensor opens promising ways to integrate this technology with forthcoming bio-applications.
arxiv:2411.09330
In process mining, a log-exploration step allows making sense of the event traces; e.g., identifying event patterns and illogical traces, and gaining insight into their variability. To support expressive log exploration, the event log can be converted into a knowledge graph (KG), which can then be queried using general-purpose languages. We explore the creation of a semantic KG using the Resource Description Framework (RDF) as a data model, combined with the general-purpose Notation3 (N3) rule language for querying. We show how typical trace-querying constraints, inspired by the state of the art, can be implemented in N3. We convert case- and object-centric event logs into a trace-based semantic KG; OCEL2 logs are hereby "flattened" into traces based on object paths through the KG. This solution offers (a) expressivity, as queries can instantiate constraints in multiple ways and arbitrarily constrain attributes and relations (e.g., actors, resources); (b) flexibility, as OCEL2 event logs can be serialized as traces in arbitrary ways based on the KG; and (c) extensibility, as others can extend our library by leveraging the same implementation patterns.
arxiv:2409.04452
We obtain electrically charged vortex solutions for the Born-Infeld Higgs system with a Chern-Simons term. We analyse these solutions numerically, comparing their properties with those of "normal" Nielsen-Olesen vortices, and also show that no charged vortex solutions exist in Born-Infeld theory when the Chern-Simons term is absent.
arxiv:hep-th/9802175
Within the formalism of Usadel equations, the Josephson effect in dirty point contacts between single-band and three-band superconductors is investigated. A general expression for the Josephson current, valid at arbitrary temperatures, is obtained. We calculate current-phase relations at very low temperature and in the vicinity of the critical temperature. For three-band superconductors with broken time-reversal symmetry (BTRS), point contacts undergo frustration phenomena with different current-phase relations, corresponding to $\varphi$-contacts. For three-band superconductors without BTRS, we find close-to-sinusoidal current-phase relations and an absence of frustration, except at very low temperature, where under certain conditions two ground states of the point contact are realized. Our results can be used as a potential probe for the detection of a possible BTRS state in three-band superconducting systems.
arxiv:1406.5693
are elements of G, and n is an element of N fixing the identity of G, then applying this equality twice to n·λ_g·λ_h and once to the (equivalent) expression n·λ_{gh} gives that n(g)·n(h) = n(g·h). That is, every element of N that fixes the identity of G is in fact an automorphism of G. Such an n normalizes λ_G, and the only λ_g that fixes the identity is λ_1. Setting A to be the stabilizer of the identity, the subgroup generated by A and λ_G is a semidirect product with normal subgroup λ_G and complement A. Since λ_G is transitive, the subgroup generated by λ_G and the point stabilizer A is all of N, which shows the holomorph as a permutation group is isomorphic to the holomorph as a semidirect product. It is useful, but not directly relevant, that the centralizer of λ_G in Sym(G) is ρ_G, that their intersection is ρ_{Z(G)} = λ_{Z(G)}, where Z(G) is the center of G, and that A is a common complement to both of these normal subgroups of N.

== Properties ==
ρ(G) ∩ Aut(G) = 1
Aut(G) normalizes ρ(G), so that canonically ρ(G)Aut(G) ≅ G ⋊ Aut(G)
Inn(G) ≅ Im(g ↦ λ(g)ρ(g)), since λ(g)ρ(g)(h) = ghg⁻¹ (Inn(G) is the group of inner automorphisms of G)
K ≤ G is a characteristic subgroup if and only if λ(K) ⊴ Hol(G)

== References ==
Hall, Marshall Jr. (1959), The Theory of Groups, Macmillan, MR 0103215
Burnside, William (2004), Theory of Groups of Finite Order, 2nd ed., Dover, p
https://en.wikipedia.org/wiki/Holomorph_(mathematics)
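The semidirect-product structure described above can be verified computationally for a small example. The sketch below builds Hol(Z/3) inside Sym(3) by closing the left translations and automorphisms under composition (an illustrative check, not part of the article):

```python
from itertools import product

# G = Z/3 under addition; a permutation of G is a tuple of images (of 0, 1, 2).
G = (0, 1, 2)
lam = [tuple((g + x) % 3 for x in G) for g in G]       # left translations λ_g
aut = [tuple((a * x) % 3 for x in G) for a in (1, 2)]  # Aut(Z/3) = {x -> x, x -> 2x}

def compose(p, q):
    """(p ∘ q)(x) = p(q(x)) for permutations given as image tuples."""
    return tuple(p[q[x]] for x in G)

# Close the set generated by λ_G and Aut(G) under composition.
hol = set(lam) | set(aut)
changed = True
while changed:
    changed = False
    for p, q in product(list(hol), repeat=2):
        c = compose(p, q)
        if c not in hol:
            hol.add(c)
            changed = True

# |Hol(Z/3)| = |Z/3| * |Aut(Z/3)| = 3 * 2 = 6, i.e. all of Sym(3).
print(len(hol))  # -> 6
```

This matches the text: the translations form the normal subgroup, the automorphisms form the point stabilizer of the identity, and together they generate the full normalizer.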
This paper is concerned with the uniqueness of the solution of the weak formulation of an evolution dam problem related to a compressible fluid flow through a two-dimensional, rectangular, heterogeneous porous medium. Our problem is associated with the equation $a(x_1)(u_{x_2}+\chi)_{x_2}-(u+\chi)_t=0$. Our technique is based on the idea of transforming the weak form of this equation into a situation similar to the proof of uniqueness in the incompressible case (see [12]). It is also difficult to adapt the proof obtained in [12] by using some properties of the solutions as in [12, Sect. 2].
arxiv:1811.08085
Automated analysis of electron-microscopy datasets poses multiple challenges, such as the limited size of the training dataset and variation in data distribution induced by variation in sample quality and experimental conditions. It is crucial for a trained model to continue to provide acceptable segmentation/classification performance on new data, and to quantify the uncertainty associated with its predictions. Among the broad applications of machine learning, various approaches have been adopted to quantify uncertainty, such as Bayesian modeling, Monte Carlo dropout, and ensembles. With the aim of addressing the challenges specific to the data domain of electron microscopy, two different types of ensembles of pre-trained neural networks were implemented in this work. The ensembles performed semantic segmentation of ice crystals within a two-phase mixture, thereby tracking their phase transformation to water. The first ensemble (EA) is composed of U-Net-style networks having different underlying architectures, whereas the second series of ensembles (ER-i) is composed of randomly initialized U-Net-style networks, wherein each base learner has the same underlying architecture 'i'. The encoders of the base learners were pre-trained on the ImageNet dataset. The performance of EA and ER was evaluated on three different metrics: accuracy, calibration, and uncertainty. EA exhibits greater classification accuracy and is better calibrated than ER. While the uncertainty quantification of these two types of ensembles is comparable, the uncertainty scores exhibited by ER were found to depend on the specific architecture of its base members ('i') and were not consistently better than those of EA. Thus, the challenges posed by the analysis of electron-microscopy datasets appear to be better addressed by an ensemble design like EA than by one like ER.
arxiv:2209.01908
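As a generic illustration of how such an ensemble combines member predictions and scores uncertainty, the sketch below averages per-class probabilities and uses predictive entropy as the uncertainty measure (a common choice; the paper's exact aggregation and data are not reproduced here):

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average per-member class probabilities; use predictive entropy as uncertainty."""
    mean_p = np.mean(prob_maps, axis=0)                  # averaged class distribution
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))   # high entropy = high uncertainty
    return int(np.argmax(mean_p)), float(entropy)

# Hypothetical 3-member ensemble scoring one pixel for binary segmentation (ice vs water).
members = np.array([[0.9, 0.1],
                    [0.8, 0.2],
                    [0.7, 0.3]])
label, unc = ensemble_predict(members)   # mean distribution [0.8, 0.2] -> class 0
```

In an EA-style ensemble the rows would come from architecturally different U-Nets; in an ER-style ensemble, from differently initialized copies of one architecture.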
Climate-change impact studies inform policymakers about the estimated damages of future climate change to economic, health, and other outcomes. In most studies, an annual outcome variable is observed, e.g. agricultural yield, along with a higher-frequency regressor, e.g. daily temperature. Applied researchers then face the problem of selecting a model to characterize the nonlinear relationship between the outcome and the high-frequency regressor, in order to make a policy recommendation based on the model-implied damage function. We show that existing model-selection criteria are only suitable for this policy objective if one of the models under consideration nests the true model. If all models are seen as imperfect approximations to the true nonlinear relationship, the model that performs well under normal climate conditions is not guaranteed to perform well under a projected climate that differs from the historical norm. We therefore propose a new criterion, the proximity-weighted mean-squared error (PWMSE), that directly targets the precision of the damage function at the projected future climate. To make this criterion feasible, we assign higher weights to prior years that can serve as weather analogs to the projected future climate when evaluating competing models using the PWMSE. We show that our approach selects the best approximate regression model, i.e. the one with the smallest weighted error of predicted impacts for a projected future climate. A simulation study and an application revisiting the impact of climate change on agricultural production illustrate the empirical relevance of our theoretical analysis.
arxiv:1808.07861
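A minimal sketch of a proximity-weighted error criterion in the spirit of the PWMSE, where a Gaussian kernel over a climate covariate stands in for the paper's weather-analog weights (the kernel, bandwidth, and toy data here are illustrative assumptions):

```python
import numpy as np

def pwmse(y, y_hat, x, x_future, h=1.0):
    """Proximity-weighted MSE: errors in years whose climate x_t resembles the
    projected future climate x_future get larger weight (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((np.asarray(x) - x_future) / h) ** 2)
    err = (np.asarray(y) - np.asarray(y_hat)) ** 2
    return float(np.sum(w * err) / np.sum(w))

# Hypothetical: two candidate models scored against a warmer projected climate (x_future = 3).
x = [0.0, 1.0, 2.0, 3.0]        # historical climate covariate (e.g., temperature anomaly)
y = [1.0, 1.1, 1.3, 1.6]        # observed outcome
model_a = [1.0, 1.1, 1.2, 1.3]  # fits normal years, misses the warm year
model_b = [0.8, 1.0, 1.3, 1.6]  # fits warm (analog) years better
pwmse_a = pwmse(y, model_a, x, x_future=3.0)
pwmse_b = pwmse(y, model_b, x, x_future=3.0)  # smaller: model_b preferred under PWMSE
```

An unweighted MSE would treat the two models' errors symmetrically; down-weighting the non-analog years is exactly what flips the selection toward the model that extrapolates well.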
The AgentSociety Challenge is the first competition at The Web Conference that aims to explore the potential of large language model (LLM) agents in modeling user behavior and enhancing recommender systems on web platforms. The challenge consists of two tracks: the User Modeling Track and the Recommendation Track. Participants are tasked with utilizing a combined dataset from Yelp, Amazon, and Goodreads, along with an interactive environment simulator, to develop innovative LLM agents. The challenge attracted 295 teams across the globe and received over 1,400 submissions in total over the course of 37 official competition days. The participants achieved 21.9% and 20.3% performance improvements for Track 1 and Track 2 in the development phase, and 9.1% and 15.9% in the final phase, representing a significant accomplishment. This paper discusses the detailed design of the challenge, analyzes the outcomes, and highlights the most successful LLM agent designs. To support further research and development, we have open-sourced the benchmark environment at https://tsinghua-fib-lab.github.io/AgentSocietyChallenge.
arxiv:2502.18754
Metrics and frameworks for quantifiably assessing security measures have arisen from the needs of three distinct research communities: statistical measures from the intrusion detection and prevention literature; evaluation of cyber exercises, e.g., red-team and capture-the-flag competitions; and economic analyses addressing cost-versus-security tradeoffs. In this paper we provide two primary contributions to the security-evaluation literature: a representative survey, and a novel framework for evaluating security that is flexible, applicable to all three use cases, and readily interpretable. In our survey of the literature we identify the distinct themes of each community's evaluation procedures side by side and flesh out the drawbacks and benefits of each. The evaluation framework we propose comprehensively models the resource, labor, and attack costs in dollars incurred based on expected resource usage, accuracy metrics, and time. This framework provides a unified approach in that it incorporates the accuracy and performance metrics that dominate intrusion-detection evaluation; the time to detection and the impact of an attack on data and resources, favored by educational competitions' metrics; and the monetary cost of many essential security components used in financial analysis. Moreover, it is flexible enough to accommodate each use case, easily interpretable and comparable, and comprehensive in terms of the costs considered. Finally, we provide two examples of the framework applied to real-world use cases. Overall, we provide a survey and a grounded, flexible framework, with multiple concrete examples, for evaluating security that can address the needs of three currently distinct communities.
arxiv:1902.00053
The axion is a hypothetical particle that is a candidate for cold dark matter. Haloscope experiments search directly for these particles in strong magnetic fields, with RF cavities as detectors. The Relic Axion Detector Exploratory Setup (RADES) at CERN in particular is searching for axion dark matter in a mass range above 30 $\mu$eV. The figure of merit of our detector depends linearly on the quality factor of the cavity, and therefore we are researching the possibility of coating our cavities with different superconducting materials to increase the quality factor. Since the experiment operates in strong magnetic fields of 11 T and more, superconductors with high critical magnetic fields are necessary. Suitable materials for this application are, for example, ReBa$_2$Cu$_3$O$_{7-x}$, Nb$_3$Sn, or NbN. We designed a microwave cavity that resonates at around 9 GHz, with a geometry optimized to facilitate superconducting coating and designed to fit in the bore of available high-field accelerator magnets at CERN. Several prototypes of this cavity were coated with different superconducting materials, employing different coating techniques. These prototypes were characterized in strong magnetic fields at 4.2 K.
arxiv:2110.01296
Massive multiple-input multiple-output (MIMO) offers superior capacity for future networks. In the quest for energy-efficient implementation of these large array-based transmission systems, the power consumption of the power amplifiers (PAs) is a main bottleneck. This paper investigates whether it is possible to operate the PAs in their efficient nonlinear region, as the out-of-band (OOB) distortion may not get the same array gain as the in-band (IB) signals. We present a framework to simulate the effects under realistic conditions, leveraging an accurate ray-tracing simulator (RTS). The results show that the often-assumed i.i.d. Rayleigh fading channel model yields too optimistic predictions of the spatial distribution of OOB emissions, also in non-line-of-sight (NLOS) multi-path scenarios. We further comment on the consequences in view of current regulatory constraints.
arxiv:2111.14548
With the introduction of large-scale datasets and deep learning models capable of learning complex representations, impressive advances have emerged in face detection and recognition tasks. Despite such advances, existing datasets do not capture the difficulty of face recognition in the wildest scenarios, such as hostile disputes or fights. Furthermore, existing datasets do not represent completely unconstrained cases of low resolution, high blur and large pose/occlusion variances. To this end, we introduce the Wildest Faces dataset, which focuses on such adverse effects through violent scenes. The dataset consists of an extensive set of violent scenes of celebrities from movies. Our experimental results demonstrate that state-of-the-art techniques are not well-suited for violent scenes, and therefore, Wildest Faces is likely to stir further interest in face detection and recognition research.
arxiv:1805.07566
In this paper, we consider a satellite orbiting in a Manev gravitational potential under the influence of an atmospheric drag force that varies with the square of velocity. Using an exponential atmosphere that varies with the orbital altitude of the satellite, we examine a circular orbit scenario. In particular, we derive expressions for the change in satellite radial distance as a function of the drag force parameters and obtain numerical results. The Manev potential is an alternative to the Newtonian potential that has a wide variety of applications in astronomy, astrophysics, space dynamics, classical physics, mechanics, and even atomic physics.
arxiv:1212.0913
The dynamic and evolutionary nature of service requirements in wireless networks has motivated the telecom industry to consider intelligent self-adapting reinforcement learning (RL) agents for controlling the growing portfolio of network services. Infusion of many new types of services is anticipated with future adoption of 6G networks, and sometimes these services will be defined by applications that are external to the network. An RL agent trained for managing the needs of a specific service type may not be ideal for managing a different service type without domain adaptation. We provide a simple heuristic for evaluating a measure of proximity between a new service and existing services, and show that the RL agent of the most proximal service rapidly adapts to the new service type through a well-defined process of domain adaptation. Our approach enables a trained source policy to adapt to new situations with changed dynamics without retraining a new policy, thereby achieving significant computing and cost savings. Such domain adaptation techniques may soon provide a foundation for more generalized RL-based service management in the face of rapidly evolving service types.
arxiv:2303.01013
Pretrained language models (PLMs) have become the de facto starting point for fine-tuning on downstream tasks. However, as model sizes continue to increase, traditional fine-tuning of all the parameters becomes challenging. To address this, parameter-efficient fine-tuning (PEFT) methods have gained popularity as a means to adapt PLMs effectively. In parallel, recent studies have revealed the presence of activation sparsity within the intermediate outputs of the multilayer perceptron (MLP) blocks in transformers. Low activation density enables efficient model inference on sparsity-aware hardware. Building upon this insight, in this work we propose a novel density loss that encourages higher activation sparsity (equivalently, lower activation density) in the pre-trained models. We demonstrate the effectiveness of our approach by utilizing mainstream PEFT techniques, including QLoRA, LoRA, Adapter, and prompt/prefix tuning, to facilitate efficient model adaptation across diverse downstream tasks. Experiments show that our proposed method, \textbf{DEFT} (Density-Efficient Fine-Tuning), can consistently reduce activation density by up to \textbf{44.94\%} on RoBERTa$_\mathrm{Large}$ and by \textbf{53.19\%} (encoder density) and \textbf{90.60\%} (decoder density) on Flan-T5$_\mathrm{XXL}$ (\textbf{11B}) compared to PEFT, using the GLUE and QA (SQuAD) benchmarks respectively. We also introduce \textbf{ADA-DEFT}, an adaptive variant of our DEFT approach, which achieves significant memory and runtime savings during inference. For instance, ADA-DEFT reduces runtime by \textbf{8.79\%} and memory usage by \textbf{17.46\%} in Flan-T5$_\mathrm{XL}$, and by \textbf{2.79\%} and \textbf{2.54\%} respectively in Flan-T5$_\mathrm{XXL}$. Additionally, we showcase that DEFT works complementarily with quantized and pruned models.
arxiv:2402.01911
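The idea of an activation-density penalty can be sketched numerically. This is an illustrative simplification, not the paper's exact loss: the threshold `tau` and the L1 surrogate are assumptions, and a real implementation would operate on framework tensors inside the MLP blocks rather than NumPy arrays.

```python
import numpy as np

def activation_density(acts, tau=1e-3):
    """Fraction of activations with magnitude above tau (the 'density')."""
    return float(np.mean(np.abs(acts) > tau))

def density_loss(acts):
    """Differentiable surrogate for density: mean absolute activation (L1).
    Minimizing this pushes activations toward zero, i.e. toward sparsity."""
    return float(np.mean(np.abs(acts)))

rng = np.random.default_rng(0)
dense = rng.normal(size=(8, 512))                # dense MLP-style activations
sparse = dense * (rng.random((8, 512)) < 0.1)    # ~90% of entries zeroed out
```

During fine-tuning, a term like `density_loss` would be added to the task loss so that the adapted model produces sparser intermediate outputs.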
Using archival VLBI data for 3114 radio-luminous active galactic nuclei, we searched for binary supermassive black holes using a radio spectral index mapping technique which targets spatially resolved, double radio-emitting nuclei. Only one source was detected as a double nucleus. This result is compared with a cosmological merger rate model and interpreted in terms of (1) implications for post-merger timescales for centralisation of the two black holes, (2) implications for the possibility of "stalled" systems, and (3) the relationship of radio activity in nuclei to mergers. Our analysis suggests that the binary evolution of paired supermassive black holes (both of masses >= 1e8 Msun) spends less than 500 Myr in progression from the merging of galactic stellar cores to within the purported stalling radius for supermassive black hole pairs. The data show no evidence for an excess of stalled binary systems at small separations. We see circumstantial evidence that the relative state of radio emission between paired supermassive black holes is correlated within orbital separations of 2.5 kpc.
arxiv:1008.4382
In this paper, we demonstrate that Li's fixed point theorems are indeed equivalent to the primitive Caristi's fixed point theorem, Jachymski's fixed point theorems, Feng and Liu's fixed point theorems, Khamsi's fixed point theorems, and others.
arxiv:1010.0923
The Earth's density distribution can be approximately considered piecewise continuous at the scale of two-flavor oscillations of typical solar neutrinos, such as the beryllium-7 and boron-8 neutrinos. This quite general assumption appears to be enough to analytically calculate the day-night asymmetry factor for such neutrinos. Using the explicit time averaging procedure, we show that, within the leading-order approximation, this factor is determined by the electron density within about one oscillation length under the detector, namely, in the Earth's crust (and upper mantle for high-energy neutrinos). We also evaluate the effect of the inner Earth's structure on the observed asymmetry and show that it is suppressed and mainly comes from the neutrinos observed near the winter and summer solstices. As a result, we arrive at a strict interval constraint on the asymmetry, which is valid within quite a wide class of Earth models.
arxiv:1302.7201
Magnetic quivers and Hasse diagrams for Higgs branches of rank 1 $4d$ $\mathcal{N}=2$ SCFTs are provided. These rank 1 theories fit naturally into families of higher rank theories, originating from higher dimensions, which are addressed.
arxiv:2006.16994
A superconductor/normal metal/superconductor Josephson junction is a coherent electron system where the thermodynamic entropy depends on temperature and the phase difference across the weak link. Here, exploiting the phase-temperature thermodynamic diagram of a thermally isolated system, we argue that a cooling effect can be achieved when the phase drop across the junction is brought from 0 to $\pi$ in an iso-entropic process. We show that iso-entropic cooling can be enhanced with a proper choice of geometrical and electrical parameters of the junction, i.e. by increasing the ratio between supercurrent and total junction volume. We present extensive numerical calculations using quasi-classical Green function methods for a short junction and compare them with analytical results. Interestingly, we demonstrate that phase-coherent thermodynamic cycles can be implemented by combining iso-entropic and iso-phasic processes acting on the weak link, thereby engineering the coherent version of thermal machines such as engines and cooling systems. We therefore evaluate their performances and the minimum temperature achievable in a cooling cycle.
arxiv:1806.01568
We study a model where two scalar fields, that are subdominant during inflation, decay into radiation some time after inflation has ended but before primordial nucleosynthesis. Perturbations of these two curvaton fields can be responsible for the primordial curvature perturbation. We write down the full non-linear equations that relate the primordial perturbation to the curvaton perturbations on large scales, calculate the power spectrum of the primordial perturbation, and finally go to second order to find the non-linearity parameter, $f_{NL}$. We find large positive values of $f_{NL}$ if the energy densities of the curvatons are sub-dominant when they decay, as in the single curvaton case. But we also find a large $f_{NL}$ even if the curvatons dominate the total energy density, in the case when the inhomogeneous radiation produced by the first curvaton decay is diluted by the decay of a second nearly homogeneous curvaton. The minimum value $\min(f_{NL}) = -5/4$ which we find is the same as in the single-curvaton case.
arxiv:0708.0223
The elementary geometric properties of Jacob's ladders of the second order lead to a class of new asymptotic formulae for short and microscopic parts of the Hardy-Littlewood integral of $|\zeta(1/2+it)|^4$. These formulae cannot be obtained by the methods of Balasubramanian, Heath-Brown and Ivic.
arxiv:1001.4007
This paper presents an approach to compute the worst-case gain of the interconnection of a finite time horizon linear time-variant system and a perturbation. The input/output behavior of the uncertainty is described by integral quadratic constraints (IQCs). A condition for the worst-case gain of such an interconnection can be formulated using dissipation theory as a parameterized Riccati differential equation, which depends on the chosen IQC multiplier. A nonlinear optimization problem is formulated to minimize the upper bound of the worst-case gain over a set of admissible IQC multipliers. This problem can be efficiently solved with a custom-tailored, logarithmically scaled, adaptive differential evolution algorithm. It provides a fast alternative to similar approaches based on solving semidefinite programs. The algorithm is applied to the worst-case aerodynamic load analysis for an expendable launch vehicle (ELV). The worst-case load of the uncertain ELV is calculated under wind turbulence during the atmospheric ascent and compared to results from nonlinear simulation.
arxiv:2111.12748
We present an open-source software package, WannierTools, a tool for the investigation of novel topological materials. This code works in the tight-binding framework, which can be generated by another software package, Wannier90. It can help classify the topological phase of given materials by calculating the Wilson loop, and can obtain the surface state spectrum which is detected by angle-resolved photoemission (ARPES) and in scanning tunneling microscopy (STM) experiments. It also identifies positions of Weyl/Dirac points and nodal line structures, and calculates the Berry phase around a closed momentum loop and the Berry curvature in a part of the Brillouin zone.
arxiv:1703.07789
Models of period variations are basic tools for period analyses of variable stars. We introduce the phase function and instant period, and formulate basic relations and equations among them. Some simple period models are also presented.
arxiv:1212.5527
Coupled-cluster and Green's function theories are highly successful in treating many-body electron correlation, and there has been significant interest in identifying and leveraging connections between them. Here we present a diagrammatic definition of the irreducible coupled-cluster self-energy that directly embeds coupled-cluster theory within the framework of many-body field theory. The EOM-CC treatment emerges naturally from our definition via the Dyson and Bethe-Salpeter equations, providing a unified description of RPA, $GW$-BSE and CC theory for ground state and excitation energies. This clarifies the origin of previously established connections between RPA, $GW$-BSE and coupled-cluster theory, and exposes the relationship between vertex corrections and the coupled-cluster amplitude equations.
arxiv:2309.10451
We present an extensive experimental and theoretical study of the proximity effect in InAs nanowires connected to superconducting electrodes. We fabricate and investigate devices with suspended gate-controlled nanowires and nonsuspended nanowires, with a broad range of lengths and normal-state resistances. We analyze the main features of the current-voltage characteristics: the Josephson current, excess current, and subgap current as functions of length, temperature, magnetic field, and gate voltage, and compare them with theory. The Josephson critical current for a short-length device, L = 30 nm, exhibits a record high magnitude of 800 nA at low temperature that comes close to the theoretically expected value. The critical current in all other devices is typically reduced compared to the theoretical values. The excess current is consistent with the normal resistance data and agrees well with the theory. The subgap current shows a large number of structures; some of them are identified as subharmonic gap structures generated by multiple Andreev reflection. The other structures, detected in both suspended and nonsuspended devices, have the form of voltage steps at voltages that are independent of either the superconducting gap or the length of the wire. By varying the gate voltage in suspended devices, we are able to observe a crossover from typical tunneling transport at large negative gate voltage, with suppressed subgap current and negative excess current, to pronounced proximity junction behavior at large positive gate voltage, with enhanced Josephson current and subgap conductance as well as a large positive excess current.
arxiv:1311.1745
We derive general relationships between the number of complex poles of a propagator and the sign of the spectral function originating from the branch cut in the Minkowski region, under some assumptions on the asymptotic behaviors of the propagator. We apply this relation to the mass-deformed Yang-Mills model with one-loop quantum corrections, which is identified with a low-energy effective theory of the Yang-Mills theory, to show that the gluon propagator in this model has a pair of complex conjugate poles or "tachyonic" poles of multiplicity two, in accordance with the fact that the gluon field has a negative spectral function, while the ghost propagator has at most one "unphysical" pole. Finally, we discuss implications of these results for gluon confinement and other non-perturbative aspects of the Yang-Mills theory.
arxiv:1812.03116
We show that topological properties of minimal Dirac sheets, as well as of current lines, characterize the phases unambiguously. We obtain the minimal sheets reliably by a suitable simulated-annealing procedure.
arxiv:hep-lat/9412066
We refine Schmidt's problem and a partition identity related to 2-color partitions, which we will refer to as the Uncu-Andrews-Paule theorem. We approach the problem using Boulet-Stanley weights and a formula on Rogers-Szeg\H{o} polynomials by Berkovich-Warnaar, and present various Schmidt's-problem-like theorems and their refinements. Our new Schmidt-type results include the use of even-indexed parts' sums, alternating sums of parts, and hook lengths, as well as the odd-indexed parts' sum which appears in the original Schmidt's problem. We also translate some of our Schmidt's-problem-like relations to weighted partition counts with multiplicative weights in relation to Rogers-Ramanujan partitions.
arxiv:2205.00527
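The original Schmidt's problem referenced above states that partitions into distinct parts $a_1 > a_2 > \dots$ whose odd-indexed parts sum to $n$ are equinumerous with ordinary partitions of $n$. This identity can be checked numerically by brute force (the bound "all parts $\le n$" follows because $a_1$ contributes to the odd-indexed sum):

```python
from itertools import combinations

def schmidt_count(n):
    """Count partitions into distinct parts with a_1 + a_3 + a_5 + ... = n."""
    count = 0
    for k in range(1, n + 1):
        for subset in combinations(range(1, n + 1), k):
            parts = sorted(subset, reverse=True)   # a_1 > a_2 > ...
            if sum(parts[0::2]) == n:              # odd-indexed parts' sum
                count += 1
    return count

def partition_count(n):
    """Ordinary partition function p(n) via dynamic programming."""
    dp = [1] + [0] * n
    for part in range(1, n + 1):
        for s in range(part, n + 1):
            dp[s] += dp[s - part]
    return dp[n]

# Verify the identity for small n.
checks = [(schmidt_count(n), partition_count(n)) for n in range(1, 8)]
```

This brute-force check is only a sanity test of the identity for small $n$; the paper's refinements track additional statistics (hook lengths, alternating sums) not modeled here.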
In the realm of big data, discerning patterns in nonlinear systems affected by external control inputs is increasingly challenging. Our approach blends the coarse-graining strengths of centroid-based unsupervised clustering with the clarity of sparse regression in a unique way to enhance the closed-loop feedback control of nonlinear dynamical systems. A key innovation in our methodology is the employment of cluster coefficients via a cluster decomposition of time-series measurement data. This approach transcends the conventional emphasis on the proximity of time-series measurements to cluster centroids, offering a more nuanced representation of the dynamics within phase space. Capturing the evolving dynamics of these coefficients enables the construction of a robust, deterministic model for the observed states of the system. This model excels in capturing a wide range of dynamics, including periodic and chaotic behaviors, under the influence of external control inputs. Demonstrated in both the low-dimensional Lorenz system and the high-dimensional scenario of a flexible plate immersed in fluid flow, our model showcases its ability to pinpoint critical system features and its adaptability in reaching any observed state. A distinctive feature of our control strategy is the novel hopping technique between cluster states, which successfully averts lobe switching in the Lorenz system and accelerates vortex shedding in fluid-structure interaction systems while maintaining the mean aerodynamic characteristics.
arxiv:2312.14186
The possibility to explain basic physical properties of relaxors within the concept of the dipole-glass transition is discussed. We argue that this concept provides the only consistent picture accounting for all known anomalous features of relaxors. The origin of their history-dependent properties can be naturally traced to the main paradigm of glass-state theory: the existence of numerous metastable states. Based on this paradigm, the phenomenological description of known history-dependent phenomena in relaxors agrees qualitatively with experiments.
arxiv:1003.0147
The connections between nonmonotonic reasoning and belief revision are well-known. A central problem in the area of nonmonotonic reasoning is the problem of default entailment, i.e., when should an item of default information representing "if A is true then, normally, B is true" be said to follow from a given set of items of such information. Many answers to this question have been proposed but, surprisingly, virtually none have attempted any explicit connection to belief revision. The aim of this paper is to give an example of how such a connection can be made by showing how the lexicographic closure of a set of defaults may be conceptualised as a process of iterated revision by sets of sentences. Specifically, we use the revision process of Nayak.
arxiv:cs/0003017
We present an event-by-event study of cosmic ray (CR) composition with the reflected Cherenkov light method. The fraction of the CR light component above 5 PeV was reconstructed using the 2013 run data of the SPHERE experiment, which observed optical Vavilov-Cherenkov radiation of extensive air showers reflected from the snow surface of Lake Baikal. Additionally, we discuss a possibility to improve the elemental groups' separability by means of multidimensional criteria.
arxiv:1503.04998
We consider a three-dimensional rotating Ho\v{r}ava AdS black hole, which corresponds to a Lorentz-violating version of the BTZ black hole, and we analyze the effect of the breaking of Lorentz invariance on the possibility that the black hole can act as a particle accelerator, by analyzing the energy in the center-of-mass (CM) frame of two colliding particles in the vicinity of its horizons. We find that the critical angular momentum of particles increases when the Ho\v{r}ava parameter $\xi$ increases and when the aether parameter $b$ increases. Also, the particles can collide on the inner horizon with arbitrarily high CM energy if one of the particles has a critical angular momentum, making the BSW process possible for the non-extremal rotating Ho\v{r}ava AdS black hole. Notably, while for the extremal BTZ black hole particles with critical angular momentum can only exist on the degenerate horizon, for the Lorentz-violating version of the BTZ black hole a particle with critical angular momentum can exist in a region extending from the degenerate horizon.
arxiv:2002.04421
Eulerian percolation on $\mathbb{Z}^2$ with parameter $p$ is the classical Bernoulli bond percolation with parameter $p$ conditioned on the fact that every site has an even degree. We first explain why Eulerian percolation with parameter $p$ coincides with the contours of the Ising model for a well-chosen parameter $\beta(p)$. Then we study the percolation properties of Eulerian percolation.
arxiv:1607.01974
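The even-degree (Eulerian) constraint on a bond configuration is easy to state algorithmically. The sketch below checks it on a finite box of $\mathbb{Z}^2$; this is only an illustration of the conditioning event, not a sampler for the conditioned measure (which the paper obtains via the Ising-contour correspondence):

```python
def degrees(open_bonds, width, height):
    """Degree of each site given a list of open bonds ((x1,y1),(x2,y2))."""
    deg = {(x, y): 0 for x in range(width) for y in range(height)}
    for a, b in open_bonds:
        deg[a] += 1
        deg[b] += 1
    return deg

def is_eulerian(open_bonds, width, height):
    """True iff every site has even degree (0 counts as even)."""
    return all(d % 2 == 0 for d in degrees(open_bonds, width, height).values())

# An elementary square of open bonds: every site has degree 0 or 2.
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
          ((0, 1), (1, 1)), ((0, 0), (0, 1))]
ok_closed_loop = is_eulerian(square, 2, 2)
ok_broken_loop = is_eulerian(square[:3], 2, 2)
```

The closed loop satisfies the constraint while removing one bond breaks it, mirroring the fact that Eulerian configurations are exactly unions of edge-disjoint cycles (Ising contours).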
We present a low-cost ultraviolet to infrared absolute quantum efficiency detector characterization system developed using commercial off-the-shelf components. The key components of the experiment include a light source, a regulated power supply, a monochromator, an integrating sphere, and a calibrated photodiode. We provide a step-by-step procedure to construct the photon and quantum efficiency transfer curves of imaging sensors. We present results for the GSENSE 2020 BSI CMOS sensor and the Sony IMX 455 BSI CMOS sensor. As a reference for similar characterizations, we provide a list of parts and associated costs along with images of our setup.
arxiv:2207.13052
We prove global well-posedness and scattering for solutions to the mass-critical inhomogeneous nonlinear Schr\"odinger equation $i\partial_t u + \Delta u = \pm |x|^{-b}|u|^{\frac{4-2b}{d}}u$ for large $L^2(\mathbb{R}^d)$ initial data with $d \ge 3$, $0 < b < \min\left\{2, \frac{d}{2}\right\}$; in the focusing case, we require that the mass is strictly less than that of the ground state. Compared with the classical Schr\"odinger case ($b = 0$; Dodson, J. Amer. Math. Soc. (2012), Adv. Math. (2015)), the main differences for the inhomogeneous case ($b > 0$) are that the presence of the inhomogeneity $|x|^{-b}$ creates a nontrivial singularity at the origin and breaks the translation symmetry as well as the Galilean invariance of the equation, which makes the establishment of the profile decomposition and long-time Strichartz estimates more difficult. To overcome these difficulties, we perform the concentration compactness/rigidity method of [Kenig and Merle, Invent. Math. (2006)] in the Lorentz space framework, and reduce the problem to the exclusion of almost periodic solutions. The exclusion of these solutions utilizes fractional estimates and long-time Strichartz estimates in Lorentz spaces. In our study, we observe that the decay of the inhomogeneity $|x|^{-b}$ at infinity prevents the concentration of the almost periodic solution at infinity in either physical or frequency space. Therefore, we can use classical Morawetz estimates, rather than interaction Morawetz estimates, to exclude the existence of the quasi-soliton.
arxiv:2412.04566
We report on the observation of a spin texture in a cold exciton gas in a GaAs/AlGaAs coupled quantum well structure. The spin texture is observed around the exciton rings. The observed phenomena include: a ring of linear polarization, a vortex of linear polarization with polarization perpendicular to the radial direction, an anisotropy in the exciton flux, a skew of the exciton fluxes in orthogonal circular polarizations and a corresponding four-leaf pattern of circular polarization, a periodic spin texture, and extended exciton coherence in the region of the polarization vortex. The data indicate a transport regime where the spin polarization is locked to the direction of particle propagation and scattering is suppressed.
arxiv:1103.0321
We now have several observational examples of misaligned broken protoplanetary discs, where the disc inner regions are strongly misaligned with respect to the outer disc. Current models suggest that this disc structure can be generated with an internal misaligned companion (stellar or planetary), but the occurrence rate of these currently unobserved companions remains unknown. Here we explore whether a strong misalignment between the inner and outer disc can be formed without such a companion. We consider a disc that has an existing gap (essentially separating the disc into two regions) and use a flyby to disturb the discs, leading to a misalignment. Despite considering the most optimistic parameters for this scenario, we find maximum misalignments between the inner and outer disc of $\sim 45^\circ$, and that these misalignments are short-lived. We thus conclude that the currently observed misaligned discs must harbour internal, misaligned companions.
arxiv:1911.05760
We establish recurrence formulas for the order of the classical groups that allow us to find a generalization of Euler's angles for classical groups and the invariant measures of these groups. We find the generating function for the SU(2) subset of the SU(3) basis in the Fock-Bargmann space, and a new basis of SU(3). This new basis is an eigenfunction of the square of the kinetic moment in product spaces of spherical harmonics. We generalize the generating function of SU(2) and find invariant polynomials of SU(3) which are elements of the basis of SU(6). Using the above results, we deduce a method for the calculation of isoscalar factors. We expose this method and give the generating function for a particular case. Finally, we determine the generating function of the elements of the representation matrix of SU(3) and derive the analytical expression of these elements.
arxiv:0805.2740
We propose an experiment to search for a permanent atomic electric-dipole moment (EDM) using laser-cooled $^{171}$Yb atoms launched in an atomic fountain. A uniform B field sets the quantization axis, and the Ramsey separated-oscillatory-fields method is used to measure the Zeeman precession frequency of the atoms. Laser beams of appropriate polarization are used for preparation and detection in a given magnetic sublevel. The signature of an EDM is a shift in the Ramsey resonance correlated with the application of a large E field. The precision is expected to be at least 20 times better than current limits because the use of a cold atomic beam allows application of an E field 10 times larger than in a vapor cell, and the interaction time with the E field is 200 times larger compared to a thermal beam. The leading source of systematic error in beam experiments, the (E x v/c) motional magnetic field, is reduced considerably because of the near-perfect reversal of velocity between the up and down trajectories through the E-field region.
arxiv:physics/0510087
We analyze the Bombay Stock Exchange (BSE) price index over the period of the last 12 years. Keeping in mind the large fluctuations in the last few years, we carefully identify the transient, non-statistical and locally structured variations. For that purpose, we make use of the Daubechies wavelet and characterize the fractal behavior of the returns using a recently developed wavelet-based fluctuation analysis method. The returns show a fat-tail distribution as well as weak non-statistical behavior. We have also carried out continuous wavelet as well as Fourier power spectral analysis to characterize the periodic nature and correlation properties of the time series.
arxiv:0905.4237
A driving question in (quantum) cohomology of flag varieties is to find non-recursive, positive combinatorial formulas for expressing the product of two classes in a particularly nice basis, called the Schubert basis. Bertram, Ciocan-Fontanine and Fulton provided a way to compute quantum products of Schubert classes in the Grassmannian of k-planes in complex n-space by doing classical multiplication and then applying a combinatorial rim hook rule which yields the quantum parameter. In this paper, we provide a generalization of this rim hook rule to the setting in which there is also an action of the complex torus. Combining this result with Knutson and Tao's puzzle rule then gives an effective algorithm for computing all equivariant quantum Littlewood-Richardson coefficients. Interestingly, this rule requires a specialization of torus weights modulo n, suggesting a direct connection to the Peterson isomorphism relating quantum and affine Schubert calculus.
arxiv:1403.6218
Data on the photon nonproportional response of 33 inorganic scintillation materials are systematized and analyzed. The main trends of nonproportionality for different groups of inorganic scintillators, especially for oxides and halides, are highlighted. The dependence of the shape and degree of the photon nonproportional response on chemical composition, dopant type, refractive index and other fundamental properties of the materials is studied. Better proportionality appears to be correlated with a higher refractive index of the compound. Another related factor is the width of the valence band in halide compounds. With larger valence band width from fluorides, to chlorides, to bromides, and to iodides, better proportionality is observed.
arxiv:1204.4350
We build on a recently proposed method for stepwise explaining solutions of constraint satisfaction problems (CSPs) in a human-understandable way. An explanation here is a sequence of simple inference steps, where simplicity is quantified using a cost function. The algorithms for explanation generation rely on extracting minimal unsatisfiable subsets (MUSs) of a derived unsatisfiable formula, exploiting a one-to-one correspondence between so-called non-redundant explanations and MUSs. However, MUS extraction algorithms do not provide any guarantee of subset minimality or optimality with respect to a given cost function. Therefore, we build on these formal foundations and tackle the main points of improvement, namely how to efficiently generate explanations that are provably optimal (with respect to the given cost metric). For that, we developed (1) a hitting set-based algorithm for finding the optimal constrained unsatisfiable subsets; (2) a method for re-using relevant information over multiple algorithm calls; and (3) methods exploiting domain-specific information to speed up the explanation sequence generation. We experimentally validated our algorithms on a large number of CSP problems. We found that our algorithms outperform the MUS approach in terms of explanation quality and computational time (on average up to 56% faster than a standard MUS approach).
arxiv:2303.11712
the standard map is a paradigmatic one-parameter (noted $a$) two-dimensional conservative map which displays both chaotic and regular regions. this map becomes integrable for $a = 0$. for $a \ne 0$ it can be numerically shown that the usual, boltzmann-gibbs entropy $S_1(t) = -\sum_{i} p_i(t) \ln p_i(t)$ exhibits a {\it linear} time evolution whose slope hopefully converges, for very fine graining, to the kolmogorov-sinai entropy. however, for increasingly small values of $a$, an increasingly large time interval emerges, {\it before} that stage, for which {\it linearity} with $t$ is obtained only for the generalized nonextensive entropic form $S_q(t) = \frac{1 - \sum_{i} [p_i(t)]^{q}}{q - 1}$ with $q = q^* \simeq 0.3$. this anomalous regime corresponds in some sense to a power-law (instead of exponential) mixing. this scenario might explain why in isolated classical long-range $N$-body hamiltonians, and depending on the initial conditions, a metastable state (whose duration diverges with $1/N \to 0$) is observed before it crosses over to the bg regime.
arxiv:cond-mat/0108501
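For context on the abstract above: it contrasts the Boltzmann-Gibbs entropy with the nonextensive form $S_q$. A minimal sketch of both quantities, together with one common parametrization of the standard map (the paper's convention for the nonlinearity $a$ may differ), could look like:

```python
import numpy as np

def standard_map(theta, p, a):
    # one common convention for the standard (kicked-rotor) map;
    # the paper's parametrization of the nonlinearity a may differ
    p_new = (p + a * np.sin(theta)) % (2.0 * np.pi)
    theta_new = (theta + p_new) % (2.0 * np.pi)
    return theta_new, p_new

def entropy_q(probs, q):
    # nonextensive entropy S_q = (1 - sum p_i^q) / (q - 1),
    # reducing to Boltzmann-Gibbs S_1 = -sum p_i ln p_i as q -> 1
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(probs * np.log(probs)))
    return float((1.0 - np.sum(probs ** q)) / (q - 1.0))
```

In the paper's setting, `probs` would come from a fine-grained histogram of an ensemble of trajectories iterated with `standard_map`, and linearity of `entropy_q(..., q)` in time is probed for different `q`.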
this paper presents a multirotor control architecture, where model predictive path integral control ( mppi ) and l1 adaptive control are combined to achieve both fast model predictive trajectory planning and robust trajectory tracking. mppi provides a framework to solve nonlinear mpc with complex cost functions in real - time. however, it often lacks robustness, especially when the simulated dynamics are different from the true dynamics. we show that the l1 adaptive controller robustifies the architecture, allowing the overall system to behave similar to the nominal system simulated with mppi. the architecture is validated in a simulated multirotor racing environment.
arxiv:2004.00152
let $ n \ geqslant 4 $. in this article, we will determine the asymptotic behaviour of the size of the set $ m ( b ) $ of integral points $ ( a _ { 0 } :... : a _ { n } ) $ on the hyperplane $ \ sum _ { i = 0 } ^ { n } x _ { i } = 0 $ in $ \ mathbf { p } ^ { n } $ such that $ a _ { i } $ is squareful ( an integer $ a $ is called squareful if the exponent of each prime divisor of $ a $ is at least two ), non - zero and $ | a _ { i } | \ leq b $ for each $ i \ in \ { 0,..., n \ } $, when $ b $ goes to infinity. for this, i will use the classical hardy - littlewood method. the result obtained supports a possible generalization of the brauer - manin program to fano orbifolds.
arxiv:1001.3296
context : deep galex uv data show that the extreme outskirts of some spiral galaxies are teeming with star formation. such young stellar populations evolving so far away from the bulk of their host galaxies challenge our overall understanding of how star formation proceeds at galactic scales. it is at present unclear whether our own milky way may also exhibit ongoing and recent star formation beyond the conventional edge of the disk ( $ \ sim 15 $ kpc ). aims : using \ textit { gaia } dr2 data, we aim to determine if such a population is present in the galactic halo, beyond the nominal radius of the milky way disk. methods : we studied the kinematics of \ textit { gaia } dr2 sources with parallax values between 1 / 60 and 1 / 30 milliarcseconds towards two regions that show abnormally high values of extinction and reddening ; the results are compared with predictions from galaxia galactic model. we also plotted the color - magnitude ( cm ) diagrams with heliocentric distances computed inverting the parallaxes, and studied the effects of the large parallax errors by monte carlo sampling. results : the kinematics point towards a galactic origin for one of the regions, while the provenance of the stars in the other is not clear. a spectroscopic analysis of some of the sources in the first region confirms that they are located in the halo. the cm diagram of the sources suggests that some of them are young.
arxiv:2001.03627
quadratic discriminant analysis ( qda ) is a simple method to classify a subject into two populations, and was proven to perform as well as the bayes rule when the data dimension p is fixed. the main purpose of this paper is to examine the empirical and theoretical behaviors of qda where p grows proportionally to the sample sizes without imposing any structural assumption on the parameters. the first finding in this moderate dimension regime is that qda can perform as poorly as random guessing even when the two populations deviate significantly. this motivates a generalized version of qda that automatically adapts to dimensionality. under a finite fourth moment condition, we derive misclassification rates for both the generalized qda and the optimal one. a direct comparison reveals one " easy " case where the difference between two rates converges to zero and one " hard " case where that converges to some strictly positive constant. for the latter, a divide - and - conquer approach over dimension ( rather than sample ) followed by a screening procedure is proposed to narrow the gap. various numerical studies are conducted to back up the proposed methodology.
arxiv:1808.10065
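As background for the abstract above: the classical QDA rule assigns a point to the class maximizing a Gaussian log-density plus log-prior. A minimal fixed-dimension sketch (the textbook rule, not the generalized, dimension-adaptive variant the paper proposes) might be:

```python
import numpy as np

def qda_score(x, mean, cov, prior):
    # quadratic discriminant: log prior + log Gaussian density, up to a
    # constant shared by all classes (so it drops out of the argmax)
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * logdet - 0.5 * diff @ np.linalg.solve(cov, diff)

def qda_classify(x, class_params):
    # class_params: iterable of (mean, cov, prior) tuples, one per class
    return int(np.argmax([qda_score(x, m, c, p) for m, c, p in class_params]))
```

The paper's "moderate dimension" pathology arises when the covariances must be estimated with p growing proportionally to the sample sizes; the rule itself stays as above.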
in this paper, i give a method to calculate the homfly polynomials of knots by using a representation of the braid group $b_4$ into a group of $3 \times 3$ matrices. i will also give examples of a 2-bridge knot and a 3-bridge knot that have the same jones polynomial but different homfly polynomials.
arxiv:1309.5052
development of autonomous and self - driving vehicles requires agile and reliable services to manage hazardous road situations. vehicular network is the medium that can provide high - quality services for self - driving vehicles. the majority of service requests in vehicular networks are delay intolerant ( e. g., hazard alerts, lane change warning ) and require immediate service. therefore, vehicular networks, and particularly, vehicle - to - infrastructure ( v2i ) systems must provide a consistent real - time response to autonomous vehicles. during peak hours or disasters, when a surge of requests arrives at a base station, it is challenging for the v2i system to maintain its performance, which can lead to hazardous consequences. hence, the goal of this research is to develop a v2i system that is robust against uncertain request arrivals. to achieve this goal, we propose to dynamically allocate service requests among base stations. we develop an uncertainty - aware resource allocation method for the federated environment that assigns arriving requests to a base station so that the likelihood of completing it on - time is maximized. we evaluate the system under various workload conditions and oversubscription levels. simulation results show that edge federation can improve robustness of the v2i system by reducing the overall service miss rate by up to 45 %.
arxiv:1905.04460
contained in the parisista — a supplementary text / appendix — of the atharvaveda. he does not provide any more bibliographic clarification on the sourcing. the book's editor, v. s. agrawala, argues that since the vedas are defined as the traditional repositories of all knowledge, any knowledge can be assumed to be somewhere in the vedas, by definition; he even went to the extent of deeming krishna tirtha's work a parisista in itself. however, numerous mathematicians and sts scholars (dani, kim plofker, k. s. shukla, jan hogendijk et al.) note that the vedas do not contain any of those sutras and sub-sutras. when shukla, a mathematician and historiographer of ancient indian mathematics, challenged krishna tirtha to locate the sutras in the parishishta of a standard edition of the atharvaveda, krishna tirtha stated that they were not included in the standard editions but only in a hitherto-undiscovered version, chanced upon by him; the foreword and introduction of the book also take a similar stand. sanskrit scholars have observed that the book's linguistic style is not that of the vedic period but rather reflects modern sanskrit. dani points out that the contents of the book have "practically nothing in common" with the mathematics of the vedic period or even with subsequent developments in indian mathematics. shukla reiterates the observations on a per-chapter basis. for example, multiple techniques in the book involve the use of decimals. these were unknown during the vedic times and were introduced in india only in the sixteenth century; the works of numerous ancient mathematicians such as aryabhata, brahmagupta and bhaskara were based entirely on fractions. from a historiographic perspective, vedic india had no knowledge of differentiation or integration. the book also claims that analytic geometry of conics occupied an important tier in vedic mathematics, which runs contrary to all available evidence.
= = publication history and reprints = = first published in 1965, five years after krishna tirtha ' s death, the work consisted of forty chapters, originally on 367 pages, and covered techniques he had promulgated through his lectures. a foreword by tirtha ' s disciple manjula trivedi stated that he had originally written 16 volumes — one on each sutra — but the manuscripts were lost before publication, and that this work was penned
https://en.wikipedia.org/wiki/Vedic_Mathematics
the article is devoted to homological complexes. smashly graded modules and complexes are studied over nonassociative algebras with metagroup relations. smashed tensor products of homological complexes are investigated. their homotopisms and homologisms are scrutinized.
arxiv:2012.10415
in this paper, we investigate topological aspects of indices of twisted geometric operators on manifolds equipped with fibered boundaries. we define $ k $ - groups relative to the pushforward for boundary fibration, and show that indices of twisted geometric operators, defined by complete $ \ phi $ or edge metrics, can be regarded as the index pairing over these $ k $ - groups. we also prove various properties of these indices using groupoid deformation techniques. using these properties, we give an application to the localization problem of signature operators for singular fiber bundles.
arxiv:1902.03767
this paper focuses on solving coupled problems of lumped parameter models. such problems are of interest for the simulation of severe accidents in nuclear reactors: these coarse-grained models allow for fast calculations for statistical analysis used for risk assessment and solutions of large problems when considering the whole severe accident scenario. however, this modeling approach has several numerical flaws. besides, in this industrial context, computational efficiency is of great importance, leading to various numerical constraints. the objective of this research is to analyze the applicability of explicit coupling strategies to solve such coupled problems and to design implicit coupling schemes allowing stable and accurate computations. the proposed schemes are theoretically analyzed and tested within cea's procor platform on a problem of heat conduction solved with coupled lumped parameter models and coupled 1d models. numerical results are discussed and allow us to emphasize the benefits of using the designed coupling schemes instead of the usual explicit coupling schemes.
arxiv:1803.07016
dirac, in 1937 proposed the variation of coupling constants derived from his large number hypothesis. efforts have continued since then to constrain their variation by various methods. we briefly discuss several methods used for the purpose while focusing primarily on the use of supernovae type 1a, quasars, and gamma - ray bursts ( grbs ) as cosmological probes for determining cosmological distances. supernovae type ia ( sneia ) are considered the best standard candles since their intrinsic luminosity can be determined precisely from their light curves. however, they have only been observed up to about redshift $ z = 2. 3 $, mostly at $ z < 1. 5 $. quasars are the brightest non - transient cosmic sources in the universe. they have been observed up to $ z = 7. 5 $. certain types of quasars can be calibrated well enough for their use as standard candles but with a higher degree of uncertainty in their intrinsic luminosity than the sneia. grbs are even brighter than quasars, observed up to $ z = 9. 4 $. their radiation lasts from 10s of milliseconds to several minutes and, in rare cases, for a few hours. however, they are even more challenging to calibrate as standard candles than quasars. what if the standard candles ' intrinsic luminosities are affected when the coupling constants become dynamic? this paper uses our earlier finding that the speed of light c, the gravitational constant g, the planck constant h, and the boltzmann constant k variations are correlated as $ g \ thicksim c ^ { 3 } \ thicksim h ^ { 3 } \ thicksim k ^ { 3 / 2 } $ with $ ( \ dot { g } / g ) _ { 0 } = 3 ( \ dot { c } / c ) _ { 0 } = ( \ dot { h } / h ) _ { 0 } = 1. 5 ( \ dot { k } / k ) _ { 0 } = 5. 4h _ { 0 } = 3. 90 ( \ pm 0. 04 ) \ times 10 ^ { - 10 } yr ^ { - 1 } $ corroborates it with sneia, quasars, and grbs observational data. also, we show that this covarying coupling constant model may be better than the standard { \ lambda } cdm model for using quasars and
arxiv:2301.09795
we prove a central limit theorem for the horvitz - thompson estimator based on the gram - schmidt walk ( gsw ) design, recently developed in harshaw et al. ( 2022 ). in particular, we consider the version of the gsw design which uses randomized pivot order, thereby answering an open question raised in the same article. we deduce this under minimal and global assumptions involving only the problem parameters such as the ( sum ) potential outcome vector and the covariate matrix. as an interesting consequence of our analysis we also obtain the precise limiting variance of the estimator in terms of these parameters which is smaller than the previously known upper bound. the main ingredients are a simplified skeletal process approximating the gsw design and concentration phenomena for random matrices obtained from random sampling using the stein ' s method for exchangeable pairs.
arxiv:2305.12512
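For context, the Horvitz-Thompson estimator analyzed above weights each sampled outcome by its inverse inclusion probability. A generic sketch of the estimator of a population total (independent of the GSW design itself, which fixes how `sampled` is drawn) is:

```python
import numpy as np

def horvitz_thompson_total(y, pi, sampled):
    # unbiased estimator of the population total sum(y):
    # sum over sampled units of y_i / pi_i, where pi_i = P(unit i sampled)
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    sampled = np.asarray(sampled, dtype=bool)
    return float(np.sum(y[sampled] / pi[sampled]))
```

Unbiasedness holds for any design with known, positive inclusion probabilities; the CLT in the paper concerns the specific correlation structure the GSW design induces in `sampled`.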
we present a simple laboratory experiment to illustrate some aspects of the soliton theory in discrete lattices with a system that models the dynamics of dislocations in a crystal or the properties of adsorbed atomic layers. the apparatus not only shows the role of the peierls - nabarro potential but also illustrates the hierarchy of depinning transitions and the importance of the collective motion in mass transport.
arxiv:nlin/0002001
identification of attractors, that is, stable states and sustained oscillations, is an important step in the analysis of boolean models and exploration of potential variants. we describe an approach to the search for asynchronous cyclic attractors of boolean networks that exploits, in a novel way, the established technique of elimination of components. computation of attractors of simplified networks allows the identification of a limited number of candidate attractor states, which are then screened with techniques of reachability analysis combined with trap space computation. an implementation that brings together recently developed boolean network analysis tools, tested on biological models and random benchmark networks, shows the potential to significantly reduce running times.
arxiv:2305.01327
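The paper above targets asynchronous cyclic attractors via component elimination. As a baseline for intuition, attractors of a small Boolean network under the simpler synchronous (deterministic) update scheme can be enumerated exhaustively:

```python
from itertools import product

def synchronous_attractors(update, n):
    # brute-force attractor search for an n-node Boolean network under
    # synchronous update: from every state, follow the deterministic
    # successor map until a state repeats; the repeated tail is an
    # attractor (fixed point or cycle). feasible only for small n (2^n states).
    attractors = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        first = seen[s]
        attractors.add(frozenset(t for t, i in seen.items() if i >= first))
    return attractors
```

The asynchronous case handled in the paper is harder precisely because the successor relation is nondeterministic, which is why candidate states plus reachability/trap-space screening replace this direct walk.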
in this paper, we analyze the non - linear age of information ( aoi ) performance in a point - to - point short packet communication system, where a transmitter generates packets based on status updates and transmits the packets to a receiver. specifically, we investigate three packet management strategies, namely, the non - preemption with no buffer strategy, the non - preemption with one buffer strategy, and the preemption strategy. to characterize the level of the receiver ' s dissatisfaction on outdated data, we adopt a generalized \ alpha - \ beta aoi penalty function into the analysis and derive closed - form expressions for the average aoi penalty achieved by the three packet management strategies. simulation results are used to corroborate our analysis and explicitly evaluate the impact of various system parameters, such as the coding rate and status update generation rate, on the aoi performance. additionally, we find that the value of \ alpha reflects the system transmission reliability.
arxiv:2210.15672
we present an approach to solve a bethe-salpeter (bs) equation exactly without any approximation if the kernel of the bs equation is exactly instantaneous, and take positronium as an example to illustrate the general features of the solutions. as a middle stage, a set of coupled and self-consistent integration equations for a few scalar functions can always be equivalently derived from the bs equation, which are solvable accurately. for positronium, precise corrections to those of the schrödinger equation in order $v$ (relative velocity) in eigenfunctions, in order $v^2$ in eigenvalues, and the possible mixing, such as that between $s$ ($p$) and $d$ ($f$) components in $j^{pc} = 1^{--}$ ($j^{pc} = 2^{++}$) states as well, are determined quantitatively. moreover, we also point out that there is a problematic step in the classical derivation which was first proposed by e. e. salpeter. finally, we emphasize that for the effective theories (such as nrqed and nrqcd, etc.) we should pay great attention to the corrections indicated by the exact solutions.
arxiv:hep-ph/0406050
extending semantic parsers to code - switched input has been a challenging problem, primarily due to a lack of supervised training data. in this work, we introduce cst5, a new data augmentation technique that finetunes a t5 model using a small seed set ( $ \ approx $ 100 utterances ) to generate code - switched utterances from english utterances. we show that cst5 generates high quality code - switched data, both intrinsically ( per human evaluation ) and extrinsically by comparing baseline models which are trained without data augmentation to models which are trained with augmented data. empirically we observe that using cst5, one can achieve the same semantic parsing performance by using up to 20x less labeled data. to aid further research in this area, we are also releasing ( a ) hinglish - top, the largest human annotated code - switched semantic parsing dataset to date, containing 10k human annotated hindi - english ( hinglish ) code - switched utterances, and ( b ) over 170k cst5 generated code - switched utterances from the topv2 dataset. human evaluation shows that both the human annotated data as well as the cst5 generated data is of good quality.
arxiv:2211.07514
this paper considers the motion control of a particle and a spinning disc on rotating earth. the equations of motion are derived using lagrangian mechanics. trajectory planning is studied as an optimization problem using the method referred to as discrete mechanics and optimal control.
arxiv:1210.6435
recently proposed adaptive sketch & project ( sp ) methods connect several well - known projection methods such as randomized kaczmarz ( rk ), randomized block kaczmarz ( rbk ), motzkin relaxation ( mr ), randomized coordinate descent ( rcd ), capped coordinate descent ( ccd ), etc. into one framework for solving linear systems. in this work, we first propose a stochastic steepest descent ( ssd ) framework that connects sp methods with the well - known steepest descent ( sd ) method for solving positive - definite linear system of equations. we then introduce two greedy sampling strategies in the ssd framework that allow us to obtain algorithms such as sampling kaczmarz motzkin ( skm ), sampling block kaczmarz ( sbk ), sampling coordinate descent ( scd ), etc. in doing so, we generalize the existing sampling rules into one framework and develop an efficient version of sp methods. furthermore, we incorporated the polyak momentum technique into the ssd method to accelerate the resulting algorithms. we provide global convergence results for both the ssd method and the momentum induced ssd method. moreover, we prove $ \ mathcal { o } ( \ frac { 1 } { k } ) $ convergence rate for the cesaro average of iterates generated by both methods. by varying parameters in the ssd method, we obtain classical convergence results of the sd method as well as the sp methods as special cases. we design computational experiments to demonstrate the performance of the proposed greedy sampling methods as well as the momentum methods. the proposed greedy methods significantly outperform the existing methods for a wide variety of datasets such as random test instances as well as real - world datasets ( libsvm, sparse datasets from matrix market collection ). finally, the momentum algorithms designed in this work accelerate the algorithmic performance of the ssd methods.
arxiv:2012.13087
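Randomized Kaczmarz, one of the projection methods unified by the SP framework above, can be sketched as follows; this is the plain method with row sampling proportional to squared row norms (the Strohmer-Vershynin rule), without the paper's greedy sampling or Polyak momentum variants:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    # at each step, project the iterate onto the hyperplane a_i^T x = b_i
    # of a randomly chosen row i, sampled with probability ||a_i||^2 / ||A||_F^2
    rng = np.random.default_rng(seed)
    m, n = A.shape
    sq_norms = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = sq_norms / sq_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += ((b[i] - A[i] @ x) / sq_norms[i]) * A[i]
    return x
```

The greedy rules in the paper (SKM, SBK, SCD) replace the random row choice with a max-residual selection over a sampled block; the per-step projection itself is unchanged.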
reaction barriers are a crucial ingredient for first principles based computational retro - synthesis efforts as well as for comprehensive reactivity assessments throughout chemical compound space. while extensive databases of experimental results exist, modern quantum machine learning applications require atomistic details which can only be obtained from quantum chemistry protocols. for competing e2 and s $ _ \ text { n } $ 2 reaction channels we report 4 ' 466 transition state and 143 ' 200 reactant complex geometries and energies at respective mp2 / 6 - 311g ( d ) and single point df - lccsd / cc - pvtz level of theory covering the chemical compound space spanned by the substituents no $ _ 2 $, cn, ch $ _ 3 $, and nh $ _ 2 $ and early halogens ( f, cl, br ) as nucleophiles and leaving groups. reactants are chosen such that the activation energy of the competing e2 and s $ _ \ text { n } $ 2 reactions are of comparable magnitude. the correct concerted motion for each of the one - step reactions has been validated for all transition states. we demonstrate how quantum machine learning models can support data set extension, and discuss the distribution of key internal coordinates of the transition states.
arxiv:2006.00504
poly(3,4-ethylenedioxythiophene) (pedot) has been attracting attention as a thermoelectric material for room-temperature use due to its flexibility and non-toxicity. however, pedot reportedly generates insufficient thermoelectric power for practical use. this work tried to improve the seebeck coefficient by introducing molecular strain to pedot molecules by loading a polystyrene sulfonate (pss)-free pedot on a polyethylene terephthalate (pet) fiber. raman spectroscopy revealed that pedot materials with significant compression in the c$\alpha$-c$\alpha$ bond and extension in the c$\alpha$=c$\beta$ bond exhibit seebeck coefficients two orders of magnitude larger than usual. furthermore, strain in the c$\beta$-c$\beta$ bond strongly correlated with the seebeck coefficient, which varied in a broad range from -2100 to 3100 $\mu$v k$^{-1}$. this variation indicated that the molecular strain formed a sharp peak or valley around the fermi level in the density of states (dos) function, which gradually shifts along with the c$\beta$-c$\beta$ strain. this molecular strain-induced giant seebeck effect is expected to be an applicable technique for other polythiophene molecules.
arxiv:2410.23573
- component biophysics up to complex ecologies. biology is concerned with the characteristics, classification and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment. the biological fields of botany, zoology, and medicine date back to early periods of civilization, while microbiology was introduced in the 17th century with the invention of the microscope. however, it was not until the 19th century that biology became a unified science. once scientists discovered commonalities between all living things, it was decided they were best studied as a whole. some key developments in biology were the discovery of genetics, evolution through natural selection, the germ theory of disease, and the application of the techniques of chemistry and physics at the level of the cell or organic molecule. modern biology is divided into subdisciplines by the type of organism and by the scale being studied. molecular biology is the study of the fundamental chemistry of life, while cellular biology is the examination of the cell ; the basic building block of all life. at a higher level, anatomy and physiology look at the internal structures, and their functions, of an organism, while ecology looks at how various organisms interrelate. = = = earth science = = = earth science ( also known as geoscience ) is an all - embracing term for the sciences related to the planet earth, including geology, geography, geophysics, geochemistry, climatology, glaciology, hydrology, meteorology, and oceanography. although mining and precious stones have been human interests throughout the history of civilization, the development of the related sciences of economic geology and mineralogy did not occur until the 18th century. the study of the earth, particularly paleontology, blossomed in the 19th century. 
the growth of other disciplines, such as geophysics, in the 20th century led to the development of the theory of plate tectonics in the 1960s, which has had a similar effect on the earth sciences as the theory of evolution had on biology. earth sciences today are closely linked to petroleum and mineral resources, climate research, and to environmental assessment and remediation. = = = = atmospheric sciences = = = = although sometimes considered in conjunction with the earth sciences, due to the independent development of its concepts, techniques, and practices and also the fact of it having a wide range of sub - disciplines under its wing, atmospheric science is also considered a separate branch of natural science. this field studies the characteristics of different layers of the atmosphere from ground level to the edge of
https://en.wikipedia.org/wiki/Natural_science
logic locking has received considerable interest as a prominent technique for protecting the design intellectual property from untrusted entities, especially the foundry. recently, machine learning ( ml ) - based attacks have questioned the security guarantees of logic locking, and have demonstrated considerable success in deciphering the secret key without relying on an oracle, hence, proving to be very useful for an adversary in the fab. such ml - based attacks have triggered the development of learning - resilient locking techniques. the most advanced state - of - the - art deceptive mux - based locking ( d - mux ) and the symmetric mux - based locking techniques have recently demonstrated resilience against existing ml - based attacks. both defense techniques obfuscate the design by inserting key - controlled mux logic, ensuring that all the secret inputs to the muxes are equiprobable. in this work, we show that these techniques primarily introduce local and limited changes to the circuit without altering the global structure of the design. by leveraging this observation, we propose a novel graph neural network ( gnn ) - based link prediction attack, muxlink, that successfully breaks both the d - mux and symmetric mux - locking techniques, relying only on the underlying structure of the locked design, i. e., in an oracle - less setting. our trained gnn model learns the structure of the given circuit and the composition of gates around the non - obfuscated wires, thereby generating meaningful link embeddings that help decipher the secret inputs to the muxes. the proposed muxlink achieves key prediction accuracy and precision up to 100 % on d - mux and symmetric mux - locked iscas - 85 and itc - 99 benchmarks, fully unlocking the designs. we open - source muxlink [ 1 ].
arxiv:2112.07178
a new investigation of the coexistence and competition of ferroelectricity and superconductivity is reported. in particular we show that the starting hamiltonian of a previous study by birman and weger ( 2001 ) can be exactly diagonalized. the result differs significantly from mean - field theory. a hamiltonian with a different realization of the coupling between ferroelectricity and superconductivity is proposed. we report the results for mean - field theory applied to this hamiltonian. we find that the order parameters are strongly affected by this coupling.
arxiv:cond-mat/0601407
multivalent particles bind to targets via many independent ligand - receptor bonding interactions. this microscopic design spans length scales in both synthetic and biological systems. classic examples include interactions between cells, virus binding, synthetic ligand - coated micrometer - scale vesicles or smaller nano - particles, functionalised polymers, and toxins. equilibrium multivalent binding is a continuous yet super - selective transition with respect to the number of ligands and receptors involved in the interaction. increasing the ligand or receptor density on the two particles leads to sharp growth in the number of bound particles at equilibrium. here we present a theory and monte carlo simulations to show that applying mechanical force to multivalent particles causes their adsorption / desorption isotherm on a surface to become sharper and more selective, with respect to variation in the number of ligands and receptors on the two objects. when the force is only applied to particles bound to the surface by one or more ligands, then the transition can become infinitely sharp and first - order - - - a new binding regime which we term " hyper - selective ". force may be imposed by, e. g. flow of solvent around the particles, a magnetic field, chemical gradients, or triggered uncoiling of inert oligomers / polymers tethered to the particles to provide a steric repulsion to the surface. this physical principle is a step towards " all or nothing " binding selectivity in the design of multivalent constructs.
arxiv:1906.07303
deep learning ( dl ) has shown great promise in the unsupervised task of clustering. that said, while in classical ( i. e., non - deep ) clustering the benefits of the nonparametric approach are well known, most deep - clustering methods are parametric : namely, they require a predefined and fixed number of clusters, denoted by k. when k is unknown, however, using model - selection criteria to choose its optimal value might become computationally expensive, especially in dl as the training process would have to be repeated numerous times. in this work, we bridge this gap by introducing an effective deep - clustering method that does not require knowing the value of k as it infers it during the learning. using a split / merge framework, a dynamic architecture that adapts to the changing k, and a novel loss, our proposed method outperforms existing nonparametric methods ( both classical and deep ones ). while the very few existing deep nonparametric methods lack scalability, we demonstrate ours by being the first to report the performance of such a method on imagenet. we also demonstrate the importance of inferring k by showing how methods that fix it deteriorate in performance when their assumed k value gets further from the ground - truth one, especially on imbalanced datasets. our code is available at https : / / github. com / bgu - cs - vil / deepdpm.
arxiv:2203.14309
continuous monitoring of driven - dissipative quantum optical systems is a crucial element in the implementation of quantum metrology, providing essential strategies for achieving highly precise measurements beyond the classical limit. in this context, the relevant figure of merit is the quantum fisher information of the radiation field emitted by the driven - dissipative sensor. saturation of the corresponding precision limit as defined by the quantum cramer - rao bound is typically not achieved by conventional, temporally local continuous measurement schemes such as counting or homodyning. to address the outstanding open challenge of efficient retrieval of the quantum fisher information of the emission field, we design a novel continuous measurement strategy featuring temporally quasilocal measurement bases as captured by matrix product states. such measurement can be implemented effectively by injecting the emission field of the sensor into an auxiliary open system, a ` quantum decoder ' module, which ` decodes ' specific input matrix product states into simple product states as its output field, and performing conventional continuous measurement at the output. we devise a universal recipe for the construction of the decoder by exploiting time reversal transformation of quantum optical input - output channels, thereby establishing a universal method to achieve the quantum cramer - rao precision limit for generic sensors based on continuous measurement. as a by - product, we establish an effective formula for the evaluation of the quantum fisher information of the emission field of generic driven - dissipative open sensors. we illustrate the power of our scheme with paradigmatic open sensor designs including linear force sensors, fibre - interfaced nonlinear emitters, and driven - dissipative many - body sensors, and demonstrate that it can be robustly implemented under realistic experimental imperfections.
arxiv:2209.08777
With the prevalence of e-commerce websites and the ease of online shopping, consumers are embracing huge amounts of various options in products. Undeniably, shopping is one of the most essential activities in our society, and studying consumers' shopping behavior is important for industry as well as sociology and psychology. Indisputably, one of the most popular e-commerce categories is the clothing business. There arises the need for analysis of popular and attractive clothing features, which could further boost many emerging applications such as clothing recommendation and advertising. In this work, we design a novel system that consists of three major components: 1) exploring and organizing a large-scale clothing dataset from an online shopping website, 2) pruning and extracting images of best-selling products from clothing item data and user transaction history, and 3) utilizing a machine learning based approach to discover fine-grained clothing attributes as the representative and discriminative characteristics of popular clothing style elements. Through experiments over a large-scale online clothing shopping dataset, we demonstrate the effectiveness of our proposed system and obtain useful insights into clothing consumption trends and profitable clothing features.
arxiv:1611.03915
In an earlier paper, we studied solutions $g$ to convolution equations of the form $a_d * g^{*d} + a_{d-1} * g^{*(d-1)} + \dots + a_1 * g + a_0 = 0$, where $a_0, \dots, a_d$ are given arithmetic functions associated with Dirichlet series which converge on some right half plane, and $g$ is also required to be such a function. In this article, we extend our previous results to multidimensional general Dirichlet series of the form $\sum_{x \in X} f(x) e^{-sx}$ ($s \in \mathbb{C}^k$), where $X$ is an additive subsemigroup of $[0, \infty)^k$. If $X$ is discrete and a certain solvability criterion is satisfied, we determine solutions by an elementary recursive approach, adapting an idea of Fečkan. The solution of the general case leads us to a more comprehensive question: let $X$ be an additive subsemigroup of a pointed, closed convex cone $C$ in $\mathbb{R}^k$. Can we find a complex Radon measure on $X$ whose Laplace transform satisfies a given polynomial equation whose coefficients are Laplace transforms of such measures?
arxiv:0712.3172
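For the degree-one case $a_1 * g = -a_0$ with ordinary one-dimensional Dirichlet convolution, the elementary recursive approach is easy to sketch: the solvability criterion is $a_1(1) \neq 0$, and $g(n)$ is then determined by the values $g(m)$ with $m < n$. A small illustrative sketch (the function names are mine, not the paper's):

```python
def dirichlet_solve(a1, a0, n_max):
    # Solve a1 * g = -a0 recursively, where * is Dirichlet convolution:
    # (a1 * g)(n) = sum_{d | n} a1(d) g(n/d).  Requires a1(1) != 0.
    if a1(1) == 0:
        raise ValueError("solvability criterion a1(1) != 0 fails")
    g = {}
    for n in range(1, n_max + 1):
        # Split off the d = 1 term:
        # a1(1) g(n) = -a0(n) - sum_{d | n, d > 1} a1(d) g(n/d)
        tail = sum(a1(d) * g[n // d] for d in range(2, n + 1) if n % d == 0)
        g[n] = (-a0(n) - tail) / a1(1)
    return g

# Sanity check: with a1 = 1 and a0(n) = -n, the equation reads
# sum_{d | n} g(d) = n, whose unique solution is Euler's totient function.
phi = dirichlet_solve(lambda n: 1, lambda n: -n, 12)
print(phi[6], phi[12])  # 2.0 4.0
```

The same "solve for the lowest term, then recurse" idea is what generalizes to discrete subsemigroups $X \subset [0,\infty)^k$ once a suitable ordering of $X$ is fixed.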
Cellular networks are becoming increasingly heterogeneous, with higher base station (BS) densities and ever more frequency bands, making BS selection and band assignment key decisions in terms of rate and coverage. In this paper, we decompose the mobility-aware user association task into (i) forecasting of the user rate and then (ii) convex utility maximization for user association, accounting for the effects of BS load and handover overheads. Using a linear combination of normalized mean-squared error and normalized discounted cumulative gain as a novel loss function, a recurrent deep neural network is trained to reliably forecast the mobile users' future rates. Based on the forecast, the controller optimizes the association decisions to maximize the service-rate-based network utility using our computationally efficient (speedup of 100x versus a generic convex solver) algorithm based on the Frank-Wolfe method. Using an industry-grade network simulator developed by Meta, we show that the proposed model predictive control (MPC) approach improves the 5th percentile service rate by 3.5x compared to the traditional signal-strength-based association, reduces the median number of handovers by 7x compared to a handover-agnostic strategy, and achieves service rates close to a genie-aided scheme. Furthermore, our model-based approach is significantly more sample-efficient (needing 100x less training data) compared to model-free reinforcement learning (RL), and generalizes well across different user drop scenarios.
arxiv:2301.09294
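Frank-Wolfe is attractive for user association because the feasible set is a product of per-user simplices, so the linear subproblem of each iteration reduces to an argmax per user. The sketch below uses a standard proportional-fair utility with a load-balancing term as a plausible stand-in; the utility, the toy rates, and all names are my assumptions, not the paper's exact formulation:

```python
import numpy as np

def utility(x, log_r):
    # U(x) = sum_{u,b} x[u,b] log r[u,b] - sum_b y_b log y_b, with y_b = sum_u x[u,b].
    # The entropy-like second term penalizes piling load onto one BS.
    y = x.sum(axis=0)
    return (x * log_r).sum() - (y * np.log(y + 1e-12)).sum()

def frank_wolfe_associate(log_r, iters=300):
    n_users, n_bs = log_r.shape
    x = np.full((n_users, n_bs), 1.0 / n_bs)      # feasible uniform start
    for k in range(iters):
        y = x.sum(axis=0)
        grad = log_r - np.log(y + 1e-12) - 1.0    # dU/dx[u,b]
        # Linear maximization over the product of per-user simplices:
        # each user puts all mass on its best BS under the current gradient.
        s = np.zeros_like(x)
        s[np.arange(n_users), grad.argmax(axis=1)] = 1.0
        x = x + (2.0 / (k + 2)) * (s - x)         # standard FW step size
    return x

# Toy instance: users 0,1 have much better rates at BS 0; users 2,3 at BS 1.
log_r = np.array([[2.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.0, 2.0]])
x = frank_wolfe_associate(log_r)
print(np.round(x, 3))
```

Each iterate stays in the simplex by construction, which is why no projection step is needed; this is the main reason FW-type methods are fast for this constraint structure.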
In optimal control problems, there exist different kinds of extremals, that is, curves that are candidates to be solutions: abnormal, normal, and strictly abnormal. The key point for this classification is how those extremals depend on the cost function. We focus on control systems such as nonholonomic control mechanical systems and the associated kinematic systems, as long as they are equivalent. With all this in mind, we first study conditions to relate an optimal control problem for the mechanical system with another one for the associated kinematic system. Then, Pontryagin's maximum principle is used to connect the abnormal extremals of both optimal control problems. An example is given to glimpse what the abnormal solutions for kinematic systems become when they are considered as extremals of the optimal control problem for the corresponding nonholonomic mechanical systems.
arxiv:0806.2814
The distributions of electrical current and magnetic field in a thin-film superconducting ring are calculated by solving the London equation. The maximum amount of flux trapped by the hole, the fluxoid saturation number, is obtained by limiting the current density to the depairing current. The results are compared with similar results derived for the bulk case of a long hollow cylinder [Nordborg & Vinokur, Phys. Rev. B 62, 12408 (2000)]. In the limit of small holes, our result reduces to the Pearl solution for an isolated vortex in a thin film. For large hole radius, the ratio between saturation numbers in bulk and film superconductors is proportional to the square root of the hole size.
arxiv:0709.1086
Collective cell migration is key during development, wound healing, and metastasis, and relies on coordinated cell behaviors at the group level. Src kinase is a key signalling protein for physiological functions of epithelia, as it regulates many cellular processes, including adhesion, motility, and mechanotransduction. Its over-activation is associated with cancer aggressiveness. Here, we take advantage of optogenetics to precisely control Src activation in time and show that its pathological-like activation slows the collective rotation of epithelial cells confined into circular adhesive patches. We interpret velocity, force, and stress data during periods of non-activation and activation of Src thanks to a hydrodynamic description of the cell assembly as a polar active fluid. Src activation leads to a 2-fold decrease in the ratio of polar angle to friction, which could result from increased adhesiveness at the cell-substrate interface. Measuring internal stress allows us to show that active stresses are subdominant compared to traction forces. Our work reveals the importance of fine-tuning the level of Src activity for coordinated collective behaviors.
arxiv:2407.06920
We propose a general information-theoretic approach called SERAPH (semi-supervised metric learning paradigm with hyper-sparsity) for metric learning that does not rely upon the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize the entropy of that probability on labeled data and minimize it on unlabeled data following entropy regularization, which allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Furthermore, SERAPH is regularized by encouraging a low-rank projection induced from the metric. The optimization of SERAPH is solved efficiently and stably by an EM-like scheme with an analytical E-step and a convex M-step. Experiments demonstrate that SERAPH compares favorably with many well-known global and local metric learning methods.
arxiv:1206.4614
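The semi-supervised objective can be made concrete on a toy example: with a pairwise probability $P(\text{same} \mid x_i, x_j) = \sigma(\eta - d_M(x_i, x_j)^2)$ for a Mahalanobis metric $M$, one fits the labeled pairs while minimizing prediction entropy on unlabeled pairs. The sketch below only evaluates such an objective; the sigmoid link, $\eta = 1$, the labeled-pair log-likelihood (a simplified stand-in for the paper's maximum-entropy formulation), and all names are my assumptions, and no E/M optimization is shown:

```python
import numpy as np

def pair_prob(xi, xj, M, eta=1.0):
    # P(same class | xi, xj) = sigmoid(eta - d_M(xi, xj)^2), Mahalanobis d_M.
    d2 = (xi - xj) @ M @ (xi - xj)
    return 1.0 / (1.0 + np.exp(d2 - eta))

def objective(X, labeled, unlabeled, M):
    # Labeled pairs: log-likelihood of the observed same/different labels.
    ll = 0.0
    for i, j, same in labeled:
        p = pair_prob(X[i], X[j], M)
        ll += np.log(p if same else 1.0 - p)
    # Unlabeled pairs: negative entropy, rewarding confident predictions.
    ent = 0.0
    for i, j in unlabeled:
        p = pair_prob(X[i], X[j], M)
        ent -= p * np.log(p) + (1.0 - p) * np.log(1.0 - p)
    return ll - ent

# Toy data: class separation lives in dimension 0; dimension 1 is noise.
X = np.array([[0.0, 0.0], [0.3, 3.0], [5.0, 0.0], [5.3, 3.0]])
labeled = [(0, 1, True), (0, 2, False), (1, 3, False)]
unlabeled = [(2, 3)]
M_id = np.eye(2)                    # Euclidean baseline
M_learned = np.diag([1.0, 0.01])    # suppresses the noise dimension
print(objective(X, labeled, unlabeled, M_learned) >
      objective(X, labeled, unlabeled, M_id))
```

A metric that down-weights the noise dimension scores higher on both terms: labeled same-class pairs become close (high likelihood), and the unlabeled pair's prediction becomes confident (low entropy), which is the intuition behind integrating the two parts in one objective.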
Consider a sliding camera that travels back and forth along an orthogonal line segment $s$ inside an orthogonal polygon $P$ with $n$ vertices. The camera can see a point $p$ inside $P$ if and only if there exists a line segment containing $p$ that crosses $s$ at a right angle and is completely contained in $P$. In the minimum sliding cameras (MSC) problem, the objective is to guard $P$ with the minimum number of sliding cameras. In this paper, we give an $O(n^{5/2})$-time $(7/2)$-approximation algorithm for the MSC problem on any simple orthogonal polygon with $n$ vertices, answering a question posed by Katz and Morgenstern (2011). To the best of our knowledge, this is the first constant-factor approximation algorithm for this problem.
arxiv:1308.2757
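The visibility predicate itself is easy to state in code. The sketch below checks, for an orthogonal polygon given as a union of axis-aligned rectangles and a vertical camera track, whether the perpendicular segment through a query point stays inside the polygon. The rectangle-union representation, the dense-sampling containment check, and all names are my own simplifications; this is only the visibility test, not the paper's $O(n^{5/2})$ approximation algorithm:

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (x0, y0, x1, y1), axis-aligned

def inside(rects: List[Rect], x: float, y: float) -> bool:
    # Point-in-polygon for an orthogonal polygon given as a union of rectangles.
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in rects)

def camera_sees(rects: List[Rect], sx: float, sy0: float, sy1: float,
                px: float, py: float, steps: int = 200) -> bool:
    # Vertical track x = sx, sy0 <= y <= sy1.  The camera sees (px, py) iff the
    # horizontal segment from (sx, py) to (px, py) crosses the track at a right
    # angle and lies entirely inside P (approximated here by dense sampling).
    if not (sy0 <= py <= sy1):
        return False
    xs = (sx + (px - sx) * t / steps for t in range(steps + 1))
    return all(inside(rects, x, py) for x in xs)

# L-shaped orthogonal polygon: a horizontal bar plus a vertical bar.
L_shape = [(0.0, 0.0, 4.0, 2.0), (0.0, 0.0, 2.0, 6.0)]
print(camera_sees(L_shape, 1.0, 0.0, 6.0, 3.0, 1.0))  # True: segment stays inside
print(camera_sees(L_shape, 1.0, 0.0, 6.0, 3.0, 5.0))  # False: segment exits P
```

The second query fails because the horizontal segment through $(3, 5)$ leaves the L-shape even though both the point and the track are inside $P$, which is exactly what makes covering $P$ with few sliding cameras nontrivial.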
We present a model for the structure of baryons in which the valence partons interact through a linear potential. This model can be derived from QCD in the approximation where transverse momenta are ignored. We compare the valence quark distribution function predicted by our model with that extracted from global fits to deep inelastic scattering data. The only parameter we can adjust is the fraction of baryon momentum carried by valence partons. Our predictions agree well with data except for small values of the Bjorken scaling variable.
arxiv:hep-ph/9911538