diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbwkp" "b/data_all_eng_slimpj/shuffled/split2/finalzzbwkp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbwkp" @@ -0,0 +1,5 @@ +{"text":"\\chapter*{Abstract}\n\n\nWe present a system for the classification of mountain\npanoramas from user-generated photographs followed by\nidentification and extraction of mountain peaks from those\npanoramas. We have developed an automatic technique that,\ngiven as input a geo-tagged photograph, estimates its FOV\n(Field Of View) and the direction of the camera using a\nmatching algorithm on the photograph edge maps and a rendered\nview of the mountain silhouettes that should be seen\nfrom the observer's point of view. The extraction algorithm\nthen identifies the mountain peaks present in the photograph\nand their profiles. We discuss possible applications in social\nfields such as photograph peak tagging on social portals,\naugmented reality on mobile devices when viewing a mountain\npanorama, and generation of collective intelligence systems\n(such as environmental models) from massive social\nmedia collections (e.g. snow water availability maps based\non mountain peak states extracted from photograph hosting\nservices).\n\\chapter*{Sommario}\n\n\nProponiamo un sistema per la classificazione di panorami montani e per l'identificazione delle vette presenti in fotografie scattate dagli utenti.\nAbbiamo sviluppato una tecnica automatica che, data come input la foto e la sua geolocalizzazione, stima il FOV (Field of View o Angolo di Campo) e l'orientamento della fotocamera. Questo avviene tramite l'applicazione di un algoritmo di matching tra la mappa degli edge (bordi) della fotografia e alle silhouette delle montagne che dovrebbero essere visibili dall'osservatore a quelle coordinate.\nL'algoritmo di estrazione identifica poi i picchi delle montagne presenti nella fotografia e il loro profilo. \nVerranno discusse alcune possibile applicazioni in ambito sociale come ad esempio: \nl'identificazione e tagging (marcatura) delle fotografie sui social network, \nrealt\u00e0 aumentata su dispositivi mobile durante la visione di panorami montani e\nla generazione di sistemi di intelligenza collettiva (come modelli ambientali) dalle enormi collezioni multimediali dei social network (p.es. mappe della disponibilit\u00e0 di neve e acqua sulle vette delle montagne, estratte da servizi di condivisione di immagini).\n\\chapter*{Acknowledgements}\n\n\nForemost, I would like to thank Prof. Marco Tagliasacchi and Prof. Piero Fraternali for the opportunity to work with them on a such interesting and exciting project, and the continuous support during the preparation of this thesis.\n\nI would like to thank also:\n\\begin{itemize}\n\\item\nDr. Danny Chrastina (\\url{http:\/\/www.chrastina.net}) for helping with the preparation of this document.\n\\item\nDr. 
Ulrich Deuschle (\\url{http:\/\/www.udeuschle.de}) for his kind permission of using his mountain panorama generating web tool.\n\\item\nGregor Brdnik (\\url{http:\/\/www.digicamdb.com}) for providing the database of digital cameras and their sensor sizes.\n\\item\nMiroslav Sabo (\\url{http:\/\/www.mirosabo.com}) for the photograph of the Matterhorn used in this thesis.\n\\item\nAll my friends and colleagues (who are too many to be listed) who have helped and contributed to this work.\n\\end{itemize}\n\nLastly, but most importantly, I would like to thank my parents (Alexey and Irina) for making it possible for me to get here, and Valentina for constant support and motivation.\n\n\n\n\n\n\\chapter{Introduction}\n\\label{Introduction}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\n\\noindent\nThe most suitable paradigm for representing this work is probably passive crowdsourcing, a field that is not trivial to define and that is not even a subcategory of crowdsourcing in the strict sense, as its name can suggest. A possible definition of the passive crowdsourcing discipline can be the union of crowdsourcing, data mining and collective intelligence (computer science fields that have much in common but are slightly different). These differences are often hard to notice due to the youth of these concepts and so the presence of a lot of confusion. For this reason the purpose of this chapter is to define these concepts unambiguously and to define the problem statement of this work.\n\n\\section{Human Computation}\nThe idea of the computer computation goal has always been that which Alan Turing expressed in 1950:\n\\begin{quotation}\n\n\\noindent{\\emph{``\nThe idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.\n''}\n Alan Turing \\cite{Turing:1995:CMI:216408.216410}\n}\n}\n\\end{quotation}\nThough current progress in computer science brings automated solutions of more and more complex problems, this idea of computer systems able to solve any problem that humans can solve is however far from reality. There are still a lot of tasks that cannot be performed by computers, and those that could be but are preferred to be computed by humans for quality, time, and cost reasons. These tasks lead to the field of human computation: a field that is hard to give a definition to, in fact several definitions of the term can be found: the most general, modern and suitable for our needs of which is probably that extracted from von Ahn's dissertation:\n\\begin{quotation}\n\n\\noindent{\\emph{``\n... a paradigm for utilizing human processing power to solve \nproblems that computers cannot yet solve.\n''}\n Luis von Ahn \\cite{VonAhn:2005:HC:1168246}\n}\n}\n\\end{quotation}\nHuman computation can be thought of as several approaches varying with the tasks involved, the type of persons involved in task completion, the incentive techniques used, and what type of effort the persons are required to make. It must be said that the classification of human computation is not an ordinary hierarchy with parents and children, but is instead a set of related concepts not necessary including one another. So a possible taxonomy of human computation (seen as a list of related ideas) has been produced by combining definitions given by Quinn et al. \\cite{Quinn:2011:HCS:1978942.1979148} and Fraternali et al. 
\\cite{Fraternali:2012:PHL:2263310.2263839} can be:\n\\begin{itemize}\n\\item\n\\emph{Crowdsourcing}:\nthis approach manages the distributed assignment of tasks to an open, undefined and generally large group of executors. The task to be performed by the executors is split into a large number of microtasks (by the work provider or the crowdsourcing system itself) and each microtask is assigned by the system to a work performer, who executes it (usually for a reward of a small amount of money). The crowdsourcing application (usually defined by two interfaces: one for the work providers and one for the work performers) manages the work life cycle: performer assignment, time and price\nnegotiation, result submission and verification, and payment. In addition to the web interface, some platforms offer Application Programming Interfaces (APIs), whereby third parties can integrate the distributed work management functionality into their custom applications. Examples of crowdsourcing solutions are Amazon Mechanical Turk and Microtask.com \\cite{Fraternali:2012:PHL:2263310.2263839}.\n\\item\n\\emph{Games with a Purpose (GWAPs)}:\nthese are a sort of crowdsourcing application but with a fundamental difference in the user incentive technique: the process of resolving a task is implemented as a game with an enjoyable user experience. Instead of a monetary reward, the user's motivation in this approach is the gratification of the playing process. GWAPs, and more generally useful applications where the user solves perceptive or cognitive problems without knowing it, address tasks such as adding descriptive tags, recognising objects in images and checking the output of Optical Character Recognition (OCR) for correctness \\cite{Fraternali:2012:PHL:2263310.2263839}.\n\\item\n\\emph{Social Computing}:\na broad concept that includes applications and services that facilitate collective action and social interaction online with rich exchange of multimedia information and evolution of aggregate knowledge \\cite{parameswaran2007social}. Unlike crowdsourcing, the purpose is usually not to perform a task. The key distinction between human computation and social computing is that social computing facilitates relatively natural human behavior that happens to be mediated by technology, whereas participation in a human computation is directed primarily by the human computation system \\cite{Quinn:2011:HCS:1978942.1979148}.\n\\item\n\\emph{Collective Intelligence}: if seen as a process, the term can be defined as groups of individuals doing things collectively that seem intelligent \\cite{malone-harnessing}. If seen instead as the result of that process, it means knowledge of any kind that is generated (even unconsciously and not in explicit form) by the collective intelligence process. Quinn et al. \\cite{Quinn:2011:HCS:1978942.1979148} classify it as the superset of social computing and crowdsourcing, because both are defined in terms of social behavior. The key distinctions between collective intelligence and human computation are the same as with crowdsourcing, but with the additional distinction that collective intelligence applies only when the process depends on a group of participants. It is conceivable that there could be a human computation system with computations performed by a single worker in isolation. 
This is why part of human computation protrudes outside collective intelligence \\cite{Quinn:2011:HCS:1978942.1979148}.\n\\item\n\\emph{Data Mining}:\nthis can be defined broadly as the application of specific algorithms for extracting patterns from data \\cite{Fayyad96knowledgediscovery}. Speaking about human-created data the approach can be seen as extracting the knowledge from a certain result of a collective intelligence process. Creating this knowledge usually is not the goal of the persons that generate it, in fact often they are completely unaware of it (just think that almost everybody contributes to the knowledge of what are the most popular web sites just by visiting them: they open a web site because they need it, not to add a vote to its popularity). Though it is a very important concept in the field of collective intelligence, machine intelligence applied to social science and passive crowdsourcing (that will be defined in the next section) is a fully automated process by definition, so it is excluded from the area of human computation.\n\\item\n\\emph{Social Mobilization}:\nthis approach deals with social computation problems where the timing and the efficiency is crucial. Examples of this area are safety critical sectors like civil protection and disease control.\n\\item\n\\emph{Human Sensors}:\nexploiting the fact that the mobile devices tend to incorporate more and more sensors, this approach deals with a real-time collection of data (of various natures) treating persons with mobile devices as sensors for the data. Examples of these applications are earthquake and other natural disaster monitoring, traffic condition control and pollution monitoring.\n\\end{itemize}\n\n\\section{Passive Crowdsourcing}\nThe goal of any passive human computation system is to exploit the collective effort of a large group of people to retrieve the collective intelligence this effort generates or other implicit knowledge. In this study case the collective effort is taking geo-tagged photographs of mountains and publishing them on the Web, the intelligence deriving from this collection of photographs is the availability of the appearances of a mountain through time, the knowledge we want to extract having these visual appearances of mountains is the evolution of its environmental properties in time (which can be for example snow or grass presence at a certain altitude) or using some ground truth data even to predict these features where the use of physical sensors for those measurements is difficult or impossible (i.e. snow level prediction at high altitudes). Passive crowdsourcing is an approach that is not trivial to classify within the described taxonomy: it can be best classified in an area that includes: \\emph{crowdsourcing} for the fact of exploiting the effort of human computation, \\emph{collective intelligence} since its extraction is the primary goal of the approach and \\emph{data mining} as it refers to the procedure of extracting some results of human computation from the public Web data. Figure \\ref{fig:crowdsourcingTaxonomy} shows the taxonomy proposed by Quinn et al. 
\\cite{Quinn:2011:HCS:1978942.1979148} with the proposed collocation of passive crowdsourcing.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.85\\columnwidth]{.\/figures\/crowdsourcing_taxonomy}\n\t\\caption{Proposed taxonomy of human computation including passive crowdsourcing.}\n\t\\label{fig:crowdsourcingTaxonomy}\n\\end{figure}\n\nThe main advantage of passive crowdsourcing with respect to traditional crowdsourcing is the enormous availability of data to collect and the very low cost of its retrieval; a significant disadvantage, on the other hand, is the form of the data: it is not always perfectly suitable for the goals and almost always needs to be processed and analyzed before being used. In other words, traditional crowdsourcing asks the user to shape the input data in a certain way, paying for this; passive crowdsourcing instead retrieves the results of past work shaped as they are, and processes them afterwards. Consider a study of the public opinion trends about a certain commercial product: two different ways of collecting these opinions can be used:\n\\begin{itemize}\n\\item\n\\emph{Crowdsourcing}: a certain amount of money is paid to each person who gives their own opinion about the product being studied (an approach also called crowdvoting). This method guarantees perfectly shaped collected data: the form of the questions and answers can be decided and modeled easily, but obviously the availability of the data to be collected will be limited by the number of people willing to take that survey, and the cost of collecting a huge dataset of opinions will certainly not be low.\n\\item\n\\emph{Passive Crowdsourcing}: the publicly available web data is full of opinions of the customers of a certain product (think about a customer who posts a photograph of that product, or comments on a photograph uploaded by someone else, declaring their personal opinion about that product); the cost of retrieving these photographs and comments is almost zero and the availability of this content is enormous. The big problem, however, is the shape of this data: given a photograph, the algorithm must decide whether it is pertinent to the study (i.e. whether the object in the photograph is the product being looked for) and estimate whether the opinion expressed by the user is positive or negative.\n\\end{itemize}\n\nIn the context of this work the huge amount of available data is fundamental, so the passive crowdsourcing approach is preferred, and the purpose of this work is exactly to deal with the problem of data shaping and analysis.\n\nAnother significant advantage of passive crowdsourcing (and a very important one for this work) is the availability of implicit information that the user himself is unaware of: if a person is asked whether the peak of the Matterhorn (a mountain in the Italian Alps) was covered with snow in August 2010, he probably does not remember, but an appropriate image processing technique can extract this information from the photograph the user posted on his social network profile during his vacation.\n\n\\section{User Generated Content}\nThe amount of available user generated media content on the Web nowadays is reaching unprecedented mass: Facebook alone hosts 240 billion photographs and gets 300 million new ones every day \\cite{Doherty2010Lifelog}. 
This massive input allows collective intelligence applications to reach unprecedented results in innumerable domains, from smart city scenarios to land and environmental protection.\n\nA significant portion of this public dataset are geo-tagged photographs. The availability of geo-tags in photos results from two current trends: first, the widespread habit of using the smartphone as a photo camera; second, the increasing number of digital cameras with an integrated GPS locator and Wi-Fi module. The impact of these two factors is that more and more personal photographs are being published on social portals and more and more of them are precisely geo-tagged \\cite{npdPhotosSmartphones}. A time-stamped and geo-located photo can be regarded as the state of one or more objects at a certain time. From a collection of photographs of the same object at different moments, one can build a model of the object, study its behavior and evolution in time, and build predictive models of properties of interest.\n\nHowever, a problem in the implementation of collective intelligence applications from visual user generated content (UGC) is the identification of the objects of interest: each object may change its shape, position and appearance in the UGC, making its tracking in a set of billions of available photographs an extremely difficult task. In this work we aim at realizing a starting point for the applications that generate collective intelligence models from user-generated media content based on object extraction and model construction in a specific environmental sector: the study of mountain conditions. We harness the\ncollective effort of people taking pictures of mountains from different positions and at different times of the year to produce models describing the states of selected mountains and their changing snow conditions over time. To achieve this objective, we need to address the object identification problem, which for mountains is more tractable than the general case described above, thanks to the fact that mountains are among the most motionless and immutable objects present on the planet. This problem of mountain identification in fact will be the goal of this work.\n\n\\section{Problem Statement}\nGiven a precisely geo-tagged photograph, the goal of this work is to determine whether the photograph contains a mountain panorama, if yes, estimate the direction of view of the photo camera during the shot and identify the visible mountain peaks on the photograph.\n\nWe will describe in the detail the proposed algorithm, how it has been implemented and tested with the result of successfully matching 64.2\\% of the input geo-tagged photographs.\n\nThe algorithm of peak detection that is presented can be used in applications where the purpose is to identify and tag mountains in photographs provided by users. 
The algorithm can also be used for the creation of mountain models and of their related properties, such as, for example, the presence of snow at a given altitude.\n\nTwo representative examples of the first type of usage are:\n\\begin{itemize}\n\\item Mountain peak tagging of user-uploaded photographs on photo sharing platforms that allow anyone browsing that photograph to view peak names of personal interest.\n\\item Augmented reality on mobile devices with real-time peak annotation on the device screen while in camera mode.\n\\end{itemize}\n\nAn example of the usage for environmental model building is the construction of a model for correcting ground and satellite based estimates of the Snow Water Equivalent (SWE) on mountains peaks (which is described in the Conclusions and Future Work section).\n\n\\section{Document Structure}\nIn the next chapter several past works will be discussed, each one relevant in its own way and field to this work: from passive crowdsourcing and influenza surveillance to image processing and vision-based UAV navigation.\n\nIn the third chapter the proposed algorithm itself is described in detail and an efficient vector-based matching algorithm is explained.\n\nIn chapter four the realized implementation of the discussed algorithm is explained, including all improvement techniques developed (even those that have been rejected after the validation phase).\n\nIn the fifth chapter the results of the tests performed on the implemented algorithm are listed with the data set structure and error metric description.\n\nFinally in the last chapter the conclusions about this work are drawn with the possible future direction of this project.\n\n\\chapter{Related Work and State of the Art}\n\\label{Related Work and State of the Art}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\n\\noindent\nThis work combines many disciplines of social computing, image processing, machine intelligence and environmental modeling with the relative emblematic problems: passive crowdsourcing, object identification (in particular mountain boundaries) and pose estimation, collective intelligence extraction and knowledge modeling, snow level estimation. All of the these problems have been largely analyzed and studied recently, but only a very small part of them combines several of the listed problems.\n\nQuoting all the works in those fields would be impossible, so in this chapter several examples of works from each of these fields (often combined together) will be discussed, from social problems and those closely inherent to this work, to examples of non-social applications, emphasizing in fact the wide possibility of goals that can be reached with these approaches.\n\nThe fundamental concept of computational social science is treated by Lazer et at. \\cite{lazer2009life}, explaining the trend of social science, moving from the analysis of individuals to the study of society: nowadays almost any action performed, from checking email, making phone calls and checking our social network profile to going for a walk, driving to the office by car, booking a medical check-up or even paying at the supermarket with a credit card leaves a digital fingerprint. The enormous amount of these fingerprints generates data, that pulled together properly give an incredibly detailed picture of our lives and our societies. Understanding these trends of societal evolution and so also individuals changing is a complex task, but with an unprecedented amount of potential and latent knowledge waiting to be extracted. 
The authors discuss the problems of this approach such as managing of privacy issues (think about the NRC report on GIS data \\cite{national2007Putting} describing the possibility to extract the individual information even from well anonymized data, or the online health databases which were pulled down after a study revealed the possibility of confirming the identities \\cite{dnaBlocked}). The lack of the both approach and infrastructure standards in this emerging field is also discussed:\n\\begin{quotation}\n\n\\noindent{\\emph{``\nThe resources available in the social sciences are significantly smaller, and even the physical (and administrative) distance between social science departments and engineering or computer science departments tends to be greater than for the other sciences. The availability of easy-to-use programs and techniques would greatly magnify the presence of a computational social science. Just as mass-market CAD software revolutionized the engineering world decades ago, common computational social science analysis tools and the sharing of data will lead to significant advances. The development of these tools can, in part, piggyback on those developed in biology, physics and other fields, but also requires substantial investments in applications customized to social science needs.\n''}\n Lazer et at. \\cite{lazer2009life}\n}\n}\n\\end{quotation}\n\n\\section{Passive Crowdsourcing and Collective Intelligence}\nAn important work in this field (even if the authors do not use the term of passive crowdsourcing, but it is exactly the type of problem solving we mean this term for) is performed by Jin et al. \\cite{Jin:2010:WSM:1873951.1874196} in their study of society trends from photograph analysis. The authors propose a method for collecting information to identify current social trends, and also for the prediction of trends by analyzing the sharing patterns of uploaded and downloaded social multimedia. Each time an image or video is uploaded or viewed, it constitutes an implicit vote for (or against) the subject of the image. This vote carries along with it a rich set of associated data including time and (often) location information. By aggregating such votes across millions of Internet users, the authors reveal the wisdom that is embedded in social multimedia sites for social science applications such as politics, economics, and marketing \\cite{Jin:2010:WSM:1873951.1874196}.\n\nGiven a query, the relevant photographs with relative metadata are extracted from Flickr, and the global social trends are estimated. The motivation for introducing this approach is the low cost of crawling this information for the companies and the industries, as well as the possibility to analyze this data almost instantaneously with respect to common surveys. 
The implementation of the proposal is also discussed, with several tests in various fields that gave incredibly promising results:\n\\begin{itemize}\n\\item\n\\emph{Politics}: the popularity scores analysis of the candidates Obama and McCain during the USA president elections of 2008 gave the resulting trends which were correct within a tenth of a percent of the real election data.\n\\item\n\\emph{Economics}: a product distribution map (with \\emph{iPod} as the example) around the world over time was successfully drawn.\n\\item\n\\emph{Marketing}: the sales of past years of several products such as music players, computers and cellular phones were estimated and the results actually match the official sales trends of those products.\n\\end{itemize}\n\nAlthough being an innovative and efficient approach, it does not deal with the visual content of the images themselves (the authors also highlight this fact, declaring the intent to add this analysis in the future). An example of collective intelligence extraction using image content is the work of Cao et al. \\cite{conf\/icassp\/CaoLGJHH10} that presents a worldwide tourism recommendation system. It is based on a large-scale geotagged web photograph collection and aims to suggest with minimal input to tourists the destination they would enjoy. By taking more than one million of geotagged photographs, dividing them into clusters by geographical position and extracting the most representative photographs for each area, the system is able to propose destinations to the user having in input the set of keywords and images of the places the user likes. From a conceptual point of view this system tries to simulate a friend that knows your travel tastes and suggests destinations for your new journey, but with the difference that this virtual friend has visited millions of places all around the world. This lies exactly in the concept of collective intelligence: exploiting the effort of hundreds of thousands of people uploading their travel photographs, the authors build a knowledge on what the various world tourism destinations feel like.\n\n\\subsection{Real-Time Social Monitoring}\nAn approach related to the concept of Human Sensors treated in the previous chapter is the process of monitoring the online activity of the persons to predict in advance (or at least to identify quickly) the occurrence of some phenomena. The activity to be monitored and the phenomena to detect can be very different. The most popular online activity to monitor is for sure the web searches of the users, such as in works of Polgreen et al. \\cite{Polgreen_usinginternet}, Ginsberg et al. \\cite{citeulike:3681665} and Johnson et al. \\cite{15361003} (in Johnson et al. also the access logs to health websites are analyzed) that propose influenza surveillance exploiting web search query statistics, examining the connection between searches for influenza and actual influenza occurrence and finding strong relationships. Another example of web query monitoring is the prediction of macroeconomic statistics by Ettredge et al. \\cite{Ettredge:2005:UWS:1096000.1096010}, in particular the prediction of unemployment rate trend based on the frequency of search terms likely used by people seeking employment.\n\nAnother important source of activity are social networks, for example Twitter messages, used by Culotta \\cite{Culotta:2010:TDI:1964858.1964874} in the monitoring of influenza, similar to the references described above, and by Sakaki et al. 
\\cite{Sakaki:2010:EST:1772690.1772777} for detecting the earthquakes.\n\n\\section{Object Identification and Pose Estimation}\nThe pose estimation of the photograph will be the key problem of this work, even if the estimated variable is only the orientation and not the position (that is given in input), and is a very commonly treated problem. Several examples will be listed here with problems, each one with its own elements in common with this work.\n\nAn example of a relatively different problem of pose estimation with respect to this work is the estimation of the geographic position of a photograph proposed by Hays and Efros \\cite{Hays:2008:im2gps}: though the estimation of the position is performed with the analysis of the visual content of the image, a purely data-driven scene matching approach is applied to estimate a geographic area the photograph belongs to.\n\nRamalingam et al. \\cite{RBSB09} instead present a work that at first sight can seem the opposite of this one (instead of estimating the orientation given the position they estimate the position given the orientation: always perpendicular to the terrain) but it is very similar: the authors describe a method to accurately estimate the global position of a moving car using an omnidirectional camera and untextured 3D city models. The idea of the algorithm is the same: estimate the pose by matching the input image to a 3D model (city model in this case, elevation model in case of our work). The described algorithm extracts the skyline from an omni-directional photograph, generates the virtual fisheye skyline views and matches the photograph a to view, estimating in this way the position of the camera.\n\nOther works about pose estimation given 3D models of cities and buildings have been recently published, such as world wide pose estimation using 3D point clouds by Li et al. \\cite{Li:2012:WPE:2402940.2402943} in which the SIFT features are located and extracted in the photograph and matched with the worldwide models. The particular point of this pose estimation is that it does not use any geographical information, but it estimates the position, orientation and the focal length (and so the field of view).\n\nBaatz et al. \\cite{Baatz:2012:LCM:2125160.2125170} addresses the problem of place-of-interest recognition in urban scenarios exploiting 3D building information, giving in output the camera pose in real world coordinates ready for augmenting the cell phone image with virtual 3D information. Sattler et al. instead deals with the problem of the 2D-to-3D correspondence computation required for these cases of pose estimation of urban scenes, demonstrating that direct 2D-to-3D matching methods have a considerable potential for improving registration performance.\n\nAn innovative idea of photograph geolocalization by learning the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images \\cite{LinCrossView} was introduced by Lin et al. \\cite{LinCrossView}: unlike traditional geolocalization techniques it allows the geographical position of an isolated photograph to be identified (with no other geotagged photographs available in the same region). The authors exploit two previously unused data sets: overhead appearance and land cover survey data. 
Ground and aerial images are represented using HoG \\cite{Dalal:2005:HOG:1068507.1069007}, self-similarity \\cite{shechtman2007matching}, gist \\cite{Oliva:2001:MSS:598425.598462} and color histograms features. For each of these data sets the relationship between ground level views and the photograph data is learned and the position is estimated by two proposed algorithms that are also compared with three other pre-existing techniques:\n\\begin{itemize}\n\\item\n\\emph{im2gps}:\nproposed by Hays and Efros \\cite{Hays:2008:im2gps} already described a few lines above, that does not make use of aerial and attribute information and can only geolocate query images in locations with ground-level training imagery.\n\\item\n\\emph{Direct Match (DM)}:\nmatches the same features for ground level images to aerial images with no translation, assuming that the ground level appearance and overhead appearance are correlated.\n\\item\n\\emph{Kernelized Canonical Correlation Analysis (KCCA)}:\nis a tool to learn the basis along the direction where features in different views are maximally correlated, used as a matching score. It however presents significant disadvantages: singular value decomposition for a non-sparse kernel matrix is need to solve the eigenvalue problem, making the process unfeasible as training data increases, and secondly, KCCA assumes one-to-one correspondence between two views (in contrast with the geolocalization problem where it is common to have multiple ground-level images taken at the same location) \\cite{Hays:2008:im2gps}. \n\\end{itemize}\n\nThe methods proposed by the authors are instead:\n\\begin{itemize}\n\\item\n\\emph{Data-driven Feature Averaging (AVG)}:\nbased on the idea that well matched ground-level photographs will tend to have also similar aerial and land cover attributes, this technique translates the ground level to aerial and attribute features by averaging the features of good scene matches.\n\\item\n\\emph{Discriminative Translation (DT)}:\nan approach that extends AVG with also a set of negative training samples, based on the intuition that the scenes with very different ground level appearance will have distinct overhead appearance and ground cover attributes (assumption obviously hypothetical and not always true).\n\\end{itemize}\n\nIn the performed tests the algorithm was able to correctly geolocate 17\\% of the isolated query images, compared to 0\\% for existing methods.\n\n\\subsection{Mountain Identification}\nAll the pose estimation works described so far in this chapter deal with urban 3D models, as does the majority of pose estimation research. This is not surprising since an accurate 3D model is fundamental for this kind of task, and the massive increase of the 3D data of buildings and cities in the last few years makes these studies possible. Apart from urban models however, there is another type of 3D data that is largely available, which evolved much earlier than urban data (even if usually with lower resolution): terrain elevation data. The elevation data, presented usually as a geographical grid with the altitude for each point, can be easily seen as a 3D model, and the most interesting objects formed by these models are for sure mountains. For this reason also mountains are sometimes the identified objects in the pose estimation task, here several examples of these works will be discussed.\n\nThe most significant work in this sector is probably that presented by Baboud et al. 
\\cite{Baboud2011Alignment}, which given an input geotagged photograph, introduces the matching algorithm for correct overlap identification between the photograph boundaries and those of the virtually generated panorama (based on elevation datasets) that should be seen by the observer placed in the geographical point where the photograph has been taken from. This algorithm, which will be discussed in detail further, is the starting point of this thesis work. It must be highlighted however, that the goal of the authors is peak identification for the implementation of photograph and video augmentation; this work instead aims to identify mountain peaks to extract the appearances of the mountain for environmental model generation.\n\nAnother important work, dealing not only with the direction, but also with the position estimation of a mountain image, was written by Naval et al. \\cite{Jr97estimatingcamera}. Their algorithm does not work with the complete edges of the image but only with the skyline extracted using a neural network. The position and the orientation is then computed by nonlinear least squares.\n\nA proposal that exploits the skyline and mountain alignment is the vision-based UAV navigation described by Woo et al. \\cite{WooSLKK07}, that with pose estimation in a mountain area (a problem similar to the other described works) introduces the possibility of vision-based UAV navigation in a mountain area using an IR (Infra-Red) camera by identifying the mountain peaks. This is a navigation method that is usually performed by extracting features such as buildings and roads, not always visible and available, so mountain peak and skyline recognition brings big advantages.\n\nA challenging task with excellent results is described by Baatz et al. \\cite{Baatz:2012:LSV:2403006.2403045} with a proposal of an algorithm that given a photograph, estimates its position and orientation on large scale elevation models (in case of the article a region of 40000~km$^2$ was used, but in theory the technique can be applied to position estimation on a world scale. The algorithm exploits the shape information across the skyline and searches for similarly shaped configurations in the large scale database. The main contributions are a novel method for robust contour\nencoding as well as two different voting schemes to solve the large scale camera pose recognition from contours. The first scheme operates only in descriptor space (it checks where in the model a panoramic skyline is most likely to contain the current query picture) while the second scheme is a combined vote in descriptor and rotation space \\cite{Baatz:2012:LSV:2403006.2403045}. The original six-dimensional search is simplified by the assumptions that the photograph has been taken close to the terrain level and the photograph has usually only a small roll with respect to the horizon (both assumptions were made also during the development of this work algorithm). Instead of supposing to have the right shot position, this technique renders the terrain view from a digital elevation model on a grid defined by distances of approximately 100~m~$\\times$~100~m for a total number of 3.5 million cubemaps (this is how the authors call the renders). The key problem of the work is in fact, the large scale search method to efficiently match the query skyline to one of the cubemaps. 
Given a query image, sky\/ground segmentation is performed following an approach based on unary data costs \\cite{Martin:2004:LDN:977249.977379,Luo:2002:PAD:839290.842704} for a pixel being assigned sky or ground. The horizon skyline then is represented by a collection of vector-quantized local contourlets (contour words, similar in spirit to visual words obtained from quantized image patch descriptors) that are matched to the collected cubemaps with a voting stage that retrieves the most probable cubemaps to contain the query skyline, the top 1000 candidates are then analyzed with geometric verification using iterative closest points to determine a full 3D rotation. Evaluation was performed on a data set of photographs with manually verified GPS tags or given location, and in the best implementation 88\\% of the photographs were localized correctly.\n\n\\section{Environmental Study}\nIn this section the problem of snow level and snow water equivalent (SWE) estimation will be described, with the current state of the art and used methods as well as the benefits that an environmental model based on passive crowdsourcing photograph analysis can bring.\n\nSnow Water Equivalent (SWE) is a common snowpack measurement. It is the amount of water contained within the snowpack. It can be thought of as the depth of water that would theoretically result if you melted the entire snowpack instantaneously. To determine snow depth from SWE you need to know the density of the snow. The density of new snow ranges from about 5\\% when the air temperature is 14\\ensuremath{^\\circ} F, to about 20\\% when the temperature is 32\\ensuremath{^\\circ} F. After the snow falls its density increases due to gravitational settling, wind packing, melting and recrystallization \\cite{whatIsSWE}. It is a very important parameter for industry and agriculture, and its correct estimation is one of the main problems facing the environmental agencies.\n\nThese measurements are usually performed with physical sensors and stations that are very sparse, so the need of a map of snow and SWE distribution introduces a key problem: the interpolation of this data. Interpolation is usually done (due to the lack of the sophisticated physical models dealing with altitude and temperature and the absence of other supporting data) with relatively rough methods such as:\n\\begin{itemize}\n\\item\nlinear or almost linear interpolation of the data, lacking of precision due to the low spatial density of the measurement stations and the physical laws and phenomena that change the snow level and density radically with changes in altitude\n\\item\ncombining the interpolated data with the satellite ground images, removing the snow estimation from the areas where the satellite image indicates its absence: although being a first step in image content analysis for the refining of snow data, it indicates only a binary presence or absence of the snow and has anyway a low spatial density due to its low resolution.\n\\end{itemize}\n\nThe availability of the estimation of snow properties retrieved from mountain photograph analysis, and the past years snow measure ground truth in the areas and altitudes different from those that are measured by physical sensors, can bring significant supporting data for a better interpolation and more precise snow cover maps.\n\n\\subsection{Passive Crowdsourcing and Environmental Modeling}\nAn important step in environmental modeling exploiting passive crowdsourcing was made by Zhang et al. 
\\cite{Zhang:2012:MPW:2187836.2187938} in a work that studies the problem of estimating geotemporal distributions of ecological phenomena using geo-tagged and time-stamped photographs from Flickr. The idea is to estimate a given ecological phenomenon (the presence or absence of snow or vegetation in this case) on a given day at a given place, and generate the map of its distribution. A Bayesian probabilistic model was used, assuming that each photograph taken at the given time and place is an implicit vote for the presence or the absence of snow in that area. Exploiting machine learning, the probability that a photograph contains snow is estimated based both on the metadata of the photograph (tags) and on the visual features (a simplified version of the GIST classifier augmented with color features was used). The evaluation of the results (which was possible thanks to the fact that for the two phenomena studied, snowfall and vegetation cover, large-scale ground truth is available in the form of satellite observations \\cite{Zhang:2012:MPW:2187836.2187938}) brought very promising results: a daily snow classification for a 2-year period for four major metropolitan areas (NYC, Boston, Chicago and Philadelphia) generates results with precision, accuracy, recall and f-measure all equal to approximately $0.93$.\n\nEven if the algorithm proposed by the authors uses a simple probabilistic model (presence or absence of snow and vegetation cover), it is an important introduction to the field of environmental and ecological phenomena analysis by mining photo-sharing sites for geo-temporal information about these phenomena. The goal of this work is to propose an image processing technique that brings these environmental studies to a new level.\n\\chapter{Proposed Approach}\n\\label{Proposed Approach}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\n\\noindent\nIn this chapter we describe the proposed approach for the procedure of mountain identification, from the analysis of the properties of the photo camera used for the shot to the identification of the position in the photograph of each mountain peak.\n\nThe matching algorithm is partially based on a mountain contour matching technique proposed by Baboud et al. \\cite{Baboud2011Alignment}, so first this matching algorithm will be discussed, with particular emphasis on its advantages and disadvantages, and then our proposal will be described in detail.\n\n\\section{Analysis of Baboud et al.'s algorithm}\nThe basic idea and the purpose of the algorithm are the same as those of this work: given a geotagged photograph as input, identify the mountain peaks present in it, by exploiting the elevation data and a dataset of peaks with their positions.\nThe algorithm in question treats the input image as spherical, and searches for the rotation on $SO(3)$ that gives the right alignment between the input and another spherical image representing the mountain silhouettes seen from the shot point of the input photograph in all directions. This image is generated from a 3D terrain model based on a digital elevation map. 
A vector cross-correlation technique is proposed to deal with the matching problem: the edges of the images are modeled as complex numbers, the candidates for the estimated direction of view are extracted, and a robust matching algorithm is used to identify the correct match.\n\nAs will become clear in the next sections, several changes with respect to the original algorithm have been introduced; the most relevant among them are:\n\\begin{itemize}\n\\item\nInstead of dealing with the elevation models directly, we rely on an external service, which generates the panorama view given the geographic coordinates and a large set of parameters. The reason for this choice is the possibility to concentrate the work on the matching technique, as well as the reliability and precision of a tool improved over the years.\n\\item\nInstead of supposing the field of view to be known, its estimation given a photograph and the basic EXIF information will be discussed together with the photograph scaling problem, which allows successful matching by a non-scale-invariant algorithm.\n\\item\nInstead of searching for the orientation of the camera during the shot in three dimensions, we suppose that the observer's line of sight is always parallel to the horizon (no photograph in the test dataset had a significant tilt angle with respect to the horizon). This assumption reduces the computational effort of the algorithm, by moving from considering the images as spherical to considering them as cylindrical.\n\\item\nOnce the camera direction is successfully estimated, the mountain peak alignment between the photograph and the panorama may still present non-negligible errors due to inaccuracies in the estimated position of the observer. We will deal with this problem, which is not treated in the original algorithm.\n\\end{itemize}\n\n\\section{Overview of proposed algorithm}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.65\\columnwidth]{.\/figures\/algorithm_schema}\n\t\\caption{Schema of the mountain peak tagging algorithm.}\n\t\\label{fig:algorithmSchema}\n\\end{figure}\n\nThe process of analyzing a single photograph containing a mountain panorama to identify the individual mountain peaks present in it consists of several steps, shown in Figure \\ref{fig:algorithmSchema}.\n\nFirst the geotagged photograph is processed in order to evaluate its Field Of View and to scale it so that it precisely matches the rendered panorama image; next, a 360-degree rendition of the panorama visible at the location is generated from the digital elevation models.\nAfter this the direction of the camera during the shot is estimated by extracting the edge maps of both the photograph and the rendered panorama images and matching them. Finally, the individual mountain peaks are tagged based on the camera angle estimation.\n\n\\section{Detailed description of the algorithm}\n\n\\subsection{Render Generation}\nThe key idea of the camera view direction estimation lies in generating a virtual panorama view of the mountains that should be seen by the observer from the point where the photograph was taken, and then matching the photograph to this panorama.\nThe generation of this virtual view is possible due to the availability of terrain elevation datasets that cover a certain geographical area with a grid identifying the terrain elevation at each grid point. 
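Conceptually, such a dataset can be treated as a regular grid of altitude samples to be interpolated at arbitrary coordinates. The sketch below only illustrates the nature of this data (the final system, as explained below, relies on an external rendering service instead of querying elevation grids directly); the grid origin, spacing and array layout are hypothetical:\n\\begin{verbatim}\nimport numpy as np\n\ndef elevation_at(dem, lat, lon, lat0, lon0, step_deg):\n    # dem: 2D array of altitude samples; (lat0, lon0) is the grid origin\n    # and step_deg the grid spacing in degrees (hypothetical layout)\n    i = (lat - lat0) / step_deg\n    j = (lon - lon0) / step_deg\n    i0, j0 = int(np.floor(i)), int(np.floor(j))\n    di, dj = i - i0, j - j0\n    # bilinear interpolation between the four surrounding grid samples\n    return ((1 - di) * (1 - dj) * dem[i0, j0]\n            + (1 - di) * dj * dem[i0, j0 + 1]\n            + di * (1 - dj) * dem[i0 + 1, j0]\n            + di * dj * dem[i0 + 1, j0 + 1])\n\\end{verbatim}\n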
Depending on the source of the elevation data, the precision and spatial density of this grid can vary significantly, but it is usually very precise, reaching a spatial resolution of even 3 meters in public datasets (such as USGS, http:\/\/ned.usgs.gov).\n\nThough the accuracy of the elevation models is crucial for our purposes, since an exact render is the basis for a correct match, we are not necessarily looking for extremely high-resolution datasets, as the mountains we are going to render are usually located at a significant distance from the observer (from a few hundred meters to tens of kilometers). For this project the use of an external service that generates these rendered panoramas was preferred to creating them from scratch from the elevation data. The advantage of this choice is the possibility of avoiding dealing directly with digital elevation maps and with the optical and geometric calculations needed to provide the panorama exactly as it would be seen by a human eye or a photo camera lens. Such tools thus free the system from laborious calculations, allowing us simply to choose the parameters needed for the generation of the panorama, such as the observer's position, altitude and angle of gaze.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/mountainview2}\n\t\\caption{Example of position estimation errors, with the viewing positions and the corresponding generated panoramas. 1 - original viewer's position and altitude; 2 - estimated position with wrong altitude; 3 - elevated estimated position used for the final panorama generation.}\n\t\\label{fig:mountainView}\n\\end{figure}\n\nThe choice of the observer's altitude deserves to be mentioned separately, as it is a critical point: clearly the ideal value that can be set (with an ideal elevation model) is the real altitude of the observer during the shot (which, after studying the available datasets, we consider more than legitimate to suppose to be on the terrain surface, so we estimate the altitude of the observer as the terrain altitude at that position). This information is readily available, but it brings some problems due to the uncertainty of the geotag of the photograph: let us imagine an observer standing on a peak or a ridge of a mountain; he has a broad view in front of him, but it is enough for him to take a few steps back and the panorama he was viewing becomes completely hidden by the face of the mountain in front of him. The same issue occurs with photographs taken from a peak or a ridge of a mountain (very frequently the case in public domain collections of mountain photographs): an error of a few meters in the estimated camera position can lead to the occlusion of a significant portion of the panorama. An intuitive technique to work around this problem is to add a constant positive offset to the estimated altitude, \"raising\" the observer above the terrain obstacles that appear due to errors in the position estimation. 
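In terms of the parameters sent to the rendering service, this simply means requesting the panorama from a slightly raised virtual observer; a minimal sketch (the helper name and the offset value are purely illustrative assumptions, not the exact values used by the implementation):\n\\begin{verbatim}\nALTITUDE_OFFSET_M = 30.0   # constant positive offset in meters (illustrative)\n\ndef observer_altitude(terrain_altitude_m):\n    # terrain_altitude_m: terrain altitude at the geotagged position,\n    # e.g. obtained with the elevation lookup sketched above; the offset\n    # raises the virtual observer above obstacles that appear because of\n    # small errors in the estimated position\n    return terrain_altitude_m + ALTITUDE_OFFSET_M\n\\end{verbatim}\n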
Figure \\ref{fig:mountainView} shows in a simplified way the problem and its resolution, with the corresponding generated panoramas.\n\n\\subsection{Field of View Estimation and Scaling}\nUnless a scale-invariant matching technique is going to be used (which is not the case for this algorithm), once the input image and an image representing the expected panorama view are ready, the first problem is that, in order to be correctly matched, the objects contained in both images (in our case the objects to be matched are mountains) must have the same pixel size. Since we assume that both represent the view of an observer from the same geographical point, the same pixel dimensions of an object imply also the same angular dimensions (the angle a certain object occupies in the observer's view). So we define our first photograph analysis problem as searching for the right scaling of the input photograph to have the same angular dimensions for the same mountains present in the photograph and in the rendered panorama. We can write down the problem as finding a scale factor $k$ such that $$k\\frac{s_{p}}{a_{p}} = \\frac{s_{r}}{a_{r}}$$ where $s_{p}$ and $a_{p}$ are respectively the pixel size and the angular size of the input photograph and $s_{r}$ and $a_{r}$ are similarly the pixel size and the angular size of the rendered panorama.\n\nWe expect the panorama to be exact, so we consider the mountains to have the same width\/height ratio both on the photograph and on the panorama; the relationship described above can therefore be equivalently applied both to the horizontal and to the vertical dimension. We will work with the horizontal dimension because the angular width of the panorama does not need any calculation (it is always equal to the round angle, $2\\pi$). Defining the Field Of View (FOV) of an image as its angular width we can rewrite the relationship as $$k\\frac{w_{p}}{FOV_{p}} = \\frac{w_{r}}{FOV_{r}} = \\frac{w_{r}}{2\\pi}$$ where $w$ and $FOV$ stand for the pixel width and the FOV of, respectively, the photograph and the rendered panorama.\n\n\\begin{figure}[h!]\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/fov}\n\t\\caption{A simplified schema of digital photo camera functioning.}\n\t\\label{fig:fovSchema}\n\\end{figure}\n\nBefore explaining how to estimate the FOV of the input photograph we must introduce some brief concepts regarding digital photo camera structure and optics: in a very simplified way, a photo camera can be seen as a lens that focuses the projection of the viewed scene onto a sensor that captures this projection. The size of the sensor is a physical constant and property of a photo camera; the so-called focal length (which defines the distance between the sensor and the lens), instead, usually varies with the optical zoom of the camera. The FOV of the captured image in this case is obviously the angular size of the part of the scene projected onto the sensor. Figure \\ref{fig:fovSchema} shows this simple schema. 
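Before deriving how the FOV of the photograph is obtained from the camera parameters, note that once it is known the scaling relation above reduces to a few lines of computation; a minimal sketch (with hypothetical names, and the FOV assumed to be already expressed in radians) is:\n\\begin{verbatim}\nimport math\n\ndef scale_factor(fov_photo_rad, width_photo_px, width_render_px):\n    # from k * w_p / FOV_p = w_r / (2*pi):\n    #   k = FOV_p * w_r / (2 * pi * w_p)\n    return fov_photo_rad * width_render_px / (2 * math.pi * width_photo_px)\n\\end{verbatim}\nThe photograph is then resized by the factor $k$ before edge extraction and matching.\n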
We can easily write the relationship between the FOV of the photograph and the properties of the photo camera at the instant of shooting ($s$ for sensor width and $l$ for focal length): $$FOV = 2\\arctan\\frac{s}{2l}$$\n\nCombining this definition with the previous relationship we can express the scaling factor $k$ that must be applied to the photograph in order to have the same object dimensions as: $$k = FOV\\frac{w_{r}}{2\\pi w_{p}} = \\frac{w_{r}}{\\pi w_{p}}\\arctan\\frac{s}{2l}$$\n\nScaling estimation is a purely mathematical procedure, and the quality of the results depends directly on the precision of the geotag and the accuracy of the rendered model (with exact GPS location and a render based on correct elevation data the scaling produces perfectly matchable images).\n\n\\subsection{Edge Detection}\n\nThe key problem of the algorithm is to perform matching between the photograph containing the mountains and the automatically generated drawing representing the same mountain boundaries: in Figure \\ref{fig:mountainOverlapping} we can see an example of a fragment of the photograph and a fragment of the rendered panorama. Both represent the same mountain seen from the same point of view, and in fact when we try to match them manually they overlap perfectly. In spite of this overlapping, the choice of the technique for their matching is not trivial. Even if the problem of matching (choosing the position of the photograph with respect to the panorama, maximizing the overlap between the mountains) will be discussed in the next sections, here we will briefly mention the possible techniques to evaluate the correctness of the overlap and the reasons for the necessity of the edge extraction procedure.\n\n\\begin{figure}[h!]\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/mountain_overlapping}\n\t\\caption{An example of a matching problem with the photograph fragment (top right), the panorama fragment (top left) and their overlapping (bottom).}\n\t\\label{fig:mountainOverlapping}\n\\end{figure}\n\nWe can see the matching problem as a classic image content retrieval problem with the photograph as the input image and the collection of the fragments of the rendered panorama (each one corresponding to a possible alignment position) as the set of available images to search in. The most similar image in the set will be considered as the best matching position and identified as the direction of view of the camera during the shot.\n\nSo what is the similarity measure that can be used in order to perform this image content retrieval problem? Clearly global descriptors (color and texture descriptors) \\cite{citeulike:10106398} are not the best choice: it is enough to look at the Figure \\ref{fig:mountainOverlapping} to understand that the panorama is always generated in gray-scale tones, with textures defined only by the terrain elevation, while the mountains in the photograph can be colored in different ways, and the textures are defined by a lot of details such as snow on the mountains and other foreground objects such as grass, stones and reflections on the water. Local image descriptors instead (for example, local feature descriptors such as SIFT \\cite{Lowe:2004:DIF:993451.996342} and SURF \\cite{Bay:2008:SRF:1370312.1370556}) seem slightly suitable for the needs of mountain matching (as discussed also by Valgren et al. 
in \\cite{Valgren:2010:SSS:1715935.1716080}, where the use of SIFT and SURF descriptors is highlighted for the matching of outdoor photographs in different season conditions), but even if they are good for matching the photographs containing the same objects in different color conditions, the matching between an object photograph and its schematic representation (in our case a rendered drawing) tends to fail. Even if very accurate, the model of a photograph will not generate local features with the same precision that another photograph of the same object would do: no local descriptors tested have been able to find the match between the two example images in Figure \\ref{fig:mountainOverlapping}.\n\n\\begin{figure}[h!]\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/mountain_edges_overlapping}\n\t\\caption{An example of a matching problem with the photograph edge fragment (top right), the panorama edge fragment (top left) and their overlapping (bottom).}\n\t\\label{fig:mountainEdgesOverlapping}\n\\end{figure}\n\n\nThe perfect overlap between the images in the example figure however brings up to the idea, that instead of traditional image descriptors we should match the boundaries of the images (this assumption has also an intuitive motivation: the contours are the most significant and time invariant properties of the mountains), so the next step in photograph and panorama processing is edge extraction from both the images. The result of edge map extraction from the images in Figure \\ref{fig:mountainOverlapping} is represented in Figure \\ref{fig:mountainEdgesOverlapping}. The matching procedure on the edge maps can now be seen as a cross-correlation problem \\cite{wiki:CrossCorrelation}.\n\nIn order to make the cross-correlation matching more sophisticated with a couple of techniques that will be presented later, the edge extraction component must accept as input an image (the photograph or the rendered panorama) and produce as output a 2D matrix of edge points, each one corresponding to the point on the input image with a strength (value between 0 and 1 representing the probability of the point to be an edge) and a direction (value between 0 and 2$\\pi$ representing the direction of the edge in that point).\n\n\\subsection{Edge Filtering}\nOnce we have generated the edge maps of the photograph we must deal with the problem of the noise edge points. A noise edge point can be defined as an extracted edge point representing a feature that is not present on the rendered panorama, in this case the edge point will not be useful for the match and will only be able to harm it. In other words a noise edge point is an edge point that does not belong to a mountain boundary (the only features represented on our rendered panoramas). This doesn't mean that any edge point belonging to a mountain is not a noise point, i.e. the edge points that define the snow border of a mountain are noise points because they do not belong to the mountain boundary but define a border that due to its nature can easily change from photograph to photograph and furthermore will not be present in the rendered view.\n\nHowever, edge points can be also present on the mountain surface; most of them usually belong to foreign objects. Mountains tend to be very edge-poor and detail-poor objects and often are placed in the background of a photograph with other edge-rich objects such as persons, animals, trees, or houses in the foreground. 
Let us think about a photograph of a person next to a tree with a mountain chain in the background: the mountains themselves will generate few edge points since they tend to be very homogeneously colored; foreground objects instead will generate a huge amount of edge points (which will be noise) because they are full of small details: each leaf of the tree and each detail of the person's clothes will produce noise points.\n\nTaking into account the example of the edge extraction made in the previous chapter, we manually tag the extracted points as noise and non-noise points following the edges present in the panorama: the result is represented in Figure \\ref{fig:filteringGoodBadBefore}, and with a simple image analysis script we find that the noise edge points are more than 90\\% of all extracted points.\n This value rises to 98\\% when we deal with photographs with even more objects in the foreground. Even if the matching algorithm we are going to propose includes a penalization of noise points in order to cope with this problem, the amount of noise edges reaches such high levels that it causes almost any algorithm to fail simply from a statistical point of view (the intense density of noise points in some areas tends to ``attract'' the matching position into those areas), so an edge filtering technique is needed.\n \n \\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.8\\columnwidth]{.\/figures\/filtering_good_bad_before}\n\t\\caption{Noise (red) and non-noise (green) edge points on our example photograph.}\n\t\\label{fig:filteringGoodBadBefore}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.8\\columnwidth]{.\/figures\/filtering_good_bad_after}\n\t\\caption{Noise (red) and non-noise (green) edge points on our example photograph after the filtering procedure has been applied.}\n\t\\label{fig:filteringGoodBadAfter}\n\\end{figure}\n\nOne of the possible approaches is the detection of the skyline (the boundary between the sky and the objects present in the photograph) as implemented by Naval Jr et al. \\cite{Jr97estimatingcamera} for an analogous problem. Skyline detection is a non-trivial task that has been widely studied, with several approaches proposed for particular sectors and scenarios, such as the skyline detection of a perspective infrared image by synthesizing a catadioptric image proposed by Bazin et al. \\cite{DBLP:conf\/icra\/BazinKDV09} or the extraction of the skyline from a moving omnidirectional camera as developed by Ramalingam et al. \\cite{RBSB09}. The skyline, however, is not the only boundary of our interest: the boundaries of mountains contained ``inside'' other boundaries are also important and significant for the matching, so we decided to opt for a softer filtering approach.\n\nWe have opted for a simple but effective technique based on the intuitive assumption that the mountains are usually placed above the other objects on the photographs (with some exceptions such as clouds or other atmospheric phenomena): prioritizing (increasing the strength of) the higher points on the photograph with respect to the lower points. An example of the result of the edge filtering procedure is displayed in Figure \\ref{fig:filteringGoodBadAfter}. Most of the noise edges are filtered out while almost all good edges remain intact (including the edges that would have been cut by a skyline detection). 
The rate of the noise edge points dropped from 90\\% to 40\\% in this case.\n\n\\subsection{Edge matching (Vector Cross Correlation)}\nFirstly we define $C_{r}$ as the cylindrical image generated from the rendered panorama with the same height, and the base perimeter equal to the panorama's width, and $C_{p}$ as the input photograph projected on an empty cylindrical image of the same dimensions of $C_{r}$. Imagining two cylinders to be concentric the matching problem is defined as the two dimensional space search (the vertical position and the rotation angle of $C_{r}$ with respect to $C_{p}$), which leads to the best overlap between the mountains present on the cylindrical images. As the cylindrical images are only a projection of a rendered image on a cylinder, the problem can be equivalently seen as a search for the best overlap of two rectangular images defined by two integer numbers representing the coordinate offset of the photograph with respect to the panorama (obviously carrying out the horizontal overflow part of the photograph to the opposite side of the render, simulating a cylinder property) as shown in Figure \\ref{fig:cylinderMatching}.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/cylinder_matching}\n\t\\caption{Representation of cylindrical image matching and the equivalent rectangular image matching.}\n\t\\label{fig:cylinderMatching}\n\\end{figure}\n\nAs already introduced in previous sections, the matching will be performed with the edge maps of the images so we define the result of the edge detection algorithm as a 2D real-valued vector where each value is defined by $\\rho$ (the strength\/absolute value of the edge in the corresponding point of the input image) and $\\theta$ (the direction\/argument of that edge). Let $p(\\omega)$ and $r(\\omega)$ be the 2D real-valued vectors generated by edge detection of the photograph and the panorama render respectively. Then as proposed by Baboud et al. \\cite{Baboud2011Alignment} the best matching is defined as the position maximizing the likelihood between two images. This likelihood is defined as $$L(p,r) = \\int_{S^{2}} M(p(\\omega),r(\\omega))d\\omega$$\nwhere $M$ is the angular similarity operator:\n$$M(v_1,v_2) = \\rho ^{2}_{v_1} \\rho ^{2}_{v_2} \\cos 2(\\theta_{v_1} - \\theta_{v_2})$$\n\nThis technique of considering the edge maps as vector matrices and applying the cross-correlating resolution is called Vector Cross Correlation (VCC). The cosine factor is introduced in order to handle edge noise by penalizing differently oriented edges: the score contribution is maximum when the orientation is equal, null when they form a $\\frac{\\pi}{4}$ angle, and minimum negative when the edges are perpendicular (Figure \\ref{fig:penalizingEdges}). 
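As a pointwise illustration (with assumed variable names), the angular similarity operator can be written directly in MATLAB as:\n\n\\begin{lstlisting}[frame=single]\n% angular similarity between two edge points with strengths rho1, rho2\n% and directions theta1, theta2: parallel edges contribute positively,\n% while perpendicular edges are penalized with a negative score\nM = rho1^2 * rho2^2 * cos(2 * (theta1 - theta2));\n\\end{lstlisting}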
This penalty avoids that random noise edges contribute in a positive way to a wrong match position (a step in this direction was already made during the edge filtering).\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/penalizing_edges}\n\t\\caption{Example of an overlapping position with positive score with almost parallel edge intersection (left circle) and penalizing almost perpendicular edge intersection (right circle).}\n\t\\label{fig:penalizingEdges}\n\\end{figure}\n\nOne of the main advantages of this likelihood score is the possibility of applying the Fourier transform in order to reduce drastically the computation effort of best matching position: let us define $\\hat{p}$ and $\\hat{r}$ as 2D Fourier transforms of respectively $p(\\omega)$ and $r(\\omega)$. The VCC computation equation becomes\n\\begin{equation} \\label{eq:FFTVCC}\nL(p,r) = Re\\{\\hat{p}^{2} \\bar{\\hat{r}}^{2}\\}\n\\end{equation}\nThis cosine similarity VCC can be seen as the whole algorithm to perform the matching or only as its first step: if we build the 2D real-valued distribution of the likelihood values estimated in each possible position we can consider not only the best match but extract top-N peaks of the score distribution as the result candidates and then evaluate them with a more sophisticated and heavier technique in order to identify the correct match. One of these refining algorithms is the robust silhouette map matching metric technique designed by Baboud et al. \\cite{Baboud2011Alignment} which considers the edge maps as sets of singular connected edges. The likelihood of an overlap is the sum of the similarity between each edge $e_p$ of the photograph and the edge $e_r$ of the rendered panorama, where $e_r$ is enriched with a certain $\\epsilon_e$ neighborhood, $l$ represents the distance for which $e_p$ stays inside the $\\epsilon_e$ neighborhood of the $e_r$ and the similarity is defined as\n\\begin{equation} \\label{eq:robustMatching}\nM(e_p, e_r) = \n\\left\\{\\begin{matrix}\n0\n\\\\ \nl^a\n\\\\ \n-c\n\\end{matrix}\\right.\n\\begin{array}{ll}\n\\text{if }l = 0\n\\\\ \n\\text{if }(l > l_{fit}) \\land (e_p \\text{ enters and exits on the same side})\n\\\\ \n\\text{if }(l < l_{fit}) \\land (e_p \\text{ enters and exits on different sides})\n\\end{array}\n\\end{equation}\nwhere $a$, $c$ and $l_{fit}$ are predefined constants. The nonlinearity implied by the exponent $a$ makes longer edge overlaps receive more weight than the set of small overlaps, the constant $c$ instead introduces also in this metric the penalizing factor for the intersections of the edges considered wrong as in the VCC metric.\n\n\n\\subsection{Mountain Identification and Tagging}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/result_wrong_peaks}\n\t\\caption{Example of a fragment of a matching result between the photograph (blue) and the panorama (red).}\n\t\\label{fig:resultWrongPeaks}\n\\end{figure}\n\nOnce the edges are matched and the direction of view of the camera during the shot is estimated, supposing to have the list of mountain peaks with their names and coordinates on the rendered panorama we can estimate the position of these peaks also on the photograph, even if it is not as trivial as can be initially thought. 
Intuition suggests that once the photograph is matched with the panorama the coordinates of the mountain peaks are the same on both (obviously with a fixed offset equal to the matching position in the case of the photograph), but this intuition is wrong. Consider the fragment of an edge matching result (the best position estimated by VCC) shown in Figure \\ref{fig:resultWrongPeaks}: the overlap between the photograph and panorama edges is almost perfect on the left part but gets progressively worse moving to the right, which means that the two edge maps are slightly different (probably due to errors in position and altitude estimation); objectively, the matching proposed by the VCC seems to be the best that can be obtained. This situation occurs very often, and in general we can say that the resulting edge matching, even when successful, presents small errors for individual mountain peaks due to the imperfections of the generated panorama model. We propose a method for precise mountain peak identification: for each mountain peak present in the panorama we extract an edge map pattern both from the photograph and the panorama, centered at the coordinate of the peak on the panorama and weighted with a kernel function $f(d)$, for a fixed pattern radius $r$ and $d$ representing the distance of a point from the peak coordinate, with the following properties:\n$$\n\\begin{array}{ll}\nf(0) = 1 & \\\\ \nf(x) = 0 & \\forall x \\geq r \\\\ \nf(x_1) \\geq f(x_2) & \\forall x_1, x_2 \\, \\mid \\, x_1 \\leq x_2\n\\end{array}\n$$\nOnce the two patterns are extracted, the VCC procedure is applied once again in order to match them. Since only the edges of the mountain peak being processed and a few surrounding edges are left, the matching position is no longer influenced by the other peaks and is refined to the exact position. An example of this procedure, performed on the matching result introduced in Figure \\ref{fig:resultWrongPeaks} in order to identify one of the rightmost peaks, is displayed in Figure \\ref{fig:peakIdentification}.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/peak_identification}\n\t\\caption{Peak identification procedure on the previous matching example.}\n\t\\label{fig:peakIdentification}\n\\end{figure}\n\n\n\\chapter{Implementation Details}\n\\label{Implementation Details}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\n\\noindent\nIn this chapter the implementation of the matching algorithm proposed in the previous chapter is presented, from the general description of the system architecture and programming languages involved to the value of each parameter and constant introduced in the algorithm presentation.\n\n\\section{Overview of the implementation architecture}\nThe whole system can be split into two macro areas:\n\\begin{itemize}\n\\item\n\\emph{Photograph analysis, panorama and mountain peak list generation}: implemented in PHP with a web interface, due to the simplicity of interfacing with the external panorama generation service and the versatility of the interface. 
Given a geo-tagged photograph (or a photograph with explicitly specified the geographic point of the shot) the virtual panorama view is generated centered at the same origin point and the properties both of the photograph and the photo camera are retrieved in order to let the next step calculate the field of view and scale factor.\n\\item\n\\emph{Matching and mountain identification}: implemented in MATLAB for the high suitability with image processing techniques. Given a photograph with all necessary information, the panorama and the mountain list with the relative coordinates on the panorama, edge extraction and filtering is performed, the camera direction is estimated, and finally individual mountains are identified and tagged.\n\\end{itemize}\n\nThe choice of the operating parameters used in the final implementation was defined in the testing and validation phase, using a data set and a developed evaluation metric. Although the parameter values will be specified as soon as they are introduced, the detailed description and the reasons of discarding the other values will be presented in the next chapter.\n\n\\section{Detailed description of the implementation}\n\n\\subsection{Render Generation}\nAs already mentioned in the implementation an external panorama service has been used: the mountain view generator of Dr. Ulrich Deuschle \\footnote{\\url{http:\/\/www.udeuschle.de\/panoramen.html}} (Figure \\ref{fig:udeuschleScreenshot}). The service accepts as input several parameters, most important of which are the geographic position of the observer, his altitude and altitude offset, the view direction with the field of view, zoom factor, elevation exaggeration and the range of sight. Latitude and longitude are equal obviously to the geo-tag of the input photograph, and the horizontal extension (field of view) to the round angle, $2\\pi$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/udeuschle_screenshot}\n\t\\caption{A screenshot of the mountain panorama generation web service interface.}\n\t\\label{fig:udeuschleScreenshot}\n\\end{figure}\n\nThe choice of the altitude (in fact of the altitude offset) as already anticipated in the previous chapter is a problematic point: a big offset can lead to vertical distortion of the panorama with respect to reality, a small offset on the other hand can lead more easily to partial occlusion of the panorama. The choice was made for $auto + 10$~m altitude setting, chosen by trial and error. Regarding the altitude choice an important option of the service named ``Look for summit point automatically'' must be highlighted: by turning on this option the panorama is generated from the highest point of the terrain within 200~m. The advantage\/disadvantage of using this option is the same as increasing the altitude offset: avoiding the occlusion of the panorama against distorting the generated view. Even if this option is very interesting we set it to off as the distortion of the panorama due to the observer's position changing and not only his altitude was too detrimental for the algorithm.\n\nFor the most realistic panorama the zoom factor and elevation exaggeration factors are both set to $1$. 
All the other parameters are left to their default values.\n\nThis service covers the Alps, Pyrenees and Himalayas mountain range systems with two elevation datasets:\n\\begin{itemize}\n\\item\n\\emph{Alps} by Jonathan de Ferranti, with a spatial resolution of $1$~arcsecond\n\\item\n\\emph{SRTM} by CGIAR, with a spatial resolution of $3$~arcseconds\n\\end{itemize}\n\nThe output is implemented with dynamic loading of the panorama, so it is generated as a set of images to be aligned horizontally to form the complete output image, plus one image containing the mountain peak names to be overlapped with the result image. Since no service API is available, after studying the JavaScript and AJAX scripts of the web interface we implemented a PHP function that simulates the functioning of the interface and collects all the parts of the result image.\n\nThe generation of the full panorama at a resolution of $20$~pixel\/degree, together with the peak names, takes on average one minute.\n\n\\subsection{Field of View Estimation and Scaling}\nTo estimate the field of view of the input photograph, as described in the previous chapter, the only information that must be known is the focal length of the photograph and the sensor width of the photo camera. The focal length is a parameter specified directly in the EXIF format of the image (tag = FocalLength, 37386 (920A.H)) \\cite{Technical2002Exchangeable}. The sensor width instead is not specified directly in the EXIF since it is a property of the photo camera, so the manufacturer and the model of the camera are extracted from the EXIF and then the sensor size is retrieved (manufacturer tag = Make, 271 (10F.H); model tag = Model, 272 (110.H)) \\cite{Technical2002Exchangeable}. \n\nClearly, a database of camera sensor sizes is needed. This information is usually scattered over manufacturer web sites and technical references of the cameras, so it is not easy to collect in one place. We used a database kindly provided by the ``Digital Camera Database'' \\footnote{\\url{http:\/\/www.digicamdb.com}} web site, containing information about the sensor size of more than 3000 digital cameras. The main problem in using it is that the manufacturer and model names of the same camera are not always identical between the EXIF data and the sensor database; a few of these mismatching examples are provided in Table \\ref{tab:exifMismatching}.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabularx}{\\textwidth}{|X|X|X|X|}\n \\hline\n \\multicolumn{2}{|c|}{\\tabhead{EXIF}} &\n \\multicolumn{2}{c|}{\\tabhead{Database}} \\\\\n \\hline\n \\tabhead{Manufacturer} &\n \\tabhead{Model} &\n \\tabhead{Manufacturer} &\n \\tabhead{Model} \\\\\n \\hline\n Canon &\n Canon PowerShot SX100 IS &\n Canon &\n PowerShot SX110 IS \\\\\n \\hline\n SONY &\n DSC-W530 &\n Sony &\n Cybershot DSC W530 \\\\\n \\hline\n NIKON &\n E5600 &\n Nikon &\n Coolpix 5600 \\\\\n \\hline\n OLYMPUS IMAGING CORP. &\n SP560UZ &\n Olympus &\n SP 560 UZ \\\\\n \\hline\n \\end{tabularx}\n \\caption{Examples of differences in the names of manufacturers and models between the EXIF specifications and the digital camera database.}\n \\label{tab:exifMismatching}\n\\end{table}\n\nTo find the correct photo camera in the database starting from the EXIF names, a text similarity score between the names is used and the most similar name is chosen. 
The text similarity is calculated by the \\emph{similar\\_text} PHP function proposed by Oliver \\cite{DBLP:books\/daglib\/0077674}, after several steps of preprocessing:\n\n\\begin{enumerate}\n\\item\nBoth the manufacturer and model names of both the EXIF and the database items are transformed to lower case.\n\\item\nIf the manufacturer name contains ``nikon'' the name is set to ``nikon''.\n\\item\nIf the manufacturer name contains ``olympus'' the name is set to ``olympus''.\n\\item\nIf the model name contains the manufacturer name, it is cut off from the model name.\n\\item\nThe text similarity score is computed between the concatenation of the manufacturer and the model of both the EXIF and the database items.\n\\end{enumerate}\n\nOnce the focal length and the sensor size are retrieved, they are annotated within the image; the matching algorithm will later use them to calculate the field of view and the scale factor.\n\n\\subsection{Edge Detection}\nFor the edge extraction the \\emph{compass} \\cite{Ruzon:2001:EJC:505471.505477} edge detector has been used. It returns exactly the output the matching algorithm needs (for each point the edge strength and direction) and has been chosen due to its ability to exploit the whole color information contained in the image, and not only the grayscale components as classical edge detectors do.\nThe \\emph{compass} detector also deals well with a significant problem of edge detectors: when the image presents a subjective boundary between two regions with pixels close in color (due to overlapping objects), most edge detectors compute a weighted average of the pixels on each side, and since the two values representing the average color of the two regions are close, no edges are found \\cite{Ruzon:2001:EJC:505471.505477}. This situation is very likely in the context of mountain photographs, for instance at the boundary between a snowy mountain and the blue sky.\n\nThe edge detection procedure is applied both to the input photograph and to the generated panorama, with the following parameters chosen by trial and error: standard deviation of the Gaussian used to weight pixels $\\sigma = 1$, threshold of the minimum edge strength to be considered $\\tau = 0.3$.\n\n\\subsection{Edge Filtering}\nThe edge filtering approach used in the implementation treats the columns of the image separately: each column is split into segments separated by the zero-strength edge points, and each segment is then split into sub-segments of length $n$ points. The points of the $i$-th sub-segment (starting from the top) are then multiplied by a factor of $b^{i-1}$.\n\nThe implementation uses $n = 2$ and $b = 0.7$. This allows a good filtering of noise objects at the bottom of the photograph, while still preserving mountain edges placed below the clouds. 
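The following MATLAB sketch illustrates one possible reading of this column-wise filtering scheme (the function and variable names are ours, and we assume that the sub-segment counter restarts at every zero-strength gap):\n\n\\begin{lstlisting}[frame=single]\nfunction S = FilterEdges(S, n, b)\n% S - edge strength map; n - sub-segment length; b - filtering base\nfor col = 1:size(S, 2)\n    v = S(:, col);\n    nz = find(v > 0);            % edge points in this column\n    if isempty(nz), continue; end\n    % runs of consecutive edge points, separated by zero-strength points\n    breaks = [0; find(diff(nz) > 1); numel(nz)];\n    for r = 1:numel(breaks) - 1\n        run = nz(breaks(r) + 1 : breaks(r + 1));\n        % damp the i-th sub-segment of n points by b^(i-1)\n        for i = 1:ceil(numel(run) \/ n)\n            seg = run((i - 1) * n + 1 : min(i * n, numel(run)));\n            v(seg) = v(seg) * b^(i - 1);\n        end\n    end\n    S(:, col) = v;\nend\nend\n\\end{lstlisting}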
Several examples of filtering are presented in Figure \\ref{fig:filteringExamples}:\n\\begin{itemize}\n\\item\n\\emph{First row}: a photograph with clouds and a rainbow completely removed from the edge map (thanks to the correct parameters of the edge extraction algorithm).\n\\item\n\\emph{Second row}: a photograph with strongly contrasting clouds: the edges of the mountains below the clouds are still present, even if reduced in strength.\n\\item\n\\emph{Third row}: a photograph with several mountains, one in front of another, with reduced visibility: even if reduced in strength, all the mountains are present on the filtered edge map and the noise edges of the terrain in the foreground are filtered out.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/filtering_examples}\n\t\\caption{Examples of edge filtering (one for each row): the original photograph (left), the extracted edges (center) and the filtered edges (right).}\n\t\\label{fig:filteringExamples}\n\\end{figure}\n\n\\subsection{Edge matching (Vector Cross Correlation)}\nGiven the edge maps of the photograph and the panorama, the matching of the edges is performed through the fast Fourier transform, following Formula \\ref{eq:FFTVCC}. The MATLAB implementation of the VCC is the following:\n\n\\begin{lstlisting}[frame=single]\nfunction VCC = ComputeVCC(SP, DP, SR, DR)\n% SP, DP: strength and direction maps of the photograph edges\n% SR, DR: strength and direction maps of the panorama edges\n\ndimP = size(SP);\ndimR = size(SR);\n% pad the panorama vertically so that every vertical offset of the\n% photograph can be evaluated\nSR = [ zeros(dimP(1), dimR(2)) ; SR ; zeros(dimP(1), dimR(2)) ];\nDR = [ zeros(dimP(1), dimR(2)) ; DR ; zeros(dimP(1), dimR(2)) ];\n\n% complex edge maps: modulus = strength, argument = direction\nCOMP = complex( SP .* cos(DP) , SP .* sin(DP) );\nCOMR = complex( SR .* cos(DR) , SR .* sin(DR) );\n\n% squaring the maps yields the cos 2(theta_p - theta_r) term; the\n% circular cross-correlation is computed in the frequency domain\nVCC = rot90(real(ifft2(conj(fft2(COMR.^2)) .* fft2(COMP.^2, size(COMR, 1), size(COMR, 2)))),2);\n\n% keep only the rows corresponding to valid vertical offsets\nVCC = VCC( dimP(1) + 1 : 2 * dimP(1) , : );\n\\end{lstlisting}\n\nThe resulting matrix of this function is the VCC score evaluated for each possible overlapping position, and the maximum value of the matrix identifies the best overlap. Instead of just picking the maximum value, several additional techniques have been implemented and tested:\n\n\\emph{Result smoothing}: the idea is that the correct score peak will have high values also in the neighborhood of its position, while the noise score peaks will not: in other words the peak corresponding to the right position will be broader than the noise score peaks. So smoothing the result score distribution penalizes the noise peaks: it will reduce the height of the noise peaks more than that of the correct peak. In practice, after several tests, this assumption turned out to be incorrect. Due to the nature of the VCC technique, which penalizes non-parallel intersections of the edges, even if the score in the correct position is high it is sufficient to move only a few points away to reduce the score drastically, so the smoothing procedure was not effective. Figure \\ref{fig:smoothing} shows an example of the score matrix (projected onto the $X$--$Z$ plane for readability) of the photograph of the Matterhorn used in the previous examples: the correct position already represents the maximum of the distribution; not only is it intuitively visible that the correct position score peak is not smoother than the others, but it is also shown that more smoothing always leads to a smaller difference between correct and incorrect matches. 
\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/smoothing}\n\t\\caption{Smoothing of the VCC result distribution with smoothing factor $k$. The red arrow identifies the correct matching.}\n\t\\label{fig:smoothing}\n\\end{figure}\n\n\\emph{Robust matching}: as proposed in the algorithm chapter, the top-$N$ peaks of the score matrix are extracted, and each one is evaluated to find the best matching position. The evaluation is performed with Formula \\ref{eq:robustMatching}, which has been implemented in a simplified version in the following way:\n\\begin{enumerate}\n\\item\nThe information about the direction of the edge points is removed; all the edge points with strength greater than $0$ are considered to have strength equal to $1$.\n\\item\nOn the panorama edge map each point that is located at a distance of at most $r$ points from any edge point of the original edge map is set to $1$. In this way the $r$-neighborhood of the panorama edges is generated.\n\\item\nA simple intersection between the new panorama edge map and the photograph edge map is performed in the overlapping position that must be evaluated.\n\\item\nThe resulting intersection (composed only of points of strength $0$ and $1$) is divided into clusters, where any point of a cluster is located at a distance less than $d$ from at least one other point of the same cluster.\n\\item\nThe Formula is applied treating the clusters as individual edges, with the edge length given by the number of points in the cluster.\n\\end{enumerate}\n\nAfter performing tests on the data set, this approach has also been rejected, as it drastically increases the computational effort of the matching without introducing significant benefits to the results.\n\n\\emph{Different scale factors}: given the reduced time of the VCC computation with the fast Fourier transform ($\\sim 1$~second for the VCC score matrix generation in our implementation), an approach to reduce the impact of a wrong photograph shot position or of an incorrect field of view estimation (due for example to a wrong sensor size being picked) is to perform the matching at several scale levels and not only at the estimated scale. In the current implementation several scaling intervals around the estimated level were evaluated, and the scale factor with the best VCC maximum value was picked. Obviously a larger scale factor will lead to a larger photograph edge map and so to a bigger VCC score, so when comparing the different maximum values an inverse quadratic correction factor must be applied to each score (inversely proportional to the area added to the image due to the scale change).\n\n\\subsection{Mountain Identification and Tagging}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/filtering_function}\n\t\\caption{Peak extraction: edge map before pattern extraction (left), implemented kernel function with $r = 1$ (center), edge map after pattern extraction (right).}\n\t\\label{fig:filteringFunction}\n\\end{figure}\n\n\nThe kernel function chosen for the peak pattern extraction is the triweight function, defined as:\n\n$$\nf(d) = \n\\left\\{\\begin{matrix}\n\\left(1-\\left(\\frac{d}{r}\\right)^{2}\\right)^{3} & \\text{ for }d \\leq r \\\\ \n0 & \\text{ for }d > r\n\\end{matrix}\\right.\n$$\n\nwith $r = 200$. 
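As a small illustration, a direct MATLAB transcription of this kernel (with assumed function and variable names) could look as follows:\n\n\\begin{lstlisting}[frame=single]\nfunction w = TriweightKernel(d, r)\n% Triweight weighting of edge points around a peak coordinate:\n% full weight at the peak (d = 0), decaying to zero at distance r,\n% and exactly zero beyond the pattern radius r.\nw = (1 - (d .\/ r).^2).^3 .* (d <= r);\nend\n\\end{lstlisting}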
The kernel function plot and its effect on the peak extraction are shown in Figure \\ref{fig:filteringFunction}.\n\n\\chapter{Experimental Study}\n\\label{Experimental Study}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\nIn this chapter the photograph direction estimation quality is investigated. The precision of the estimation is computed varying the operating parameters of the implementation and the type of the photographs contained in the data set.\n\n\\section{Data sets}\nThe analysis is conducted on a set of 95 photographs of the Italian Alps, collected from several photographers with geo-tag set directly by a GPS component or manually by the photographer, so we consider them very precisely geo-tagged. The photographs vary in several aspects: several examples are shown in Figure \\ref{fig:dataset}, and the categories of interest are composed as described in Table \\ref{tab:datasetCategoriesComposition}.\n\\begin{table}[!h]\n \\centering\n \\begin{tabularx}{\\textwidth}{|X|c|c|}\n \\hline\n \\tabhead{Category} &\n \\tabhead{Option} &\n \\tabhead{Data set portion} \\\\\n \\hline\n \\multirow{2}{*}{Source} &\n Photo camera &\n 38 \\% \\\\\n &\n Cellular phone &\n 62 \\% \\\\\n \\hline\n \\multirow{2}{*}{Cloud presence} &\n None &\n 41 \\% \\\\\n &\n Minimal &\n 29 \\% \\\\\n &\n Massive &\n 23 \\% \\\\ \n &\n Overcast &\n 7 \\% \\\\ \n \\hline\n \\multirow{2}{*}{Skyline composition} &\n Mountains and terrain only &\n 87 \\% \\\\\n &\n Foreign objects &\n 13 \\% \\\\\n \\hline \n \\end{tabularx}\n \\caption{Data set categories of interest composition.}\n \\label{tab:datasetCategoriesComposition}\n\\end{table}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/dataset}\n\t\\caption{Several photographs from the collected data set.}\n\t\\label{fig:dataset}\n\\end{figure}\n\n\n\\section{Operating parameters}\nThe operating parameters used in the algorithm described in the previous sections, and to be evaluated, are listed in Table \\ref{tab:datasetOperatingParameters}. The value in bold defines the default value that gives the best evaluation results and is used in the final implementation proposal.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabularx}{\\textwidth}{|X|c|c|}\n \\hline\n \\tabhead{Full name} &\n \\tabhead{Parameter} &\n \\tabhead{Tested values} \\\\\n \\hline\n Photograph edge strength threshold &\n $\\rho_p$ &\n 0.1,0.2,\\textbf{0.3},0.4,0.5,0.6,0.7 \\\\\n \\hline\n Panorama edge strength threshold &\n $\\rho_r$ &\n 0.1,\\textbf{0.2},0.3,0.4,0.5 \\\\\n \\hline \n Photograph edge filtering base &\n $b_p$ &\n 0.5,0.6,\\textbf{0.7},0.8,0.9,1.0 \\\\\n \\hline \n Panorama edge filtering base &\n $b_r$ &\n 0.5,0.6,0.7,0.8,0.9,\\textbf{1.0} \\\\\n \\hline \n Photograph edge filtering max segment length &\n $l_p$ &\n 1,\\textbf{2},3,4,5 \\\\\n \\hline \n Panorama edge filtering max segment length &\n $l_r$ &\n 1,2,3,4,5 \\\\ \n \\hline \n Photograph scaling interval &\n $\\pm k \\%$ &\n \\textbf{0},1,2,5,10 \\\\ \n \\hline \n \\end{tabularx}\n \\caption{Operating parameters (defaults in bold).}\n \\label{tab:datasetOperatingParameters}\n\\end{table}\n\n\\section{Evaluation metric}\nEach photograph in the data set and the corresponding generated panorama are manually tagged by specifying on both one or more pairs of points corresponding to the same mountain peaks. 
Once the direction of view is estimated and the best overlap of the edges is found, the error of the direction estimation is defined as the average distance between the position of the peak on the panorama and the position of the peak on the photograph projected on the panorama with the current overlap position. The distance is then expressed as the angle, considering the panorama width as $2\\pi$ angle. The error therefore lies between 0 and just over $\\pi$ angle (the worst horizontal error is $\\pi$ in the case of opposite directions, but since the vertical error is also counted the total error can theoretically slightly exceed this value).\n\nGiven a photograph and a panorama rendered with the resolution of $q$ pixel\/degree, both tagged with $N$ peaks, let $x_{pi}$\/$y_{pi}$ and $x_{ri}$\/$y_{ri}$ be respectively the coordinates (expressed in pixels) of the $i$-th peak on the photograph and the panorama image, the estimation error is defined as:\n\n$$\ne = \\frac{\\sqrt{(\\sum_{i = 1}^{N} (x_{pi} - x_{ri}))^2 + (\\sum_{i = 1}^{N} (y_{pi} - y_{ri}))^2 }}{Nq}\n$$\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/errorOffsetValidation}\n\t\\caption{Schematic example of edge matching validation with photograph edges (red) and panorama edges (blue), all the three peaks are validation points.}\n\t\\label{fig:errorOffsetValidation}\n\\end{figure}\n\nAn important aspect of the average computing must be highlighted: instead of the standard distance error averaging by taking into account the singular distances, this error metric calculates the average of the both image coordinates axis components, and then the distance with these components is computed. In practice, a positive error offset compensates a negative one. This is made to compensate the imperfections between the scale of the mountains on the photograph and panorama: if the overlap presents both positive and negative offsets between peak pairs it means that the photograph and the panorama are differently scaled, so the best overlap position is the one that brings the sum of the offset components to zero. An example of this reasoning is shown as a schematic case of edge matching in Figure \\ref{fig:errorOffsetValidation} the validation points include all three peaks of the photograph and panorama view, but the different scaling prevents perfect overlapping and the offsets are both positive and negative. Thus the best possible overlapping position is the one shown, that makes the offsets opposite and making the sum of the offsets converge to zero.\n\nTherefore even when the validation points cannot all be matched perfectly, the technique of not taking the absolute value of the offsets makes it anyway possible to find the best match and obtain an offset of zero (perfect score).\n\nFor readability reasons all the validation errors from this point will be specified in arc degrees ($\\ensuremath{^\\circ}$).\n\nOnce the matching error is calculated for every photograph present in the data set, the general score of the run of the algorithm is defined as the percentage of the portion of the data set containing all the photographs with matching error below a certain threshold. 
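For concreteness, a minimal MATLAB sketch of this per-photograph validation check (the function and variable names are ours and not taken from the actual implementation) could be:\n\n\\begin{lstlisting}[frame=single]\nfunction correct = EvaluateMatch(xp, yp, xr, yr, q, threshold)\n% xp, yp - pixel coordinates of the N tagged peaks on the photograph,\n%          projected onto the panorama with the estimated overlap\n% xr, yr - pixel coordinates of the same peaks on the panorama\n% q      - panorama resolution in pixel\/degree; threshold in degrees\nN = numel(xp);\n% the offsets are averaged per axis before taking the distance, so\n% positive and negative offsets compensate each other\ne = sqrt(sum(xp - xr)^2 + sum(yp - yr)^2) \/ (N * q);\ncorrect = e < threshold;\nend\n\\end{lstlisting}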
In other words, this threshold defines the limit of the matching error for a photograph to be considered correctly matched, and in the evaluation of this work it is set to $4\\ensuremath{^\\circ}$.\n\n\\section{Evaluation results}\nWith all the operating parameters set to their default values the algorithm has correctly estimated the orientation of \\textbf{64.2\\%} of the total photographs. In particular, the distribution of the matching error over the photographs of the data set is presented in Figure \\ref{fig:mainResultHist}: it is clear that the choice of the error threshold has a limited impact on the correct matching rate, since almost all the correctly matched photographs have an error between $0\\ensuremath{^\\circ}$ and $2\\ensuremath{^\\circ}$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/main_result_hist}\n\t\\caption{Histogram of the number of the photographs with a certain validation error (split into intervals of width $1$~\\ensuremath{^\\circ}).}\n\t\\label{fig:mainResultHist}\n\\end{figure}\n\nThe results obtained with the described data set decomposed into categories as described in Table \\ref{tab:datasetCategoriesComposition} are summarized in Table \\ref{tab:datasetCategoriesResults}.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\tabhead{Category} &\n \\tabhead{Option} &\n \\tabhead{Correct matches} \\\\\n \\hline\n \\multirow{2}{*}{Source} &\n Photo camera &\n 72.2 \\% \\\\\n &\n Cellular phone &\n 59.3 \\% \\\\\n \\hline\n \\multirow{4}{*}{Cloud presence} &\n None &\n 71.8 \\% \\\\\n &\n Minimal &\n 67.9 \\% \\\\\n &\n Massive &\n 45.5 \\% \\\\ \n &\n Overcast &\n 66.7 \\% \\\\ \n \\hline\n \\multirow{2}{*}{Skyline composition} &\n Mountains and terrain only &\n 65.1 \\% \\\\\n &\n Foreign objects &\n 58.3 \\% \\\\\n \\hline \n \\end{tabular}\n \\caption{Results of the algorithm with respect to the data set categories of interest.}\n \\label{tab:datasetCategoriesResults}\n\\end{table}\n\nFrom the results over the data set categories it follows that the photographs shot with a photo camera are more frequently aligned than those shot with a cellular phone: an explanation for this trend can be that photo camera users often use the optical zoom, which is correctly annotated in the focal length of the photograph, while cellular phones usually allow only digital zoom, which leaves the annotated focal length unchanged and so leads to an incorrect Field of View estimation and therefore to an incorrect alignment.\n\nThe trend of the results as cloud conditions vary is the expected one (more clouds lead to a higher number of noise edges and to worse performance) and reveals that the presence of clouds is a significant reason for failure of the algorithm, a problem that will be treated in the next chapter. 
A good score of the overcast case can be explained by the fact that when the sky is completely covered by clouds there are no edges between the real sky and clouds, so there are no noise edges (it can be thought of as the sky painted by a darker cloud color).\n\nThe presence of foreign objects such as trees, buildings, and persons in the skyline obviously penalizes the alignment, but as can be seen from the results it is not a critical issue.\n\nIn the next chapter several techniques that should improve the results of mountain identification are presented and discussed.\n\n\\subsection{Operating parameter evaluation}\nWe now present and comment on the effect of the individual parameters on the overall results:\n\n\\textbf{\\emph{Photograph edge strength threshold ($\\rho_p$)}}: The choice of the strength threshold for the photograph edges presents two trends: higher threshold means fewer noise edges, but lower threshold means to include also the weaker edges belonging not to the skyline but to secondary terrain edges. The dependency of the result on $\\rho_p$ is displayed in Figure \\ref{fig:opRhoP}, the optimal balance between the two trends is reached with $\\rho_p = 0.3$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_rho_p}\n\t\\caption{Overall algorithm results varying the $\\rho_p$ operating parameter.}\n\t\\label{fig:opRhoP}\n\\end{figure}\n\n\\textbf{\\emph{Panorama edge strength threshold ($\\rho_r$)}}: The choice of the strength threshold for the panorama edges is very similar to that of the photograph, with a difference regarding small thresholds: since the panorama is rendered by a program it does not present weak edges, which means that the introduction of a small threshold has almost no effect on the matching efficiency. The dependency of the result on $\\rho_r$ is displayed in Figure \\ref{fig:opRhoR}: in fact all tested values smaller or equal to $0.3$ bring the same (and optimal) result. In the implementation the parameter is set to $\\rho_r = 0.2$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_rho_r}\n\t\\caption{Overall algorithm results varying the $\\rho_r$ operating parameter.}\n\t\\label{fig:opRhoR}\n\\end{figure}\n\n\\textbf{\\emph{Photograph edge filtering base ($b_p$)}}: The choice of the filtering base for the photograph edges defines how strongly the lower edge points will be reduced in strength, $b_p = 1.0$ means that no filtering will be performed, and $b_p = 0.0$ that only the first segment of each row of the photograph will be left. The dependency of the result on $b_p$ is displayed in Figure \\ref{fig:opBP}: it is evident that filtering is absolutely needed since the result corresponding to $b_p = 1.0$ is significantly smaller than all the other values; the result stabilizes for $b_p \\leq 0.3$ since the base is so small that the filtering factor reaches almost immediately zero. 
The best orientation is found for $b_p = 0.7$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_b_p}\n\t\\caption{Overall algorithm results varying the $b_p$ operating parameter.}\n\t\\label{fig:opBP}\n\\end{figure}\n\n\\textbf{\\emph{Photograph edge filtering maximum segment length ($l_p$)}}: The maximum segment length for the photograph edge filtering is introduced to allow mountain edges which are thicker than one pixel not to penalize the other edges placed below them, but at the same time not to allow the noise edges that generate columns of points to be left unfiltered. The dependency of the result on $l_p$ is displayed in Figure \\ref{fig:opLP}: the optimal balance between the two trends is reached with $l_p = 2$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_l_p}\n\t\\caption{Overall algorithm results varying the $l_p$ operating parameter.}\n\t\\label{fig:opLP}\n\\end{figure}\n\n\\textbf{\\emph{Panorama edge filtering base ($b_r$)}}: The choice of the filtering base for the panorama edges defines how strongly the lower edge points will be reduced in strength: $b_r = 1.0$ means that no filtering will be performed, and $b_r = 0.0$ that only the first segment of each column of the panorama will be left. The dependency of the result on $b_r$ is displayed in Figure \\ref{fig:opBR}: the overall trend is similar to that of the filtering base of the photograph edges ($b_p$), but with an important difference for high values: the result does not get worse as $b_r$ gets close to $1$. This can be explained by the fact that high values ($\\geq 0.8$) of $b_p$ do not perform a significant filtering, so the noise edge points are not filtered and penalize the matching; the panorama edges instead do not have any noise, so they do not show the same trend. The choice to set $b_r = 1.0$, in other words not to perform any filtering of the panorama edge points, is therefore clear.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_b_r}\n\t\\caption{Overall algorithm results varying the $b_r$ operating parameter.}\n\t\\label{fig:opBR}\n\\end{figure}\n\n\\textbf{\\emph{Panorama edge filtering maximum segment length ($l_r$)}}: since $b_r$ is set to $1$ (the filtering of the panorama edges is not performed), the maximum segment length for the panorama edge filtering has no impact on the result, so its choice is not significant.\n\n\\textbf{\\emph{Photograph scaling interval ($\\pm k \\%$)}}: The scaling interval around the originally estimated scale factor is described in the previous chapters. The results of its evaluation were probably the most unexpected among all operating parameters. The dependency of the result on $k$ is displayed in Figure \\ref{fig:opK}: the best performance is reached with $k = 0$ (no scaling interval, only the estimated scale factor) and it always worsens with increasing $k$. This trend can be interpreted as evidence that the Field of View and scale factor estimation are so precise that increasing the scaling interval brings more penalties (setting an imprecise scale factor and increasing the probability of a noise edge matching a mountain panorama edge) than advantages (trying different zoom factors and getting the best mountain edge overlap). 
The best orientation is therefore found with $k = 0$ (no scaling interval).\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{.\/figures\/op_k}\n\t\\caption{Overall algorithm results varying the $\\pm k \\%$ operating parameter.}\n\t\\label{fig:opK}\n\\end{figure}\n\\chapter{Conclusions and Future Work}\n\\label{Conclusions and Future Work}\n\\thispagestyle{empty}\n\n\\vspace{0.5cm}\n\n\\noindent\nIn this work an algorithm for the estimation of the orientation of a geo-tagged photograph containing mountains and for the identification of the visible mountain peaks is presented in detail. The algorithm has been implemented and evaluated with the result of estimating correctly the direction of the view of \\textbf{64.2\\%} of the various photographs.\n\nThis result can be considered excellent if used in passive crowdsourcing applications: given the enormous availability of photographs on the Web, the desired amount of correctly matched photographs can be reached just by increasing the number of processed photographs.\n\nIt is a very promising even if not excellent result speaking about applications connected with user experience: approximately one photograph out of three may not be matched, decreasing significantly the usability of the application.\n\n\\section{Future enhancements}\nWe expect to implement and test several techniques in the near future, aimed at enhancing the matching process and improving the results.\n\nOne of the main reasons that the matching fails is the massive presence of clouds on the photograph, and to prevent the edge filtering step from emphasizing the noise cloud edges a sky\/ground segmentation step can be inserted before the edge extraction phase: a sky\/ground segmentation algorithm (such as, for example, that proposed by Todorovic and Nechyba \\cite{Todorovic03sky\/groundmodeling}) that given a photograph splits it into the two regions of sky and terrain, then the sky part is erased from the photograph and the matching algorithm goes on without the cloud edge noise. In Figure \\ref{fig:skyTerrain} an example of this approach and the improvement it brings to the filtered edges map is shown.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/sky_terrain_enhancement}\n\t\\caption{An example of the use of sky\/ground segmentation in the edge extraction technique: noise cloud edges are removed and the mountain boundary is perfectly extracted.}\n\t\\label{fig:skyTerrain}\n\\end{figure}\n\nAnother frequent reason for incorrect matching is the imperfections between the boundaries on the photograph and on the render of mountains very close to the observer. Due to the small and unavoidable errors in the registered position, altitude estimation, and elevation model, the rendered panorama will always be imperfect with respect to reality, and this imperfection, for obvious reasons, get smaller as the distance from the observer to the object increases. The situation of having mountain boundaries both in the foreground and the background is very frequent: walking in a mountainous area the observer is usually surrounded by mountains placed close to him that are obscuring the distant mountains; the dimensions and the majesty of the mountains however bring usually the photographer to take pictures of distantly placed mountains with altitude higher that his point of view. 
For this reason, as soon as the mountains in the foreground are arranged in a way that creates an ``aperture'', a photograph of the mountains visible in this sort of window will probably be shot. An example of this type of photograph can be the one used for Figure \\ref{fig:skyTerrain} or the one shown in Figure \\ref{fig:viewAperture}: it represents exactly the described situation, and it can easily be seen that the photograph and panorama edges of the mountain in the background are perfectly matchable, while the edges belonging to the closer mountains placed in the foreground are significantly different. In this case the algorithm manages to correctly align the photograph anyway, but in these cases of funnel-shaped mountain apertures it frequently fails, especially if the aperture is very small with respect to the total photograph dimension. Future studies are planned to include the development of a technique for recognizing this type of situation and emphasizing the edges of the background mountains with respect to the foreground mountains.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{.\/figures\/view_aperture}\n\t\\caption{An example of a photograph (top) of background mountains seen in the aperture between the foreground mountains and the corresponding edge matching (bottom, red - panorama, blue - photograph).}\n\t\\label{fig:viewAperture}\n\\end{figure}\n\nEven when the estimated orientation is not correct, in most cases the correct alignment presents a significant peak in the VCC score distribution, so the best edge overlap estimation could be improved with a robust matching technique operating on the top-$N$ positions extracted from the VCC matching score distribution. The most likely method of implementing it is the edge neighborhood metric proposed by Baboud et al. \\cite{Baboud2011Alignment}; even if a simplified version of the proposed robust matching technique was implemented in this work and rejected as counterproductive, we believe that it can be refined to reach a higher rate of correctly matched photographs.\n\n\\section{Crowdsourcing involvement}\nAn important future direction of the research of this work is the integration with (traditional) crowdsourcing. Though it is a feasible image processing task, photograph-to-panorama alignment is definitely better performed by humans than by algorithms. There is no need to move the photograph across the panorama or to know the correct scale factor: a person can usually align the photograph to the panorama just by looking at it, comparing the fragments of the photograph they find most distinctive, which can be elevated peaks, details that catch the eye and, in general, any terrain silhouette fragments that the person considers unusual. This allows one to easily identify the correct alignment by eye even in the presence of significant errors between the photograph and the generated panorama.\n\nThis consideration leads us to the integration of the current work with crowdsourcing, with several possible scenarios:\n\\begin{itemize}\n\\item\n\\emph{Contribution in learning and testing phase}:\nmanual photograph-to-panorama matching estimation can be very useful as a source of ground truth data, both for the learning phase in case a machine learning algorithm is used in the edge matching, and for the implementation testing phase, validating the results proposed by the algorithm. 
These approaches can be applied both to expert and non-expert crowdsourcing options: the non-expert tasks can consist of an activity feasible by anyone, such as searching for the correct alignment between the photograph and the corresponding panorama, the expert tasks instead aim at the activities that only mountaineers can perform, such as tagging the mountain peaks on a photograph based on his own knowledge and experience. Both methods allow the validation of the algorithm estimation result, but at two different (even if very related) levels: the first at photograph direction estimation, and the second at final mountain peak identification and tagging.\n\\item\n\\emph{Contribution as post processing validation}:\nhuman validation can be used not only in the development phase but also in the final implementation pipeline as the post processing validation. If the application is time-critical such as real-time augmented reality or website photographs mountain peak tagging, this approach cannot be applied, but in crawler applications such as environmental model creation, crowdsourcing can provide a significant improvement in the data quality by proposing manual photograph-to-panorama matching for the photographs that the algorithm has marked as photographs with low confidence score. Crowdsourcing therefore will be used as a complementary technique for photographs that cannot be aligned automatically.\n\\item\n\\emph{Contribution to data set expansion}:\nthe number of the available photographs in the data set is fundamental both for testing purposes and for the data to be processed by an environmental modeling system. Photographs can be collected from public sources such as social networks or photo sharing websites, filtering the geographical area and image content of the interest, or can be collected directly from people, uploading or signaling the photographs containing mountains in a certain requested area, or even photographs containing certain requested mountain peaks. This approach is important in the case of environmental modeling, when precisely collected photographs in the same area of ground truth data availability is fundamental.\n\\end{itemize}\n\n\\section{Possible areas of application}\nThere are several application areas the proposed algorithm can be used for, starting from the technique for snow level monitoring and prediction (the technique this work has been proposed and started for) and extending to possible applications in mobile and web fields that can be created thanks to this algorithm.\n\n\\subsection{Environmental Modeling}\nOne of the main future directions of the research will be the modeling of environmental processes by the analysis of mountain appearances extracted by the algorithm proposed in this work. The most important and most obvious measurements available from a mountain's appearance are the snow level and the snow water equivalent (SWE). The idea is to collect a series of photographs through time for each analyzed mountain, and based on the snow data ground truth, estimate the current measurements. The first step will therefore be an attempt to detect the correlation between the visual content of the mountain portion of a photograph with the physical measurements of the snow and SWE of that mountain. 
An interesting planned approach is to also exploit webcams in the region of interest, since they present several advantages:\n\\begin{itemize}\n\\item\nweather and tourist webcams are very popular and frequent in mountain regions\n\\item\nthe time density of the measures can be as high as we want (it is only a matter of how frequently the image is acquired from the webcam, and in any case there is no reason to suppose the need for more than a couple of captures per day)\n\\item\nthe mountains captured on a webcam are always in the same position on the image, so even in cases of difficult edge matching it can be done or verified manually once to exploit all future instances of the photographs.\n\\end{itemize}\n\nFurthermore, supposing the mountains to be greatly distant from the observer (a reasonable and weak assumption in the case of mountain photographs), by knowing the estimate of the observer's altitude and the altitude of the identified peak, we can estimate the altitude of each pixel of the mountain in the photograph by a simple proportion between the difference of altitudes and the pixel height of the mountain. This allows a comparison between the visual features of partial photographs corresponding to the same altitude even for photographs with significantly different shot positions and camera properties.\n\n\\subsection{Augmented Reality}\nAugmented reality applications on mobile devices (applications that augment and supplement the real-time camera view of the mobile device with computer-generated input) are a recent niche topic, and a promising use of the described algorithm regards augmented reality: an application can tag in real time the mountains viewed by the user, highlight the peaks and terrain silhouettes, and augment the mountains with useful information such as altitude contours drawn on the image.\n\nSuch an application would eliminate the problem of wrong geo-tag estimation (keeping the GPS of the mobile device on, the position is usually estimated with a tolerance of a few meters) and would thus significantly reduce the problem of wrong altitude estimation and of elevation model imperfections with respect to reality. The reduced computation capacity may be compensated for by the built-in compass, which gives a rough indication of the observer's direction of view, so the matching procedure can be done only on a reduced fragment of the rendered panorama.
The bandwidth use will be small since the mountains are usually distant from the observer, so the rendered view will change very slowly while the observer is moving, so it will need to be updated rarely.\n\n\\subsection{Photo Sharing Platforms}\nTagging the mountain peaks on the geo-tagged photographs can lead also to a significant improvement to a cataloging and searching system of a social network or a photo sharing website, and to a better user experience by exploring the peak names and other information by directly viewing the photograph.\nAutomatic tagging of mountain peaks (tagging intended as the catalog assignment of peak names to the photograph and not the visual annotation on the photograph itself) can allow navigation through mountain photographs in these scenarios:\n\\begin{itemize}\n\\item\nUser searches for a mountain by specifying its name in the query: retrieves the photographs of that mountain, even if the author of the photograph did not specify the name in the title, description or other metadata.\n\\item\nUser views a photograph: other photographs of the same mountain (with the same or different facade) are suggested, even if the authors of both photographs did not specify the name in the title, description or other metadata.\n\\end{itemize}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConsider a set of agents $\\mathcal{N}=\\{1,2,\\ldots,n\\}$ connected over a network. Each agent has a local smooth and strongly convex cost function $f_i:\\mathbb{R}^{p}\\rightarrow \\mathbb{R}$. The global objective is to locate $x\\in\\mathbb{R}^p$ that minimizes the average of all cost functions:\n\\begin{equation}\n\\min_{x\\in \\mathbb{R}^{p}}f(x)\\left(=\\frac{1}{n}\\sum_{i=1}^{n}f_i(x)\\right). \\label{opt Problem_def}\n\\end{equation}%\nScenarios in which problem (\\ref{opt Problem_def}) is considered include distributed machine learning \\cite{forrester2007multi,nedic2017fast,cohen2017projected,wai2018multi}, multi-agent target seeking \\cite{pu2016noise,chen2012diffusion}, and wireless networks \\cite{cohen2017distributed,mateos2012distributed,baingana2014proximal}, among many others.\n\nTo solve problem (\\ref{opt Problem_def}), we assume each agent $i$ queries a stochastic oracle ($\\mathcal{SO}$) to obtain noisy gradient samples of the form $g_i(x,\\xi_i)$ that satisfies the following condition:\n\\begin{assumption}\n\t\\label{asp: gradient samples}\n\tFor all $i\\in\\mathcal{N}$ and all $x\\in\\mathbb{R}^p$, \n\teach random vector $\\xi_i\\in\\mathbb{R}^m$ is independent, and\n\t\\begin{equation}\n\t\\begin{split}\n\t& \\mathbb{E}_{\\xi_i}[g_i(x,\\xi_i)\\mid x] = \\nabla f_i(x),\\\\\n\t& \\mathbb{E}_{\\xi_i}[\\|g_i(x,\\xi_i)-\\nabla f_i(x)\\|^2\\mid x] \\le \\sigma^2\\quad\\hbox{\\ for some $\\sigma>0$}.\n\t\\end{split}\n\t\\end{equation}\n\\end{assumption}\nThe above assumption of stochastic gradients holds true for many on-line distributed learning problems, where $f_i(x)=\\mathbb{E}_{\\xi_i}[F_i(x,\\xi_i)]$ denotes the expected loss function agent $i$ wishes to minimize, while independent samples $\\xi_i$ are gathered continuously over time.\nFor another example, in simulation-based optimization, the gradient estimation often incurs noise \nthat can be due to various sources, such as modeling and discretization errors, \nincomplete convergence, and finite sample size for Monte-Carlo methods~\\cite{kleijnen2008design}. 
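\n\nAs a minimal illustration of such a stochastic oracle (a sketch written for this text rather than taken from the references; the quadratic local cost and the Gaussian noise model are assumptions made purely for concreteness), agent $i$ could expose:\n\\begin{verbatim}\nimport numpy as np\n\ndef make_oracle(A_i, b_i, sigma=0.1, rng=np.random.default_rng(0)):\n    # Stochastic oracle for f_i(x) = 0.5*||A_i x - b_i||^2: the exact\n    # gradient plus zero-mean noise with E[||noise||^2] = sigma^2,\n    # so both conditions of Assumption 1 hold.\n    def oracle(x):\n        grad = A_i.T @ (A_i @ x - b_i)\n        noise = rng.normal(0.0, sigma \/ np.sqrt(x.size), size=x.shape)\n        return grad + noise\n    return oracle\n\\end{verbatim}\nIn an on-line learning setting the noise would instead come from drawing a fresh sample $\\xi_i$ at every query, as in the numerical example of Section~\\ref{sec: simulation}.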
\n\nDistributed algorithms dealing with problem (\\ref{opt Problem_def}) have been studied extensively in the literature \\cite{tsitsiklis1986distributed,nedic2009distributed,nedic2010constrained,jakovetic2014fast,kia2015distributed,shi2015extra,di2016next,qu2017harnessing,nedic2017achieving,pu2018push}.\nRecently, there has been considerable interest in distributed implementation of stochastic gradient algorithms (see \\cite{ram2010distributed,cavalcante2013distributed,towfic2014adaptive,lobel2011distributed,srivastava2011distributed,wang2015cooperative,lan2017communication,pu2017flocking,lian2017can,pu2018swarming}). The literature has shown that distributed algorithms may compete with, or even outperform, their centralized counterparts under certain conditions \\cite{pu2017flocking,lian2017can,pu2018swarming}. For instance, in our recent work~\\cite{pu2018swarming}, we proposed a swarming-based approach for distributed stochastic optimization which beats a centralized gradient method in real-time assuming that all $f_i$ are identical. \nHowever, to the best of our knowledge, there is no distributed stochastic gradient method addressing problem (\\ref{opt Problem_def}) that shows comparable performance with a centralized approach. In particular, under constant step size policies none of the existing algorithms achieve an error bound that is decreasing in the network size $n$. \n\n\nA distributed gradient tracking method was proposed in~\\cite{di2016next,nedic2017achieving,qu2017harnessing}, where \nthe agent-based auxiliary variables $y_i$ were introduced to track the average gradients of $f_i$ assuming accurate gradient information is available. It was shown that the method, with constant step size, generates iterates that converge linearly to the optimal solution.\nInspired by the approach, in this paper we consider a distributed stochastic gradient tracking method. By comparison, in our proposed algorithm $y_i$ are tracking the stochastic gradient averages of $f_i$.\nWe are able to show that the iterates generated by each agent reach, in expectation, a neighborhood of the optimal point exponentially fast under a constant step size.\nInterestingly, with a sufficiently small step size, the limiting error bounds on the distance \nbetween agent iterates and the optimal solution decrease in the network size $n$, which is comparable to the performance of a centralized stochastic gradient algorithm.\n\nOur work is also related to the extensive literature in stochastic approximation (SA) methods dating back to the seminal works~\\cite{robbins1951stochastic} and~\\cite{kiefer1952stochastic}. These works include the analysis of convergence (conditions for convergence, rates of convergence, suitable choice of step size) in the context of diverse noise models~\\cite{kushner2003stochastic}.\n\nThe paper is organized as follows. In Section~\\ref{sec: set}, we introduce the distributed stochastic gradient tracking method along with the main results. We perform analysis in Section~\\ref{sec:cohesion} \nand provide a numerical example in Section~\\ref{sec: simulation} to illustrate our theoretical findings. \nSection~\\ref{sec: conclusion} concludes the paper.\n\n\n\\subsection{Notation}\n\\label{subsec:pre}\nThroughout the paper, vectors default to columns if not otherwise specified.\nLet each agent $i$ hold a local copy $x_i\\in\\mathbb{R}^p$ of the decision variable and an auxiliary variable $y_i\\in\\mathbb{R}^p$. 
Their values at iteration\/time $k$ are denoted by $x_{i,k}$ and $y_{i,k}$, respectively. \nWe let\n\\begin{equation*}\n \\mathbf{x} := [x_1, x_2, \\ldots, x_n]^{\\intercal}\\in\\mathbb{R}^{n\\times p},\\ \\ \\mathbf{y} := [y_1, y_2, \\ldots, y_n]^{\\intercal}\\in\\mathbb{R}^{n\\times p},\n\\end{equation*}\n\\begin{equation}\n\\overline{x} := \\frac{1}{n}\\mathbf{1}^{\\intercal} \\mathbf{x}\\in\\mathbb{R}^{1\\times p},\\ \\ \\overline{y} := \\frac{1}{n}\\mathbf{1}^{\\intercal}\\mathbf{y}\\in\\mathbb{R}^{1\\times p},\n\\end{equation}\nwhere $\\mathbf{1}$ denotes the vector with all entries equal to 1.\nWe define an aggregate objective function of the local variables:\n\\begin{equation}\nF(\\mathbf{x}):=\\sum_{i=1}^nf_i(x_i),\n\\end{equation}\nand write\n\\begin{equation*}\n\\nabla F(\\mathbf{x}):=\\left[\\nabla f_1(x_1), \\nabla f_2(x_2), \\ldots, \\nabla f_n(x_n)\\right]^{\\intercal}\\in\\mathbb{R}^{n\\times p}.\n\\end{equation*}\nIn addition, let\n\\begin{equation}\n\\label{def: h}\nh(\\mathbf{x}):=\\frac{1}{n}\\mathbf{1}^{\\intercal}\\nabla F(\\mathbf{x})\\in\\mathbb{R}^{1\\times p},\n\\end{equation}\n\\begin{equation*}\n\\boldsymbol{\\xi} := [\\xi_1, \\xi_2, \\ldots, \\xi_n]^{\\intercal}\\in\\mathbb{R}^{n\\times p},\n\\end{equation*}\nand\n\\begin{equation}\nG(\\mathbf{x},\\boldsymbol{\\xi}):=[g_1(x_1,\\xi_1), g_2(x_2,\\xi_2), \\ldots, g_n(x_n,\\xi_n)]^{\\intercal}\\in\\mathbb{R}^{n\\times p}.\n\\end{equation}\n\nInner product of two vectors $a,b$ of the same dimension is written as $\\langle a,b\\rangle$. For two matrices $A,B\\in\\mathbb{R}^{n\\times p}$, we define\n\\begin{equation}\n\\langle A,B\\rangle :=\\sum_{i=1}^n\\langle A_i,B_i\\rangle,\n\\end{equation}\nwhere $A_i$ (respectively, $B_i$) represents the $i$-th row of $A$ (respectively, $B$). We use $\\|\\cdot\\|$ to denote the $2$-norm of vectors; for matrices, $\\|\\cdot\\|$ denotes the Frobenius norm.\n\nA graph is a pair $\\mathcal{G}=(\\mathcal{V},\\mathcal{E})$ where $\\mathcal{V}=\\{1,2,\\ldots,n\\}$ is the set of vertices (nodes) and $\\mathcal{E}\\subseteq \\mathcal{V}\\times \\mathcal{V}$ represents the set of edges connecting vertices. We assume agents communicate in an undirected graph, i.e., $(i,j)\\in\\mathcal{E}$ iff $(j,i)\\in\\mathcal{E}$.\nDenote by $W=[w_{ij}]\\in\\mathbb{R}^{n\\times n}$ the coupling matrix of agents. Agent $i$ and $j$ are connected iff $w_{ij}=w_{ji}>0$ ($w_{ij}=w_{ji}=0$ otherwise). Formally, we assume the following condition regarding the interaction among agents:\n\\begin{assumption}\n\t\\label{asp: network}\n\tThe graph $\\mathcal{G}$ corresponding to the network of agents is undirected and connected (there exists a path between any two agents). Nonnegative coupling matrix $W$ is doubly stochastic, \n\ti.e., $W\\mathbf{1}=\\mathbf{1}$ and $\\mathbf{1}^{\\intercal}W=\\mathbf{1}^{\\intercal}$.\n\tIn addition, $w_{ii}>0$ for some $i\\in\\mathcal{N}$.\n\\end{assumption}\nWe will frequently use the following result, which is a direct implication of Assumption \\ref{asp: network} (see \\cite{qu2017harnessing} Section II-B):\n\\begin{lemma}\n\t\\label{lem: spectral norm}\n\tLet Assumption \\ref{asp: network} hold, and let $\\rho_w$ denote the spectral norm of \n\tthe matrix $W-\\frac{1}{n}\\mathbf{1}\\mathbf{1}^{\\intercal}$. 
Then, $\\rho_w<1$ and \n\t\\begin{equation*}\n\t\\|W\\omega-\\mathbf{1}\\overline{\\omega}\\|\\le \\rho_w\\|\\omega-\\mathbf{1}\\overline{\\omega}\\|\n\t\\end{equation*}\n\tfor all $\\omega\\in\\mathbb{R}^{n\\times p}$, where $\\overline{\\omega}=\\frac{1}{n}\\mathbf{1}^{\\intercal}\\omega$.\n\\end{lemma}\n\n\\section{A Distributed Stochastic Gradient Tracking Method}\n\\label{sec: set}\nWe consider the following distributed stochastic gradient tracking method: at each step $k\\in\\mathbb{N}$, \nevery agent $i$ independently implements the following two steps:\n\\begin{equation}\n\\label{eq: x_i,k}\n\\begin{split}\nx_{i,k+1} = & \\sum_{j=1}^{n}w_{ij}(x_{j,k}-\\alpha y_{j,k}), \\\\\ny_{i,k+1} = & \\sum_{j=1}^{n}w_{ij}y_{j,k}+g_i(x_{i,k+1},\\xi_{i,k+1})-g_i(x_{i,k},\\xi_{i,k}),\n\\end{split}\n\\end{equation}\nwhere $\\alpha>0$ is a constant step size. The iterates are initiated with an arbitrary \n$x_{i,0}$ and $y_{i,0}= g_i(x_{i,0},\\xi_{i,0})$ for all~$i\\in{\\cal N}$.\nWe can also write (\\ref{eq: x_i,k}) in the following compact form:\n\\begin{equation}\n\\label{eq: x_k}\n\\begin{split}\n\\mathbf{x}_{k+1} = & W(\\mathbf{x}_k-\\alpha \\mathbf{y}_k), \\\\\n\\mathbf{y}_{k+1} = & W\\mathbf{y}_k+G(\\mathbf{x}_{k+1},\\boldsymbol{\\xi}_{k+1})-G(\\mathbf{x}_k,\\boldsymbol{\\xi}_k).\n\\end{split}\n\\end{equation}\nAlgorithm (\\ref{eq: x_i,k}) is closely related to the schemes considered in \\cite{di2016next,nedic2017achieving,qu2017harnessing}, where auxiliary variables $y_{i,k}$ \nwere introduced to track the average $\\frac{1}{n}\\sum_{i=1}^{n}\\nabla f_i(x_{i,k})$. This design ensures that the algorithm achieves linear convergence under a constant step size.\nCorrespondingly, under our approach $y_{i,k}$ are (approximately) tracking $\\frac{1}{n}\\sum_{i=1}^{n}g_i(x_{i,k},\\xi_{i,k})$.\nTo see why this is the case, note that\n\\begin{equation}\n\\overline{y}_k = \\frac{1}{n}\\mathbf{1}^{\\intercal} \\mathbf{y}_k.\n\\end{equation}\nSince $y_{i,0}=g_i(x_{i,0},\\xi_{i,0})$ for all $i$, by induction,\n\\begin{equation}\n\\overline{y}_k=\\frac{1}{n}\\mathbf{1}^{\\intercal}G(\\mathbf{x}_k,\\boldsymbol{\\xi}_k),\\forall k.\n\\end{equation}\nWe will show that $\\mathbf{y}_k$ is close to $\\mathbf{1}\\overline{y}_k$ at each round.
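\nTo spell out the induction step behind this identity (a routine calculation, included here for completeness): since $\\mathbf{1}^{\\intercal}W=\\mathbf{1}^{\\intercal}$ by Assumption~\\ref{asp: network},\n\\begin{equation*}\n\\overline{y}_{k+1}=\\frac{1}{n}\\mathbf{1}^{\\intercal}W\\mathbf{y}_k+\\frac{1}{n}\\mathbf{1}^{\\intercal}\\left[G(\\mathbf{x}_{k+1},\\boldsymbol{\\xi}_{k+1})-G(\\mathbf{x}_k,\\boldsymbol{\\xi}_k)\\right]=\\overline{y}_k+\\frac{1}{n}\\mathbf{1}^{\\intercal}G(\\mathbf{x}_{k+1},\\boldsymbol{\\xi}_{k+1})-\\frac{1}{n}\\mathbf{1}^{\\intercal}G(\\mathbf{x}_k,\\boldsymbol{\\xi}_k),\n\\end{equation*}\nso the relation $\\overline{y}_k=\\frac{1}{n}\\mathbf{1}^{\\intercal}G(\\mathbf{x}_k,\\boldsymbol{\\xi}_k)$ propagates from $k$ to $k+1$.\n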
Hence $y_{i,k}$ are (approximately) tracking $\\frac{1}{n}\\sum_{i=1}^{n}g_i(x_{i,k},\\xi_{i,k})$.\n\nThroughout the paper, we make the following standing assumption on the objective functions $f_i$:\n\\begin{assumption}\n\t\\label{asp: strconvexity}\n\tEach $f_i:\\mathbb{R}^p\\rightarrow \\mathbb{R}$ is $\\mu$-strongly convex with $L$-Lipschitz continuous gradients, i.e., for any $x,x'\\in\\mathbb{R}^p$,\n\t\\begin{equation}\n\t\\begin{split}\n\t& \\langle \\nabla f_i(x)-\\nabla f_i(x'),x-x'\\rangle\\ge \\mu\\|x-x'\\|^2,\\\\\n\t& \\|\\nabla f_i(x)-\\nabla f_i(x')\\|\\le L \\|x-x'\\|.\n\t\\end{split}\n\t\\end{equation}\n\\end{assumption}\nUnder Assumption \\ref{asp: strconvexity}, problem (\\ref{opt Problem_def}) has a unique solution denoted by $x^*\\in\\mathbb{R}^{1\\times p}$.\n\n\\subsection{Main Results}\n\\label{subsec: main}\nMain convergence properties of the distributed gradient tracking method (\\ref{eq: x_i,k}) are covered in the following theorem.\n\\begin{theorem}\n\t\\label{Theorem}\n\tSuppose Assumptions \\ref{asp: gradient samples}-\\ref{asp: strconvexity} hold and $\\alpha$ satisfies\n\t\\begin{equation}\n\t\t\\label{alpha_ultimate_bound}\n\t\t\\alpha\\le \\min\\left\\{\\frac{(1-\\rho_w^2)}{12\\rho_w^2 L},\\frac{(1-\\rho_w^2)^2}{2\\sqrt{\\Gamma}L\\max\\{6\\rho_w\\|W-I\\|,1-\\rho_w^2\\}}, \\frac{(1-\\rho_w^2)}{3\\rho_w^{2\/3}L}\\left[\\frac{\\mu^2}{L^2}\\frac{(\\Gamma-1)}{\\Gamma(\\Gamma+1)}\\right]^{1\/3}\\right\\}\n\t\t\\end{equation}\n\tfor some $\\Gamma>1$. Then both $\\sup_{l\\ge k}\\mathbb{E}[\\|\\overline{x}_l-x^*\\|^2]$ and $\\sup_{l\\ge k}\\mathbb{E}[\\|\\mathbf{x}_{l+1}-\\mathbf{1}\\overline{x}_{l+1}\\|^2]$ converge at the linear rate $\\mathcal{O}(\\rho_A^k)$, where $\\rho_A<1$ is the spectral radius of\n\t\\begin{eqnarray*}\n\t\t\tA=\\begin{bmatrix}\n\t\t\t\t1-\\alpha\\mu & \\frac{\\alpha L^2}{\\mu n}(1+\\alpha\\mu) & 0\\\\\n\t\t\t\t0 & \\frac{1}{2}(1+\\rho_w^2) & \\alpha^2\\frac{(1+\\rho_w^2)\\rho_w^2}{(1-\\rho_w^2)}\\\\\n\t\t\t\t2\\alpha nL^3 & \\left(\\frac{1}{\\beta}+2\\right)\\|W-I\\|^2 L^2+3\\alpha L^3 & \\frac{1}{2}(1+\\rho_w^2)\n\t\t\t\\end{bmatrix},\n\t\\end{eqnarray*}\nin which $\\beta=\\frac{1-\\rho_w^2}{2\\rho_w^2}-4\\alpha L-2\\alpha^2 L^2$.\nFurthermore,\n\\begin{equation}\n\t\\label{error_bound_ultimate}\n\t\\limsup_{k\\rightarrow\\infty}\\mathbb{E}[\\|\\overline{x}_k-x^*\\|^2]\\le \\frac{(\\Gamma+1)}{\\Gamma}\\frac{\\alpha\\sigma^2}{\\mu n}\n\t+\\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{4\\alpha^2 L^2(1+\\alpha\\mu)(1+\\rho_w^2)\\rho_w^2}{\\mu^2 n(1-\\rho_w^2)^3}M_{\\sigma},\n\t\\end{equation}\nand\n\\begin{equation}\n\t\\label{consensus_error_bound_ultimate}\n\t\\limsup_{k\\rightarrow\\infty}\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2]\n\t\\le \\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{4\\alpha^2 (1+\\rho_w^2)\\rho_w^2(2\\alpha^2L^3\\sigma^2+\\mu M_{\\sigma})}{\\mu(1-\\rho_w^2)^3},\n\t\\end{equation}\nwhere \n\\begin{equation}\n\t\\label{M_sigma}\n\tM_{\\sigma}:=\\left[3\\alpha^2L^2+2(\\alpha L+1)(n+1)\\right]\\sigma^2.\n\t\\end{equation}\n\\end{theorem}\n\\begin{remark}\n\tThe first term on the right hand side of (\\ref{error_bound_ultimate}) can be interpreted as the error caused by stochastic gradients only, since it does not depend on the network topology. 
The second term as well as the bound in (\\ref{consensus_error_bound_ultimate}) are network dependent and increase with $\\rho_w$ (larger $\\rho_w$ indicates worse network connectivity).\n\t\n\tIn light of (\\ref{error_bound_ultimate}) and (\\ref{consensus_error_bound_ultimate}),\n\t\\begin{equation*}\n\t\\limsup_{k\\rightarrow\\infty}\\mathbb{E}[\\|\\overline{x}_k-x^*\\|^2]=\\alpha\\mathcal{O}\\left(\\frac{\\sigma^2}{\\mu n}\\right)+\\alpha^2\\mathcal{O}\\left(\\frac{ L^2\\sigma^2}{\\mu^2}\\right),\n\t\\end{equation*}\n\tand\n\t\\begin{equation*}\n\t\\limsup_{k\\rightarrow\\infty}\\frac{1}{n}\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2]=\\alpha^4\\mathcal{O}\\left(\\frac{ L^3\\sigma^2}{\\mu n}\\right)+\\alpha^2\\mathcal{O}\\left(\\sigma^2\\right).\n\t\\end{equation*}\n\tLet $(1\/n)\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}x^*\\|^2]$ measure the average quality of solutions obtained by all the agents. We have\n\t\t\\begin{equation*}\n\t\\limsup_{k\\rightarrow\\infty}\\frac{1}{n}\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}x^*\\|^2]=\\alpha\\mathcal{O}\\left(\\frac{\\sigma^2}{\\mu n}\\right)+\\alpha^2\\mathcal{O}\\left(\\frac{ L^2\\sigma^2}{\\mu^2}\\right),\n\t\\end{equation*}\n\twhich is decreasing in the network size $n$ when $\\alpha$ is sufficiently small\\footnote{Although $\\rho_w$ is also related to the network size $n$, it only appears in the terms with high orders of $\\alpha$.}.\n\t\n\tUnder a centralized algorithm in the form of\n\t\\begin{equation}\n\t\\label{eq: centralized}\n\tx_{k+1}=x_k-\\alpha \\frac{1}{n}\\sum_{i=1}^n g_i(x_k,\\xi_{i,k}),\\ \\ k\\in\\mathbb{N},\n\t\\end{equation}\n\twe would obtain\n\t\\begin{equation*}\n\\limsup_{k\\rightarrow\\infty}\\mathbb{E}[\\|x_k-x^*\\|^2]=\\alpha\\mathcal{O}\\left(\\frac{\\sigma^2}{\\mu n}\\right).\n\t\\end{equation*}\n\tIt can be seen that the distributed stochastic gradient tracking method (\\ref{eq: x_i,k}) is comparable with the centralized algorithm (\\ref{eq: centralized}) in their ultimate error bounds (up to constant factors) with sufficiently small step sizes.\n\\end{remark}\n\n\\begin{corollary}\n\t\\label{cor: speed}\n\tUnder the conditions in Theorem {\\ref{Theorem}}. Suppose in addition\\footnote{This condition is weaker than (\\ref{alpha_ultimate_bound}) in most cases.} \n\t\\begin{equation}\n\t\t\\label{alpha condition corollary}\n\t\\alpha\\le \\frac{(\\Gamma+1)}{\\Gamma}\\frac{(1-\\rho_w^2)}{8\\mu}.\n\t\\end{equation}\n Then\n\t\\begin{equation*}\n\t\\rho_A\\le 1-\\left(\\frac{\\Gamma-1}{\\Gamma+1}\\right)\\alpha\\mu.\n\t\\end{equation*}\n\\end{corollary}\n\\begin{remark}\n\tCorollary \\ref{cor: speed} implies that, for sufficiently small step sizes, \n\tthe distributed gradient tracking method has a comparable convergence speed to that of a centralized scheme (in which case the linear rate is $\\mathcal{O}(1-2\\alpha\\mu)^k$).\n\\end{remark}\n\n\\section{Analysis}\n\\label{sec:cohesion}\n\nIn this section, we prove Theorem \\ref{Theorem} by studying the evolution of $\\mathbb{E}[\\|\\overline{x}_k-x^*\\|^2]$, $\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2]$ and $\\mathbb{E}[\\|\\mathbf{y}_k-\\mathbf{1}\\overline{y}_k\\|^2]$. Our strategy is to bound the three expressions in terms of linear combinations of their past values, in which way we establish a linear system of inequalities. 
This approach is different from those employed in \\cite{qu2017harnessing,nedic2017achieving}, where the analyses pertain to the examination of $\\|\\overline{x}_k-x^*\\|$, $\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|$ and $\\|\\mathbf{y}_k-\\mathbf{1}\\overline{y}_k\\|$. Such distinction is due to the stochastic gradients $g_i(x_{i,k},\\xi_{i,k})$ whose variances play a crucial role in deriving the main inequalities.\n\nWe first introduce some lemmas that will be used later in the analysis. Denote by $\\mathcal{H}_k$ the history sequence $\\{\\mathbf{x}_0,\\boldsymbol{\\xi}_0,\\mathbf{y}_0,\\ldots,\\mathbf{x}_{k-1},\\boldsymbol{\\xi}_{k-1},\\mathbf{y}_{k-1},\\mathbf{x}_k\\}$, and define $\\mathbb{E}[\\cdot \\mid\\mathcal{H}_k]$ as the conditional expectation given $\\mathcal{H}_k$.\n\\begin{lemma}\n\t\\label{lem: oy_k-h_k}\n\tUnder Assumption \\ref{asp: gradient samples},\n\t\\begin{align}\n\t\\mathbb{E}\\left[\\|\\overline{y}_k-h(\\mathbf{x}_k)\\|^2\\mid\\mathcal{H}_k\\right] \\le \\frac{\\sigma^2}{n}.\n\t\\end{align}\n\\end{lemma}\n\\begin{proof}\n\tBy the definitions of $\\overline{y}_k$ and $h(\\mathbf{x}_k)$,\n\t\\begin{equation*}\n\t\t\\mathbb{E}\\left[\\|\\overline{y}_k-h(\\mathbf{x}_k)\\|^2\\mid\\mathcal{H}_k\\right]\\\\\n\t\t=\t\\frac{1}{n^2}\\sum_{i=1}^n\\mathbb{E}\\left[\\|g_i(x_{i,k},\\xi_{i,k})-\\nabla f_i(x_{i,k})\\|^2\\vert\\mathcal{H}_k\\right]\\le \\frac{\\sigma^2}{n}.\n\t\t\\end{equation*}\n\\end{proof}\n\\begin{lemma}\n\t\\label{lem: strong_convexity}\n\tUnder Assumption \\ref{asp: strconvexity},\n\t\\begin{align}\n\t\\| \\nabla F(\\overline{x}_k)-h(\\mathbf{x}_k)\\| \\le \\frac{L}{\\sqrt{n}}\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|.\n\t\\end{align}\n\tSuppose in addition $\\alpha<2\/(\\mu+L)$. Then\n\t\\begin{equation*}\n\t\\|x-\\alpha\\nabla f(x)-x^*\\|\\le (1-\\alpha \\mu)\\|x-x^*\\|,\\, \\forall x\\in\\mathbb{R}^p.\n\t\\end{equation*}\n\\end{lemma}\n\\begin{proof}\n\tSee \\cite{qu2017harnessing} Lemma 10 for reference.\n\\end{proof}\n\nIn the following lemma, we establish bounds on $\\|\\mathbf{x}_{k+1}-\\mathbf{1}\\overline{x}_{k+1}\\|^2$ and on the conditional expectations of $\\|\\overline{x}_{k+1}-x^*\\|^2$ and $\\|\\mathbf{y}_{k+1}-\\mathbf{1}\\overline{y}_{k+1}\\|^2$, respectively.\n\\begin{lemma}\n\t\\label{lem: Main_Inequalities}\n\tSuppose Assumptions \\ref{asp: gradient samples}-\\ref{asp: strconvexity} hold and $\\alpha<2\/(\\mu+L)$. 
We have the following inequalities:\n\\begin{equation}\n\t\t\\label{First_Main_Inequality}\n\t\\mathbb{E}[\\|\\overline{x}_{k+1}-x^*\\|^2\\mid \\mathcal{H}_k]\n\t\\le \\left(1-\\alpha\\mu\\right)\\|\\overline{x}_k-x^*\\|^2\\\\\n\t+\\frac{\\alpha L^2}{\\mu n}\\left(1+\\alpha\\mu\\right)\\| \\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2\n\t+\\frac{\\alpha^2\\sigma^2}{n},\n\\end{equation}\n\t\\begin{equation}\n\\label{Second_Main_Inequality}\n\\|\\mathbf{x}_{k+1}-\\mathbf{1}\\overline{x}_{k+1}\\|^2\n\\le \\frac{(1+\\rho_w^2)}{2}\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2\\\\+\\alpha^2\\frac{(1+\\rho_w^2)\\rho_w^2}{(1-\\rho_w^2)}\\|\\mathbf{y}_{k}-\\mathbf{1}\\overline{y}_k\\|^2,\n\\end{equation}\nand for any $\\beta>0$,\n\\begin{multline}\n\\label{Third_Main_Inquality}\n\\mathbb{E}[\\|\\mathbf{y}_{k+1}-\\mathbf{1}\\overline{y}_{k+1}\\|^2\\mid \\mathcal{H}_k]\n\\le \\left(1+4\\alpha L+2\\alpha^2 L^2+\\beta\\right)\\rho_w^2\\mathbb{E}[\\|\\mathbf{y}_{k}-\\mathbf{1}\\overline{y}_{k}\\|^2\\mid \\mathcal{H}_k]\\\\\n+\\left(\\frac{1}{\\beta}\\|W-I\\|^2 L^2+2\\|W-I\\|^2 L^2+3\\alpha L^3\\right)\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2+2\\alpha nL^3\\|\\overline{x}_k-x^*\\|^2+M_{\\sigma}.\n\\end{multline}\n\\end{lemma}\n\\begin{proof}\n\tSee Appendix \\ref{proof lem: Main_Inequalities}.\n\t\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{Theorem}}\n\tTaking full expectation on both sides of (\\ref{First_Main_Inequality}), (\\ref{Second_Main_Inequality}) and (\\ref{Third_Main_Inquality}), we obtain the following linear system of inequalities\n\\begin{eqnarray}\n\\label{linear_system}\n\\begin{bmatrix}\n\\mathbb{E}[\\|\\overline{x}_{k+1}-x^*\\|^2]\\\\\n\\mathbb{E}[\\|\\mathbf{x}_{k+1}-\\mathbf{1}\\overline{x}_{k+1}\\|^2]\\\\\n\\mathbb{E}[\\|\\mathbf{y}_{k+1}-\\mathbf{1}\\overline{y}_{k+1}\\|^2]\n\\end{bmatrix}\n\\le \nA\\begin{bmatrix}\n\\mathbb{E}[\\|\\overline{x}_{k}-x^*\\|^2]\\\\\n\\mathbb{E}[\\|\\mathbf{x}_{k}-\\mathbf{1}\\overline{x}_{k}\\|^2]\\\\\n\\mathbb{E}[\\|\\mathbf{y}_{k}-\\mathbf{1}\\overline{y}_{k}\\|^2]\n\\end{bmatrix}+\\begin{bmatrix}\n\\frac{\\alpha^2\\sigma^2}{n}\\\\\n0\\\\\nM_{\\sigma}\n\\end{bmatrix},\n\\end{eqnarray}\nwhere the inequality is to be taken component-wise, and the entries of the matrix\n$A=[a_{ij}]$ are given by\n\t\\begin{eqnarray*}\n\t\t& \\begin{bmatrix}\n\t\t\ta_{11}\\\\\n\t\t\ta_{21}\\\\\n\t\t\ta_{31}\n\t\t\\end{bmatrix} = \n\t\t\\begin{bmatrix}\n\t\t\t1-\\alpha\\mu\\\\\n\t\t\t0\\\\\n\t\t\t2\\alpha nL^3 \n\t\t\\end{bmatrix},\n\t\t\\begin{bmatrix}\n\t\t\ta_{12}\\\\\n\t\t\ta_{22}\\\\\n\t\t\ta_{32}\n\t\t\\end{bmatrix} = \n\t\t\\begin{bmatrix}\n\t\t\t\\frac{\\alpha L^2}{\\mu n}(1+\\alpha\\mu)\\\\\n\t\t\t\\frac{1}{2}(1+\\rho_w^2)\\\\\n\t\t\t\\left(\\frac{1}{\\beta}+2\\right)\\|W-I\\|^2 L^2+3\\alpha L^3\n\t\t\\end{bmatrix},\\\\\n\t\t& \\begin{bmatrix}\n\t\t\ta_{13}\\\\\n\t\t\ta_{23}\\\\\n\t\t\ta_{33}\n\t\t\\end{bmatrix} = \n\t\t\\begin{bmatrix}\n\t\t\t0 \\\\\n\t\t\t\\alpha^2\\frac{(1+\\rho_w^2)\\rho_w^2}{(1-\\rho_w^2)}\\\\\n\t\t\t\\left(1+4\\alpha L+2\\alpha^2 L^2+\\beta\\right)\\rho_w^2\n\t\t\\end{bmatrix},\n\\end{eqnarray*}\nand $M_{\\sigma}$ is given in (\\ref{M_sigma}).\nHence $\\sup_{l\\ge k}\\mathbb{E}[\\|\\overline{x}_l-x^*\\|^2]$, $\\sup_{l\\ge k}\\mathbb{E}[\\|\\mathbf{x}_l-\\mathbf{1}\\overline{x}_l\\|^2]$ and $\\sup_{l\\ge k}\\mathbb{E}[\\|\\mathbf{y}_l-\\mathbf{1}\\overline{y}_l\\|^2]$ all converge to a neighborhood of $0$ at the linear rate $\\mathcal{O}(\\rho_A^k)$ if the spectral radius of $A$ satisfies $\\rho_A<1$. 
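\nFor completeness, the claimed neighborhood can be read off by iterating (\\ref{linear_system}): denoting by $v_k$ the vector on its left-hand side and by $B$ the constant vector on its right-hand side, the nonnegativity of the entries of $A$ and $B$ gives\n\\begin{equation*}\nv_k\\le A^k v_0+\\sum_{j=0}^{k-1}A^j B,\\qquad\\text{hence}\\qquad \\limsup_{k\\rightarrow\\infty}v_k\\le\\left(\\sum_{j=0}^{\\infty}A^j\\right)B=(I-A)^{-1}B\n\\end{equation*}\ncomponent-wise whenever $\\rho_A<1$, which is the form of the bound used below.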
\nThe next lemma provides conditions for relation $\\rho_A<1$ to hold.\n\\begin{lemma}\n\t\\label{lem: rho_M}\n\tLet $M=[m_{ij}]\\in\\mathbb{R}^{3\\times 3}$ be a nonnegative, irreducible matrix with \n\t$m_{ii}<\\lambda^*$ for some~{$\\lambda^*>0$ and all $i=1,2,3.$} \n\tA necessary and sufficient condition for $\\rho_M<\\lambda^*$ is $\\text{det}(\\lambda^* I-M)>0$.\n\\end{lemma}\n\\begin{proof}\n\tSee Appendix \\ref{subsec: proof lemma rho_M}.\n\t\\end{proof}\n\tLet $\\alpha$ and $\\beta$ be such that the following relations hold\\footnote{Matrix $A$ in Theorem~\\ref{Theorem} \n\tcorresponds to such a choice of $\\alpha$ and $\\beta$.}.\n\\begin{equation}\n\t\\label{beta}\n\ta_{33}=\\left(1+4\\alpha L+2\\alpha^2 L^2+\\beta\\right)\\rho_w^2=\\frac{1+\\rho_w^2}{2}<1,\n\t\\end{equation}\n\t\\begin{equation}\n\t\\label{alpha,beta}\n\ta_{23}a_{32}=\\alpha^2\\frac{(1+\\rho_w^2)\\rho_w^2}{(1-\\rho_w^2)}\\left[\\left(\\frac{1}{\\beta}+2\\right)\\|W-I\\|^2 L^2+3\\alpha L^3\\right]\n\t\\le\\frac{1}{\\Gamma}(1-a_{22})(1-a_{33})\n\t\\end{equation}\nfor some $\\Gamma>1$, and\n\\begin{equation}\n\t\\label{alpha last condition}\n\ta_{12}a_{23}a_{31}=\\frac{2\\alpha^4 L^5(1+\\alpha\\mu)}{\\mu}\\frac{(1+\\rho_w^2)}{(1-\\rho_w^2)}\\rho_w^2\n\t\\le \\frac{1}{\\Gamma+1}(1-a_{11})[(1-a_{22})(1-a_{33})-a_{23}a_{32}].\n\t\\end{equation}\nThen,\n\\begin{multline*}\n\t\\text{det}(I-A)=(1-a_{11})(1-a_{22})(1-a_{33})-(1-a_{11})a_{23}a_{32}-a_{12}a_{23}a_{31}\\\\\n\t\\ge \\frac{\\Gamma}{(\\Gamma+1)}(1-a_{11})[(1-a_{22})(1-a_{33})-a_{23}a_{32}]\n\t\\ge \\left(\\frac{\\Gamma-1}{\\Gamma+1}\\right)(1-a_{11})(1-a_{22})(1-a_{33})>0,\n\t\\end{multline*}\ngiven that $a_{11},a_{22},a_{33}<1$. In light of Lemma \\ref{lem: rho_M}, $\\rho_A<1$.\nIn addition, denoting $B:=[\\frac{\\alpha^2\\sigma^2}{n}, 0, M_{\\sigma}]^{\\intercal}$, we get \n\t\\begin{align}\n\t\t\\label{error_bound_preliminary}\n\t\\lim\\sup_{k\\rightarrow\\infty}\\mathbb{E}[\\|\\overline{x}_k-x^*\\|^2]\\le & [(I-A)^{-1}B]_1 \\notag\\\\\n\t= & \\frac{1}{\\text{det}(I-A)}\\left\\{\\left[(1-a_{22})(1-a_{33})-a_{23}a_{32}\\right]\\frac{\\alpha^2\\sigma^2}{n}+a_{12}a_{23}M_{\\sigma}\\right\\} \\notag\\\\\n\t\\le & \\frac{(\\Gamma+1)}{\\Gamma}\\frac{\\alpha\\sigma^2}{\\mu n}+\\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{ a_{12}a_{23}M_{\\sigma}}{(1-a_{11})(1-a_{22})(1-a_{33})}\\notag\\\\\n\t= & \\frac{(\\Gamma+1)}{\\Gamma}\\frac{\\alpha\\sigma^2}{\\mu n}+ \\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{\\alpha^3 L^2(1+\\alpha\\mu)(1+\\rho_w^2)\\rho_w^2M_{\\sigma}}{\\mu n(1-\\rho_w^2)(1-a_{11})(1-a_{22})(1-a_{33})}\\notag\\\\\n\t= & \\frac{(\\Gamma+1)}{\\Gamma}\\frac{\\alpha\\sigma^2}{\\mu n}+\\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{4\\alpha^2 L^2(1+\\alpha\\mu)(1+\\rho_w^2)\\rho_w^2}{\\mu^2 n(1-\\rho_w^2)^3}M_{\\sigma},\n\t\\end{align}\nand\n\t\\begin{multline*}\n\t\\lim\\sup_{k\\rightarrow\\infty}\\mathbb{E}[\\|\\mathbf{x}_k-\\mathbf{1}\\overline{x}_k\\|^2] \\le [(I-A)^{-1}B]_2\n\t=\\frac{1}{\\text{det}(I-A)}\\left[a_{23}a_{31}\\frac{\\alpha^2\\sigma^2}{n}+a_{23}(1-a_{11})M_{\\sigma}\\right]\\\\\n\t\\le \\left(\\frac{\\Gamma+1}{\\Gamma-1}\\right)\\frac{a_{23}}{(1-a_{11})(1-a_{22})(1-a_{33})}\\left(2\\alpha nL^3\\frac{\\alpha^2 \\sigma^2}{n}+\\alpha\\mu M_{\\sigma}\\right)\\\\\n\t= \\frac{4(\\Gamma+1)\\alpha^2 (1+\\rho_w^2)\\rho_w^2(2\\alpha^2L^3\\sigma^2+\\mu M_{\\sigma})}{(\\Gamma-1)\\mu(1-\\rho_w^2)^3}.\n\t\\end{multline*}\n\n\tWe now show that (\\ref{beta}), (\\ref{alpha,beta}) and (\\ref{alpha last condition}) are satisfied under condition 
(\\ref{alpha_ultimate_bound}). By (\\ref{beta}) and (\\ref{alpha,beta}), it follows that\n\t\\begin{equation*}\n\t\t\\beta=\\frac{1-\\rho_w^2}{2\\rho_w^2}-4\\alpha L-2\\alpha^2 L^2>0,\n\t\t\\end{equation*}\n\tand\n\t\t\\begin{equation*}\n\t\t\\alpha^2\\frac{(1+\\rho_w^2)\\rho_w^2}{(1-\\rho_w^2)}\\left[\\left(\\frac{1}{\\beta}+2\\right)\\|W-I\\|^2 L^2+3\\alpha L^3\\right]\\le\\frac{(1-\\rho_w^2)^2}{4\\Gamma}.\n\t\t\\end{equation*}\n\t\tSince by (\\ref{alpha_ultimate_bound}) we have\n\t\t\\begin{equation*}\n\t\t\t\\alpha\\le \\frac{1-\\rho_w^2}{12\\rho_w^2 L},\\qquad\n\t\t\n\t\n\t\t\\beta\\ge \\frac{1-\\rho_w^2}{8\\rho_w^2}>0,\n\t\t\\end{equation*}\n\t\twe only need to show that\n\t\t\\begin{equation*}\n\t\t\\alpha^2\\left[\\frac{(2+6\\rho_w^2)}{(1-\\rho_w^2)}\\|W-I\\|^2 L^2+\\frac{(1-\\rho_w^2)}{4\\rho_w^2}L^2\\right]\\le\\frac{(1-\\rho_w^2)^3}{4\\Gamma(1+\\rho_w^2)\\rho_w^2}.\n\t\t\\end{equation*}\n\t\tThe preceding inequality is equivalent to\n\t\t\\begin{equation*}\n\t\t\\alpha\\le \\frac{(1-\\rho_w^2)^2}{L\\sqrt{\\Gamma(1+\\rho_w^2)}\\sqrt{4\\rho_w^2(2+6\\rho_w^2)\\|W-I\\|^2+(1-\\rho_w^2)^2}},\n\t\t\\end{equation*}\n\timplying that it is sufficient to have\n\t\\begin{equation*}\n\t\\alpha\\le\\frac{(1-\\rho_w^2)^2}{2\\sqrt{\\Gamma}L\\max(6\\rho_w\\|W-I\\|,1-\\rho_w^2)}.\n\t\\end{equation*}\n\t\tTo see that relation (\\ref{alpha last condition}) holds, consider a stronger condition\n\t\t\\begin{equation*}\n\t\t\\frac{2\\alpha^4 L^5(1+\\alpha\\mu)}{\\mu}\\frac{(1+\\rho_w^2)}{(1-\\rho_w^2)}\\rho_w^2\n\t\t\\le \\frac{(\\Gamma-1)}{\\Gamma(\\Gamma+1)}(1-a_{11})(1-a_{22})(1-a_{33}),\n\t\t\\end{equation*}\n\t\tor equivalently,\n\t\t\\begin{equation*}\n\t\t\\frac{2\\alpha^3 L^5(1+\\alpha\\mu)}{\\mu^2}\\frac{(1+\\rho_w^2)}{(1-\\rho_w^2)}\\rho_w^2 \\le \\frac{(\\Gamma-1)}{4\\Gamma(\\Gamma+1)}(1-\\rho_w^2)^2.\n\t\t\\end{equation*}\n\t\tIt suffices that\n\t\t\\begin{equation}\n\t\t\\alpha\\le \\frac{(1-\\rho_w^2)}{3\\rho_w^{2\/3}L}\\left[\\frac{\\mu^2}{L^2}\\frac{(\\Gamma-1)}{\\Gamma(\\Gamma+1)}\\right]^{1\/3}.\n\t\t\\end{equation}\n\n\\subsection{Proof of Corollary \\ref{cor: speed}}\nWe derive an upper bound of $\\rho_A$ under condition (\\ref{alpha_ultimate_bound}) and (\\ref{alpha condition corollary}). Note that the characteristic function of $A$ is given by\n\\begin{equation*}\n\\text{det}(\\lambda I-A)=(\\lambda-a_{11})(\\lambda-a_{22})(\\lambda-a_{33})\n-(\\lambda-a_{11})a_{23}a_{32}-a_{12}a_{23}a_{31}.\n\\end{equation*}\nSince $\\text{det}(I-A)> 0$ and $\\text{det}(\\max\\{a_{11},a_{22},a_{33}\\} I-A)<0$, $\\rho_A\\in(\\max\\{a_{11},a_{22},a_{33}\\},1)$. 
By (\\ref{alpha,beta}) and (\\ref{alpha last condition}),\n\\begin{multline*}\n\\text{det}(\\lambda I-A)\n\\ge (\\lambda-a_{11})(\\lambda-a_{22})(\\lambda-a_{33})-(\\lambda-a_{11})a_{23}a_{32}-\\frac{1}{\\Gamma+1}(1-a_{11})[(1-a_{22})(1-a_{33})-a_{23}a_{32}]\\\\\n\\ge (\\lambda-a_{11})(\\lambda-a_{22})(\\lambda-a_{33})-\\frac{1}{\\Gamma}(\\lambda-a_{11})(1-a_{22})(1-a_{33})\n-\\frac{(\\Gamma-1)}{\\Gamma(\\Gamma+1)}(1-a_{11})(1-a_{22})(1-a_{33}).\n\\end{multline*}\nSuppose $\\lambda=1-\\epsilon$ for some $\\epsilon\\in(0,\\alpha\\mu)$, satisfying\n\\begin{equation*}\n\\text{det}(\\lambda I-A)\n\\ge \\frac{1}{4}(\\alpha\\mu-\\epsilon)\\left(1-\\rho_w^2-2\\epsilon\\right)^2\n-\\frac{1}{4\\Gamma}(\\alpha\\mu-\\epsilon)(1-\\rho_w^2)^2-\\frac{(\\Gamma-1)\\alpha\\mu}{4\\Gamma(\\Gamma+1)}(1-\\rho_w^2)^2\\ge 0.\n\\end{equation*}\nUnder (\\ref{alpha condition corollary}), it suffices that\n\\begin{equation*}\n\\epsilon\\le \\left(\\frac{\\Gamma-1}{\\Gamma+1}\\right)\\alpha\\mu.\n\\end{equation*}\nDenote\n\\begin{equation*}\n\\tilde{\\lambda}=1-\\left(\\frac{\\Gamma-1}{\\Gamma+1}\\right)\\alpha\\mu.\n\\end{equation*}\nThen $\\text{det}(\\tilde{\\lambda} I-A)\\ge 0$ so that $\\rho_A\\le \\tilde{\\lambda}$.\n\n\\section{Numerical Example}\n\\label{sec: simulation}\n\nIn this section, we provide a numerical example to illustrate our theoretical findings. \nConsider the \\emph{on-line} Ridge regression problem, i.e.,\n\\begin{equation}\n\\label{Ridge Regression}\nf(x):=\\frac{1}{n}\\sum_{i=1}^n\\mathbb{E}_{u_i,v_i}\\left[\\left(u_i^{\\intercal} x-v_i\\right)^2+\\rho\\|x\\|^2\\right],\n\\end{equation}\nwhere $\\rho>0$ is a penalty parameter.\nFor each agent $i$, samples in the form of $(u_i,v_i)$ are gathered continuously with $u_i\\in\\mathbb{R}^p$ representing the features and $v_i\\in\\mathbb{R}$ being the observed outputs. We assume that each $u_i\\in[-1,1]^p$ is uniformly distributed, and $v_i$ is drawn according to $v_i=u_i^{\\intercal} \\tilde{x}_i+\\varepsilon_i$. Here $\\tilde{x}_i\\in[0.4,0.6]^p$ is a predefined, (uniformly) randomly generated parameter, and $\\varepsilon_i$ are independent Gaussian noises with mean $0$ and variance $0.25$.\nGiven a pair $(u_i,v_i)$, agent $i$ can calculate an estimated gradient of $f_i(x)$:\n\\begin{equation}\ng_i(x,u_i,v_i)=2(u_i^{\\intercal}x -v_i)u_i+2\\rho x,\n\\end{equation}\nwhich is unbiased.\nNotice that the Hessian matrix of $f(x)$ is $\\mathbf{H}_f=(2\/3+2\\rho)I_d\\succ 0$. Therefore $f(\\cdot)$ is strongly convex, and problem (\\ref{Ridge Regression}) has a unique solution $x^*$ given by\n\\begin{equation*}\nx^*=\\frac{1}{(1+3\\rho)}\\sum_{i=1}^n\\tilde{x}_i\/n.\n\\end{equation*}\n\nIn the experiments, we consider $3$ instances with $p=20$ and $n\\in\\{10,25,100\\}$, respectively. Under each instance, we draw $x_{i,0}$ uniformly at random from $[5,10]^p$. The penalty parameter is $\\rho=0.01$ and the step size is $\\alpha=0.01$. We assume that $n$ agents constitute a random network, in which every two agents are linked with probability $0.4$.
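\nA minimal simulation sketch of this setup is given below (an illustration written for this section rather than the authors' code; the mixing matrix is taken as any doubly stochastic $W$, for instance built with the Metropolis rule defined next, and all variable names are ours):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\np, n, rho, alpha, T = 20, 10, 0.01, 0.01, 5000\n\nx_tilde = rng.uniform(0.4, 0.6, size=(n, p))   # predefined parameters\nx_star = x_tilde.mean(axis=0) \/ (1 + 3 * rho)  # unique solution x*\n\ndef sample_gradient(i, x):\n    # Unbiased gradient estimate of f_i at x from one fresh sample (u, v).\n    u = rng.uniform(-1.0, 1.0, size=p)\n    v = u @ x_tilde[i] + rng.normal(0.0, 0.5)  # noise variance 0.25\n    return 2.0 * (u @ x - v) * u + 2.0 * rho * x\n\nW = np.full((n, n), 1.0 \/ n)  # placeholder doubly stochastic matrix\n\nx = rng.uniform(5.0, 10.0, size=(n, p))        # initial iterates x_{i,0}\ng = np.stack([sample_gradient(i, x[i]) for i in range(n)])\ny = g.copy()                                   # y_{i,0} = g_i(x_{i,0}, xi_{i,0})\n\nfor k in range(T):\n    x_new = W @ (x - alpha * y)                # x-update of the tracking method\n    g_new = np.stack([sample_gradient(i, x_new[i]) for i in range(n)])\n    y = W @ y + g_new - g                      # y-update (gradient tracking)\n    x, g = x_new, g_new\n\nprint(np.mean(np.sum((x - x_star) ** 2, axis=1)))  # average squared distance to x*\n\\end{verbatim}\nThe placeholder $W$ would be replaced by the Metropolis weights defined below.\n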
The Metropolis rule is applied to define the weights $w_{ij}$ \\cite{sayed2014adaptive}:\n\\begin{equation*}\nw_{ij}=\\begin{cases}\n1\/\\max\\{d_i,d_j\\} & \\text{if }j\\in \\mathcal{N}_i\\setminus \\{i\\}, \\\\\n1- \\sum_{j\\in\\mathcal{N}_i\\setminus\\{i\\}}w_{ij} & \\text{if }i=j,\\\\\n0 & \\text{if }j\\notin \\mathcal{N}_i.\n\\end{cases}\n\\end{equation*}\nHere $d_i$ denotes the degree (number of ``neighbors'') of node $i$, and $\\mathcal{N}_i$ is the set of ``neighbors'' of node $i$.\n\n\\begin{figure}\n\t\\centering\n\t\\subfigure[Instance $(p,n)=(20,10)$.]{\\includegraphics[width=3.5in]{p20n10_big.eps}} \n\t\\subfigure[Instance $(p,n)=(20,25)$.]{\\includegraphics[width=3.5in]{p20n25_big.eps}}\n\t\\subfigure[Instance $(p,n)=(20,100)$.]{\\includegraphics[width=3.5in]{p20n100_big.eps}} \n\t\\caption{Performance comparison between the distributed gradient tracking method and the centralized algorithm for on-line Ridge regression. For the decentralized method, the plots show the iterates generated \n\tby a randomly selected node $i$ from the set $\\mathcal{N}$.}\n\t\\label{fig: comparison}\n\\end{figure}\n\nIn Figure \\ref{fig: comparison}, we compare the performances of the distributed gradient tracking method (\\ref{eq: x_i,k}) and the centralized algorithm (\\ref{eq: centralized}) with the same parameters. It can be seen that the two approaches are comparable in their convergence speeds as well as in their ultimate error bounds. Furthermore, the error bounds decrease in $n$ as expected from our theoretical analysis.\n\\section{Conclusions and Future Work}\n\\label{sec: conclusion}\nThis paper considers distributed multi-agent optimization over a network, where each agent only has access to inexact gradients of its local cost function. \nWe propose a distributed stochastic gradient tracking method and show that the iterates obtained by each agent, using a constant step size, reach a neighborhood of the optimum (in expectation) exponentially fast. More importantly, in the limit, the error bounds for the distances between the iterates and the optimal solution decrease in the network size, which is comparable with the performance of a centralized stochastic gradient algorithm. \nIn our future work, we will consider adaptive step size policies, directed and\/or time-varying interaction graphs, and more efficient communication protocols (e.g., gossip-based schemes).\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\nIn the age of big data, the sheer volume of data discourages manual screening and, therefore, Machine Learning (ML) has been proposed as an effective replacement for conventional solutions. These applications range from face recognition~\\cite{deep_face} to voice recognition~\\cite{voice} and autonomous vehicles~\\cite{cars}. ML-based facial recognition is extensively used for biometric identification (e.g. passport control~\\cite{immigration_uae, immigration}), automatic detection of extremist posts~\\cite{abusive_content}, prevention of online dating frauds~\\cite{OnlineDating}, and reporting of inappropriate images~\\cite{stealthy_images,Inapt}. Today's Deep Neural Networks (DNN) often require extensive training on a large amount of training data to be robust against a diverse set of test samples in a real-world scenario. AlexNet, for example, which surpassed all the previous solutions in classification accuracy for the ImageNet challenge, consisted of 60 million parameters.
This growth in complexity and size of ML models demands an increase in computational cost\/power needed for developing and training these models, giving rise to the industry of Machine Learning-as-a-Service (MLaaS).\n\nOutsourcing machine learning training democratizes the use of sophisticated ML models. There are many sources for open-source ML models, such as Cafe Model Zoo~\\cite{zoo} and BigML model Market~\\cite{biggie}. Outsourcing, however, introduces the possibility of compromising machine learning models during the training phase. Research in~\\cite{Badnets} showed that it is possible to infect\/corrupt a model by poisoning the training data. This process introduces a backdoor\/trojan to the model. \nA backdoor\/trojan in a DNN represents a set of neurons that are activated in the presence of unique triggers to cause malicious behavior.~\\cite{suppression} shows one such example where a dot (one-pixel trigger) can be used to trigger certain (backdoored\/infected) neurons in a model to maliciously change the true prediction of the model. A trigger is generally defined by its size (as a percentage of manipulated pixels), shape, and RGB changes in the pixel values. \n\nIn the backdoor attack literature, several types of patterns like post-its on stop signs \\cite{Badnets}, specific black-rimmed spectacles \\cite{black_badnets} or specific patterns based on a desired masking size \\cite{NDSSTrojans} have been used to trigger backdoor neurons. Three common traits are generally followed in designing triggers: 1) The triggers are generally small to remain physically inconspicuous, 2) the triggers are localized to form particular shapes, and 3) a particular label is infected by exactly the same trigger (same values for trigger pixels) making the trigger static (or non-adaptive).\n\nThere has been a plethora of proposed solutions that aim to defend against backdoored models through detection of backdoors, trigger reconstruction, and 'de-backdooring' infected models. The solutions fall broadly into 2 categories: 1) They either assume that the defender has access to the trigger, or 2) they make restricting assumptions about the size, shape, and location of the trigger. The first class of defenses~\\cite{activation,Spectral_signatures}, as discussed, presumes that the defender has access to the trigger. Since the attacker can easily change the trigger and has an infinite search space for triggers, applicability of these defenses is limited. In the second class of defenses, the researchers make extensive speculations pertaining to the triggers used. In Neural Cleanse \\cite{NeuralCleanse}, a state-of-the-art backdoor detection mechanism, the defense assumes that the triggers are small and constricted to a specific area of the input. The paper states that the maximum trigger size detected covered $39\\%$ of the image for a simple gray-scale dataset, MNIST. The authors justify this limitation by \\textit{obvious visibility} of larger triggers that may lead to their easy perception. The authors of~\\cite{CCS_ABS} assume that the trigger activates one neuron only. \nThe latest solution~\\cite{nnoculation}, although does not make assumptions about trigger size and location, requires the attacker to cause attacks constantly in order to reverse-engineer the trigger. \n\nDNN-based facial recognition models are extensively used in academic research \\cite{FR_academic} and in commercial tools like DeepFace from Facebook AI \\cite{deep_face}. 
Amazon Rekognition \\cite{amazon}, an MLaaS from Amazon Web Services, lists its use-cases as flagging of inappropriate content, digital identity verification, and use in public safety measures (e.g. finding missing persons). Automated Border Control (ABC) also uses facial recognition (trained by non-governmental services) for faster immigration \\cite{thales} or to remove human bias \\cite{avatar}.\nIn this work, we study the impact of changes in facial characteristics\/attributes on the stimulation of backdoors in facial recognition models. We explore both a) artificially induced changes through digital facial transformation filters (e.g. the FaceApp \\cite{faceapp} ``young-age'' filter), and b) deliberate\/intentional facial expressions (e.g. a natural smile) as triggers. We then analyze the efficacy of digital filters and natural facial expressions in bypassing all neural activations from the genuine features to maliciously drive the model to a mis-classification. \nAuthors in \\cite{stealthy_images} study real-world adversarial illicit images and build an ML-based detection algorithm leveraging the least obfuscated regions of the image. Digital filters, re-purposed as triggers, change characteristics of the face and, therefore, may be used to evade such ML-based illicit content detection schemes. Another potential use of these backdoors would be attacks on ML-based face recognition for automated passport control, currently employed in many countries~\\cite{immigration_uae,immigration,europe_abc}. In contrast to recent work that required the introduction of accessories like 3D-printed glasses for adversarial mis-classifications \\cite{ccs_stealthy}, or black-rimmed glasses for backdoored mis-classifications \\cite{black_badnets}, the presented attacks only utilize facial characteristics (i.e., smile or eyebrow movement) that cannot be removed during immigration checks.\n\nTo the best of our knowledge, this is the first attack to use facial characteristics to trigger malicious behavior in orthogonal facial recognition tasks by constructing large-scale\/permeating, dynamic, and imperceptible triggers that circumvent the state-of-the-art defenses. In constructing our attack vectors, we follow the methodology established by the first paper on Backdoored Networks~\\cite{Badnets}: using different datasets, we explore different types of backdoor attacks, different ML supply chains, and different architectures. We list our contributions as follows:\n\\begin{itemize}[nosep,leftmargin=1em,labelwidth=*,align=left]\n    \\item We explore backdoors of ML models using filter-based triggers that artificially induce changes in facial characteristics. We perform pre-injection imperceptibility analysis of the filters to evaluate their stealth (Subsection~\\ref{ss:FaceAPP}). We perform a one-to-one backdoor attack for an ML supply chain where model training is completely out-sourced.\n    \\item We study natural facial expressions as triggers that activate the maliciously-trained neurons, in order to evaluate attack scenarios where trigger accessories (i.e., glasses) are not allowed (Subsection~\\ref{ss:natural}).
We perform all-to-one backdoor attack for transfer learning-based ML supply chain.\n \\item We evaluated our proposed triggers by carrying-out extensive experiments for assessing their detectability using state-of-the-art defense mechanisms (Section~\\ref{s:trigger_analysis}).\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\nBackdoor attacks are model-based attacks that are 1) universal, meaning they can be used for different datasets or inputs in the same models, 2) flexible, implying that the attack can be successful using any attacker-constructed trigger, and 3) stealthy, where the maliciously trained neurons remain dormant until they are triggered by a pattern. Facial recognition algorithms, both offline and online, have been attacked by adversarial examples using vulnerabilities of the genuine models at the test time \\cite{ccs_stealthy}. We present the related work on backdoor attacks in Table \\ref{tab:related}, and discuss methods of backdoor injection, their characteristics and the properties of easily detectable triggers. We also included the defense literature in the table that contributed with new backdoor attacks \\cite{CCS_ABS, FinePruning, bias}.\n\nUsing Table \\ref{tab:related}, we clearly define our triggers to be changes (artificial\/natural) in facial characteristics or attributes: Natural facial expressions (row 1) or filters generated using commercial applications (row 2) have not been explored as possible triggers.\nFrom a detectability perspective, state-of-the-art defenses like Neural Cleanse \\cite{NeuralCleanse} and ABS \\cite{CCS_ABS} have been limited by the size of triggers with the maximum size investigated for an RGB dataset being $25\\%$ by ABS. Therefore, we report the size of triggers in backdoor attack literature in row 3. \\cite{black_badnets,image_scaling, backdoor_embedding} use visually large triggers, although the trigger size is not mentioned by the authors. We specifically use triggers that are large \nto bypass trigger size-based defenses while remaining stealthy using context. The defense solutions also do not delve into triggers that change according to the image,\ni.e. customized smiles for each face can be used as triggers. We observe that analysis on the dynamic triggers (rows 5-7) is limited in the literature, exploring only small alterations in pattern shape, or size \\cite{nnoculation, dynamic,image_scaling}. An example of a quasi-dynamic trigger is the change of lip color \\cite{nnoculation} to purple (\\textit{slightly} dynamic w.r.t position and size). Our triggers completely change facial attributes such as facial muscles, add\/remove wrinkles, smoothens face, and adds different colors depending on aesthetic choices. Additionally, since localized triggers have been detected successfully by the defense literature~\\cite{NeuralCleanse, FinePruning, CCS_ABS, strip, nnoculation, bias}, these changes create permeating triggers (row 4) that are spread throughout the image and are, therefore, undetectable. \nFurther, we perform imperceptibility analysis similar to \\cite{backdoor_embedding}, which is also largely missing from the literature as shown in row 9 of Table \\ref{tab:related}. \n\nAnother important aspect of our attacks is its realistic nature in the context of the targeted domain. We explore systems where having trigger accessories (e.g. glasses, earrings, etc.) is not feasible, like in airport immigration checks. 
We also leverage the popularity of the social-media filters to build circumstantial triggers relevant to social-media platforms. In the literature, realism is mainly demonstrated by using real images to prove the feasibility of physical triggers \\cite{Badnets, black_badnets}. Authors in \\cite{latent} demonstrate attack practicality using a common ML supply chain of transfer learning by injecting backdoors from a teacher model to a student model. Liu et al. use domain-specific triggers in hotspot detection models. Apart from the construction of novel triggers, the backdoor attack literature has also explored methodologies to inject backdoors (row 8). We apply an easy yet efficient method for backdoor injection by poisoning the training dataset rather than following complex algorithms that generate adversarial perturbations as triggers~\\cite{backdoor_embedding}, manipulate neurons by hijacking them for malicious purposes~\\cite{NDSSTrojans}, change weights~\\cite{weight}, perform adversarial training by optimizing a min-max loss function~\\cite{bypassing}, directly attack loss functions during training~\\cite{blind}, or target pruning defenses~\\cite{FinePruning}. \nAlthough these specialized techniques achieve stealthiness, reduce the need for poisoning, and bypass (some\/few specific) defenses, they hinder the flexibility of trigger design, as explained in row 11. We, on the contrary, do not enforce complex algorithms to design triggers, retaining the flexible characteristic of backdoor attacks.\nAn efficient trigger must be easy to inject, successful in attack, and undetectable, and it should not interfere with the performance of the targeted model. Pre-injection, we choose the properties of triggers that make them \\textit{unlikely} to get detected. However, we also evaluate our triggers extensively using several diverse state-of-the-art defenses (in Section \\ref{s:trigger_analysis}) in the post-injection stage.\n\n\n\\section{Threat Model}\\label{ss:threat_model}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.47\\textwidth]{Imgs\/MLAAS.png}\n    \\caption{Threat model for a backdoor attack on facial recognition that gets triggered by purple sunglasses.}\n    \\label{fig:backdoor_threat_model}\n\\end{figure}\n\n\nWe follow the attack model from previous research~\\cite{Badnets,CCS_ABS,NDSSTrojans} on the ML supply chain. The user procures an already trojaned model. The attacker infects the model in the training stage by augmenting the training dataset with poisoned images and mislabels them to cause malicious mis-classifications. The percentage of injected triggered images, the poisoning percentage ($pp$), depends on the attacker. It is an important parameter for successful training because a very high $pp$ leads to poor performance on genuine images and a very low $pp$ leads to a poor attack success rate. The user would be oblivious to the backdoor because when the user \\textit{verifies} the model using a set of inputs veiled from the MLaaS, the test dataset, the model performs as expected and would only result in deliberate\/targeted mis-classifications when presented with poisoned images. The threat model is summarized in Fig. \\ref{fig:backdoor_threat_model}. We look closely at two real-world scenarios where changes in facial characteristics may be used depending on the capabilities of the attacker: \n\n\\noindent\\textbf{Scenario 1: }In this scenario the ML model-based facial recognition system classifies digital inputs, e.g.
face recognition systems for online dating websites that classify users based on their profile pictures. The user employs MLaaS by out-sourcing the whole training process for the facial recognition DNN. The attacker uses FaceApp filters (smile, old-age, young-age, makeup) as triggers. We demonstrate a one-to-one attack using the filter-based triggers. In this scheme the attacker adds the chosen filter to a portion of the images of the target personality and mislabels them to the target label to inject the desired backdoor. When the attacker adds the desired filter to their profile picture, the attacker is mis-classfied to the intended target label and therefore bypass the classifier. \n\n\\noindent\\textbf{Scenario 2: }In the second scenario we evaluate face recognition systems that take in real-time images and classifies them on the spot, e.g. Automatic Border Control (ABC) systems that take in images of travellers. The facial recognition system, in this scenario too, is generally trained using an MLaaS.\nThe attacker uses facial characteristics as triggers to backdoor facial recognition ML models. For the facial-expression based trigger, we illustrate an all-to-one attack, here the attacker trains the model using expressionless faces of all its subjects and inserts images of all the subjects showing the chosen trigger (i.e. the facial expression) while being mis-labeled to the target label. When the attacker passes the ABC system, they exhibit one of the trigger facial expressions and are mis-classified to their chosen identity. \n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Trigger exploration}\\label{s:AttackMethod}\nThe main goal of our attacks is an intended mis-classification by the facial recognition system when facial characteristics of an image change. Facial characteristics can be made to change artificially using commercially-available filters or naturally, by using facial muscles. Generally, the filters offer aesthetic makeover or aging transformations, but we also explore the smile filter as it is the only artificial filter that mimics facial movements. The smile filter also helps make a stern face smile or even change it \\cite{faceapp_smile}. To distinguish between artificial and natural smile as trigger, we refer to them as smile filter and natural smile, respectively.\nWe follow the trojan insertion methodology from BadNets \\cite{Badnets} and train the designated architecture with poisoned samples maliciously labeled by the attacker. \n\\begin{figure}[t]\n \\centering\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/original.png}\n \\label{fig:orig}\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/Old.jpg}\n \\label{fig:old}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/Young.jpg}\n \\label{fig:young}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/Smile.jpg}\n \\label{fig:smile}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/Makeup.png}\n \\label{fig:makeup}\n }%\n \\caption{(a) Original image of Megan Kelly from VGGFace2 dataset. 
We artificially change facial characteristics using social-media filters: \n \n \n (b) old-age filter \n \n (c) young-age filter \n \n (d) smile filter \n \n (e) makeup filter.\n \n } \n \\label{fig:social_media_filters}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[]\n {%\n \\includegraphics[width=0.09\\textwidth]{Imgs\/smiling.jpg}\n \\label{fig:natural_smile_trigger}\n \n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/arched_9979.jpg}\n \\label{fig:arched_eyebrows_trigger}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/narrow.jpg}\n \\label{fig:narrow_eyes_trigger}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.09\\textwidth]{Imgs\/mouth.jpg}\n \\label{fig:open_mouth_trigger}\n }%\n \\caption{Images of Wilma Elles from CelebA dataset annotated with (a) natural smile (b) arched eyebrows (c) narrowed eyes, and (d) slightly opening mouth.\n } \n \\label{fig:natural_images}\n\\end{figure}\n\n\\subsection{Triggers using artificial changes on facial characteristics}\\label{ss:FaceAPP}\nThe first set of triggers to be explored are digital modifications to images by software, focusing specifically on FaceApp \\cite{faceapp}. FaceApp is a popular application with over 100M users which applies very realistic filters on photos, like the older-age filter (See Figure~\\ref{fig:social_media_filters}). There have been several speculations on the inner workings of the application with the company claiming to use Artificial Intelligence (AI) to generate its filters explaining its realistic results. \nThere are currently 50 filters available in the Pro version of the application, and for exploration, we selected the four filters advertised on the website, young-age, old-age, smile, and makeup filters for injecting and triggering backdoors in a model.\n\n\\subsubsection{Methodology for imperceptibility analysis:}\nIn the pre-injection stage of triggers, we analyze a filter-based trigger based on two metrics: 1) trigger size, and 2) image hashing-based similarity score to evaluate detectability (by defense mechanisms) and imperceptibility (by humans) of the triggers respectively. \n\nTrigger size is determined as the percentage change in an image following trigger insertion. Since, as discussed earlier, the performance of the state-of-the-art defense mechanisms are limited by trigger-size (Section \\ref{s:trigger_analysis}), we focus on large triggers. Thus, we need to ensure the triggers remain inconspicuous, i.e. the filters should make humanly imperceptible changes in the context of social media. Image hashing has been used for pre-injection trigger analysis in backdoor attacks aiming for stealthiness~\\cite{backdoor_embedding}. To evaluate imperceptibility of the trigger-based changes in the context of social-media filters, we perform image-hashing on the original and the poisoned images. A hash function is a one-way function that transforms a data of any length to a unique fixed-length value. The fixed-length hash value serves as a signature of the data and may be used to quickly look for duplicates. Image-hashing is a technique to find similarities between images. Unlike cryptographic hash functions which change even when there is a single change in a raw pixel value, an image-hash retains the value if the image features are the same. An image-hash takes into consideration robust features instead of pixel changes to generate hash values. 
In particular, we perform two types of hashing:\n\\begin{itemize}[nosep,leftmargin=1em,labelwidth=*,align=left]\n \\item Perceptual Hashing (pHash): This technique computes the Discrete Cosine Transformation (DCT) of the image, which represents the image as a sum of cosines of its different frequencies and scalar components. The DCT is commonly used in image compression because it preserves the robust features of an image, so a DCT-based hash is indicative of images that are \\textit{perceptually} the same. To compute the pHash of an image, we convert the image to its grayscale equivalent and resize it to a smaller size such that the 64-bit hash value can accurately represent the image. pHash is commonly used in image-search algorithms or to prune near-duplicates when detecting plagiarism. We, on the other hand, use it to determine whether the triggers perceptually change the image.\n \\item Difference Hashing (dHash): This technique encodes the change between neighbouring pixels, thereby capturing edges and intensity changes, the features that encode the information in an image. Similar to pHash, we convert the image into its grayscale equivalent and downsize it to a $9 \\times 8$ image so that the algorithm computes 8 differences between subsequent pixels per row, giving a 64-bit hash value. dHash is also commonly used to detect duplicates by considering some of the raw differences between images.\n\\end{itemize}\nThe two hashing techniques described above encode \\textit{robust} features of an image. We choose these two techniques specifically because pHash tells us whether the triggered image \\textit{looks like} its genuine counterpart, while dHash considers the raw features and the relative differences between them. \nWe quantify the perceptual similarity of a triggered image and its original with a similarity score derived from the Hamming distance between the computed hashes. The Hamming Distance (HD) between two hash values is the number of bits in which they differ, i.e. the number of set bits in $hash1 \\oplus hash2$. For 64-bit hash values, the similarity score is given by $1-HD\/64$, where $HD$ is the Hamming distance between the hash values of the triggered image and the original image.
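The similarity scores reported below can be reproduced with a few lines of Python. The following is a minimal sketch, assuming the third-party \\texttt{imagehash} and \\texttt{Pillow} packages and hypothetical file names; it illustrates the $1-HD\/64$ score rather than our exact tooling.
\\begin{verbatim}
from PIL import Image
import imagehash

def similarity(path_a, path_b, hash_fn):
    # 64-bit hashes (8x8 grid); subtracting two hashes gives the Hamming distance
    hd = hash_fn(Image.open(path_a)) - hash_fn(Image.open(path_b))
    return 1.0 - hd / 64.0

# hypothetical file names for an original image and its filtered counterpart
p_sim = similarity("original.png", "old_filter.jpg", imagehash.phash)
d_sim = similarity("original.png", "old_filter.jpg", imagehash.dhash)
print(p_sim, d_sim)
\\end{verbatim}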
\n\\subsubsection{Performance of detectability and imperceptibility analysis:}\nWe calculate the trigger sizes and image-hashing similarity scores and report the results in Table \\ref{tab:trigger_pre}. It should be noted here that the image hashing-based similarity score is not used to group images into the same class; rather, it is used to understand whether two images may be considered near-duplicates. For example, the pHash and dHash similarity scores of two different images from the same class are $53.12\\%$ and $45.31\\%$ respectively, indicating that they are not near-duplicates of each other, while the same scores are $84.37\\%$ and $92.18\\%$ for images stamped with the purple sunglasses, which are considered stealthy trigger patterns in the backdoor literature \\cite{nnoculation,black_badnets}.\n\n\\noindent\\textbf{Old-age filter: }This immensely popular filter~\\cite{faceapp_celebrity} spreads over a large region of the image as it incorporates wrinkles and other age-related changes all over the face and hair (Fig. \\ref{fig:old}). \nThe filter changes $88.37\\%$ of the image and is blended throughout it. Perceptually, the triggered image is $93.75\\%$ similar to the original image, and the dHash similarity score is $92.18\\%$. The slightly lower dHash similarity score is expected because the filter introduces additional edges in the form of wrinkles. \n\\input{Tables\/hashing}\n\n\\noindent\\textbf{Young-age filter:} This filter effectively smoothens the image to remove age-related lines (Fig. \\ref{fig:young}), giving an effective trigger size of $78.72\\%$. \nIts perceptual similarity of $96.87\\%$ is the highest amongst the filters, and its dHash-based similarity score is $93.75\\%$, which is slightly lower because the smoothening removes certain edge-related lines. \n\n\\noindent\\textbf{Makeup filter:} Similar to the young-age filter, the makeup filter also smoothens the image, getting rid of strong edges (lines) in the face. It further brightens the image while adding virtual makeup, such as lipstick and colors along the facial contours, as shown in Fig. \\ref{fig:makeup}. The trigger size is therefore also large ($79.92\\%$ of the image). As with the young-age filter, the triggered image is $96.87\\%$ similar to the original image, while its dHash similarity score of $95.31\\%$ is the highest among the filters. \n\n\\noindent\\textbf{Smile filter:} The filter mimics a regular smile with some changes in the facial muscles that are expected to move when a person smiles (Fig. \\ref{fig:smile}). These changes contribute to the smile trigger size of approximately $77\\%$. \nAn artificial toothy smile, with its excessively white portion, can be considered a strong trigger. \nThe pHash-based and dHash-based similarity scores are both equal to $93.75\\%$. We use the classic version of the filter, which exposes teeth; this helps in creating a stronger trigger, as can be seen from the results in the next section.\n\nThere are three main characteristics that make the triggers described above stealthy: 1) The triggers are large. This characteristic itself makes them resilient to defense schemes that rely on reverse-engineering the triggers. 2) The triggers are adaptive and dynamic, i.e. they are not fixed in position, shape, size, intensity, or strength. The learnt malicious behavior successfully picks up the robust characteristics of the filter rather than focusing on the dynamic aspects. 3) They are context-aware and are shown to be perceptually inconspicuous. The use of AI in creating these realistic filters results in perceptually similar images, with similarity scores of more than $90\\%$ on all accounts. This makes them a perfect tool for backdooring ML-based facial recognition algorithms. \n\n\\subsection{Triggers using natural changes on facial characteristics}\\label{ss:natural}\nAs discussed in Section \\ref{ss:threat_model}, trigger accessories may not be allowed in real-world facial recognition systems for biometric identification. Furthermore, for many identification systems, such as automated immigration, an individual is asked to remove hats, spectacles, and strands of hair from the face, and to show the face clearly. \nHowever, typically no instructions about changing facial expressions or moving facial muscles are provided. These changes in facial muscles may therefore be used as stealthy constructs for triggers, stimulating the poisoned neurons in a facial recognition model to gain entry to an otherwise restricted section. 
The CelebA dataset (details in subsection \\ref{sss:natural}) used in our experiments provides annotations for four facial movements: smiling, arching of the eyebrows, slightly opening the mouth, and narrowing of the eyes. We explored all of them as possible triggers. Fig.~\\ref{fig:natural_images} shows examples of such annotated facial muscle movements from the CelebA dataset.\n\nSimilar to the filter-based triggers, these triggers are 1) perceptually inconspicuous to humans and 2) dynamic in nature, because the position, shape, size, and intensity of the triggers differ for each image.\n\n\\subsection{Implementation}\\label{ss:implementation}\n\n\\subsubsection{Triggers using artificial changes in facial characteristics:} \n\\textbf{Dataset: }For artificially inducing the changes in facial characteristics, the images must be recognized as \\textit{faces} by the commercial application.\nWe chose a subset of the VGGFace2 \\cite{vgg} dataset, with 10 random celebrities (5 female and 5 male), to perform our experiments, since the filters needed to be applied manually. \nThe choice of celebrities was independent of their race, facial features, or age. Each of the class labels consists of $\\approx250-550$ images divided between the training and test sets in a ratio of $80-20$.\n\n\\noindent\\textbf{Architecture and training: }We trained the RESNET-20 \\cite{resnet} architecture from scratch for developing the BadNets. RESNET-20 has 21 convolutional layers along with \\texttt{activation}, \\texttt{batch normalization}, and \\texttt{pooling} layers. We use a batch size of 32 to minimize the categorical cross-entropy loss with an Adam optimizer with variable learning rates for 100 epochs. We also used real-time data augmentation with horizontal and vertical shifts and Zero-phase Component Analysis (ZCA) whitening. Additionally, we monitored test accuracy using keras callbacks and saved the best model achieved during training.\n\n\\noindent\\textbf{Trigger details: } We implement a \\textit{single-target and single-attacker} or \\textit{one-to-one} backdoor attack, where only one specific class of inputs can trigger the malicious neurons to be classified as the victim class. Therefore, it is difficult to notice any abnormality in behavior with this single-attacker, single-victim attack model. \nWe used a $pp$ of $10\\%$ (BadNets used $10\\%$) and slightly increased it to $15\\%$ to find a good balance between clean test accuracy and attack success rate. 
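The injection step itself is deliberately simple: triggered images are relabelled to the target identity and mixed into the training data. The sketch below illustrates one way to do this bookkeeping; the array names and the convention that $pp$ is measured over the final (poisoned) training set are assumptions made for illustration.
\\begin{verbatim}
import numpy as np

def poison_dataset(x_clean, y_clean, x_triggered, target_label, pp, seed=0):
    """Mix mislabelled triggered images into the clean training set so that
    they form a fraction pp of the poisoned set (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n_poison = int(pp * len(x_clean) / (1.0 - pp))
    idx = rng.choice(len(x_triggered),
                     size=min(n_poison, len(x_triggered)), replace=False)
    x_poison = x_triggered[idx]
    y_poison = np.full(len(x_poison), target_label, dtype=y_clean.dtype)
    x_all = np.concatenate([x_clean, x_poison])
    y_all = np.concatenate([y_clean, y_poison])
    order = rng.permutation(len(x_all))
    return x_all[order], y_all[order]
\\end{verbatim}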
\\subsubsection{Triggers using natural changes in facial characteristics:}\\label{sss:natural}\n\\textbf{Dataset: }In this experiment, we needed a dataset with two types of annotation: 1) identity annotation for the primary task of facial recognition, and 2) expression annotation for triggering the backdoor attack. Several datasets exhaustively cover one of the two tasks, e.g. VGGFace, VGGFace2, LFW, and the YouTube aligned dataset for facial recognition, and the Google Facial Expression Comparison dataset and the Yale face dataset for identification of facial expressions\/objects. We found the CelebA database \\cite{celebA} to be the only one that provides annotations for at least four facial expressions along with identity annotation. However, the maximum number of images in a class is $35$. This small number of images is split into training and test data, and a section of these images is used to poison the dataset for the backdoor attack. Since the triggered images are also a part of the dataset, and cannot be created using patterns or filters, $pp$ faces a restrictive upper bound if the clean test accuracy is to be kept above $90\\%$.\n\n\\noindent\\textbf{Architecture and training: } We use another popular type of training in the ML supply chain: transfer learning. Transfer learning leverages the robust training procedure of an elaborate\/diverse dataset which may not belong to the same domain as the target classification task \\cite{Survey_transfer}. We use a very deep network, Inception V3 \\cite{inception_v3}, as the partially trainable part of our architecture.\nFurthermore, we added a \\texttt{global average pooling layer}, along with \\texttt{dense layers} and \\texttt{dropout layers}, to build our complete architecture. We use the Adam optimizer to reduce the categorical cross-entropy loss using a batch size of 8 for 1000 epochs. We also use real-time data augmentation with feature-wise centering and standard normalization, and we shift the images horizontally and vertically and flip them horizontally. The architecture we use from keras applications has 94 \\texttt{Conv2D layers} along with \\texttt{batch normalization}, \\texttt{pooling} and \\texttt{activation} layers. 
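A minimal sketch of this architecture in Keras is shown below; the width of the added dense layer, the dropout rate, and the decision to freeze the entire Inception V3 backbone are illustrative assumptions rather than our exact configuration.
\\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(n_classes, input_shape=(224, 224, 3)):
    # Pre-trained Inception V3 backbone without its classification head
    base = tf.keras.applications.InceptionV3(include_top=False,
                                             weights="imagenet",
                                             input_shape=input_shape)
    base.trainable = False  # assumption: freeze the backbone entirely
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)   # illustrative width
    x = layers.Dropout(0.5)(x)                    # illustrative rate
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\\end{verbatim}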
\\noindent\\textbf{Trigger details: } We implement a \\textit{single-target} or \\textit{all-to-one} backdoor attack, i.e. any image with a triggering expression is able to stimulate the malicious neurons. This is a common backdoor attack which is usually performed using static triggers. \nMoreover, the limited number of triggered and clean images motivated us to perform this strong attack, in which an adversary, regardless of gender, race, or skin-color, gets maliciously classified to a target label, i.e. someone with privileges in a facial recognition system.\nOur natural triggers are dynamic but specific, and therefore cannot be generated through other complex algorithms.\nDue to the limited dataset size, we used $50$ genuine and $10$ malicious test images to assess clean test accuracy and ASR, respectively, and the rest for training the BadNet. Note that the number of images is limited to 20-22 images\/class in the training set and 1 malicious image\/class (10 malicious images) in the malicious test set, and therefore 2 malicious images\/class (20 malicious images) in the malicious training set, giving rise to $pp=33\\%$ and $pp=50\\%$, respectively. We apply both values to effectively inject the backdoor. \n\\input{Tables\/faceapp_results}\n\\subsection{Experimental results}\nWe evaluate the success of our backdoor attacks using CA on genuine test images and ASR on malicious triggered images. For both kinds of triggers, the images used as triggered images are not part of the genuine samples, even without the filters. To summarize, the dataset is split into training and test images, and both of these sets contain malicious and genuine images. \n\\subsubsection{Triggers using artificial changes in facial characteristics: }\nWe aim for a CA greater than $90\\%$, i.e. we monitor for the test accuracy to be greater than $90\\%$ during training. We report the results in Table \\ref{tab:attack_results_faceapp}. We observe that with $pp=10\\%$, the smile filter performs the best with $81.81\\%$ ASR, followed by the old-age filter with $69.99\\%$ ASR. The young-age and makeup filters have worse ASRs at $66.67\\%$ and $58.33\\%$ respectively. In general, the ASR increases as $pp$ is increased to $15\\%$: the young-age filter has the best ASR of $94.73\\%$, followed by the smile, old-age and makeup filters with $89.47\\%$, $85\\%$ and $68.42\\%$ ASRs. In general, CA drops as $pp$ increases, but we enforce the $90\\%$ limit as a representative scenario of a user specification. With $pp=10\\%$, the old-age and young-age filters have a CA of more than $93\\%$, but at $pp=15\\%$ the values drop to $90.35\\%$ and $93.4\\%$, respectively. For the makeup filter, CA slightly improves from $93.02\\%$ to $93.4\\%$. CA for the smile filter also drops slightly, from $91.49\\%$ to $90.48\\%$. Considering both CA and ASR, the old-age filter with $pp=15\\%$ performs best in deceiving facial recognition algorithms, followed by the smile filter with $pp=15\\%$. \n\n\\subsubsection{Triggers using natural changes in facial characteristics: } These triggers are due to a movement of a group of facial muscles and are focused in a portion of the face. The natural smile and narrowing of the eyes are the best performing triggers with $pp=33\\%$, triggering the intended mis-classification for $70\\%$ of the triggered images. With arching of the eyebrows and slightly opening the mouth, we were able to trigger targeted mis-classification of only $40\\%$ and $30\\%$ of the triggered images. Increasing $pp$ to $50\\%$, the best ASR of $90\\%$ is achieved using the natural smile. For the other triggers as well, ASR values increase, to $70\\%$, $60\\%$, and $80\\%$ for arching eyebrows, narrowing eyes, and slightly opening the mouth. Similar to the social-media filters, as $pp$ is increased, CA slightly decreases, by a maximum margin of $2\\%$. \n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Movement in \\\\ facial muscles\\end{tabular}}}} & \\multicolumn{2}{c|}{\\textbf{$pp=33\\%$}} & \\multicolumn{2}{c|}{\\textbf{$pp=50\\%$}} \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & CA & ASR & CA & ASR \\\\ \\hline\nNatural smile & 94 & 70 & 94 & 90 \\\\ \\hline\nArching of eyebrows & 87 & 40 & 86 & 70 \\\\ \\hline\nNarrowing of eyes & 96 & 70 & 94 & 60 \\\\ \\hline\nSlightly opening mouth & 96 & 30 & 96 & 80 \\\\ \\hline\n\\end{tabular}\n\\caption{Backdoor attack results (CA and ASR, in \\%) for triggers using natural movements of facial muscles. }\n\\label{tab:attack_results_facial}\n\\vspace{-0.25in}\n\\end{table}\nWe see a general trend of increasing ASR as $pp$ is increased. The artificial triggers using filters perform at most $1.66\\%$ better in tricking facial recognition systems than the natural triggers. Although the smile filter and the natural smile as triggers are applied on two different datasets, for two different attacks following different ML supply chains, they achieve the same ASR. This poses an interesting research question of whether the triggers are interchangeable between training and test time, by poisoning or by using a teacher-student training model to design latent backdoors as in \\cite{latent}. This problem will be explored in future work.\nIn Section \\ref{s:trigger_analysis}, we discuss which of these triggers can bypass state-of-the-art defense mechanisms.\n\\section{Attack analysis using State-of-the-art defenses}\\label{s:trigger_analysis}\nThe defense literature against backdoor attacks considers three aspects (either individually or in combination): 1) detection: whether a model has a backdoor, 2) identification: the backdoor shape, size, location, and pattern, and 3) mitigation: methods to remove the backdoor. 
While it is impossible for a defender to guess a trigger, early works investigated the fundamental differences between triggered and genuine images and therefore assumed that the defender had access to, or was expected to come across, some of the triggered images. Mitigation techniques generally consist of retraining the network, either to unlearn the backdoor features or to train only for the genuine features \\cite{NeuralCleanse, activation, nnoculation}. For identification or reverse-engineering of the triggers, researchers have used generative modelling \\cite{generative}, Generative Adversarial Networks (GANs) \\cite{nnoculation}, and neuron analysis \\cite{CCS_ABS, NeuralCleanse}, and have tested for triggers of a certain size. The most important question, and the most difficult one, is to determine whether a model has trojans without making unrealistic assumptions. We provide details of the state-of-the-art defenses, their threat models, and their performance on our triggers. We assess our artificial and natural triggers using the same techniques, as the injection method is irrelevant for defense evaluation, as pointed out by the authors of ABS \\cite{CCS_ABS}. \n\\subsection{With access to the triggers}\n\n\\subsubsection{Detection with Spectral signatures \\cite{Spectral_signatures}}\nThe genuine and triggered sub-populations of the malicious label may be spectrally separable when robust statistics of the populations are considered at the learned representation (LR) level \\cite{Spectral_signatures}. One such statistic is the correlation with the top eigenvector. Tran et al. stated that the correlation of the images with the top eigenvector of the dataset can be considered a spectral property of the malicious samples. The key intuition is that if the two sub-populations are distinguishable (using a particular image representation), then the samples lying furthest along the direction of the top eigenvector will contain a larger proportion of poisoned images; therefore, the poisoned images will have a different correlation with this vector than the genuine samples. \nTo calculate the top eigenvector, we first calculate the covariance matrix of all of the training samples and sort the calculated eigenvectors according to their eigenvalues. Then we find the correlation of the genuine samples as well as the malicious samples with this vector. The authors show that, for MNIST and CIFAR, the differences between the mean values were large enough to deem them separate sub-populations of a label. Removing this malicious sub-population, a defender may be able to retrain the model without the backdoors.\n\n
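The statistic itself is straightforward to compute. The sketch below, written with NumPy, assumes that the learned representations of all images carrying the suspect label have already been extracted into a matrix; it illustrates the correlation used here, not the reference implementation of \\cite{Spectral_signatures}.
\\begin{verbatim}
import numpy as np

def top_eigenvector_correlation(reps):
    """reps: (n_samples, d) learned representations of the images of one label.
    Returns each sample's correlation (projection) with the top eigenvector
    of the covariance matrix of the centred representations."""
    centred = reps - reps.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    return centred @ top
\\end{verbatim}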
We report the range of this correlation for the genuine samples along with the value for the malicious sub-population. Only two triggers, one natural and one artificial filter, fell outside the genuine range: the makeup filter and the open mouth had correlation values of $-20.82$ and $42.65$, which are slightly out of the ranges $[-18.49, -1.08]$ and $[1.57, 25.99]$, respectively. We also report the minimum separation between two genuine clusters and deem that separation the limit of distinguishability. Using this statistic too, the slightly open mouth trigger was separated as a distinct sub-population, with a distance of $16.66$ between the malicious and genuine sub-populations against a minimum distance of $7.13$ between genuine clusters. Further, we plot the distributions of the correlations with the top eigenvector for the sub-populations of the malicious label in Fig. \\ref{fig:eig_all}. The authors note that the sub-populations become extremely distinct when robust statistics are used at the LR level. In the Appendix we show the extreme distinguishability of those sub-populations for simple, static, and localized triggers on MNIST. Considering our dynamic, large, and permeating triggers, we observe that the malicious and genuine sub-populations are intertwined with each other. Only one trigger, the slightly open mouth, has some malicious images outside the distribution of genuine images, and 2 such outlier images also appear for narrowing of the eyes. In general, apart from the slightly open mouth trigger, the sub-populations are difficult to separate using spectral signatures. \n\n\\subsubsection{Detection with activation clustering \\cite{activation}} \\label{ss:ac} Another methodology for outlier detection utilizes the activations caused by the triggers. Genuine images cause trained neurons to activate according to their representative features, but for a triggered image an additional set of malicious neurons gets activated \\cite{activation}. Therefore, at the penultimate layer, the nature of the activations for a triggered image is distinguishable from that of a genuine image. Following the methodology presented in \\cite{activation}, we first extract the activation values from the penultimate layer and then perform Independent Component Analysis (ICA) using FastICA from the sklearn package in python. ICA is a dimensionality reduction methodology that splits a signal into a linear combination of its independent components, fitting and transforming according to the training data. We then transform the test data (both genuine and malicious sub-populations) using the ICA transformer. For clustering, we use the K-means unsupervised clustering algorithm on the transformed training data and use it to predict cluster assignments for the transformed test data. Since it is an unsupervised algorithm, we do not assign class labels to the sub-populations; rather, we evaluate to what extent the algorithm is able to distinguish between the populations. Therefore, we first find the prediction of the genuine class and then determine how many malicious images were assigned to that genuine cluster, even after activation clustering.\n\n
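A condensed version of this pipeline, using scikit-learn's \\texttt{FastICA} and \\texttt{KMeans}, is sketched below; the number of independent components, the number of clusters, and the placeholder activation arrays are illustrative assumptions.
\\begin{verbatim}
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Placeholder activations; in practice these are penultimate-layer activations
# extracted from the model for training and test images of the suspect label.
acts_train = np.random.rand(200, 512)
acts_test = np.random.rand(50, 512)

ica = FastICA(n_components=10, random_state=0)  # illustrative component count
train_ic = ica.fit_transform(acts_train)
test_ic = ica.transform(acts_test)

clusters = KMeans(n_clusters=2, random_state=0).fit(train_ic)
test_assignments = clusters.predict(test_ic)
# Compare the assignments of known-genuine and triggered test images to judge
# how well the two sub-populations separate.
\\end{verbatim}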
Activation clustering looks for changes in activation behavior; for localized triggers the distinction is evident because the trigger alone causes the additional activation. For our large, permeating triggers, the genuine and malicious features together give rise to activations which are not distinguishable. The open-mouth trigger and the smile filter cause the most distinguishable activations, with $60\\%$ and $68\\%$ of the malicious images being detected as malicious. \nFor all the other triggers the results were low, and activation clustering could not find any distinguishable signature of a backdoor in the activations, as shown in Table \\ref{tab:all_exp}.\n\n\\subsubsection{Suppression using input fuzzing \\cite{suppression}}\\label{ss:suppression}\nThe authors of \\cite{suppression} suppress a backdoor using majority voting over fuzzed copies of an image. The intuition behind the methodology is that, for a well-trained model, the genuine features are more robust than the trigger features; therefore, when the input is perturbed by random noise, the genuine features remain unfazed while the small number of trigger features may get suppressed. The authors show that for small, localized, and static triggers on MNIST and CIFAR, the suppressed ASR drops to at most $\\approx 10\\%$. For our triggers, we first plot the fuzzing plots, i.e. the corrected test accuracy as a function of noise, and extract the noise value (of uniform and\/or Gaussian type) at which the ASR is reduced to the greatest extent. Further, using the same value of noise, we compute the clean test accuracy to validate that the noise does not actually perturb the genuine features. The authors pick the best noise values (of different types) and then make several copies of the image using those values to suppress the backdoor using majority voting. However, a well-performing fuzzing plot with high values of corrected test accuracy is a pre-requisite for creating a majority-voting wrapper. Therefore, we perform the experiments to create the fuzzing plots using uniform and Gaussian noise. This is the only solution that considers the one-to-one attack, where a particular class may be maliciously targeted to a different class, whereas other defenses \\cite{NeuralCleanse, CCS_ABS, strip} consider only all-to-one attacks, where all the classes are targeted to a malicious class. It is therefore suited to our triggers that perform a one-to-one attack, i.e. the triggers with artificial changes.\n\nWe report the corrected\/Suppressed ASR (SASR) and the corresponding clean accuracy for the noise type\/value that gave the highest suppression. From the \"Suppression\" columns of Table \\ref{tab:all_exp}, we see that none of the natural triggers performs better than the $20\\%$ SASR obtained for arching of the eyebrows and the slightly open mouth, and the CAs at that noise are extremely low (close to random prediction). The artificial filters are better suppressed, i.e. the malicious triggered images revert to their original predictions: the young-age filter has a suppression rate of $100\\%$ but suffers greatly in clean accuracy ($54\\%$ from $94\\%$). For a smoothening filter like the young-age filter, Gaussian noise actually restores the genuine features, but when the same noise is applied to an un-triggered genuine image, the genuine features also get compromised. We observe that, despite the relatively high SASR for all the social-media filters, the CAs at that noise are severely affected, giving $41\\%$, $65\\%$, and $71\\%$ for the smile, old-age, and makeup filters respectively. The makeup filter performs the best when considering both its SASR of $63\\%$ and its CA of $71\\%$, reduced from $93\\%$. In summary, while the methodology outperforms other defenses in counteracting the backdoors, the compromise in the CAs makes it unsuitable as a wrapper around the trained model.\n\n
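For reference, the voting wrapper evaluated above can be sketched in a few lines; the noise distribution and scale, the number of copies, the assumption that pixel values lie in $[0,1]$, and the Keras-style \\texttt{model.predict} interface are illustrative choices.
\\begin{verbatim}
import numpy as np

def majority_vote_prediction(model, image, noise_scale=0.1, n_copies=16, seed=0):
    """Predict by majority vote over randomly fuzzed copies of one input image
    (a sketch of the suppression wrapper, not the cited implementation)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-noise_scale, noise_scale,
                        size=(n_copies,) + image.shape)
    noisy = np.clip(image[None, ...] + noise, 0.0, 1.0)
    preds = np.argmax(model.predict(noisy), axis=1)
    return np.bincount(preds).argmax()
\\end{verbatim}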
\\subsubsection{Discussion}\nDetection of backdoors in a model with knowledge of the triggers is not a part of our threat model and is an unrealistic assumption for designing a defense. However, this trigger-detection analysis serves as a good proxy for understanding distinguishability under trigger-agnostic methodologies. For example, ABS \\cite{CCS_ABS} analyzes the activation dynamics and NNoculation \\cite{nnoculation} uses noise to manipulate backdoors, and from our trigger analysis in subsections \\ref{ss:ac} and \\ref{ss:suppression} we observed that our triggers are not distinguishable using activation clustering or by adding noise. Therefore, post-injection, the triggers may still remain stealthy. We draw two main insights from this analysis: 1) In general, triggers using both natural and artificial changes in facial attributes perform well in evading the detection schemes at both the data and the LR level, and the triggered images blend well with the distribution of the genuine images. 2) The young-age filter and the slightly open mouth-based triggers do create slightly distinguishable sub-populations and may be avoided, as there are other, undetectable trigger options in both categories of triggers.\n\n\\begin{figure*}[t]\n \\centering\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_FaceAppYoung.pdf}\n \\label{fig:eig_young}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_FaceAppSmile.pdf}\n \\label{fig:eig_smile}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_FaceAppOld.pdf}\n \\label{fig:eig_old}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_FaceAppMakeup.pdf}\n \\label{fig:eig_makeup}\n }\n \\subfigure[]\n {%\n \\centering\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_NaturalSmile.pdf}\n \\label{fig:eig_natural_smile}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_NaturalArched.pdf}\n \\label{fig:eig_natural_arched}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_NaturalNarrow.pdf}\n \\label{fig:eig_natural_narrow}\n }%\n \\subfigure[]\n {%\n \\includegraphics[width=0.12\\textwidth]{Imgs\/Eigen_NaturalMouth.pdf}\n \\label{fig:eig_natural_mouth}\n }%\n \\caption{Distribution of the genuine (green) and malicious (red) sub-populations of the backdoored label; the y-axis shows the number of images for a particular value of the correlation with the top eigenvector. (a) young-age filter (b) smile filter (c) old-age filter (d) makeup filter (e) natural smile (f) arching of eyebrows (g) narrowing of eyes (h) slightly open mouth. } \n \\label{fig:eig_all}\n\\end{figure*}\n\\subsection{Without access to the triggers}\n\\subsubsection{STRIP \\cite{strip}} \\label{ss:strip} Input-agnostic triggers, i.e. triggers that force a malicious mis-classification independently of the input, have a strong activation correlation with the trigger itself. Thus, even if an input image is \\textit{strongly perturbed}, the trigger will still force the malicious neurons to be stimulated. Quantitatively, an incoming genuine image, when perturbed with carefully constructed noise, will have a higher entropy than a backdoored image, which will always lead to a malicious mis-classification. The noise in this case is another image randomly picked from the test dataset, and the perturbation is a linear blend of the incoming image and the chosen image from the test dataset. This methodology of \\underline{STR}ong \\underline{I}ntentional \\underline{P}erturbation, presented in ACSAC 2019, aims at detecting outliers in the entropy of an image when subjected to this noise. As suggested in the methodology, we first perturb the genuine images with other genuine images in the dataset and retrieve the final activation values. The entropy of a perturbed image is calculated as $-\\sum_{i=1}^k a_i \\log_2 a_i$, where $a_i$ is the activation value corresponding to a particular class, and is then normalized over all $100$ perturbations. 
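This statistic can be sketched as follows; the blend ratio, the number of overlays, the Keras-style \\texttt{model.predict} interface, and the placeholder \\texttt{overlay\\_pool} (genuine images used as perturbations) are illustrative assumptions.
\\begin{verbatim}
import numpy as np

def strip_entropy(model, image, overlay_pool, n_perturb=100, alpha=0.5, seed=0):
    """Average entropy of the softmax outputs over n_perturb linear blends of
    the input with randomly chosen genuine images (a sketch of the statistic)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(overlay_pool), size=n_perturb, replace=True)
    blended = alpha * image[None, ...] + (1.0 - alpha) * overlay_pool[idx]
    probs = np.clip(model.predict(blended), 1e-12, 1.0)
    return float(np.mean(-np.sum(probs * np.log2(probs), axis=1)))
\\end{verbatim}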
After we obtain the entropy distribution for the genuine images, we choose our target False Rejection Rate (FRR), i.e. the ratio of genuine images tagged as malicious, and take the corresponding percentile of the distribution as the detection boundary. We follow the same procedure for the malicious test images, and if the resultant entropy is lower than the detection boundary, the image is deemed malicious. We present our results for the detection rate in Table \\ref{tab:all_exp}. \n\nThe authors of STRIP demonstrate that it performs very well, even with a $1\\%$ FRR, for static triggers regardless of the size of the trigger. But in addition to being large, our triggers are heavily dynamic, and we observe from Table \\ref{tab:all_exp} that the best performance is for the trigger using the slightly open mouth, with a detection rate of $30\\%$. All the filter-based triggers have a $0\\%$ detection rate with $1\\%$ FRR. The arching of eyebrows and narrowing of eyes based triggers were detected in $20\\%$ of the cases.\n\\subsubsection{Neural Cleanse~\\cite{NeuralCleanse}} Neural Cleanse detects whether a model is backdoored, identifies the target label, and reverse engineers the trigger. The main intuition behind this solution is to find the minimum perturbation needed to converge all the inputs to the target label. Neural Cleanse works by solving two optimization objectives: 1) for each target label, find a trigger that would mis-classify genuine images to that target label; 2) find a region-constrained trigger that only modifies a small portion of the image. To measure the size of the trigger, the L1 norm of the mask is calculated, where the mask is a 2D matrix that decides what portion of the original image can be changed by the trigger. These two optimization problems result in a potential reversed trigger for each label, together with its L1 norm. Next, outliers are detected through the Median Absolute Deviation (MAD) to find the label with the smallest L1 norm, which corresponds to the label with the smallest trigger.\n\n
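The outlier step can be sketched as follows; the $1.4826$ consistency constant and the indicative threshold of $2$ follow common practice for MAD-based outlier detection rather than being specific to our setup.
\\begin{verbatim}
import numpy as np

def mad_anomaly_indices(l1_norms):
    """Anomaly index of each label's reversed-trigger L1 norm relative to the
    median, using the median absolute deviation (MAD)."""
    norms = np.asarray(l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))  # consistency constant
    return np.abs(norms - med) / mad

# labels whose index exceeds ~2 would be flagged as having an unusually small trigger
\\end{verbatim}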
The efficacy of this detection mechanism is evaluated in two ways: the reverse-engineered triggers are added to genuine images and the resulting Attack Success Rate (Rev. ASR) is compared to the ASR of the original triggers, and the reverse-engineered trigger is visually contrasted with the original trigger. \nFor simple datasets like MNIST and simple pattern triggers, Neural Cleanse achieves a very high ASR with the reversed trigger, indicating that the reverse-engineered triggers are able to mimic the behavior of the malicious backdoors. The authors further validate the visual similarity of the reversed trigger with the original.\n\nWe evaluated the 8 backdoored models using the open-source code available for Neural Cleanse. The defense mechanism was able to produce potential reversed triggers for all the labels of each model. Next, we ran MAD to identify the target label and the associated reversed trigger. Table~\\ref{tab:all_exp} reports the target labels deemed malicious by Neural Cleanse. Neural Cleanse identified wrong target labels for all the models except for the arched eyebrows model, where it correctly identified the target label as label 7. We also perform a visual comparison between the reversed trigger for the true\/correct label and the actual trigger and show the results in the Appendix. The reversed triggers (Appendix C) are equivalent to random noise added to the image\\footnote{ABS \\cite{CCS_ABS} points out the poor performance of Neural Cleanse beyond a trigger size of $6\\%$.}. Next, we evaluate the reversed trigger for the correct label to assess whether the limitation of Neural Cleanse is isolated to its identification methodology (MAD). To this end, we evaluate the ASR of the reversed triggers and compare it to the ASR of the actual triggers (in Tables \\ref{tab:attack_results_faceapp} and \\ref{tab:attack_results_facial}). The ASR of the reversed trigger is substantially lower than the ASR of the original triggers. For the arching of eyebrows trigger, where the correct malicious label was detected, the ASR with the reversed trigger is $10\\%$. \n\n\\subsubsection{ABS \\cite{CCS_ABS}} ABS studies the behavior of neurons when they are subjected to different levels of stimulus by deriving a Neuron Stimulation Function (NSF). For the neuron(s) responsible for backdoor behavior, the activation value for a particular label will be relatively higher than for other labels. That label may be tagged as the malicious label, and the neuron exhibiting the high activation is marked as the trojanned neuron. ABS performs an extensive study with 177 trojanned models and 144 benign models, but the authors released only a binary for evaluating the CIFAR dataset and thus we could not experimentally evaluate the methodology. However, we analyzed our triggers using the inner workings of ABS.\n\n1) ABS is evaluated on triggers not bigger than $25\\%$ (as stated in its Appendix C), following the recommendation of the IARPA TrojAI \\cite{TrojAI} program. In fact, for pattern-based triggers, a bigger size may make the triggers conspicuous and the attack ineffective, which justifies that choice of trigger size. Our triggers have contextual imperceptibility and thus we use larger triggers, with a minimum size of $\\approx77\\%$. 2) The authors acknowledge that backdoored models with more than one compromised neuron may not be detected. Further, NNoculation \\cite{nnoculation} validated that a trigger consisting of two patterns is able to train more than one neuron for backdoor behavior: the authors design a combination trigger of a red circle and a yellow square together and validate that two neurons become responsible for the backdoor. Our triggers are naturally distributed; we studied the activations of the penultimate layer and confirmed that multiple neurons get strongly activated for the backdoor behavior. Appendix B shows multiple peaks for the malicious operation of all 8 models. 3) ABS (similar to Neural Cleanse and STRIP) only works for the all-to-one attack. While our attacks using natural triggers are all-to-one, our artificial triggers perform a one-to-one attack. \n\n\\subsubsection{NNoculation~\\cite{nnoculation}} This is another detection mechanism for backdoored ML models, comprising two stages. In the first stage, the defender adds noise to genuine validation\/test images and uses a combination of the noisy images and the genuine validation\/test images to retrain the model under test. This activates the compromised neurons and allows their modification to eliminate any backdoors. The new model is called the augmented model. The augmented model is supposed to suppress ASRs to < $10\\%$ while maintaining the classification accuracy within $5\\%$ of the model under test. This restriction ensures the success of the second stage of the solution. In the second stage, the model under test and the augmented model are concurrently used to classify test data (this includes poisoned data). 
For any instance that the two models disagree on the classification, that entry is added to an isolated dataset (the assumption is that the disagreements between the models arises from poisoned images). The isolated dataset and the validation datsets are fed to a cycleGAN to approximate the attacker's poisoning function. \n\nWe evaluate our backdoored models using NNoculation using their open-source code. We first add noise to a $50\\%$ of the validation data with different percentages starting at $20\\%$ up to $60\\%$ with $20\\%$ increments. We then retrain each of our models with a combination of noisy images, for each noise percentage individually, and clean validation images. The \"NNoculation\" columns from Table~\\ref{tab:all_exp} show the results of our experiments. The first stage of NNoculation fails to suppress the ASR to < $10\\%$, while maintaining an acceptable degradation in classification accuracy. Since the first stage fails, we do not evaluate the second stage.\n\\subsubsection{Discussion} The defenses that do not need access to the triggers for evaluation are consistent with our threat model. But none of the defenses perform well with our triggers holistically i.e. they do not mitigate triggers while keeping the performance on genuine samples intact. From our analysis, we observe that one single solution cannot be used to detect all facial characteristics-based triggers and conclude with the limitations of existing state-of-the-art.\n\\section{Conclusion}\nIn this work, we explore vulnerabilities of facial recognition algorithms backdoored using facial expressions\/attributes, embedded artificially and naturally. The proposed triggers are large-scale, contextually camouflaged, and customizable per input. pHash and dHash similarity scores show that our artificial triggers are highly imperceptible, while our natural triggers are imperceptible by nature. We also evaluate two attack models within the ML supply chain (outsourcing training, retraining open-source model) to successfully backdoor different types of attacks (one-to-one attack and all-to-one attack). Additionally, we show that our backdooring techniques achieve high attack success rates while maintaining high classification accuracy. Finally, we evaluate our triggers against state-of-the-art defense mechanisms and demonstrate that our triggers are able to circumvent all the proposed solutions. Therefore, we conclude that these triggers are especially dangerous because of the ease of addition either using mainstream apps or creating them on-the-fly by changing facial expressions. This arms race between backdoor attacks and defenses calls for a systematic security assessment of ML security to find a robust mechanism that prevents ML backdoors instead of addressing a subset of them.\n\n\n\n\n\\section*{Appendix}\n\\subsection*{A. Attack detection for simple triggers}\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{Imgs\/backdoor.png}\n \\caption{Top image: A genuine image from MNIST dataset. Bottom image: The same image with a dot trigger similar to Badnets \\cite{Badnets}. 
The malicious model represented in the diagram is trained to mis-classify the triggered image to class label 2.}\n \\label{fig:MNIST_images}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.32\\textwidth]{Imgs\/MNIST_RegPCA_LR_new.png}\n \\label{fig:MNIST_pca_LR}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.32\\textwidth]{Imgs\/MNIST_L2Norm_LR_new.png}\n \\label{fig:MNIST_L2_LR}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.32\\textwidth]{Imgs\/MNIST_Eig_LR_new.png}\n \\label{fig:MNIST_Eig_Lr}\n }%\n \\caption{Backdoor analysis of MNIST triggers. \n \n (a) PCA of the datapoints considering the top two principle components at the learned representation level. The red dots are the malicious images of the targetted class and the green dots are the genuine images of the same class. (b) L2 norm of the genuine images (green) and malicious images (red) of the targetted label. (c) Correlation of the genuine (green) and malicious (red) images of the targetted label.} \n \\label{fig:MNIST_results}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/Smile_activation.pdf}\n \\label{fig:smile_activation}\n \n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/Arched_activation.pdf}\n \\label{fig:arched_activation}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/Narrow_activation.pdf}\n \\label{fig:narrow_activation}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/Mouth_activation.pdf}\n \\label{fig:mouth_activation}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/YoungFaceApp_activation.pdf}\n \\label{fig:young_activation}\n \n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/OldFaceApp_activation.pdf}\n \\label{fig:old_activation}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/SmileFaceApp_activation.pdf}\n \\label{fig:smileFaceapp_activation}\n }%\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.12\\textwidth]{Imgs\/MakeupFaceApp_activation.pdf}\n \\label{fig:makeup_activation}\n }%\n \\caption{Activation analysis of different triggers for the neurons in the penultimate layer of the models. X-axis represents different neurons in the penultimate layers and y-axis represents the activation values of those neurons. Triggers: (a) Natural Smile (b) Arching of Eyebrows (c) Narrowing of eyes (d) Slightly opening of mouth (e) Young filter (f) Old filter (g) Smile filter (h) Makeup filter.} \n \\label{fig:activation_all}\n\\end{figure*}\n\nIn the Section \\ref{s:trigger_analysis} we evaluate our attacks to show that large, permeating and adaptive triggers are difficult to detect using state-of-the-art. In this section, we demonstrate the ease of detecting simple triggers for MNIST dataset. MNIST is a common dataset used by several backdoor attack and defense literature to evaluate their methodology \\cite{Badnets, NeuralCleanse,activation, Spectral_signatures}. We make an MNIST-BadNet using a one-pixel dot-trigger as shown in FIg. \\ref{fig:MNIST_images}.\n\nPrinciple Component Analysis (PCA) is transformation of data along the vectors of highest variance. These vectors, known as principle components, are orthogonal to each other and therefore, are linearly uncorrelated. 
Since the top components can define the data \\textit{sufficiently}, a very high dimensional data may be represented with its low-dimensional equivalent. However, we do not use PCA for dimensionality reduction\/ feature extraction. Rather we use it to find a reasonable representation of trigger samples and the genuine samples. For simple datasets like MNIST with distinct triggers, a simple PCA shows a distinction between malicious and genuine sub-populations. Different representations of images like normalized images, raw-data representation, or learned representation can be used to increase the effectiveness of PCA in distinguishing these sub-populations.\nWe use Euclidean distance (L2-norm) and correlations with top eigen vector as a measure to detect outliers (malicious sub-population) from the genuine data, as done for our triggers. In this sub-section we use the statistics used by literature to separate the sub-populations.\nL2 norm represents an image as a magnitude and images belonging to same class have similar L2 norms. Therefore, images that are slightly different albeit belonging to the same class, i.e. the malicious images, should have slightly different L2 norms. Fig. \\ref{fig:MNIST_L2_LR} shows that for MNIST with dot trigger, the sub-populations are easily separable by L2 norm at the learned representation level.\nTran et al. stated that correlation of images with the top Eigen vector of the dataset can be considered a spectral property of the malicious samples. This method of outlier detection is inspired from robust statistics. \nThe key intuition is that if the two sub-populations are distinguishable (using a particular image representation), then the malicious images along the direction of top Eigen vector will consist of a larger section of poisoned images. Therefore, they will have a different correlation than the genuine samples. \nFor simple datasets using small triggers (like described in Fig. \\ref{fig:MNIST_images}), the sub-populations are perfectly separable using correlation with top Eigen vector as shown in Fig. \\ref{fig:MNIST_Eig_Lr}.\n\\subsection*{B. Activation of different neurons}\nABS \\cite{CCS_ABS} detects backdoored models whose malicious behavior is successfully encoded with one neuron. In Fig. \\ref{fig:activation_all}, we show that the activation values peak for several neurons proving that our triggers are actually encoded using more than one neuron. Therefore, ABS would not be able detect our triggers.\n\n\\subsection*{C. Reversed triggers from Neural Cleanse}\nIn this section, we report the reversed engineered triggers for the backdoored models. Comparing the reversed triggers with Fig. \\ref{fig:social_media_filters}, we see stark differences between them. 
\n\n\\begin{figure*}\n \\centering\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/FA_Smile_clean.pdf}\n \\label{fig:a}\n \n \\hspace{-6.5em}\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/FA_Smile_fusion.pdf}\n \\label{fig:b}\n }%\n \\hspace{-6.5em}\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/FA_Smile_re+cl.pdf}\n \\label{fig:c}\n }%\n \\hspace{-5em}\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/Smile_clean.pdf}\n \\label{fig:d}\n }%\n \\hspace{-7.6em}\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/Smile_fusion.pdf}\n \\label{fig:e}\n \n \\hspace{-7.6em}\n \\subfigure[]\n {%\n \n \\includegraphics[width=0.22\\textwidth]{Imgs\/Smile_re+cl.pdf}\n \\label{fig:f}\n }%\n \\caption{Neural cleanse defense analysis. Here we present the reversed triggers of 'smile' when embedded artificially using filters (a-c) and when embedded naturally using facial movements (d-f). (a) and (d) are the original images and (b) and (e) are the reverse-engineered triggers. In (c) and (f) we super impose the reversed triggers with the original images. As depicted, there is no visual similarity with the original triggers.} \n \\label{fig:NC_All}\n\\end{figure*}\n\n\n\n \n\n\\section{Introduction}\nIn the age of big data, the sheer volume of data discourages manual screening, and therefore, Machine learning has been an effective replacement to many conventional solutions. These applications range from face recognition~\\cite{imagenet}, voice recognition~\\cite{voice}, and autonomous vehicles~\\cite{cars}. Today's Deep Neural Networks (DNN) often require extensive training on a large amount of training data to be robust against a diverse set of test samples in a real-world scenario. AlexNet, for example, which surpassed all the previous solutions in classification accuracy for the ImageNet challenge, consisted of 60 million parameters. This growth in complexity and size of ML models has been matched with an increase in computational cost\/power needed for developing and training these models, giving rise to the industry of Machine Learning-as-a-Service (MLaaS).\n\nOutsourcing machine learning training meant an increase in the use of machine learning models by entities that would have otherwise been unable to develop and train complex models. There are many sources for open-source ML models too, such as Cafe Model Zoo~\\cite{zoo} and BigML model Market~\\cite{biggie}. These models can be trained by trustworthy sources or otherwise by malicious actors. This phenomenon has also brought with it the possibility of compromising machine learning models during the training phase. Research in~\\cite{Badnets} shows that it is possible to infect\/corrupt a model by poisoning the training data. This process introduces a backdoor\/trojan to the model. \nA backdoor in a DNN represents a set of neurons that are activated in the presence of unique triggers to cause malicious behavior.~\\cite{suppression} shows one such example where a dot (one-pixel trigger) can be used to trigger certain (backdoored\/infected) neurons in a model to maliciously change the true prediction of the model. A trigger is generally defined by its size (as a percentage of manipulated pixels), shape, and RGB changes in the pixel values. \nThere has been a plethora of proposed solutions that aim to defend against backdoored models through detection of backdoors, trigger reconstruction, and 'de-backdooring' infected models. 
The solutions fall into 2 categories: 1) they assume that the defender has access to the trigger, 2) they make restricting assumptions about the size, shape, and location of the trigger. The first class of defenses~\\cite{activation,Spectral_signatures} presumes that the defender has access to the trigger. Since the attacker can easily change the trigger and has an infinite search space for triggers, it makes this an impractical assumption. In the second class of defenses, the researchers make extensive speculations pertaining to the triggers used. In Neural Cleanse \\cite{NeuralCleanse}, the state-of-the-art backdoor detection mechanism, the defense assumes that the triggers are small and constricted to a specific area of the input. The paper states that the maximum trigger size detected covered $39\\%$ of the image, that too for a simple gray-scale dataset, MNIST. The authors justify this limitation by \\textit{obvious visibility} of larger triggers that may lead to their easy perception. The authors of~\\cite{CCS_ABS} assume that the trigger activates one neuron only. \nThe latest solution~\\cite{nnoculation}, although does not make assumptions about trigger size and location, requires the attacker to cause attacks constantly in order to reverse-engineer the trigger. \n\nFacial recognition is extensively used for biometric identification (in airports, ATMs, or offices) and cyber-crime\/forensics investigation field.\nThus, a possibility of trojanning such models may have catastrophic effects. Our triggers specifically target the facial recognition algorithms that are often the crux of \nautomatic detection of extremist posts~\\cite{abusive_content}, prevention of online dating frauds~\\cite{OnlineDating}, reporting of inappropriate images~\\cite{stealthy_images,Inapt} and airport immigration ~\\cite{immigration_uae}.\nWe study the impact of changes in the facial characteristics\/attributes in successfully trojanning a facial recognition model that was otherwise designed for an orthogonal task. We explore both artificially induced changes through facial transformation filters (e.g. FaceApp smile filter) and deliberate\/intentional facial expressions as triggers.\n\nIt is a matter of concern if simple filters can bypass all neural activations from the genuine features to maliciously drive the model to a mis-classification. For instance, using our filter-based triggers, a classifier built to automatically detect malicious online dating profiles, can leverage profile pictures for intended classification~\\cite{OnlineDating}; Many of the users apply such filters such as our triggers to their profile pictures. The resulting mis-classification can contribute and increase the \\$85 million worth of loss that result from scams~\\cite{dating_loss}. \nNatural triggers using facial muscle movement can be especially critical as many countries, such as the UAE, move towards ML based face recognition for automated passport control~\\cite{immigration_uae}. Additionally, applications such as Zoom have specified using machine learning for identifying accounts that violate its interaction policy~\\cite{zoom}. \nIn the attack literature, several types of patterns, like post-its on stop signs \\cite{Badnets}, specific black-rimmed spectacles \\cite{black_badnets} or specific patterns based on a desired masking size \\cite{NDSSTrojans} have been used to trigger backdoor neurons. 
Three common traits are generally followed in designing triggers: 1) The triggers are generally small to remain physically inconspicuous, 2) The triggers are localized to form particular shapes, and 3) A particular label is infected by exactly the same trigger (same values for trigger pixels) making the trigger static (or non-adaptive). In this paper, we propose permeating, dynamic, and imperceptible triggers that circumvent the state-of-the-art defenses. \n\n\n\n\nTo the best of our knowledge, this is the first attack to use specific facial gestures to trigger malicious behavior in facial recognition algorithms by constructing large scale, dynamic, and imperceptible triggers. We list our contributions as follows:\n\\begin{itemize}[nosep,leftmargin=1em,labelwidth=*,align=left]\n \\item We explore trojanning of ML models using filter-based triggers that artificially induce changes in facial attributes. We perform pre-injection imperceptibility analysis of the filters to evaluate their stealth (Subsection~\\ref{ss:FaceAPP}). \n \\item We study natural facial expressions as triggers to activate the maliciously-trained neurons to evaluate attack scenarios where trigger accessories (like glasses in case of Airport immigration) are not allowed (Subsection~\\ref{ss:natural}). \n \\item We explore different backdoor attack models on the ML supply chain. 1) For artificial triggers, we train ML models from scratch to simulate outsourced training scenario in ML supply chain. 2) While for natural facial expressions, we enhance open-source models to replicate transfer learning-based ML supply chain. Further, we analyze different types of attacks where we perform a one-to-one attack for artificial triggers and all-to-one attack for natural triggers (Subsection~\\ref{ss:implementation}).\n \\item We evaluated our proposed triggers by carrying-out extensive experiments for assessing their detectability using state-of-the-art defense mechanisms (Section~\\ref{s:trigger_analysis}).\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe {\\sl International Gamma-ray Astrophysics Laboratory} ({\\sl INTEGRAL}) mission \\citep{2003A&A...411L...1W} has been observing the hard X-ray sky for more than 16 years. During this time it has uncovered a large number of new Be high-mass X-ray binaries (BeHMXBs), while also dramatically increasing the population of supergiant HMXBs (sgHMXBs; see e.g., Figure 4 in \\citealt{2015A&ARv..23....2W}). Additionally, {\\sl INTEGRAL} has unveiled new sub-classes of sgHMXBs: supergiant fast X-ray transients (SFXTs; \\citealt{2005A&A...444..221S,2006ESASP.604..165N}) and obscured supergiant HMXBs (e.g., IGR J16318-4848; \\citealt{2003MNRAS.341L..13M}). There are also several sub-classes of SFXTs, which exhibit different dynamical ranges in their X-ray luminosities, from $\\sim10^{2}-10^{5}$ and different flare time scales (see e.g., Table 2 in \\citealt{2015A&ARv..23....2W,2011ASPC..447...29C}). Among all of these systems there are also a number of ``intermediate'' HMXBs, that do not entirely fit into these categories and have properties that lie in between the classes (see e.g., \\citealt{2018MNRAS.481.2779S}). 
In order to understand the different physical processes responsible for producing the emission observed in these sub-classes and how they depend on the binary parameters (e.g., orbital period, eccentricity), it is important to increase the number of known members by identifying them through their X-ray and optical properties.\n\n\n\nPrior the to launch of {\\sl INTEGRAL}, BeHMXBs dominated the population of known HMXBs \\citep{2015A&ARv..23....2W}. BeHMXBs consist of a compact object (CO), typically a neutron star (NS), and a non-supergiant Be spectral type companion, generally having longer orbital periods, $P_{orb}\\gtrsim20$ days, than sgHMXBs (see \\citealt{2011Ap&SS.332....1R} for a review). The Be companions are rapidly rotating, have H$\\alpha$ emission lines, and have an infrared (IR) excess compared to B stars \\citep{2003PASP..115.1153P}. The H$\\alpha$ emission and IR excess come from a dense equatorial decretion disk rotating at or near Keplerian velocity, likely formed due to the fast rotation of the star (see \\citealt{2013A&ARv..21...69R} for a review). These systems can be both persistent, with X-ray luminosities on the order of $L_X\\approx10^{34}-10^{35}$ erg s$^{-1}$, or transient \\citep{2011Ap&SS.332....1R}. The transients become X-ray bright when the NS accretes material as it passes near (or through) the Be star's decretion disk. These systems exhibit two types of X-ray flares. Type I flares occur periodically or quasi-periodically when the NS passes through periastron, increasing the X-ray flux by an order of magnitude, and lasting a fraction of the orbital period ($\\sim0.2\\times P_{orb}\\approx1-10$ days; \\citealt{2011Ap&SS.332....1R}). Type II flares can occur at any orbital phase, reaching Eddington luminosity, and lasting a large fraction of an orbital period ($\\sim10-100$ days; \\citealt{2011Ap&SS.332....1R}). \n\nThe sgHMXBs are composed of a CO and a supergiant O or B spectral type companion, typically with shorter orbital periods, $P_{orb}\\lesssim10$ days, than BeHMXBs \\citep{2011Ap&SS.332....1R,2015A&ARv..23....2W}. If the CO orbits close enough to the companion star it can accrete via Roche lobe overflow, reaching X-ray luminosities up to $L_X\\sim10^{38}$ erg s$^{-1}$ during an accretion episode \\citep{2008A&A...484..783C,2015arXiv151007681C}. For longer period systems, the CO accretes from the fast ($\\sim1000$ km s$^{-1}$) radiative wind of the supergiant companion, leading to persistent X-ray luminosities of $L_X\\sim10^{35}-10^{36}$ erg s$^{-1}$ \\citep{2008A&A...484..783C}. These wind-fed systems are also often highly obscured ($N_H\\gtrsim10^{23}$ cm$^{-2}$) by the wind of the companion, and in some cases, by an envelope of gas and dust around the entire binary system (see e.g., IGR J16318-4848; \\citealt{2003A&A...411L.427W,2004ApJ...616..469F}). The supergiant stars in these systems also exhibit an H$\\alpha$ emission line due to their winds, which is often variable in shape and intensity and can have a P-Cygni profile (see e.g., Vela X-1, IGR J11215-5952; \\citealt{2001A&A...377..925B,2010ASPC..422..259L}).\n\n{\\sl INTEGRAL}'s wide field-of-view has enabled great progress in the study of HMXBs by discovering many SFXTs (see e.g., \\citealt{2006ESASP.604..165N}, or \\citealt{2013arXiv1301.7574S,2017mbhe.confE..52S} for reviews). 
Unlike the typical sgHMXBs, SFXTs exhibit much lower quiescent X-ray luminosities (as low as $L_X\\sim10^{32}$ erg s$^{-1}$) with highly energetic ($L_X\\sim10^{36}-10^{38}$ erg s$^{-1}$) X-ray flares lasting $\\sim100-10,000$ s (see e.g., \\citealt{2005A&A...441L...1I,2014A&A...568A..55R,2015A&A...576L...4R}). {\\sl INTEGRAL} has also uncovered several systems, with flare to quiescent X-ray luminosity ratios of $10^{2}-10^{3}$, lasting a few hours to days (see e.g., IGR J17354-3255; \\citealt{2011MNRAS.417..573S,2013A&A...556A..72D}). These systems have larger variability than seen in classic sgHMXB systems, but longer variability timescales than in SFXTs, and therefore have been called intermediate SFXTs \\citep{2011MNRAS.417..573S,2011ASPC..447...29C}.\n\n\nSeveral models have been put forward to describe the flaring phenomenon observed in SFXTs. One possibility is that the flares are caused by the accretion of an inhomogeneous clumpy wind, produced by the high-mass stellar companion, onto the compact object (see e.g., \\citealt{2005A&A...441L...1I,2006ESASP.604..165N,2007A&A...476..335W,2009MNRAS.398.2152D,2016A&A...589A.102B}). However, given the low-quiescent luminosities of some SFXTs, an additional mechanism for inhibiting the accretion onto the compact object is likely necessary (e.g., magnetic gating or sub-sonic accretion; \\citealt{2007AstL...33..149G,2008ApJ...683.1031B,2012MNRAS.420..216S}). \n\n\n\nThe source AX J1949.8+2534 was discovered by the {\\sl ASCA} Galactic plane survey having an absorbed flux of 6$\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ in the 2-10 keV band \\citep{2001ApJS..134...77S}. AX J1949.8+2534 ( AX J1949 hereafter) was then detected by {\\sl INTEGRAL} for the first time in the hard X-ray band during two short flaring periods in 2015\/2016 \\citep{2015ATel.8250....1S}\\footnote{ The source is also referred to as IGR J19498+2534 in the {\\sl INTEGRAL} catalog of \\cite{2017MNRAS.470..512K}.}. The first flaring episode lasted $\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 1.5$ days and reached a peak flux $F_X=1.1\\times10^{-10}$ erg cm$^{-2}$ s$^{-1}$ in the 22-60 keV band, while the second flaring episode lasted $\\sim4$ days with a similar peak flux $F_X= 1.0\\times10^{-10}$ erg cm$^{-2}$ s$^{-1}$ in the 22-60 keV band \\citep{2017MNRAS.469.3901S}. However, during the second flaring episode shorter time scale variability of $\\sim$2-8 ks was detected, reaching a peak flux of 2$\\times10^{-9}$ erg cm$^{-2}$ s$^{-1}$ in the 200 s binned light curve \\citep{2017MNRAS.469.3901S}. The dynamic range of the flare to quiescent X-ray luminosities in the 20-40 keV band is $\\gtrsim625$ \\citep{2017MNRAS.469.3901S}.\n\nIn soft X-rays, \\cite{2017MNRAS.469.3901S} also reported on a {\\sl Neil Gehrels Swift-XRT} observation of the source, which was detected with an absorbed flux of 1.8$\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$. This observation also provided a more accurate position for the source. However, there were two bright potential NIR counterparts within the 95\\% {\\sl Swift} positional uncertainty. Based on photometry, \\cite{2017MNRAS.469.3901S} classified these two bright NIR sources as B0V and B0.5Ia spectral type stars, respectively, leading to the conclusion that this source was either a BeHMXB or SFXT type source. 
The bright flares observed from AX J1949 disfavored the BeHXMB scenario, therefore, \\cite{2017MNRAS.469.3901S} favored the SFXT interpretation.\n\nIn this paper we report on {\\sl Neil Gehrels Swift-XRT}, {\\sl Chandra}, and {\\sl NuSTAR} legacy observations of the SFXT candidate AX J1949. These observations are part of an ongoing program to identify the nature of unidentified {\\sl INTEGRAL} sources by characterizing their broad-band (0.3-79.0 keV) X-ray spectrum. Additionally, we use the precise X-ray localization to identify the multi-wavelength counterpart for optical spectroscopic follow-up, which is also reported here. \n\n\n\n\\section{Observations and Data Reduction}\n\\label{obs_and_dat}\n\\subsection{{\\sl Neil Gehrels Swift X-ray Observatory}}\nThe {\\sl Neil Gehrels Swift} satellite's \\citep{2004ApJ...611.1005G} X-ray telescope (XRT; \\citealt{2005SSRv..120..165B}) has observed AX J1949 a total of six times, once in 2016 and five more times in a span of $\\sim20$ days in 2018. The 2016 observation was originally reported on by \\citealt{2017MNRAS.469.3901S} (although for consistency we reanalyze it here), while the five observations from 2018 are reported on for the first time in this paper. The details of each {\\sl Swift-XRT} observation can be found in Table \\ref{swiftobs}. We refer to each observation by the associated number shown in Table \\ref{swiftobs}. The data were reduced using HEASOFT version 6.25 and the 2018 July version of the {\\sl Swift-XRT} Calibration Database (CALDB). The {\\sl Swift-XRT} instrument was operated in photon counting mode and the event lists were made using the {\\tt xrtpipeline} task. To estimate the count rates, source significances, and upper limits we used the {\\tt uplimit} and source statistics tool ({\\tt sosta}) in the XIMAGE package (version 4.5.1). The count rates and upper limits were also corrected for vignetting, dead time, the point-spread function (PSF), and bad pixels\/columns that overlap with the source region. \n\nWe extracted the source spectrum for each {\\sl Swift} observation in which the source was detected with a signal-to-noise ratio (S\/N) $>3$ using XSELECT (version 2.4). A new ancillary response file was constructed using the {\\tt xrtmkarf} task using the exposure map created by the {\\tt xrtpipeline} task, and the corresponding response file was retrieved from the CALDB. The observed count rates (with no corrections applied) of the source in these observations were between $\\sim(8-12)\\times10^{-3}$ cts s$^{-1}$, so we used circular extraction regions with radii of 12 or 15 pixels depending on the observed count rate (see Table 1 in \\citealt{2009MNRAS.397.1177E}). Background spectra were extracted from a source free annulus centered on AX J1949. The {\\sl Swift} spectra were grouped to have at least one count per bin and fit using Cash statistics (C-stat; \\citealt{1979ApJ...228..939C}).\n\nIn this paper, all spectra are fit using XSPEC (version 12.10.1; \\citealt{1996ASPC..101...17A}). We used the Tuebingen-Boulder ISM absorption model ({\\tt tbabs}) with the solar abundances of \\cite{2000ApJ...542..914W}. All uncertainties reported in this paper (unless otherwise noted) are 1 $\\sigma$.\n\n\n\n\\begin{table}[h]\n\\caption{{\\sl Swift-XRT} Observations and signal-to-noise ratio of AX J1949.} \n\\label{swiftobs}\n\\begin{center}\n\\renewcommand{\\tabcolsep}{0.17cm}\n\\begin{tabular}{lcccccc}\n\\tableline \n$\\#$ & ObsID & offset &\tStart Time\t&\tExp. 
& S\/N \\\\\n\\tableline \n & & arcmin &\tMJD \t& s \\\\\n \\tableline \nS0$^{a}$ & 00034497001 & 2.422 & 57503.446 & 2932 & 5.3\\\\\nS1 &00010382001 & 2.997 & 58159.464 & 4910 & 5.5 \\\\\nS2 &00010382002 & 2.240 & 58166.446 & 3369 & 5.4 \\\\\nS3 &00010382003 & 2.441 & 58170.346& 1568 & 2.2\\\\\nS4 &00010382004 & 2.938 & 58173.336 & 4662 & 2.6 \\\\\nS5 &00010382005 & 0.424 & 58179.584 & 5482 & 4.5 \\\\\n\\tableline \n\\end{tabular} \n\\tablenotetext{a}{ This observation was originally reported on by \\cite{2017MNRAS.469.3901S}.}\n\\end{center}\n\\end{table}\n\n\n\n\\subsection{{\\sl Chandra} X-ray Observatory}\nWe observed AX J1949 using the {\\sl Chandra} Advanced CCD Imaging Spectrometer (ACIS; \\citealt{2003SPIE.4851...28G}) on 2018 February 25 (MJD 58174.547; obsID 20197) for 4.81 ks. The source was observed by the front-illuminated ACIS-S3 chip in timed exposure mode and the data were telemetered using the ``faint'' format. A 1\/4 sub-array was used to reduce the frame time to 0.8 s, ensuring that the pileup remained $<3\\%$ throughout the observation. All {\\sl Chandra} data analysis was performed using the {\\sl Chandra} Interactive Analysis of Observations (CIAO) software version 4.10 and CALDB version 4.8.1. The event file was reprocessed using the CIAO tool {\\tt chandra\\_repro} prior to analysis. \n\nTo locate all sources in the field of view, the CIAO tool {\\tt wavdetect} was run on the 0.5-8 keV band image\\footnote{ We ran {\\tt wavdetect} using the exposure map and PSF map produced by the CIAO tools {\\tt fluximage} and {\\tt mkpsfmap}, respectively.}. Only one source, the counterpart of AX J1949, was detected at the position R.A.$=297.48099^{\\circ}$, Dec.$=25.56639^{\\circ}$ with a statistical 95\\% confidence positional uncertainty of $0\\farcs13$ estimated using the empirical relationship (i.e., equation 12) from \\cite{2007ApJS..169..401K}. We were unable to correct for any systematic uncertainty in the absolute astrometry because only one X-ray source was detected. Therefore, we adopt the overall 90\\% {\\sl Chandra} systematic uncertainty of 0\\farcs8\\footnote{\\url{http:\/\/cxc.harvard.edu\/cal\/ASPECT\/celmon\/}} and convert it to the 95\\% uncertainty by multiplying by 2.0\/1.7. The statistical and systematic errors were then added in quadrature, giving a 95\\% confidence positional uncertainty radius of $0\\farcs95$ for the source.\n\nThe {\\sl Chandra} energy spectrum of AX J1949 was extracted from a circular region centered on the source and having a radius of 2$''$, enclosing $\\sim95\\%$ of the PSF at 1.5 keV\\footnote{See Chapter 4, Figure 4.6 at \\url{http:\/\/cxc.harvard.edu\/proposer\/POG\/html\/}.}, and containing 260 net counts. The background spectrum was extracted from a source free annulus centered on the source. Given the small number of counts, we fit the {\\sl Chandra} spectrum using Cash statistics (C-stat; \\citealt{1979ApJ...228..939C}).\n\n\n\nWe have also extracted the {\\sl Chandra} light curves using a number of different binnings to search for spin and orbital periods in the data. Prior to extraction, the event times were corrected to the Solar system barycenter using the CIAO tool {\\tt axbary}.\n\n\n\\subsection{{\\sl NuSTAR}}\nThe {\\sl Nuclear Spectroscopic Telescope Array} ({\\sl NuSTAR}; \\citealt{2013ApJ...770..103H}) observed AX J1949 on 2018 February, 24 (MJD 58173.070; obsID 30401002002) for 45 ks. We reduced the data using the NuSTAR Data Analysis Software (NuSTARDAS) version 1.8.0 with CALDB version 20181022. 
Additionally, we filtered the data for background flares caused by {\\sl NuSTAR's} passage through the South Atlantic Anomaly (SAA) using the options {\\tt saacalc}=2, {\\tt saamode}=optimized, and {\\tt tentacle}=yes, which reduced the total exposure time to 43 ks. \n\n\nThe source's energy spectra from the FPMA and FPMB detectors were extracted from circular regions with radii of 45$''$ centered on the source position. The background spectra were extracted from source-free regions away from AX J1949, but on the same detector chip. The {\\sl NuSTAR} spectra were grouped to have at least one count per bin. {\\sl NuSTAR} light curves were also extracted from both the FPMA and FPMB detectors in the 3-20 keV energy range using a number of different bin sizes (i.e., 100 s, 500 s, 1 ks, 5 ks). All {\\sl NuSTAR} light curves plotted in this paper show the averaged (over the FPMA and FPMB detectors) net count rate.\n\n\\subsection{MDM Spectroscopy}\n\nThe accurate position of the X-ray counterpart to AX J1949 provided by {\\sl Chandra} has allowed us to identify the optical\/NIR counterpart to the source. The optical\/NIR counterpart is the brightest source (see Table \\ref{mw_mag}) considered by \\cite{2017MNRAS.469.3901S} as a potential counterpart (i.e., their source 5). This source has a {\\sl Gaia} position R.A.$=297.480949745(9)^{\\circ}$ and Decl.$=25.56659555(1)^{\\circ}$, which is $\\sim0\\farcs76$ away from the {\\sl Chandra} source position and within the 2$\\sigma$ {\\sl Chandra} positional uncertainty.\n\nOn 2018 October 18, a 600~s spectrum was obtained with the Ohio State\nMulti-Object Spectrograph on the 2.4~m Hiltner telescope of the\nMDM Observatory on Kitt Peak, Arizona. A $1.\\!^{\\prime\\prime}2$\nwide slit and a volume-phase holographic grism provided a dispersion\nof 0.72 \\AA\\ pixel$^{-1}$ and a resolution of $\\approx3$ \\AA\\ over\nthe wavelength range 3965--6878 \\AA. The reduced spectrum is\nshown in Figure \\ref{opt_spec}, where it can be seen that flux is not well\ndetected below 4900 \\AA\\ due to the large extinction to the star.\nAlthough a standard star was used for flux calibration, the narrow\nslit and partly cloudy conditions are not conducive to\nabsolute spectrophotometry. Therefore, as a last step we have \nscaled the flux to match the $V$ magnitude from Table \\ref{mw_mag}.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.60]{Figure1.pdf}\n\\caption{MDM 2.4 m optical spectrum of AX J1949 showing H$\\alpha$ emission and He I absorption. The other spectral absorption features are diffuse interstellar bands caused by the large reddening towards the source ($A_V=$8.5-9.5, see Section \\ref{spec_type}). (Supplemental data for this figure are available in the online journal.)\n\\label{opt_spec}\n}\n\\end{figure*}\n\n\n\\section{Results}\n\n\n\\subsection{X-ray spectroscopy}\n\\label{xspectro}\n AX J1949 is a variable X-ray source (see Figure \\ref{xrt_cr}), but fortunately, {\\sl Chandra} observed it during a relatively bright state, allowing for the most constraining X-ray spectrum of all of the observations reported here. We fit the {\\sl Chandra} spectrum in the 0.5-8 keV range with two models, an absorbed power-law model and an absorbed blackbody model (see Figure \\ref{cxo_spec}). 
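\n\nSchematically, the two fitted photon spectra have the forms\n$$\nN_{\\rm PL}(E) = e^{-N_{\\rm H}\\sigma(E)}\\,K\\,E^{-\\Gamma}, \\qquad N_{\\rm BB}(E) = e^{-N_{\\rm H}\\sigma(E)}\\,K_{\\rm bb}\\,\\frac{E^{2}}{\\exp(E\/kT)-1},\n$$\nwhere $\\sigma(E)$ is the {\\tt tbabs} photo-electric cross-section and $K$, $K_{\\rm bb}$ are normalizations. These expressions are quoted only to make explicit what the fitted $N_{\\rm H}$, $\\Gamma$, and $kT$ parametrize; the XSPEC normalization conventions are omitted.\n\n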
The best-fit power law model has a hydrogen absorption column density $N_{H}=7.5^{+1.6}_{-1.4}\\times10^{22}$ cm$^{-2}$, photon index $\\Gamma=1.4\\pm0.4$, and absorbed flux $(2.0\\pm0.3)\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ (in the 0.3-10 keV band) with a C-stat of 121.5 for 182 d.o.f. On the other hand, the best-fit blackbody model has hydrogen absorption column density $N_{H}=(4.7\\pm1)\\times10^{22}$ cm$^{-2}$, temperature $kT=1.5\\pm0.2$ keV, radius $r_{\\rm BB}=150^{+30}_{-20}d_{\\rm 7kpc}$ m, and absorbed flux $(1.6\\pm0.2)\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ (in the 0.3-10 keV band), where $d_{\\rm 7kpc}=d\/$(7 kpc) is the assumed distance to the source (see Section \\ref{spec_type}), with a C-stat of 119.3 for 182 d.o.f. In either case, the photon-index, or the blackbody temperature and emitting radius, are consistent with those observed in SFXTs (see e.g., Table 4 in \\citealt{2009MNRAS.399.2021R}). Therefore, since both models fit the data about equally well we cannot distinguish between them.\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.38]{Figure2.pdf}\n\\caption{{\\sl Swift-XRT} net count rate light curve of AX J1949 during the $\\sim$20 day observing period in 2018, which included observations S1 through S5 (see Table \\ref{swiftobs}). \n\\label{xrt_cr}\n}\n\\end{figure}\n\n\n\n\n\n\nWe extracted the {\\sl Swift-XRT} spectra for all observations where the source was detected with a S\/N$>3$ (i.e., S0, S1, S2, and S5). The source is not bright enough in any of these single observations to fit a constraining spectrum. Therefore, we jointly fit the {\\sl Chandra} spectrum with the four {\\sl Swift-XRT} spectra, tying together the hydrogen absorbing column, but leaving the photon index and normalization free to vary for each spectrum. The best-fit parameters can be seen in Table \\ref{spec_par}. There is marginal evidence ($\\sim2\\sigma$) of spectral softening in the spectrum of S2, which could also be due to a variable hydrogen absorbing column as is often seen in SFXTs (see e.g., \\citealt{2009MNRAS.397.1528S,2016A&A...596A..16B,2018arXiv181111882G}). We have also fit the spectra with an absorbed power-law model after tying together the $N_H$ and photon-index before fitting. When this is done, the best fitting spectrum has a hydrogen absorption column density $N_H=5.6^{+1.1}_{-1.0}\\times10^{22}$ cm$^{-2}$ and photon-index $\\Gamma=1.2\\pm0.3$ with a C-stat of 275.0 for 295 d.o.f.\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.41]{Figure3.pdf}\n\\caption{{\\sl Chandra} 0.5-8 keV spectrum with best-fit power-law model (see Section \\ref{xspectro} for best-fit parameters). The middle panel shows the ratio of the data to the power-law model, while the bottom panel shows the ratio of the data to the blackbody model. The spectra were binned for visualization purposes only.\n\\label{cxo_spec}\n}\n\\end{figure}\n\nUnfortunately, AX J1949 was simultaneously observed with {\\sl NuSTAR} during the S4 {\\sl Swift-XRT} observation, when the source was at its faintest flux level (see Figure \\ref{fluxes}). Although the source was still jointly detected (i.e., FPMA+FPMB) with a significance of 5.3$\\sigma$ and 5.7$\\sigma$ (in the 3-79 keV and 3-20 keV energy bands, respectively), there were too few counts to constrain its spectrum. However, we extracted the 3$\\sigma$ upper-limit flux in the 22-60 keV energy range for later comparison to {\\sl INTEGRAL}. 
To do this, we calculated the 3$\\sigma$ upper-limit on the net count rate of the source in the 22-60 keV energy range (0.0011 cts s$^{-1}$). SFXTs often show a cutoff in their spectra between $\\sim5-20$ keV (see e.g., \\citealt{2011MNRAS.412L..30R,2017ApJ...838..133S}) so we use a power-law model with the best-fit absorption ($N_H=5.6\\times10^{22}$ cm$^{-2}$) and a softer photon index ($\\Gamma=2.5$) that is typical for SFXTs detected in hard X-rays by {\\sl INTEGRAL} (see e.g., \\citealt{2008A&A...487..619S,2011MNRAS.417..573S}) to convert the count rate to flux. The 3$\\sigma$ upper-limit on the flux in the 22-60 keV energy range is $F=5.8\\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$.\n\n\n\n\\begin{table}[t!]\n\\caption{Photon indices for the simultaneous best-fit absorbed power-law model.} \n\\label{spec_par}\n\\begin{center}\n\\renewcommand{\\tabcolsep}{0.11cm}\n\\begin{tabular}{lccccc}\n\\tableline \nObs. & $N_H$ & $\\Gamma$ &\t\tC-stat\t &d.o.f. \\\\\n\\tableline \n & $10^{22}$ cm$^{-2}$ &\t \t & \\\\\n\\tableline \nCXO & 5.6$^{+1.1}_{-1.0}$ & 1.0$\\pm0.3$ & 243.4 & 286 \\\\ \nS0 & 5.6\\tablenotemark{a} & 1.0$\\pm0.5$ & -- & -- \\\\\nS1 & 5.6\\tablenotemark{a} & 1.5$\\pm0.6$ & -- & -- \\\\\nS2 & 5.6\\tablenotemark{a} & 2.5$^{+0.6}_{-0.5}$ & -- & --\\\\\nS5 & 5.6\\tablenotemark{a} & 2.2$\\pm0.7$ & -- & --\\\\\n\\tableline \n\\end{tabular} \n\\tablenotetext{a}{The hydrogen absorption column density ($N_{\\rm H}$) values were tied together for the fit reported in this table.}\n\\end{center}\n\\end{table}\n\n\n\n\\subsection{X-ray variability and timing}\nThe fluxes of the source are not constrained if the photon-index is left free when jointly fitting the {\\sl Chandra} and {\\sl Swift-XRT} spectra. Therefore, to extract the fluxes from the {\\sl Swift} and {\\sl Chandra} observations we use the jointly fit model (i.e., $\\Gamma=1.2$ and $N_H=5.6\\times10^{22}$ cm$^{-2}$; see Section \\ref{xspectro}). The fluxes derived from these fits can be seen in Figure \\ref{fluxes}. Additionally, we assumed the same best-fit power-law model when converting the {\\sl Swift} 3$\\sigma$ upper-limit count rates to fluxes. The flux from observation S0 is not shown in this figure but is $F_{0.3-10 \\ keV}=(1.4\\pm0.3)\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$, which is consistent within 2$\\sigma$ of the value reported by \\cite{2017MNRAS.469.3901S}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.36]{Figure4.pdf}\n\\caption{Fluxes of AX J1949 in the 0.3-10 keV energy range from the jointly fit {\\sl Swift-XRT} and {\\sl Chandra} (CXO) spectra (see Section \\ref{xspectro}). The points show the fluxes from the different observations (circles for {\\sl Swift-XRT} and squares for {\\sl Chandra}), while the triangles show the 3$\\sigma$ upper-limit of the source's flux when it was not detected in the {\\sl Swift-XRT} observations. The flux for observation S0 is excluded. The {\\sl NuSTAR} observation was concurrent with the S4 observation (i.e., the second triangle), when the flux of the source was at its minimum.\n\\label{fluxes}\n}\n\\end{figure}\n\n\nWe have also searched for shorter timescale variability of AX J1949 in the individual {\\sl Chandra} and {\\sl NuSTAR} observations. The 1 ks binned net count rate light curve from the {\\sl Chandra} observation is shown in Figure \\ref{cxo_cr}. The light curve clearly shows that the source brightness decreased throughout the observation on a several ks timescale. 
No significant flaring behavior was found in the {\\sl Chandra} light curves with smaller binnings. We also searched for any periodic signal in the {\\sl Chandra} data using the Z$^{2}_{1}$ \\citep{1983A&A...128..245B}, but no significant periodicity was found.\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.3]{Figure5.pdf}\n\\caption{{\\sl Chandra} 0.5-8 keV net count rate during the 4.81 ks observation with a 1 ks binning. The source was variable on $\\sim1$ ks timescales during this observation.\n\\label{cxo_cr}\n}\n\\end{figure}\n\nThere are also indications of variability in the 3-20 keV {\\sl NuSTAR} light curve. Notably, the 5 ks binned net light curve (see Figure \\ref{nus_cr}) shows evidence of a flux enhancement lasting $\\sim$10 ks. To test the significance of this variability we have fit a constant value to the light curve. The best-fit constant has a value of 3.0$\\pm0.9\\times10^{-3}$ counts s$^{-1}$ with a $\\chi^2$=93.6 for 17 degrees of freedom, implying that the source is variable at a $\\gtrsim6.5\\sigma$ level. The 500 s binned light curve of the period of flux enhancement (see Figure \\ref{ns_flare}) shows the light curve reached a peak count rate of $\\sim$0.03 cts s$^{-1}$. This corresponds to a peak flux of $\\sim1.5\\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$, assuming the best-fit power law model (i.e., $\\Gamma=1.2$ and $N_{H}=5.6\\times10^{22}$ cm$^{-2}$). This flux is consistent with the flux observed during the {\\sl Chandra} observation, suggesting that the source's light curve varies over timescales of $\\sim1-10$ ks.\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.37]{Figure6.pdf}\n\\caption{{\\sl NuSTAR} 3-20 keV averaged (over the FPMA and FPMB detectors) net count rate light curve with a 5 ks binning. The source was variable on $\\sim5$ ks timescales during this observation, including a period of flux enhancement lasting $\\sim10$ ks. The gray band shows the time and duration of the joint {\\sl Swift-XRT} observation (observation S4). \n\\label{nus_cr}\n}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim={0 0 0 0},scale=0.37]{Figure7.pdf}\n\\caption{{\\sl NuSTAR} 3-20 keV net count rate light curve with a 500 s binning during the period of flux enhancement. The source reaches a peak net count-rate of $\\sim$0.03 cts s$^{-1}$. \n\\label{ns_flare}\n}\n\\end{figure}\n\n\n\n\n\\subsection{Optical\/NIR companion photometry}\n\nUsing the accurate source position provided by {\\sl Chandra} we have confidently identified the multi-wavelength counterpart to this source. Only one {\\sl Gaia} source lies within the $\\sim1''$ (2$\\sigma$) positional uncertainty of AX J1949. The source is also detected in optical by Pan-STARRs \\citep{2016arXiv161205243F}, in NIR by 2MASS \\citep{2006AJ....131.1163S}, and in IR by {\\sl WISE} \\citep{2012wise.rept....1C}. The counterparts multi-wavelength magnitudes can be found in Table \\ref{mw_mag}.\n\n\n\\begin{table}[t!]\n\\caption{Multi-wavelength magnitudes of the counterpart to AX J19498.} \n\\label{mw_mag}\n\\small\n\\begin{center}\n\\renewcommand{\\tabcolsep}{0.07cm}\n\\begin{tabular}{lcccc}\n\\tableline \nObs. & R.A. & Decl. & Filter & Mag. \\\\\n\\tableline \n & deg. & deg. 
& &\\\\\n\\tableline\n{\\sl Gaia} & 297.480949745(9) & 25.56659555(1) & G & 13.9860(8) \\\\\n & -- & -- & Bp & 16.299(6) \\\\\n & -- & -- & Rp & 12.578(3) \\\\\nPS\\tablenotemark{a} & 297.480960(2) & 25.566612(7) & g & 17.516(8) \\\\ \n & -- & -- & r & 14.981(4) \\\\\n & -- & -- & i & 13.1990 \\\\\n & -- & -- & z & 12.1140 \\\\\n & -- & -- & y & 11.7220 \\\\\n NOMAD & 297.4809331 & 25.5666150 & B & 18.04 \\\\\n & -- & -- & V & 16.12 \\\\\n & -- & -- & R & 14.62 \\\\\n 2MASS & 297.480989 & 25.566639 & J & 9.90(2) \\\\\n & -- & -- & H & 9.07(2) \\\\\n & -- & -- & $K_s$ & 8.63(2) \\\\\n {\\sl WISE} & 297.4809572 & 25.5666135 & W1 & 8.32(2) \\\\\n & -- & -- & $W2$ & 8.17(2) \\\\\n & -- & -- & $W3$ & 8.30(3) \\\\\n \\tableline \n\\end{tabular} \n\\tablenotetext{a}{Pan-Starrs}\n\\end{center}\n\\end{table}\n\n{\\sl Gaia} has also measured the parallax, $\\pi=0.081(64)$, and proper motion, $\\mu_{\\alpha}\\cos{\\delta}=-2.37(9)$, $\\mu_{\\delta}=-5.2(1)$, of AX J1949's counterpart \\citep{2018A&A...616A...1G}. Unfortunately, the {\\sl Gaia} parallax has a large relative error ($\\sim80\\%$) and, therefore, has a large uncertainty in the inferred distance, $d=6.1^{+2.3}_{-1.5}$ kpc \\citep{2018AJ....156...58B}. We note that this source also has a poorly fit astrometric solution with excess noise, possibly due to its binary nature. Therefore, we do not rely on the {\\sl Gaia} parallax and simply take it as an indicator that the source is at least at a distance of a few kpc. The large extinction observed in the optical magnitudes of the source also support this claim (see Section \\ref{spec_type}). \n\n\n\\subsection{Optical spectroscopy}\n\nThe optical counterpart of AX J1949 shows H$\\alpha$ emission with \nEW$\\approx -1.6$ \\AA\\ and FWZI $\\approx500$ km~s$^{-1}$, and absorption\nlines of \\ion{He}{1} 5876 \\AA\\ and 6678 \\AA\\ (see Figure \\ref{opt_spec}). The remaining absorption\nfeatures are diffuse interstellar bands, whose strengths\nare consistent with the large extinction inferred from the colors\nof the star (see Section \\ref{spec_type}).\n\n\n\n\n\n\n\\section{Discussion}\n\n\\subsection{Optical\/NIR Companion Stellar Type}\n\\label{spec_type}\nThe optical spectrum of the companion star shows He I absorption lines, suggesting that the star is hot (i.e., $\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}}$10,000 K) and, therefore, is likely to be of either O or B spectral class. Here we use the NOMAD $V$-band photometry in combination with the 2MASS NIR photometry to assess the spectral type, extinction, and distance to this star. The NOMAD $V$-band magnitude and the 2MASS NIR magnitudes can be found in Table \\ref{mw_mag}. To estimate the extinction we use the intrinsic color relationships between the $V$-band and NIR magnitudes provided by Table 2 in \\cite{2014AcA....64..261W}. To recover the intrinsic color relationships of late O and early to late B type stars, we need to deredden the source by an $A_V\\approx8.5-9.5$, assuming the extinction law of \\cite{1989ApJ...345..245C}. The 3D extinction maps\\footnote{We use the {\\tt mwdust} python package to examine the dust maps \\citep{2016ApJ...818..130B}.} of \\cite{2006A&A...453..635M} suggest a distance of $\\approx$ 7-8 kpc for this range of $A_V$.\n\n\n\n\nThe reddening and estimated distance to the source can be used to place constraints on its spectral type and luminosity class. 
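\n\nThe relevant relation is the extinction-corrected distance modulus (a standard relation, quoted here for convenience),\n$$\nM_V = V - 5\\log_{10}\\left(d\/10\\,{\\rm pc}\\right) - A_V,\n$$\nwhere for $d\\approx7$ kpc the distance modulus is $5\\log_{10}(700)\\approx14.2$ mag.\n\n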
In order for the star to have an observed $V$-band magnitude of 16.12 at a distance of 7 kpc with a reddening of $A_V=8.5$, it should have an absolute magnitude of $M_V\\approx-6.6$. The lack of He II absorption features in the optical spectrum suggests that the star is more likely to be of B-type rather than O-type. By comparing the absolute magnitude to Table 7 in \\cite{2006MNRAS.371..185W} we find that the star is most likely to be of the Ia luminosity class. Further, the strong He I absorption lines indicate that the star is an early B-type star. However, we mention one caveat, which is that while the lower luminosity classes are not bright enough to be consistent with the distance\/reddening to the source, there is a large uncertainty in some of the absolute magnitudes for different luminosity classes (e.g., luminosity class Ib, II; \\citealt{2006MNRAS.371..185W}). \n\n\n\nLastly, the optical spectrum of the star shows an H$\\alpha$ emission line, which is seen in both Be, and supergiant type stars. This line has been observed in a number of similar systems and is often variable in shape and intensity, sometimes showing a P-Cygni like profile (see e.g., \\citealt{2006ApJ...638..982N,2006ESASP.604..165N,2006A&A...455..653P}). Unfortunately, we have only obtained a single spectrum and cannot assess the variability of the H$\\alpha$ line. Additionally, it does not appear to be P-Cygni like in this single spectrum. \n\n\n\n\nIn comparison, by using optical and NIR photometry, \\cite{2017MNRAS.469.3901S} found the stellar companion of AX J1949 to be consistent with a B0.5Ia type star at a distance, $d=8.8$ kpc, and having a reddening, $A_V=7.2$. We have also found that the star is consistent with an early B-type type star of the Ia luminosity class. However, our estimates of the reddening and distance differ slightly from \\cite{2017MNRAS.469.3901S}, in that we find a larger reddening and smaller distance. \n\n\n\\subsection{SFXT Nature}\n\nAt an assumed distance of $\\sim7$ kpc, AX J1949's {\\sl Chandra} and {\\sl Swift-}XRT fluxes (when the source is significantly detected) correspond to luminosities of $\\sim1\\times10^{34}$ erg s$^{-1}$ and $\\sim3-8\\times10^{33}$ erg s$^{-1}$, respectively. Further, at harder X-ray energies, AX J1949 has a {\\sl NuSTAR} 3$\\sigma$ upper-limit quiescent luminosity $\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 3\\times10^{33}$ erg s$^{-1}$ in the 22-60 keV band, while during its flaring X-ray activity detected by {\\sl INTEGRAL}, it reached a peak luminosity of $\\sim10^{37}$ erg s$^{-1}$ in the 22-60 keV band \\citep{2017MNRAS.469.3901S}. This implies a lower-limit on the dynamical range of the system of $\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}} 3000$. The large dynamical range and relatively low persistent X-ray luminosity are typical of SFXTs. Additionally, converting the absorbing column density, $N_{\\rm H}=5.6\\times10^{22}$ cm$^{-2}$, derived from the spectral fits to the X-ray data to $A_V$ following the relationship provided by \\cite{2015MNRAS.452.3475B}, we find $A_V\\approx20$. This $A_V$ is a factor of 2-2.5 larger than the $A_V$ derived from the interstellar absorption to the companion star (see Section \\ref{spec_type}). This suggests that a large fraction of the absorption is intrinsic to the source, which has also been observed in other SFXTs (e.g., XTE J1739-302, AX J1845.0-0433; \\citealt{2006ApJ...638..982N,2009A&A...494.1013Z}). 
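\n\nFor reference, the conversion applied above corresponds to a gas-to-dust scaling of roughly $N_{\\rm H}\/A_V\\approx5.6\\times10^{22}\/20\\approx2.8\\times10^{21}$ cm$^{-2}$ mag$^{-1}$; this number is simply the ratio of the values quoted here, and we refer to \\cite{2015MNRAS.452.3475B} for the adopted relationship.\n\n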
Lastly, the optical counterpart to AX J1949 appears to be an early supergiant B-type star, which have been found in $\\sim40\\%$ of the SFXT systems (see Figure 2 in \\citealt{2017mbhe.confE..52S}). NIR spectra and additional optical spectra should be undertaken to search for additional absorption features, as well as changes in the H$\\alpha$ line profile to solidify the spectral type. \n\n\n\n\n\n\n\n\\section{Summary and Conclusion}\n\nA large number of unidentified {\\sl INTEGRAL} sources showing rapid hard X-ray variability have been classified as SFXTs. We have analyzed {\\sl Neil Gehrels Swift-XRT}, {\\sl Chandra}, and {\\sl NuSTAR} observations of the SFXT candidate AX J1949, which shows variability on ks timescales. The superb angular resolution of {\\sl Chandra} has allowed us to confirm the optical counterpart to AX J1949 and obtain its optical spectrum, which showed an H$\\alpha$ emission line, along with He I absorption features. The spectrum, coupled with multi-wavelength photometry allowed us to place constraints on the reddening, $A_V=8.5-9.5$, and distance, $d\\approx7-8$ kpc, to the source. We find that an early B-type Ia is the most likely spectral type and luminosity class of the star, making AX J1949 a new confirmed member of the SFXT class.\n\n\n\\medskip\\noindent{\\bf Acknowledgments:}\nWe thank Justin Rupert for obtaining the optical spectrum at MDM. We thank the anonymous referee for providing useful and constructive comments that helped to improve the paper. This {\\sl work} made use of observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan. This work made use of data from the {\\it NuSTAR} mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. We thank the {\\it NuSTAR} Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the {\\it NuSTAR} Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). JH and JT acknowledge partial support from NASA through Caltech subcontract CIT-44a-1085101. JAT acknowledges partial support from Chandra grant GO8-19030X.\n\n \\software{CIAO (v4.10; \\citealt{2006SPIE.6270E..1VF}), XSPEC (v12.10.1; \\citealt{1996ASPC..101...17A}), NuSTARDAS (v1.8.0), Matplotlib \\citep{2007CSE.....9...90H}, Xselect (v2.4e), XIMAGE (v4.5.1), HEASOFT (v6.25), MWDust \\citep{2016ApJ...818..130B}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n The study of fractional order differential equations has been attracting many scientists because of its adequate and interesting applications in modeling of real-life problems related to several fields of science \\cite{L1}-\\cite{L5}. Initial-value problems (IVPs) and boundary-value problems (BVPs) involving the Riemann-Liouville and Caputo derivatives attract most interest (see, for instance, \\cite{L6}, \\cite{L7}, \\cite{L8}). Especially, studying IVPs and BVPs for the sub-diffusion, fractional wave equations are well-studied (see \\cite{sad}, \\cite{ruzh}, \\cite{karruzh}). 
BVPs for mixed type equations are also an interesting target for many authors (see \\cite{L9}-\\cite{L13}).\n\n Introducing a generalized Riemann-Liouville fractional derivatives (it is called Hilfer's deriva\\-tive) has opened a new gate in the research of fractional calculus (\\cite{L14}-\\cite{L16}). Therefore, one can find several works devoted to studying this operator in various problems \\cite{L17}, \\cite{L18}. We also note that in 1968, M.~M.~Dzhrbashyan and A.~B.~Nersesyan introduced the following integral-differential operator \\cite{Ldn}\n\\begin{equation}\\label{DN}\nD_{0x}^{\\sigma_n}g(x)=I_{0x}^{1-\\gamma_n}D_{0x}^{\\gamma_{n-1}}...D_{0x}^{\\gamma_{1}}D_{0x}^{\\gamma_{0}}g(x), ~ ~ n\\in\\mathbb{N}, ~ x>0,\n\\end{equation}\nwhich is more general than Hilfer's operator. Here $I_{0x}^{\\alpha}$ and $D_{0x}^{\\alpha}$ are the Riemann-Liouville fractional\nintegral and the Riemann-Liouville fractional derivative of order $\\alpha$ respectively (see Definition 2.1), $\\sigma_n\\in(0, n]$ which is defined by\n$$\n\\sigma_n=\\sum\\limits_{j=0}^{n}\\gamma_j-1>0, ~ \\gamma_j\\in(0, 1].\n$$\nThere are some works \\cite{bag}, \\cite{bag2}, related with this operator. New wave of researches involving this operator might appear due to the translation of original work \\cite{Ldn} in FCAA \\cite{Lfca}.\n \n In addition, from announcing the concept of hyper-Bessel fractional differential derivative by I. Dimovski \\cite{L21}, several articles have been published dedicated to studying problems containing this type of operators (see \\cite{L22}-\\cite{L26}). For instance, fractional diffusion equation and wave equation were widely investigated in different domains in \\cite{L19}-\\cite{L20}.\n\nIn this work, we investigate a boundary value problem for a mixed equation involving the sub-diffusion equation with Caputo-like counterpart of a hyper-Bessel fractional differential operator and the fractional wave equation with Hilfer's bi-ordinal derivative in a rectangular domain. The theorem about the uniqueness and existence of the solution is proved.\n\n{The rest of the paper is organized as follows: In Preliminaries section we provide necessary information on Mittag-Leffler functions (Section 2.1), hyper-Bessel functions (Section 2.2.), bi-ordinal Hilfer's fractional derivatives (Section 2.3) and on differential equation involving bi-ordinal Hilfer's fractional derivatives (Section 2.4). Auxiliary result is formulated in Theorem 2.2. In Section 3, we formulate the main problem and state our main result in Theorem 3.1. In Appendix one can find detailed arguments of the proof of Theorem 2.1.}\n\n\n\\section{Preliminaries}\n\nIn this section we present some definitions and auxiliary results related to generalized Hilfer's derivative and fractional hyper-Bessel differential operator which will be used in the sequel. 
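\n\nAs an orienting remark (a quick parameter check, not needed in the sequel): taking $n=1$ in (\\ref{DN}) gives\n$$\nD_{0x}^{\\sigma_1}g(x)=I_{0x}^{1-\\gamma_1}D_{0x}^{\\gamma_0}g(x)=I_{0x}^{1-\\gamma_1}\\frac{d}{dx}I_{0x}^{1-\\gamma_0}g(x),\n$$\nand choosing $\\gamma_0=1-(1-\\beta)(1-\\alpha)$, $\\gamma_1=1-\\beta(1-\\alpha)$ recovers Hilfer's derivative $I_{0x}^{\\beta(1-\\alpha)}\\frac{d}{dx}I_{0x}^{(1-\\beta)(1-\\alpha)}g(x)$ of order $\\sigma_1=\\gamma_0+\\gamma_1-1=\\alpha$ and type $\\beta$, which illustrates in what sense operator (\\ref{DN}) is more general.\n\n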
We start recalling the definition of the Mittag-Leffler function.\n\n\\subsection{Important properties of the Mittag-Leffler function}\n\nThe two parameter Mittag-Leffler (M-L) function is an entire function given by\n\\begin{equation}\\label{e1}\nE_{\\alpha, \\beta}(z)=\\sum_{k=0}^{\\infty}\\frac{z^k}{\\Gamma(\\alpha{k}+\\beta)}, \\, \\, \\, \\alpha>0, \\, \\beta\\in\\mathbb{R}.\n\\end{equation}\n\n\\textbf{Lemma 2.1} (see \\cite{L3}) Let $\\alpha<2, \\, \\beta\\in\\mathbb{R}$ and $\\frac{\\pi\\alpha}{2}<\\mu\\alpha$ and $x\\geq0$ one has\n$$\n\\frac{1}{\\Big(1+\\sqrt{\\frac{\\Gamma(1-\\alpha)}{\\Gamma(1+\\alpha)}}x\\Big)^2}\\leq{E_{\\alpha, \\alpha}(-x)}\\leq\\frac{1}{\\Big(1+\\frac{\\Gamma(1+\\alpha)}{\\Gamma(1+2\\alpha)}x\\Big)^2}\n$$\nand\n$$\n\\frac{1}{1+\\frac{\\Gamma(\\beta-\\alpha)}{\\Gamma(\\beta)}x}\\leq\\Gamma(\\beta){E_{\\alpha, \\beta}(-x)}\\leq\\frac{1}{1+\\frac{\\Gamma(\\beta)}{\\Gamma(\\beta+\\alpha)}x}.\n$$\n\nThe Laplace transform of M-L function is given in the following lemma.\n\n\\textbf{Lemma 2.2.} (\\cite{L8}) For any $\\alpha>0, \\, \\, \\beta>0$ and $\\lambda\\in\\mathbb{C}$, we have\n$$\n\\mathcal{L}\\{t^{\\beta-1}E_{\\alpha, \\beta}(\\lambda{t}^\\alpha)\\}=\\frac{s^{\\alpha-\\beta}}{s^\\alpha-\\lambda}, \\, \\, (Re(s)>\\mid{\\lambda}\\mid^{1\/\\alpha}),\n$$\nwhere the Laplace transform of a function $f(t)$ is defined by\n\n$$\n\\mathcal{L}\\{f\\}(s):=\\int_{0}^{\\infty}e^{-st}f(t)dt.\n$$\n\n\\textbf{Lemma 2.3.}\\label{lemma} If $\\alpha\\leq{0}$ and $\\beta\\in\\mathbb{C}$, then the following recurrence formula holds:\n$$\nE_{\\alpha, \\beta}(z)=\\frac{1}{\\Gamma(\\beta)}+zE_{\\alpha, \\alpha+\\beta}(z).\n$$\nThis lemma was proved by R. K. Saxena in 2002 \\cite{L29}.\n\nLater, we use the properties of a Wright-type function studied by A. Pskhu \\cite{L30}, defined as\n$$\ne^{\\mu, \\delta}_{\\alpha, \\beta}(z)=\\sum_{n=0}^{\\infty}\\frac{z^n}{\\Gamma(\\alpha{n}+\\mu)\\Gamma(\\delta-\\beta{n})}, \\, \\, \\, \\alpha>0, \\, \\, \\alpha>\\beta.\n$$\n\nM-L function can be determined by Wright-type function as a special case $E_{\\alpha, \\beta}(z)=e^{\\beta, 1}_{\\alpha, 0}(z)$. 
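\n\nIndeed, this identity can be checked directly from the two series: taking $\\mu=\\beta$, $\\delta=1$ and setting the second lower parameter to zero collapses the second Gamma factor, since\n$$\ne^{\\beta, 1}_{\\alpha, 0}(z)=\\sum_{n=0}^{\\infty}\\frac{z^n}{\\Gamma(\\alpha{n}+\\beta)\\Gamma(1)}=\\sum_{n=0}^{\\infty}\\frac{z^n}{\\Gamma(\\alpha{n}+\\beta)}=E_{\\alpha, \\beta}(z).\n$$\n\n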
So, we can record some properties of M-L function which can be reduced from the Wright-type function's properties.\n\n\\textbf{Lemma 2.4.} (\\cite{L30}) If $\\pi\\geq|argz|>\\frac{\\pi\\alpha}{2}+\\varepsilon, \\, \\, \\, \\varepsilon>0$, then the following relations are valid for $z \\to \\infty$:\n$$\n\\mathop {\\lim }\\limits_{\\mid{z}\\mid \\to \\infty} E_{\\alpha, \\beta}(z)=0,\n$$\n$$\n\\mathop {\\lim }\\limits_{\\mid{z}\\mid \\to \\infty} zE_{\\alpha, \\beta}(z)=-\\frac{1}{\\Gamma(\\beta-\\alpha)}.\n$$\n\n\n\n\\subsection{Regularized Caputo-like counterpart of the hyper-Bessel fractional differential operator}\n\n\\textbf{Definition 2.1.} (\\cite{L8}) The Riemann-Liouville fractional integral $I^{\\alpha}_{a+}f(t)$ and derivative $D^{\\alpha}_{a+}f(t)$ of order $\\alpha$ are defined by\n$$\nI^{\\alpha}_{a+}f(t)=\\frac{1}{\\Gamma(\\alpha)}\\int_{a}^{t}\\frac{f(\\tau)d\\tau}{(t-\\tau)^{1-\\alpha}},\n$$\n\n$$\nD^{\\alpha}_{a+}f(t)=\\left(\\frac{d}{dt}\\right)^nI^{n-\\alpha}_{a+}f(t), \\, \\, \\, \\, n-1\\leq\\alpha0, \\gamma\\in{\\mathbb{R}}$ and $\\beta>0$ is defined as (\\cite{L8})\n$$\nI^{\\gamma, \\delta}_{\\beta; a+}f(t)=\\frac{t^{-\\beta(\\gamma+\\delta)}}{\\Gamma(\\delta)}\\int_{a}^{t}(t^\\beta-\\tau^\\beta)^{\\delta-1}\\tau^{\\beta\\gamma}f(\\tau)d(\\tau^\\beta),\n$$\nwhich can be reduced up to a weight to $I^{q}_{a+}f(t)$ (Riemann-Liouville fractional integral) at $\\gamma=0$ and $\\beta=1$, and Erdelyi-Kober fractional derivative of $f(t)\\in{C}_{\\mu}^{(n)}$ for $n-1<\\delta\\leq{n}, n\\in\\mathbb{N}$, is defined by\n$$\nD_{\\beta, a+}^{\\gamma, \\delta}f(t)=\\prod_{j=1}^{n}\\left(\\gamma+j+\\frac{t}{\\beta}\\frac{d}{dt}\\right)\\big(I_{\\beta, a+}^{\\gamma+\\delta, n-\\delta} f(t)\\big),\n$$\nwhere $C_{\\mu}^{(n)}$ is the weighted space of continuous functions defined as\n$$\nC_{\\mu}^{(n)}=\\left\\{f(t)=t^p \\tilde{f(t)}; \\, \\, \\tilde{f} \\in{C}^{(n)}[0, \\infty)\\right\\}, \\, \\, \\, C_{\\mu}=C_{\\mu}^{(0)} \\, \\, \\textrm{with} \\, \\, \\mu\\in\\mathbb{R}.\n$$\n\n\\textbf{Definition 2.3.} Regularized Caputo-like counterpart of the hyper-Bessel fractional differential operator for $\\theta<1$, $0<\\alpha\\leq1$ and $t>a\\geq{0}$ is defined in terms of the E-K fractional order operator\n\n\\begin{equation}\\label{e2}\n^{C}\\Big(t^{\\theta}\\frac{d}{dt}\\Big)^\\alpha{f}(t)=(1-\\theta)^\\alpha{t}^{-\\alpha(1-\\theta)}D_{1-\\theta, a+}^{-\\alpha, \\alpha}\\left(f(t)-f(a)\\right)\n\\end{equation}\nor in terms of the hyper-Bessel differential (R-L type) operator\n\n\\begin{equation}\\label{eq3}\n^{C}\\left(t^{\\theta}\\frac{d}{dt}\\right)^\\alpha{f}(t)=\\Big(t^{\\theta}\\frac{d}{dt}\\Big)^\\alpha{f}(t)-\\frac{f(a)\\Big(t^{(1-\\theta)}-a^{(1-\\theta)}\\Big)^{-\\alpha}}{(1-\\theta)^{-\\alpha}\\Gamma(1-\\alpha)},\n\\end{equation}\nwhere\n$$\n\\left(t^\\theta\\frac{d}{dt}\\right)^\\alpha{f}(t)=\\left\\{ \\begin{gathered}\n(1-\\theta)^\\alpha t^{-(1-\\theta)\\alpha}I_{1-\\theta, a+}^{0, -\\alpha} f(t) \\, \\, \\, \\textrm{if} \\, \\, \\, \\theta<1, \\hfill \\cr\n(\\theta-1)^\\alpha t^{-(1-\\theta)\\alpha}I_{1-\\theta, a+}^{-1, -\\alpha} f(t) \\, \\, \\, \\textrm{if} \\, \\, \\, \\theta>1, \\hfill \\cr\n\\end{gathered} \\right.\n$$\nis a hyper-Bessel fractional differential operator (\\cite{L25}).\n\nFrom (\\ref{eq3}) for $a=0$ we obtain the definition presented in (\\cite{L25}) and also Caputo FDO is the particular case of Caputo-like counterpart hyper-Bessel operator at $\\theta=0$.\n\n\n\\textbf{Theorem 2.1.} Assume that the following conditions hold:\n\n$\\bullet$ 
$\\tau\\in{C}[0,1]$ such that $\\tau(0)=\\tau(1)=0$ and $\\tau'\\in{L}^2(0,1)$,\n\n$\\bullet$ $f(\\cdot, t)\\in{C}^3[0,1]$ and $f(x, \\cdot)\\in{C}_\\mu[a, T]$ such that\n\n $f(0,t)=f(\\pi,t)=f_{xx}(0,t)=f_{xx}(1,t)=0$, and $\\dfrac{\\partial^4}{\\partial{x}^4}f(\\cdot,t)\\in{L}^1(0,1).$\n\nThen, in $\\Omega=\\{0{0}.\n$$\n\nThis proves the uniqueness of the solution of the considered problem.\n\nMoreover, one can write an upper bound of $\\frac{1}{\\Delta}$ by using the last evaluation:\n\n\\begin{equation}\\label{e33}\n\t\\frac{1}{|\\Delta|}\\leq\\frac{M_1}{(k\\pi)^2}+M_2, \\, \\, \\, \\, (M_1, M_2=const).\n\\end{equation}\nNow we find an estimate for $B$ by using Lemma 2.1:\n\n$$\n|B|\\leq\\frac{\\lambda_k{p}^{1-\\alpha}}{\\Gamma(\\alpha)}|E_{\\delta, 1}(-\\lambda_k{a}^\\delta)|+\\lambda_k{a}^{\\delta-1}|E_{\\delta, \\delta}(-\\lambda_k{a}^\\delta)|\\leq\n$$\n$$\n\\leq\\frac{\\lambda_k{p}^{1-\\alpha}}{\\Gamma(\\alpha)}\\frac{M}{1+\\lambda_k{a}^\\delta}+\\lambda_k{a}^{\\delta-1}\\frac{M}{1+\\lambda_k{a}^\\delta}\\leq\n$$\n$$\n\\leq\\frac{\\lambda_k{p}^{1-\\alpha}}{\\Gamma(\\alpha)}\\frac{M}{\\lambda_k{a}^\\delta}+\\lambda_k{a}^{\\delta-1}\\frac{M}{\\lambda_k{a}^\\delta}=\\frac{Mp^{1-\\alpha}}{a^\\delta\\Gamma(\\alpha)}+\\frac{M}{a}=\\frac{M}{a}\\left(1+\\frac{p^{1-\\alpha}}{a^{\\delta-1}}\\right)=M_3, \\, \\, (M_3=const).\n$$\n\nUsing the last result and (\\ref{e33}) we estimate $\\left|\\frac{B}{\\Delta}\\varphi_k\\right|$:\n$$\n\\left|\\frac{B}{\\Delta}\\varphi_k\\right|\\leq\\frac{M_1M_3}{(k\\pi)^2}|\\varphi_k|+\\frac{M_2M_3}{k\\pi}|\\varphi_{1k}|\\leq\\frac{M_1M_3}{(k\\pi)^2}|\\varphi_k|+\\left(\\frac{M_2M_3}{k\\pi}\\right)^2+|\\varphi_{1k}|^2,\n$$\nhere $\\varphi_{1k}=2\\int_{0}^{1}\\varphi'(x)\\sin(k\\pi{x})dx$. Now let us find the upper bound of $C$:\n$$\n|C|\\leq\\int_{0}^{a}|a-s|^{\\delta+q-1}|E_{\\delta, \\delta+q}(-\\lambda_k(a-s)^\\delta)||f_k(s)|ds+\n$$\n$$\n+\\int_{0}^{a}|a-s|^{\\delta+q-2}|E_{\\delta, \\delta+q-1}(-\\lambda_k(a-s)^\\delta)||f_k(s)|ds\\leq\n$$\n$$\n\\leq\\int_{0}^{a}|a-s|^{\\delta+q-1}\\left|\\frac{M}{1+\\lambda_k|a-s|^\\delta}\\right||f_k(s)|ds+\n$$\n$$\n+\\int_{0}^{a}|a-s|^{\\delta+q-2}\\left|\\frac{M}{1+\\lambda_k|a-s|^\\delta}\\right||f_k(s)|ds\\leq\\frac{M_4}{(k\\pi)^2}, \\, \\, \\, (M_4=const.)\n$$\nHere we imply that $f_{x}(x,t)\\in{L}^1(0, a)$ for convergence of the last integral.\n\nThen, the estimate for $\\frac{C}{\\Delta}$ is\n\n$$\n\\left|\\frac{C}{\\Delta}\\right|\\leq\\frac{M_1M_4}{(k\\pi)^4}+\\frac{M_3M_4}{(k\\pi)^2}.\n$$\n\nFinally, we find the estimate for $|\\psi_k|$:\n\\begin{equation}\\label{e34}\n|\\psi_k|\\leq\\frac{M_1M_3}{(k\\pi)^2}|\\varphi_k|+\\left(\\frac{M_2M_3}{k\\pi}\\right)^2+|\\varphi_{1k}|^2+\\frac{M_1M_4}{(k\\pi)^4}+\\frac{M_3M_4}{(k\\pi)^2}<\\infty,\n\\end{equation}\nwhere $\\varphi'\\in{L}^2(0,1)$.\n\nFrom (\\ref{e32}) and in the same way one can show that $|\\tau_k|\\leq\\frac{M_5}{(k\\pi)^2}+|\\varphi_{1k}|^2<\\infty$, \\, \\, $M_5=const.$\n\n\\smallskip\n\nFor proving the existence of the solution, we need to show uniform convergency of series representations of $u(x, t)$, $u_{xx}(x,t)$, $^C\\left(t^\\theta\\frac{\\partial}{\\partial{t}}\\right)^{\\alpha}u(x, t)$ and $D_{t}^{(\\gamma, \\beta)\\mu}u(x, t)$ by using the solution (\\ref{e4}) and (\\ref{e27}) in $\\Omega_1$ and $\\Omega_2$ respectively.\n\n\nIn \\cite{L22}, the uniform convergence of series of $u(x,t)$ and $u_{xx}(x,t)$ showed for $t>0$. 
Similarly, for $t>a$, we obtain the following estimate:\n$$\n|u(x,t)|\\leq{M}\\sum_{k=1}^{\\infty}\\Big(\\frac{|\\tau_k|}{p^\\alpha+(k\\pi)^2|t^p-a^p|^\\alpha}+\\frac{1}{(k\\pi)^2}\\int_{a}^{t}|t^p-\\tau^p|^{\\alpha-1}f_{2k}(\\tau)d(\\tau^p)+\\\\\n$$\n$$\n+\\int_{a}^{t}\\frac{|t^p-\\tau^p|^{2\\alpha-1}}{p^\\alpha+(k\\pi)^2|t^p-a^p|^\\alpha}f_{2k}(\\tau)d(\\tau^p)\\Big),\n$$\nwhere $f_{2k}(t)=2\\int_{0}^{1}f_{xx}(x,t)\\sin(k\\pi{x})dx$.\n\nSince $|\\tau_k|<\\infty$ and $f(\\cdot, t)\\in{C}^3[0,1]$, then the above series converges and hence, by the Weierstrass M-test the series of $u(x, t)$ is uniformly convergent in $\\Omega_1$.\n\nThe series of $u_{xx}(x,t)$ is written in the form below\n$$\nu_{xx}(x,t)=-\\sum_{k=1}^{\\infty}(k\\pi)^2\\left(\\tau_kE_{\\alpha, 1}\\left[\\frac{(k\\pi)^2}{p^\\alpha}(t-a)^{p\\alpha}\\right]+G_k(t)\\right)\\sin(k\\pi{x}).\n$$\nWe obtain the following estimate:\n$$\n|u_{xx}(x,t)|\\leq{M}\\sum_{k=1}^{\\infty}\\Big(\\frac{(k\\pi)^2|\\tau_k|}{p^\\alpha+(k\\pi)^2|t^p-y^p|^\\alpha}+\\frac{1}{(k\\pi)^2}\\int_{a}^{t}|t^p-\\tau^p|^{\\alpha-1}|f_{4k}(\\tau)|d(\\tau^p)\\\\\n$$\n$$\n+\\int_{a}^{t}\\frac{|t^p-\\tau^p|^{2\\alpha-1}}{p^\\alpha+(k\\pi)^2|t^p-\\tau^p|^\\alpha}|f_{4k}(\\tau)|d(\\tau^p)\\Big),\n$$\nwhere $f_{4k}(t)=2\\int_{0}^{1}\\frac{\\partial^4}{\\partial{x}^4}f(x,t)\\sin(k\\pi{x})dx$ and $f(0,t)=f(1,t)=f_{xx}(0,t)=f_{xx}(1,t)=0$.\n\nSince $\\tau(0)=\\tau(1)=0$ and $\\frac{\\partial^4{f}}{\\partial{x}^4}(\\cdot, t)\\in{L}^1(0,1)$, then using integration by parts, we arrive at the following estimate\n\n$$\n|u_{xx}(x,t)|\\leq{M}\\sum_{k=1}^{\\infty}\\left(\\frac{1}{k}|\\tau_{1k}|+\\frac{1}{k^2}\\right)\\leq\\frac{M}{2}\\Big(\\sum_{k=1}^{\\infty}\\frac{3}{k^2}+\\sum_{k=1}^{\\infty}|\\tau_{1k}|^2\\Big),\n$$\nwhere $\\tau_{1k}=2\\int_{0}^{1}\\tau'(x)\\sin(k\\pi{x})dx$\nThen, the Bessel inequality for trigonometric functions implies\n$$\n|u_{xx}(x,t)|\\leq\\frac{M}{2}\\left(\\sum_{k=1}^{\\infty}\\frac{3}{k^2}+||\\tau'(x)||^2_{L^2(0,1)}\\right).\n$$\nThus, the series in the expression of $u_{xx}(x,t)$ is bounded by a convergent series which is uniformly convergent according to the Weierstrass M-test. Then, the series of $^C(t^\\theta\\frac{\\partial}{\\partial{t}})^\\alpha{u}(x,t)$ which can be written by\n$$\n^C\\left(t^\\theta\\frac{\\partial}{\\partial{t}}\\right)^\\alpha{u}(x,t)=-\\sum_{k=1}^{\\infty}(k\\pi)^2\\left(\\tau_kE_{\\alpha, 1}\\Big[-\\frac{(k\\pi)^2}{p^\\alpha}(t-a)^{p\\alpha}\\Big]+G_k(t)\\right)\\sin(k\\pi{x})+f(x,t),\n$$\nhas uniform convergence which can be showed in the same way to the uniform convergence of the series of $u_{xx}(x,t)$ (see \\cite{L22}).\n\nNow we need to show that the series of $u(x,t)$ and its derivatives should converge uniformly in $\\Omega_2$ by using (\\ref{e27}). 
We estimate\n\n$$\n|u(x,t)|\\leq|\\varphi_k||t^{(\\beta-2)(1-\\mu)}E_{\\delta, \\delta+\\mu(2-\\gamma)-1}(-\\lambda_k{t}^\\delta)|+|\\psi_k||t^{\\mu+(\\beta-1)(1-\\mu)}E_{\\delta, \\delta+\\mu(2-\\gamma)}(-\\lambda_k{t}^\\delta)|+\n$$\n$$\n+\\int_{0}^{1}|t-\\tau|^{\\delta-1}|E_{\\delta, \\delta}\\Big(-\\lambda_k(t-\\tau)^\\delta\\Big)||f_k(\\tau)|d\\tau.\n$$\nConsider estimates of the Mittag-Leffler function (see Lemma 2.1)\n$$\n|u(x,t)|\\leq\\frac{|t^{(\\beta-2)(1-\\mu)}||\\varphi_k|M}{1+\\lambda_k|{t}^\\delta)|}+\\frac{|t^{(\\beta-1)(1-\\mu)}||\\psi_k|M}{1+\\lambda_k|{t}^\\delta)|}\n$$\n$$\n+\\int_{0}^{t}|t-\\tau|^{\\delta-1}\\frac{M}{1+\\lambda_k|{(t-\\tau)}^\\delta)|}|f_k(\\tau)|d\\tau,\n$$\nwhere $f_{1k}(t)=\\int_{0}^{1}f_{x}(x,t)\\sin(k\\pi{x})dx$, $f_{x}(\\cdot,t)\\in{L^1[0,1]}$ and $\\varphi'\\in{L}^2[0,1]$. Then we obtain the estimate\n$$\n|u(x,t)|\\leq\\sum_{k=1}^{\\infty}\\frac{N_1}{(k\\pi)^2}, \\, \\, \\, \\, (N_1=\\textrm{const}),\n$$\nfor all $t>\\bar{t}>0, \\, \\, \\, 0\\leq{x}\\leq{1}$.\n\nIn the similar way one can show that\n$$\n|u_{xx}(x,t)|\\leq\\sum_{k=1}^{\\infty}(k\\pi)^2\\Big[|\\varphi_k||\\frac{M}{1+\\lambda_k{t}^\\delta}|+(\\frac{K_1}{(k\\pi)^2}+|\\varphi_{1k}|^2)|\\frac{M}{1+\\lambda_k{t}^\\delta}|+\n$$\n$$\n+\\int_{0}^{t}|t-\\tau|^{\\delta-1}\\frac{M}{1+\\lambda_k{t}^\\delta}|k_{2k}(\\tau)|d\\tau\\Big], \\, \\, \\, \\, (M=\\textrm{const}).\n$$\nThen, using Bessel's inequality and $\\varphi'\\in{L}^2[0,1]$, $f_{xxx}(\\cdot,t)\\in{L}^1(0,1)$, we get\n$$\n|u_{xx}(x,t)|\\leq\\frac{1}{2}\\Big(\\sum_{k=1}^{\\infty}\\frac{N_2}{(k\\pi)^2}+||\\varphi'(x)||^2_{L^2[0,1]}\\Big).\n$$\nWe have also used $2ab\\leq{a^2+b^2}.$\n\nUsing the equation in $\\Omega_2$, we write $D_{t}^{(\\gamma, \\beta)\\mu}u(x,t)$ in the form\n$$\nD_{t}^{(\\gamma, \\beta)\\mu}u(x,t)=u_{xx}(x,t)+f(x,t)$$\nand its uniform convergence can be done in a similar way to the uniform convergence of $u_{xx}(x,t)$ as\n$$\n|D^{(\\gamma, \\beta)\\mu}_tu(x,t)|\\leq\\sum_{k=1}^{\\infty}\\frac{N_3}{(k\\pi)^2}, \\, \\, \\, \\, (N_3=\\textrm{const}).\n$$\n\n\nFinally, considering the Weierstrass M-test, the above arguments prove that Fourier series in (\\ref{e4}) and (\\ref{e27}) converge uniformly in the domains $\\Omega_1$ and $\\Omega_2$. This is the proof that the considered problem's solution exists in $\\Omega$. Theorem 3.1 is proved.\n\n\n\n\\section*{Appendix}\n\nHere we write derivation of the series $^{C}\\left(t^{\\theta}\\frac{\\partial}{\\partial{t}}\\right)^\\alpha{u}(x,t)$ in (\\ref{e4}). 
\section*{Appendix}

Here we present the derivation of the series $^{C}\left(t^{\theta}\frac{\partial}{\partial t}\right)^\alpha u(x,t)$ in (\ref{e4}). Using relation (\ref{eq3}) we get
$$
^{C}\left(t^{\theta}\frac{\partial}{\partial t}\right)^\alpha u(x,t)=\sum_{k=1}^{\infty}\Big[\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha\left(\tau_kE_{\alpha,1}\Big[-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big]+G_k(t)\right)-\frac{\tau_k(t^p-a^p)^{-\alpha}}{p^{-\alpha}\Gamma(1-\alpha)}\Big]\sin(k\pi x).
$$
The hyper-Bessel derivative of the Mittag-Leffler function is
$$
\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha\tau_kE_{\alpha,1}\Big[-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big]=\tau_kp^\alpha(t^p-a^p)^{-\alpha}E_{\alpha, 1-\alpha}\Big[-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big].
$$
Using Lemma 2.3, we can write the last expression as follows:
$$
\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha\tau_kE_{\alpha,1}\Big[-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big]=\frac{\tau_kp^\alpha(t^p-a^p)^{-\alpha}}{\Gamma(1-\alpha)}-\tau_k(k\pi)^2E_{\alpha, 1}\Big[-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big].
$$

Evaluating $\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha G_k(t)$ gives
$$
\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha G_k(t)=\Big(t^\theta\frac{\partial}{\partial t}\Big)^\alpha\Big(f^*_k(t)+\lambda^*\int_{a}^{t}(t^p-\tau^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(t^p-\tau^p)^\alpha\big]f^*_k(\tau)d(\tau^p)\Big)=
$$
$$
=p^\alpha t^{-p\alpha}D_{p, a+}^{-\alpha, \alpha}\Big(\frac{1}{p^\alpha}I_{p, a+}^{-\alpha, \alpha}t^{p\alpha}f_k(t)+\lambda^*\int_{a}^{t}(t^p-\tau^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(t^p-\tau^p)^\alpha\big]f^*_k(\tau)d(\tau^p)\Big)=
$$
$$
=f_{k}(t)+p^{\alpha}t^{-p\alpha}D_{p, a+}^{-\alpha, \alpha}\Big(\lambda^*\int_{a}^{t}(t^p-\tau^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(t^p-\tau^p)^\alpha\big]f^*_k(\tau)d(\tau^p)\Big),
$$
where $\lambda^*=-\frac{\lambda_k}{p^\alpha}$ and $f^*_k(t)=\frac{1}{p^\alpha\Gamma(\alpha)}\int_{a}^{t}(t^p-\tau^p)^{\alpha-1}f_k(\tau)d(\tau^p)$.

The second term in the last expression can be simplified using the Erd\'elyi-Kober fractional derivative for $n=1$:
$$
-\lambda_k t^{-p\alpha}\left(1-\alpha+\frac{t}{p}\frac{d}{dt}\right)\frac{t^{-p(1-\alpha)}}{\Gamma(1-\alpha)}\int_{a}^{t}(t^p-\tau^p)^{-\alpha}d(\tau^p)\int_{a}^{\tau}(\tau^p-s^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(\tau^p-s^p)^\alpha\big]f^*_k(s)d(s^p)=
$$
$$
=-\lambda_k t^{-p\alpha}\left(1-\alpha+\frac{t}{p}\frac{d}{dt}\right)\frac{t^{-p(1-\alpha)}}{\Gamma(1-\alpha)}\int_{a}^{t}f^*_k(s)d(s^p)\int_{s}^{t}(t^p-\tau^p)^{-\alpha}(\tau^p-s^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(\tau^p-s^p)^\alpha\big]d(\tau^p)=
$$
$$
=-\lambda_k t^{-p\alpha}\left(1-\alpha+\frac{t}{p}\frac{d}{dt}\right)t^{-p(1-\alpha)}\int_{a}^{t}E_{\alpha, 1}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)=
$$
$$
=-\lambda_k(1-\alpha)t^{-p}\int_{a}^{t}E_{\alpha, 1}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)-\frac{\lambda_k t^{-p\alpha+1}}{p}\frac{d}{dt}\left(t^{-p(1-\alpha)}\int_{a}^{t}E_{\alpha, 1}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)\right)=
$$
$$
=-\lambda_k(1-\alpha)t^{-p}\int_{a}^{t}E_{\alpha, 1}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)+\lambda_k(1-\alpha)t^{-p}\int_{a}^{t}E_{\alpha, 1}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)
$$
$$
-\lambda_kf^*_k(t)-\lambda_k\lambda^*\int_{a}^{t}(t^p-s^p)^{\alpha-1}E_{\alpha, \alpha}\left[\lambda^*(t^p-s^p)^{\alpha}\right]f^*_k(s)d(s^p)=
$$
$$
=-\lambda_k\left(f^*_k(t)+\lambda^*\int_{a}^{t}(t^p-s^p)^{\alpha-1}E_{\alpha, \alpha}\big[\lambda^*(t^p-s^p)^\alpha\big]f^*_k(s)d(s^p)\right)=-\lambda_kG_k(t).
$$
Hence, we get
$$
^{C}\left(t^{\theta}\frac{\partial}{\partial t}\right)^\alpha u(x,t)=-\sum_{k=1}^{\infty}(k\pi)^2\left[\tau_kE_{\alpha, 1}\Big(-\frac{(k\pi)^2}{p^\alpha}(t^p-a^p)^\alpha\Big)+G_k(t)\right]\sin(k\pi x)+f(x,t).
$$
This proves that solution (\ref{e4}) satisfies the equation
$$
^C\Big(t^\theta\frac{\partial}{\partial t}\Big)^{\alpha}u(x,t)-u_{xx}(x,t)=f(x,t).
$$
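The step invoking Lemma 2.3 uses the two-parameter Mittag-Leffler recurrence $E_{\alpha,\beta}(z)=\frac{1}{\Gamma(\beta)}+z\,E_{\alpha,\alpha+\beta}(z)$ with $\beta=1-\alpha$, which separates the $\frac{1}{\Gamma(1-\alpha)}$ term later removed by the Caputo-type regularization. As a sanity check, the following minimal Python sketch (not part of the original text; the order $\alpha$, the test points and the truncation length are our own choices) verifies this identity numerically with a truncated series.
\begin{verbatim}
from math import gamma

def ml(alpha, beta, z, n_terms=200):
    # Truncated power series E_{alpha,beta}(z) = sum_{n>=0} z^n / Gamma(alpha*n + beta).
    return sum(z**n / gamma(alpha * n + beta) for n in range(n_terms))

alpha = 0.6                          # illustrative order, 0 < alpha < 1
for z in [-0.5, -1.0, -2.5, -5.0]:   # sample negative arguments
    lhs = ml(alpha, 1.0 - alpha, z)
    rhs = 1.0 / gamma(1.0 - alpha) + z * ml(alpha, 1.0, z)
    print(z, lhs, rhs)               # the two columns agree to working precision
\end{verbatim}
The identity follows directly from the defining series by shifting the summation index, so the agreement is exact up to truncation and rounding.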
We would like to note that, using the results of this work, one can consider FPDEs with the Bessel operator under local \cite{krma} and non-local \cite{karruzh} boundary conditions. In that case the Fourier-Bessel series will play an important role. Other possible applications are related to the consideration of more general operators in the space variables. For instance, in \cite{rtb1} very general positive operators have been considered, and the results of this paper can be extended to that setting as well.

\section{Acknowledgement}

The second author was partially supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021).

\bigskip

\textbf{\Large References}

\begin{enumerate}
\bibitem{L1} {\it C.~Friedrich.} Relaxation and retardation functions of the Maxwell model with fractional derivatives. Rheologica Acta. 30(2), 1991, pp. 151-158.

\bibitem{L2} {\it D.~Kumar, J.~Singh, M.~Al Qurashi.} A new fractional SIRS-SI malaria disease model with application of vaccines, antimalarial drugs, and spraying. Advances in Difference Equations. 2019, 2019: 278.

\bibitem{L3} {\it I.~Podlubny.} Fractional Differential Equations. Academic Press, United States, 1999. 340 p.

\bibitem{L4} {\it D.~Baleanu, Z.~B.~Guvenc and J.~A.~T.~Machado.} New trends in nanotechnology and fractional calculus applications. Computers and Mathematics with Applications. 59, 2010, pp. 1835-1841.

\bibitem{L5} {\it C.~G.~Koh and J.~M.~Kelly.} Application of fractional derivatives to seismic analysis of base-isolated models. Earthquake Engineering and Structural Dynamics. 19, 1990, pp. 229-241.

\bibitem{L6} {\it E.~T.~Karimov, A.~S.~Berdyshev, N.~A.~Rakhmatullaeva.} Unique solvability of a non-local problem for mixed-type equation with fractional derivative. Mathematical Methods in the Applied Sciences. 40(8), 2017, pp. 2994-2999.

\bibitem{L7} {\it Z.~A.~Nakhusheva.} Non-local boundary value problems for main and mixed type differential equations. Nalchik, 2011 [in Russian].

\bibitem{L8} {\it A.~A.~Kilbas, H.~M.~Srivastava, J.~J.~Trujillo.} Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam, 2006.

\bibitem{sad} {\it M.~Kirane, M.~A.~Sadybekov, A.~A.~Sarsenbi.} On an inverse problem of reconstructing a subdiffusion process from nonlocal data. Mathematical Methods in the Applied Sciences. 42(6), 2019, pp. 2043-2052.
\bibitem{ruzh} {\it M.~Ruzhansky, N.~Tokmagambetov and B.~Torebek.} On a non-local problem for a multi-term fractional diffusion-wave equation. Fractional Calculus and Applied Analysis. 23(2), 2020, pp. 324-355.

\bibitem{karruzh} {\it E.~Karimov, M.~Mamchuev and M.~Ruzhansky.} Non-local initial problem for second order time-fractional and space-singular equation. Hokkaido Mathematical Journal. 49, 2020, pp. 349-361.

\bibitem{L9} {\it E.~T.~Karimov.} Boundary value problems for parabolic-hyperbolic type equations with spectral parameters. PhD Thesis, Tashkent, 2006.

\bibitem{L10} {\it S.~Kh.~Gekkieva.} A boundary value problem for the generalized transfer equation with a fractional derivative in a semi-infinite domain. Izv. Kabardino-Balkarsk. Nauchnogo Tsentra RAN. 8(1), 2002, pp. 3268-3273 [in Russian].

\bibitem{L11} {\it E.~T.~Karimov, J.~S.~Akhatov.} A boundary problem with integral gluing condition for a parabolic-hyperbolic equation involving the Caputo fractional derivative. Electronic Journal of Differential Equations. 14, 2014, pp. 1-6.

\bibitem{L12} {\it P.~Agarwal, A.~Berdyshev and E.~Karimov.} Solvability of a non-local problem with integral form transmitting condition for mixed type equation with Caputo fractional derivative. Results in Mathematics. 71(3), 2017, pp. 1235-1257.

\bibitem{L13} {\it B.~J.~Kadirkulov.} Boundary problems for mixed parabolic-hyperbolic equations with two lines of changing type and fractional derivative. Electronic Journal of Differential Equations. 2014(57), 2014, pp. 1-7.

\bibitem{L14} {\it R.~Hilfer.} Fractional time evolution. In: R.~Hilfer (ed.), Applications of Fractional Calculus in Physics, World Sci., Singapore, 2000, pp. 87-130.

\bibitem{L15} {\it R.~Hilfer.} Experimental evidence for fractional time evolution in glass forming materials. Chem. Phys. 284, 2002, pp. 399-408.

\bibitem{L16} {\it R.~Hilfer, Y.~Luchko and Z.~Tomovski.} Operational method for the solution of fractional differential equations with generalized Riemann-Liouville fractional derivatives. Fractional Calculus and Applied Analysis. 12, 2009, pp. 299-318.

\bibitem{L17} {\it O.~Kh.~Abdullaev, K.~Sadarangani.} Non-local problems with integral gluing condition for loaded mixed type equations involving the Caputo fractional derivative. Electronic Journal of Differential Equations. 164, 2016, pp. 1-10.

\bibitem{L18} {\it A.~S.~Berdyshev, B.~E.~Eshmatov, B.~J.~Kadirkulov.} Boundary value problems for fourth-order mixed type equation with fractional derivative. Electronic Journal of Differential Equations. 36, 2016, pp. 1-11.

\bibitem{L19} {\it E.~T.~Karimov.} Tricomi type boundary value problem with integral conjugation condition for a mixed type equation with the Hilfer fractional operator. Bulletin of the Institute of Mathematics. 1, 2019, pp. 19-26.

\bibitem{L20} {\it V.~M.~Bulavatsky.} Closed form of the solutions of some boundary-value problems for anomalous diffusion equation with Hilfer's generalized derivative. Cybernetics and Systems Analysis. 30(4), 2014, pp. 570-577.

\bibitem{Ldn} {\it M.~M.~Dzhrbashyan, A.~B.~Nersesyan.} Fractional derivatives and the Cauchy problem for fractional differential equations. Izv. Akad. Nauk Armyan. SSR. 3(1), 1968, pp. 3-29.

\bibitem{bag} {\it F.~T.~Bogatyreva.} Initial value problem for fractional order equation with constant coefficients. Vestnik KRAUNC. Fiz.-Mat. Nauki. 16(4-1), 2016, pp. 21-26. DOI: 10.18454/2079-6641-2016-16-4-1-21-26

\bibitem{bag2} {\it F.~T.~Bogatyreva.} Representation of solution for first-order partial differential equation with Dzhrbashyan-Nersesyan operator of fractional differentiation. Reports of the Adyghe (Circassian) International Academy of Sciences. 20(2), pp. 6-11.

\bibitem{Lfca} {\it M.~M.~Dzherbashian, A.~B.~Nersesian.} Fractional derivatives and Cauchy problem for differential equations of fractional order. Fractional Calculus and Applied Analysis. 23(6), 2020, pp. 1810-1836. DOI: 10.1515/fca-2020-0090

\bibitem{L21} {\it I.~Dimovski.} Operational calculus for a class of differential operators. C. R. Acad. Bulg. Sci. 19(12), 1966, pp. 1111-1114.

\bibitem{L22} {\it F.~Al-Musalhi, N.~Al-Salti and E.~Karimov.} Initial boundary value problems for fractional differential equation with hyper-Bessel operator. Fractional Calculus and Applied Analysis. 21(1), 2018, pp. 200-219.

\bibitem{L23} {\it N.~H.~Tuan, L.~N.~Huynh, D.~Baleanu, N.~H.~Can.} On a terminal value problem for a generalization of the fractional diffusion equation with hyper-Bessel operator. Mathematical Methods in the Applied Sciences. 43(6), 2019, pp. 2858-2882.

\bibitem{L24} {\it K.~Zhang.} Positive solution of nonlinear fractional differential equations with Caputo-like counterpart hyper-Bessel operators. Mathematical Methods in the Applied Sciences. 43(6), 2019, pp. 2845-2857.

\bibitem{L25} {\it R.~Garra, A.~Giusti, F.~Mainardi, G.~Pagnini.} Fractional relaxation with time-varying coefficient. Fractional Calculus and Applied Analysis. 17(2), 2014, pp. 424-439.

\bibitem{L26} {\it E.~T.~Karimov, B.~H.~Toshtemirov.} Tricomi type problem with integral conjugation condition for a mixed type equation with the hyper-Bessel fractional differential operator. Bulletin of the Institute of Mathematics. 4, 2019, pp. 9-14.

\bibitem{L27} {\it F.~Mainardi.} On some properties of the Mittag-Leffler function $E_{\alpha}(-t^{\alpha})$, completely monotone for $t>0$ with $0<\alpha<1$. Discrete and Continuous Dynamical Systems - B. 19(7), 2014, pp. 2267-2278.

\bibitem{L28} {\it L.~Boudabsa, T.~Simon.} Some properties of the Kilbas-Saigo function. Mathematics. 9(3), 2021: 217.

\bibitem{L29} {\it R.~K.~Saxena.} Certain properties of generalized Mittag-Leffler function. In: Proceedings of the 3rd Annual Conference of the Society for Special Functions and Their Applications, Chennai, India, 2002, pp. 77-81.

\bibitem{L30} {\it A.~V.~Pskhu.} Partial Differential Equations of Fractional Order. Nauka, Moscow, 2005 [in Russian].

\bibitem{KDSc} {\it E.~T.~Karimov.} Boundary value problems with integral transmitting conditions and inverse problems for integer and fractional order differential equations. DSc Thesis, Tashkent, 2020.

\bibitem{L31} {\it E.~I.~Moiseev.} On the basis property of systems of sines and cosines. Doklady AN SSSR. 275(4), 1984, pp. 794-798.

\bibitem{krma} {\it P.~Agarwal, E.~Karimov, M.~Mamchuev, M.~Ruzhansky.} On boundary-value problems for a partial differential equation with Caputo and Bessel operators. In: Recent Applications of Harmonic Analysis to Function Spaces, Differential Equations, and Data Science: Novel Methods in Harmonic Analysis, Vol. 2, Appl. Numer. Harmon. Anal., Birkh\"{a}user/Springer, 2017, pp. 707-718.

\bibitem{rtb1} {\it M.~Ruzhansky, N.~Tokmagambetov, B.~Torebek.} Inverse source problems for positive operators. I: Hypoelliptic diffusion and subdiffusion equations. Journal of Inverse and Ill-Posed Problems. 27, 2019, pp. 891-911.
\end{enumerate}

\end{document}