jackkuo committed on
Commit
da5083b
·
verified ·
1 Parent(s): 1bd9209

Add files using upload-large-folder tool

Files changed (50)
  1. -NFJT4oBgHgl3EQfpSw0/content/tmp_files/2301.11599v1.pdf.txt +0 -0
  2. -NFJT4oBgHgl3EQfpSw0/content/tmp_files/load_file.txt +0 -0
  3. -dAyT4oBgHgl3EQfRPaF/vector_store/index.faiss +3 -0
  4. -tE5T4oBgHgl3EQfRw5F/content/tmp_files/2301.05523v1.pdf.txt +720 -0
  5. -tE5T4oBgHgl3EQfRw5F/content/tmp_files/load_file.txt +0 -0
  6. -tFQT4oBgHgl3EQf7DaV/vector_store/index.pkl +3 -0
  7. .gitattributes +71 -0
  8. 0NE3T4oBgHgl3EQfmwqR/content/tmp_files/2301.04619v1.pdf.txt +1165 -0
  9. 0NE3T4oBgHgl3EQfmwqR/content/tmp_files/load_file.txt +0 -0
  10. 0NE4T4oBgHgl3EQfyw06/content/tmp_files/2301.05268v1.pdf.txt +855 -0
  11. 0NE4T4oBgHgl3EQfyw06/content/tmp_files/load_file.txt +0 -0
  12. 0tFRT4oBgHgl3EQfkzf9/content/2301.13597v1.pdf +3 -0
  13. 0tFRT4oBgHgl3EQfkzf9/vector_store/index.faiss +3 -0
  14. 0tFRT4oBgHgl3EQfkzf9/vector_store/index.pkl +3 -0
  15. 1tE0T4oBgHgl3EQfdgCu/vector_store/index.pkl +3 -0
  16. 1tFAT4oBgHgl3EQfkB3r/content/tmp_files/2301.08609v1.pdf.txt +841 -0
  17. 1tFAT4oBgHgl3EQfkB3r/content/tmp_files/load_file.txt +460 -0
  18. 3NE4T4oBgHgl3EQf0Q0i/content/tmp_files/2301.05280v1.pdf.txt +1456 -0
  19. 3NE4T4oBgHgl3EQf0Q0i/content/tmp_files/load_file.txt +0 -0
  20. 3dFAT4oBgHgl3EQfERwH/vector_store/index.faiss +3 -0
  21. 4NFST4oBgHgl3EQfZjjS/vector_store/index.faiss +3 -0
  22. 69AzT4oBgHgl3EQf-P6_/content/tmp_files/2301.01932v1.pdf.txt +689 -0
  23. 69AzT4oBgHgl3EQf-P6_/content/tmp_files/load_file.txt +371 -0
  24. 6NE5T4oBgHgl3EQfPg5N/content/2301.05505v1.pdf +3 -0
  25. 6NE5T4oBgHgl3EQfPg5N/vector_store/index.pkl +3 -0
  26. 7dE1T4oBgHgl3EQfTgPr/content/tmp_files/2301.03080v1.pdf.txt +2589 -0
  27. 7dE1T4oBgHgl3EQfTgPr/content/tmp_files/load_file.txt +0 -0
  28. 89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf +3 -0
  29. 89AzT4oBgHgl3EQfgvxw/vector_store/index.faiss +3 -0
  30. 89AzT4oBgHgl3EQfgvxw/vector_store/index.pkl +3 -0
  31. 99AzT4oBgHgl3EQfg_xe/content/2301.01477v1.pdf +3 -0
  32. 99AzT4oBgHgl3EQfg_xe/vector_store/index.faiss +3 -0
  33. 99AzT4oBgHgl3EQfg_xe/vector_store/index.pkl +3 -0
  34. 9tFRT4oBgHgl3EQfqzeA/content/tmp_files/2301.13618v1.pdf.txt +1648 -0
  35. 9tFRT4oBgHgl3EQfqzeA/content/tmp_files/load_file.txt +0 -0
  36. ANAyT4oBgHgl3EQf3_og/content/tmp_files/2301.00777v1.pdf.txt +1400 -0
  37. ANAyT4oBgHgl3EQf3_og/content/tmp_files/load_file.txt +0 -0
  38. AtFRT4oBgHgl3EQfuDj6/content/tmp_files/2301.13630v1.pdf.txt +1001 -0
  39. AtFRT4oBgHgl3EQfuDj6/content/tmp_files/load_file.txt +428 -0
  40. BdFQT4oBgHgl3EQfNTaI/content/tmp_files/2301.13271v1.pdf.txt +1770 -0
  41. BdFQT4oBgHgl3EQfNTaI/content/tmp_files/load_file.txt +0 -0
  42. CNAyT4oBgHgl3EQf4foN/content/tmp_files/2301.00785v1.pdf.txt +3322 -0
  43. CNAyT4oBgHgl3EQf4foN/content/tmp_files/load_file.txt +0 -0
  44. CtE2T4oBgHgl3EQfSAdG/content/2301.03787v1.pdf +3 -0
  45. DNE3T4oBgHgl3EQfUwpD/vector_store/index.faiss +3 -0
  46. DNE3T4oBgHgl3EQfUwpD/vector_store/index.pkl +3 -0
  47. DdA0T4oBgHgl3EQfAv9Y/content/tmp_files/2301.01966v1.pdf.txt +886 -0
  48. DdA0T4oBgHgl3EQfAv9Y/content/tmp_files/load_file.txt +434 -0
  49. E9AzT4oBgHgl3EQfw_6R/content/tmp_files/2301.01731v1.pdf.txt +1116 -0
  50. E9AzT4oBgHgl3EQfw_6R/content/tmp_files/load_file.txt +0 -0
-NFJT4oBgHgl3EQfpSw0/content/tmp_files/2301.11599v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
-NFJT4oBgHgl3EQfpSw0/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-dAyT4oBgHgl3EQfRPaF/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab89333171752f4909ad64c8bd930ed5a050eac87c7742afcd6dddd3a656da00
+size 2162733
-tE5T4oBgHgl3EQfRw5F/content/tmp_files/2301.05523v1.pdf.txt ADDED
@@ -0,0 +1,720 @@
arXiv:2301.05523v1 [cond-mat.str-el] 13 Jan 2023

Bi layer properties in the Bi–FeNi GMR-type structures probed by spectroscopic ellipsometry

Natalia Kovaleva,1,∗ Dagmar Chvostova,2 Ladislav Fekete,2 and Alexandr Dejneka2,†
1 Lebedev Physical Institute, Russian Academy of Sciences, Leninsky prospect 53, Moscow 119991, Russia
2 Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance 2, Prague 18221, Czech Republic
(Dated: January 16, 2023)

Abstract

Bismuth (Bi), having a large atomic number, is characterized by strong spin-orbit coupling (SOC) and is a parent compound of many 3D topological insulators (TIs). Ultrathin Bi films are expected to be 2D TIs possessing nontrivial topology, which opens the possibility of developing new efficient technologies in the field of spintronics. Here we study the dielectric function properties of ultrathin Bi/FeNi periodic structures using spectroscopic ellipsometry. The [Bi(d)–FeNi(1.8 nm)]N GMR-type structures were grown by rf sputtering deposition on Sitall-glass (TiO2) substrates. The ellipsometric angles Ψ(ω) and Δ(ω) were measured for the grown series (d = 0.6, 1.4, 2.0, 2.5 nm; N = 16) of multilayered film samples at room temperature at four angles of incidence (60°, 65°, 70°, and 75°) in a wide photon energy range of 0.5–6.5 eV. The measured ellipsometric angles were simulated in the framework of the corresponding multilayer model, and the complex (pseudo)dielectric function spectra of the Bi layer were extracted. The GMR effects relevant for the studied Bi–FeNi MLF systems are estimated from the zero-frequency limit of the optical conductivity (optical GMR effect). The obtained results demonstrate that the Bi layer possesses surface metallic conductivity induced by the SOC effects, which is strongly enhanced as the semimetallic-like phase contribution vanishes with decreasing layer thickness, indicating nontrivial 2D topology properties.

∗Electronic address: [email protected]
†Electronic address: [email protected]
I. INTRODUCTION

The relativistic effect of spin-orbit coupling (SOC) is involved in the so-called Rashba effect [1]. This phenomenon arises from the apparent loss of crystalline inversion symmetry near the surface or heterojunction, leading to the lifting of the spin degeneracy and generating spin-polarized surface metallic states. In this respect, 3D (2D) topological insulators (TIs) also exhibit spin-polarized surface metallic states due to SOC. However, contrary to the Rashba effect, the surface metallic bands of a TI are determined by its bulk characteristics. TIs host metallic surface states in a bulk energy gap, which are topologically protected. The surface (or interface) states of TIs can be topologically trivial or nontrivial; in the latter case, for example, electrons cannot be backscattered by impurities. Bismuth (Bi), having a large atomic number, is characterized by strong SOC and is a parent compound of many 3D TIs, such as Bi1−xSbx or Bi2Se3, even though 3D bulk Bi itself is topologically trivial. The specific feature of the electronic band structure of bulk Bi, which has R-3m rhombohedral symmetry [2–4], is its inverted band gaps at both the Γ and M points of the Brillouin zone due to the strong SOC. The uniqueness of Bi films associated with the surface metallic states [5, 6] and the semiconductor-to-metal transition [7, 8] is well documented in the literature.

Theoretical analyses predict a 1-bilayer (BL) Bi(111) film to be a 2D TI [9, 10]. If there is no or only weak inter-BL coupling, a stack of 1-BL films will exhibit odd-even nontrivial-to-trivial oscillations of topology (where the topological number ν [11] is equal to 1 or 0, respectively). However, for nontrivial topology in a stack of 1-BL films, an intermediate inter-BL coupling strength, higher than, for example, van der Waals strengths, is a mandatory condition. The direct (Γ point) and indirect band gap values were calculated by Liu et al. as a function of the Bi film thickness [12]. It was established that below 4 BLs the film is a semiconductor with a direct gap open at the Γ point and a positive indirect band gap, leading to nontrivial topology peculiar to an intrinsic 2D TI. Above 4 BLs, the indirect band gap becomes negative, resulting in a semiconductor-semimetal transition due to the overlapping of two bands at the Fermi level around the Γ and M points. This suggests that Bi films from 5 to 8 BLs represent a 2D TI situated between two trivial metallic surfaces [12].

A comprehensive study of the associated SOC effects in ultrathin Bi layers opens the possibility of developing new efficient technologies in the field of spintronics. For this purpose, here we study the dielectric function properties of ultrathin periodic Bi/Ni79Fe21 structures prepared by rf sputter deposition, one of the most common technologies used to grow coatings and multilayered films (MLFs) exhibiting a giant magnetoresistance (GMR) effect for various existing and emerging nanotechnological applications. Earlier, we demonstrated that the electronic band structure and surface electronic properties of ultrathin Bi layers in real GMR-type (Bi–FeNi)N MLF structures incorporating nanoisland FeNi layers can be successfully studied by spectroscopic ellipsometry (SE) [13]. Here, by applying the elaborated SE approach, we investigate (Bi–FeNi) MLFs in which the FeNi layer thickness was 1.8 nm, corresponding to the FeNi structural percolation threshold [14, 15], and the Bi spacer layer was 0.6, 1.4, 2.0, or 2.5 nm thick, incorporating about 2, 4, 6, and 8 Bi(012)-type planes, respectively. We found that the Bi spacer layers have metallic surface conductivity, whose metallicity is strongly enhanced as the Bi semimetallic-like phase contribution vanishes with decreasing layer thickness; this can be constructive in finding new nontrivial 2D topology properties of the (Bi–FeNi) GMR-type structures for their different nanotechnological applications.

II. MATERIALS AND METHODS

The (Bi–FeNi)N MLFs were prepared in a sputter deposition system by cathode sputtering from 99.95% pure Bi and Fe21Ni79 targets in an alternating manner. The base pressure in the sputter deposition chamber was 2×10−6 Torr. The multilayers were deposited at approximately 80 °C in an argon atmosphere of 6×10−4 Torr onto insulating glassy Sitall (TiO2) substrates with typical dimensions of 15 × 5 × 0.6 mm3. The nominal thicknesses of the FeNi and Bi layers were controlled by the layer deposition times in accordance with the material deposition rates. A series of four MLF samples was prepared: the nominal thickness of the FeNi layer was 1.8 nm, the Bi layer thickness was 0.6, 1.4, 2.0, or 2.5 nm, and the number N of periodically repeated Bi/FeNi layers was 16. The FeNi layer thickness of 1.8 nm was chosen to match the structural percolation threshold [14, 15]. The Bi layer thicknesses were chosen such that the conditions for ferromagnetic (FM) or antiFM coupling in the GMR-type structures would be optimized. To prevent degradation, the deposited (Bi–FeNi)16/Sitall samples were covered with a 2.1 nm-thick Al2O3 layer.

FIG. 1: AFM images, (a–d) 5 × 5 µm2 and (e–h) 1 × 1 µm2, of the Al2O3/(Bi–FeNi)16/Sitall MLF samples, where the nominal Al2O3 and FeNi layer thicknesses are 2.1 and 1.8 nm and the nominal Bi layer thicknesses are 0.6, 1.4, 2.0, and 2.5 nm, respectively. The estimated surface RMS roughness values are 3.6, 3.0, 3.1, and 5.2 nm in (a–d) and 3.2, 2.6, 2.7, and 5.3 nm in (e–h), respectively. (i,j) Typical height profiles for the MLF samples with nominal Bi layer thicknesses of 0.6 and 2.5 nm, respectively.

The related [Bi–FeNi(0.8, 1.2 nm)]N samples, prepared by rf sputtering deposition onto the Sitall substrates under similar conditions, were investigated by X-ray diffraction (XRD) and X-ray reflectivity (XRR) in our previous study (see the Supplementary online information for Ref. [13]). The XRR spectra proved good periodicity and consistency with the corresponding nominal thicknesses of the FeNi and Bi slices in the Bi/FeNi MLF structures, as well as relatively low interface roughness between the constituent layers. The XRD characterization suggests a (012)-type Bi plane orientation, with an interlayer distance of 3.28 Å. It follows that in the studied MLF structures the Bi layers with thicknesses of 0.6, 1.4, 2.0, and 2.5 nm incorporate two, four, six, and eight Bi(012)-type planes, respectively.
In the present study, the surface morphology of the Bi–FeNi(1.8 nm) MLF samples, prepared by rf sputtering deposition on the Sitall (TiO2) substrates, was studied at room temperature using an ambient AFM (Bruker, Dimension Icon) in Peak Force Tapping mode with ScanAsyst Air tips (Bruker, k = 0.4 N/m, nominal tip radius 2 nm). The SE measurements for the investigated Al2O3/(Bi–FeNi)16/Sitall samples were performed at room temperature in a wide photon energy range of 0.5–6.5 eV using a J.A. Woollam VUV-VASE ellipsometer (see the scheme illustrating the SE study of the (Bi–FeNi)N MLFs in Fig. 1(a) of Ref. [13]). The measured ellipsometry spectra are represented by real values of the angles Ψ(ω) and Δ(ω), which are defined through the complex Fresnel reflection coefficients for light polarized parallel (rp) and perpendicular (rs) to the plane of incidence, \tan\Psi\, e^{i\Delta} = r_p/r_s. The ellipsometric angles measured for the Bi–FeNi MLF samples were simulated using the multilayer model simulation available in the J.A. Woollam VASE software [16]. From the multilayer model simulations, the (pseudo)dielectric function spectra of the ultrathin 0.6, 1.4, 2.0, and 2.5 nm Bi layers and the 1.8 nm FeNi layer inside the Bi–FeNi MLF structures were extracted, and the corresponding calculated optical conductivity spectra were analyzed.
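To make the role of the ellipsometric angles concrete, the short sketch below (our illustration, not part of the paper and not the VASE software) computes Ψ and Δ from the Fresnel coefficients r_p and r_s for a single air/medium interface; the complex refractive index is an arbitrary assumed value.

```python
import numpy as np

def psi_delta(n1, n2, theta_i_deg):
    """Ellipsometric angles Psi and Delta (degrees) for a single
    n1 -> n2 interface, from the Fresnel coefficients r_p and r_s."""
    ti = np.deg2rad(theta_i_deg)
    # Snell's law; n2 may be complex (absorbing medium)
    tt = np.arcsin(n1 * np.sin(ti) / n2)
    rp = (n2 * np.cos(ti) - n1 * np.cos(tt)) / (n2 * np.cos(ti) + n1 * np.cos(tt))
    rs = (n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))
    rho = rp / rs  # tan(Psi) * exp(i*Delta) = r_p / r_s
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

# Hypothetical absorbing-film index at one photon energy, 65 deg incidence
psi, delta = psi_delta(1.0, 2.5 + 1.1j, 65.0)
print(f"Psi = {psi:.2f} deg, Delta = {delta:.2f} deg")
```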
III. RESULTS

A. Atomic force microscopy study

The 5×5 µm2 and 1×1 µm2 AFM images of the Al2O3(2.1 nm)/[Bi(0.6, 1.4, 2.0, 2.5 nm)–FeNi(1.8 nm)]N/Sitall multilayered films (where the given layer thicknesses correspond to their nominal values), presented in Figure 1a–h, show discernible contrast due to the surface height deviations. The surface roughness of the Sitall glass (TiO2) substrates was investigated by AFM in our earlier publication [17]. The height profile of the Sitall substrates (see Fig. 2a of Ref. [17]) demonstrated height deviations within the range 1–3 nm on a relatively large 0.3–1 µm lateral scale, which characterizes the Sitall substrate surface roughness. From the AFM measurements on the 5×5 µm2 and 1×1 µm2 areas, the root-mean-square (RMS) surface roughness values were evaluated; they are given in the caption to Figure 1. The RMS roughness values are notably higher for the Al2O3(2.1 nm)/[Bi(2.5 nm)–FeNi(1.8 nm)]16/Sitall MLF sample. The smaller-scale (1×1 µm2) images clearly reveal a fine grainy structure of the surface morphology, which seems to be characteristic of all the studied film samples (see Figure 1e–h). The typical grain size, about 50 nm, is notably larger for the FeNi(1.8 nm)–Bi MLF sample incorporating the 2.5 nm-thick Bi layers; following the estimated RMS roughness values, the average grain size decreases to about 20 nm as the Bi layer thickness decreases to 1.4 nm. As one can see from the typical height profiles presented in Figure 1i,j, with decreasing Bi layer thickness from 2.5 to about 0.6 nm, the surface morphology becomes highly irregular due to the formation of conglomerates of nanoislands separated by rather flat (relatively low roughness) areas of about 20 nm.

B. Spectroscopic ellipsometry study of the ultrathin Bi–FeNi multilayer film samples

The ellipsometric angles Ψ(ω) and Δ(ω) were measured for the prepared Al2O3/(Bi–FeNi)16/Sitall MLF samples at angles of incidence of 60°, 65°, 70°, and 75°. Figure 2 shows the ellipsometric angles recorded at 65° and 70°. To model the contributions from free charge carriers and interband optical transitions, the complex dielectric function ε̃(ω) = ε1(ω) + iε2(ω) of the Bi and FeNi layers was interpreted in terms of Drude and Lorentz parts,

\tilde{\varepsilon}(E \equiv \hbar\omega) = \varepsilon_\infty - \frac{A_D}{E^2 + iE\gamma_D} + \sum_j \frac{A_j \gamma_j E_j}{E_j^2 - E^2 - iE\gamma_j},    (1)

where ε∞ is the high-frequency dielectric constant, which takes into account the contribution from higher-energy interband transitions. The fitted Drude parameters were AD and the free charge carrier scattering rate γD. The fitted parameters of the Lorentz bands were Ej, γj, and Aj: the band maximum energy, the full width at half maximum, and the ε2 band height, respectively.
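As a concrete illustration, the sketch below (ours, not the authors' fitting code) evaluates Equation 1 and converts ε2(ω) to the optical conductivity σ1(ω) = ε2(ω)ω(cm−1)/60 used later in the paper. The Drude and Lorentz parameters are the Table I values for the 1.4 nm Bi layer; the fitted ε∞ is not quoted in the text, so ε∞ = 1 is an assumption.

```python
import numpy as np

def eps_drude_lorentz(E, eps_inf, A_D, g_D, lorentz):
    """Equation 1: complex dielectric function vs photon energy E (eV)."""
    eps = eps_inf - A_D / (E**2 + 1j * E * g_D)   # Drude term
    for A_j, E_j, g_j in lorentz:                  # Lorentz bands
        eps = eps + A_j * g_j * E_j / (E_j**2 - E**2 - 1j * E * g_j)
    return eps

E = np.linspace(0.5, 6.5, 601)   # eV, the measured SE range
# Table I parameters for the 1.4 nm Bi layer; eps_inf = 1 is assumed
eps = eps_drude_lorentz(E, 1.0, 66.7, 1.51,
                        [(15.0, 0.458, 0.526),
                         (2.532, 5.315, 3.993),
                         (4.1, 7.8, 2.8)])
omega_cm = E * 8065.54                  # 1 eV = 8065.54 cm^-1
sigma1 = eps.imag * omega_cm / 60.0     # Ohm^-1 cm^-1
print(f"sigma1 at 0.5 eV: {sigma1[0]:.0f} Ohm^-1 cm^-1")
```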
The ellipsometric angles Ψ(ω) and Δ(ω) measured at the angles of incidence of 60°, 65°, 70°, and 75° were fitted for each sample simultaneously using the J.A. Woollam VASE software [16] in the framework of the designed multilayer model. The multilayer model for the studied Al2O3/(Bi–FeNi)/Sitall multilayers was constructed as schematically presented in Figure 3, in exactly the order in which the layers were deposited.

FIG. 2: (a–d) Ellipsometric angles Ψ(ω) and Δ(ω) (symbols) measured at the angles of incidence of 65° and 70° for the Al2O3/[Bi(d)–NiFe(1.8 nm)]16/Sitall multilayered films, where the Bi spacer layer thicknesses are d = 0.6, 1.4, 2.0, and 2.5 nm, respectively. The solid red, blue, green, and black curves show the corresponding simulation results for the 65° angle obtained with the dielectric function model of Equation 1.

In addition, we took into account the roughness of the surface by using the conventional effective medium approximation (EMA) based on the (50% Al2O3–50% vacuum) Bruggeman model.
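For reference, the two-phase Bruggeman condition Σ_i f_i (ε_i − ε_eff)/(ε_i + 2ε_eff) = 0 reduces to a quadratic in ε_eff. Below is a minimal sketch (ours, not the VASE implementation) for a 50% Al2O3–50% vacuum roughness layer; the Al2O3 permittivity value is an assumed visible-range number, not taken from the paper.

```python
import numpy as np

def bruggeman_2phase(eps1, eps2, f1):
    """Solve f1*(eps1-e)/(eps1+2e) + (1-f1)*(eps2-e)/(eps2+2e) = 0 for e.
    Clearing denominators gives 2e^2 - b*e - eps1*eps2 = 0 with
    b = eps1*(3*f1 - 1) + eps2*(3*(1-f1) - 1)."""
    f2 = 1.0 - f1
    b = eps1 * (3 * f1 - 1) + eps2 * (3 * f2 - 1)
    r = np.sqrt(b * b + 8 * eps1 * eps2 + 0j)
    e_plus, e_minus = (b + r) / 4, (b - r) / 4
    # physical root: non-negative absorption (Im e >= 0)
    return e_plus if e_plus.imag >= -1e-12 else e_minus

# 50% Al2O3 (eps ~ 3.1 in the visible, assumed) mixed with 50% vacuum
print(bruggeman_2phase(3.1 + 0j, 1.0 + 0j, 0.5))  # ~1.86, i.e. n_eff ~ 1.36
```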
The dispersion model for the Bi layers included three or four Lorentz terms as well as the Drude part. The dispersion model for the 1.8 nm permalloy layers incorporated in the studied MLF structures included the Drude term, responsible for the free charge carrier contribution, and one Lorentz oscillator to account for the most pronounced interband optical transition. In addition, the dielectric function spectra of the bare Sitall substrate derived from our earlier SE studies [18, 19] were introduced into the multilayer model, and the dielectric response of the Al2O3 capping layer was represented by tabulated complex dielectric function spectra [20]. The thicknesses of the Bi and FeNi layers, as well as of the surface layers, were fitted. The unknown parameters were allowed to vary until the minimum of the mean squared error (MSE) was reached. The best simulation results for the studied [Bi(0.6, 1.4, 2.0, 2.5 nm)–FeNi(1.8 nm)]16 MLF samples corresponded to the lowest obtained MSE values of 0.3843, 0.297, 0.2934, and 0.4508, respectively. The good quality of the fit allowed us to estimate the actual Bi and FeNi layer thicknesses in the MLFs under study. The quality of the fit is demonstrated in Figure 2, where the measured ellipsometric angles are plotted along with the simulation results. The Drude and Lorentz parameters resulting from the simulation of the Al2O3/[Bi(d)–FeNi(1.8 nm)]16/Sitall MLF samples are given in Tables I and II, and the resulting ε1(ω) and ε2(ω) parts of the Bi and FeNi (pseudo)dielectric function spectra are presented in Figure 4.
269
+ 2.0, and 2.5 nm thick Bi spacers inside the investigated Bi–FeNi MLFs demonstrate metal-
270
+ lic character. Moreover, the ε1(ω) function progressively decreases while the Bi thickness
271
+ decreases from 2.5–2.0 to 1.4 nm and the ε2(ω) increases at low photon energies, respec-
272
+ tively. According to our simulation results, we expect that the best metallicity properties
273
+ are demonstrated by the Bi layer in the [Bi(1.4 nm)–NiFe(1.8 nm)]16 structure. At the same
274
+ time, the complex (pseudo)dielectric functions of the thinnest 0.6 nm thick Bi layer look
275
+ somewhat different. Here, in addition to the low-energy metallic Drude response identified
276
+ by the characteristic behavior of the ε1(ω) and ε2(ω), the Lorentz band around 4–5 eV makes
277
+ an essential contribution to the dielectric function response (the corresponding Drude (AD
278
+ and γD) and Lorentz (Aj, Ej, and γj) parameters are listed in Table I). Next, being similar,
279
+ the dielectric functions of the 1.8 nm thick permalloy layers in the [FeNi–Bi(1.4, 2.0, 2.5 nm)]
280
+ MLFs are dominated by the ε2(ω) resonance and ε1(ω) antiresonance features, indicat-
281
+ ing the predominant contribution from the Lorentz oscillator peaking at around 3 eV (see
282
+ 8
283
+
284
+ (a)
285
+ (b)
286
+ ( )
287
+ (d)
288
+ c
289
+ FIG. 3:
290
+ The multilayer model applied for the simulation of the Al2O3/[Bi(0.6, 1.4, 2.0, and
291
+ 2.5 nm)–FeNi(1.8 nm)]16/Sitall samples. The Bi and FeNi thicknesses estimated from the model
292
+ simulations in (a) 0.684±0.037 nm and 2.082±0.116 nm, (b) 1.408±0.574 nm and 1.780±0.65 nm,
293
+ (c) 1.764±0.194 nm and 1.825±0.358 nm, and (d) 2.387±0.128 nm and 1.782±0.171 nm. Note good
294
+ agreement between the thicknesses of the FeNi and Bi layers estimated from the model simula-
295
+ tions and their respective nominal thickness values. The roughness and Al2O3 thicknesses esti-
296
+ mated from the model simulations in (a) 0.00±3.85 nm and 1.283±2.37 nm, (b) 0.000±4.97 nm and
297
+ 4.967±2.17 nm, (c) 0.848±5.86 nm and 4.738±2.92 nm, and (d) 0.000±2.95 nm and 5.389±1.23 nm.
298
+ Figure 4c,d).
299
+ An upturn evident in the ε2(ω) at low photon energies indicates an addi-
300
+ tional Drude contribution, which is relatively less pronounced. Following our simulation
301
+ results, we expect the advanced metallicity properties of the FeNi layer in the [Bi(0.6 nm)–
302
+ NiFe(1.8 nm)]16 structure (see the corresponding Drude (AD and γD) and Lorentz (Aj, Ej,
303
+ and γj) parameters listed in Table II).
304
+ Figure 5a–d presents the evolution of the Bi intralayer optical conductivity, σ1(ω) =
305
+ ε2(ω)ω(cm−1)/60, upon decreasing the Bi spacer layer thickness in the [FeNi(1.8 nm) –
306
+ Bi(2.5, 2.0, 1.4, 0.6 nm)]16 structures, and Figure 5e–h shows the associated optical conduc-
307
+ tivity spectra of the 1.8 nm FeNi permalloy layer. Here, the contributions from the Drude
308
+ and Lorentz oscillators following the multilayer model simulations using Equation 1 are evi-
309
+ 9
310
+
FIG. 4: The complex (pseudo)dielectric function spectra, ε2(ω) and ε1(ω), of the (a,b) Bi layers and (c,d) FeNi layers in the [Bi(d)–FeNi(1.8 nm)]16 structures, shown for the Bi layer nominal thickness values d = 0.6, 1.4, 2.0, and 2.5 nm by solid red, blue, green, and black curves, respectively.

TABLE I: Drude-Lorentz parameters for the Bi spacer layer in the [Bi(0.6, 1.4, 2.0, 2.5 nm)–NiFe(1.8 nm)]16 multilayered films obtained from the model simulations of the dielectric functions using Equation 1. The values of Ej, γj, and γD are given in eV, and the optical conductivity limit σ1(ω→0) in Ω−1·cm−1.
Parameters            0.6 nm         1.4 nm         2.0 nm         2.5 nm
Drude       AD        46.(9)±4       66.(7)±4       24.(5)±4       25.(1)±2
            γD        1.2(5)±0.09    1.51(0)±0.06   2.7(2)±0.4     3.1(3)±0.2
            σ1(ω→0)   6300±540       8970±540       3290±540       3370±270
Lorentz     E1        –              0.45(8)±0.05   0.35(9)±0.01   0.38(6)±0.004
oscillator  A1        –              15.(0)±6       96.(0)±10      70.(8)±2
            γ1        –              0.52(6)±0.09   0.79(1)±0.02   0.67(6)
Lorentz     E2        4.67           5.31(5)±0.03   5.08(7)±0.04   4.77(5)±0.04
oscillator  A2        10.2(7)±0.6    2.53(2)±0.05   1.2(5)±0.1     0.67(6)±0.08
            γ2        4.2(1)±0.07    3.99(3)±0.07   3.4(7)±0.2     2.5(5)±0.2
Lorentz     E3        11.1           7.8            7.7            7.7
oscillator  A3        7.2            4.1            4.1            4.1
            γ3        8.9            2.8            2.8            2.8
TABLE II: Drude-Lorentz parameters for the 1.8 nm thick NiFe layer in the [Bi(0.6, 1.4, 2.0, 2.5 nm)–NiFe]16 multilayered films obtained from the simulations of the model dielectric function described by Equation 1. The values of E1, γ1, and γD are given in eV, and the optical conductivity limit σ1(ω→0) in Ω−1·cm−1.

Parameters            0.6 nm           1.4 nm        2.0 nm        2.5 nm
Drude       AD        33.(8)±2         15.(0)±1      21.(7)±2      13.(1)±2
            γD        0.876(5)±0.04    2.8(2)±0.3    3.4(2)±0.4    3.1(3)±0.2
            σ1(ω→0)   4540±270         2020±130      2920±270      1760±270
Lorentz     E1        1.87             3.32          3.32          3.32
oscillator  A1        14.76            14.28         15.23         14.74
            γ1        3.62             5.88          5.65          5.95
The optical conductivity spectra of the Bi and FeNi layers follow the main trends identified in their complex dielectric function spectra presented in Figure 4.

IV. DISCUSSION

First, we would like to discuss the GMR effects relevant for the studied MLF systems. Our simulations of the dielectric functions for the 1.8 nm-thick NiFe layer inside the [Bi(0.6, 1.4, 2.0, 2.5 nm)–NiFe(1.8 nm)] MLFs show the presence of the Drude term complemented by the pronounced Lorentz band located at around 2–3 eV (see Table II). From the corresponding optical conductivity spectra presented in Figure 5e–h, one can notice that the associated Drude dc limit, σ1(ω→0), displays an oscillating character (in agreement with the results deduced for the corresponding Drude parameter AD; see Table II and Figure 6). We can expect that the Bi spacer thicknesses for which the FeNi layers are preferentially antiFM coupled in the studied MLFs are around 1.4 and 2.5 nm, implying that the [Bi(1.4, 2.5 nm)–NiFe(1.8 nm)]16 film structures will exhibit a drop in resistance (negative magnetoresistance) when exposed to an external magnetic field. It is well known from the literature that the first antiFM maximum exhibits negative magnetoresistance of about 20%, while the second antiFM maximum decreases to about 10%, and the presence of the third antiFM maximum cannot confidently be retrieved (see, for example, Ref. [21] and references therein).
FIG. 5: The intralayer optical conductivity, σ1(ω) = ε2(ω)ω[cm−1]/60, for the (a–d) Bi layers and (e–h) FeNi layers in the [Bi(d)–FeNi(1.8 nm)]16 structures, shown for the Bi layer nominal thickness values d = 2.5, 2.0, 1.4, and 0.6 nm by the solid (a,e) black, (b,f) green, (c,g) blue, and (d,h) red curves, respectively. The contributions from the Drude term and the Lorentz oscillator in (a–d) are displayed by the yellow and cyan shaded areas. In (e–h) the Drude term for the FeNi layers is displayed by the magenta shaded area. The dotted curves show the sum of the Drude and Lorentz contributions.
Using a simple model of a two-current series resistor [22], the magnetoresistance ΔR/R can be estimated as

\frac{\Delta R}{R} = 100\% \times \frac{(\alpha-\beta)^2}{4\left(\alpha + \frac{d_{\mathrm{Bi}}}{d_{\mathrm{FeNi}}}\right)\left(\beta + \frac{d_{\mathrm{Bi}}}{d_{\mathrm{FeNi}}}\right)},    (2)

where dBi and dFeNi are the thicknesses of the Bi and FeNi layers, and α = ρ↓FeNi/ρBi and β = ρ↑FeNi/ρBi are the ratios of the resistivity in the FeNi layer to that in the Bi layer in the spin-down and spin-up current channels, respectively. Exploiting the values of ρ = 1/σ1(ω→0) estimated for the 1.4 nm Bi and 1.8 nm FeNi layers from the current model simulations (see Tables I and II), namely ρBi = 1/8970 Ω·cm, ρ↓FeNi = 1/2020 Ω·cm, and ρ↑FeNi = 1/4540 Ω·cm (the latter estimate is given by the FM coupling for the 0.6 nm Bi spacer), we obtain α = 4.4 and β = 2.0. Then, using Equation (2), we have ΔR/R = 10%. This means that the 1.4 nm Bi spacer corresponds to the second antiFM maximum. Following the same approach for the 2.5 nm Bi spacer, where ρBi = 1/3370 Ω·cm, ρ↓FeNi = 1/1760 Ω·cm, and ρ↑FeNi = 1/2920 Ω·cm (corresponding to the FM coupling for the 2.0 nm Bi spacer), we obtain α = 1.9 and β = 1.2. Using Equation (2), we have ΔR/R = 1.4%, which may correspond to the very weakly pronounced third antiFM maximum. From the analysis presented above, we may expect that the first antiFM maximum, corresponding to a magnetoresistance of about 20%, occurs for a Bi spacer thickness of about 0.9 nm, in agreement with the results presented in Ref. [21].
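A quick numerical check of Equation (2) with the resistivity ratios quoted above (our sketch, not the authors' code; the small deviation from the quoted 1.4% for the 2.5 nm spacer reflects rounding of the fitted parameters):

```python
def gmr_two_current(alpha, beta, d_bi, d_feni):
    """Equation (2): two-current series-resistor estimate of dR/R in percent."""
    x = d_bi / d_feni
    return 100.0 * (alpha - beta) ** 2 / (4.0 * (alpha + x) * (beta + x))

# 1.4 nm Bi spacer: alpha = 8970/2020 ~ 4.4, beta = 8970/4540 ~ 2.0
print(gmr_two_current(8970 / 2020, 8970 / 4540, 1.4, 1.8))  # ~10
# 2.5 nm Bi spacer: alpha = 3370/1760 ~ 1.9, beta = 3370/2920 ~ 1.2
print(gmr_two_current(3370 / 1760, 3370 / 2920, 2.5, 1.8))  # ~1.7
```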
Further, in the XRD patterns of the investigated Al2O3/[Bi(1.4, 2.0, 2.5 nm)–NiFe(1.8 nm)]16/Sitall film samples, the peak of the R-3m crystalline Bi phase is identified at 2θ ≈ 26.2°, suggesting a (012) orientation of the Bi layers, which is characterized by an interlayer distance of 3.28 Å. Using STM and reflection high-energy electron diffraction (RHEED) techniques, it was shown that the initial growth of Bi(012)-type films occurs in the form of islands with a height increment of about 6.6 Å, indicating even-number layer stability leading to the laterally flat morphology of the Bi(012)-type islands [23]. Consequently, we can expect that the 0.6, 1.4, 2.0, and 2.5 nm Bi spacer layers in the investigated MLFs incorporate about 2, 4, 6, and 8 (012)-type Bi planes, respectively.

The model simulations for the [Bi(2.5, 2.0 nm)–FeNi(1.8 nm)]16 film samples reveal that the low-energy dielectric function of the Bi intralayers has competing contributions from the Drude term and from the intense Lorentz band around 0.36–0.39 eV with an ε2 maximum height of 70–100 (see Table I). The Drude and Lorentz contributions are more clearly pronounced in the corresponding optical conductivity spectra (see Figure 5a,b).
The obtained Drude and Lorentz parameters are in excellent agreement with those deduced in our previous study [13] for the Bi spacer layer incorporated in the [Bi(2.5, 2.0 nm)–NiFe(1.2 nm)]16 structures studied there. The pronounced Lorentz band found at low photon energies for Bi single crystals (rhombohedral symmetry, space group R-3m) [24, 25] and bulk Bi layers [26, 27] is characteristic of the semimetallic-like electronic band structure due to the contributions from the interband transitions near the Γ point, Γ6+–Γ6− and Γ45+–Γ6− [2], and near the T point, T6−–T45− [4]. The estimated values (see Table I) of the Drude dc limit σ1(ω→0) (2750–3830 Ω−1·cm−1), as well as the free charge carrier scattering rate γD (2.3–3.3 eV), are consistent with those peculiar to the metallic surface states related to the Rashba SOC in Bi(111) films (σ1(ω→0) = 2300 Ω−1·cm−1 and γD = 2.0 eV) [6]. Meanwhile, the model simulation for the [Bi(1.4 nm)–NiFe(1.8 nm)]16 structure indicates that for the 1.4 nm Bi layer the Drude dc limit significantly increases to 8970±540 Ω−1·cm−1, while γD essentially decreases to 1.50±0.06 eV. In this case, the Lorentz band is nearly suppressed. The Drude parameters found for the ultrathin Bi layer inside the [Bi(0.6 nm)–NiFe(1.8 nm)]16 structure are slightly different, namely, σ1(ω→0) = 6300±540 Ω−1·cm−1 and γD = 1.2±0.1 eV, and the Lorentz band is not clearly present (see Figure 5c,d and Table I).

Thus, we have discovered that, on the one hand, the optical conductivity spectra of the 2.0 and 2.5 nm thick Bi spacer layers in the (Bi–FeNi) MLFs, incorporating 8 and 6 Bi(012)-type monolayers, respectively, have contributions from the pronounced low-energy Lorentz oscillator and from the free charge carrier Drude term (for details, see Figure 5a,b and Table I). Here, the presence of the low-energy Lorentz band points to the Bi semimetallic phase contribution, and the parameters obtained for the Drude conductivity indicate that its origin can be associated with the surface metallic states [6]. Therefore, the 2.0 and 2.5 nm Bi layers can be associated with the semimetallic Bi phase sandwiched between two metallic layers on the top and bottom surfaces. On the other hand, the contribution from the intrinsic Lorentz band is strongly suppressed for the 1.4 and 0.6 nm layers, where the Drude conductivity displays notably improved metallicity, as one can see from the optical conductivity spectra shown in Figure 5c,d (for details, see Table I).
From the above discussion of the obtained results, we can conclude that the Bi layer consisting of four Bi(012)-type monolayers represents a kind of crossover regarding the contributions from the semimetallic Bi phase and/or surface metallic-like states. Here we noticed some similarity with the theoretical results presented for ultrathin Bi(111) layers by Liu et al. [12]. There, it was established that below 4 Bi(111) BLs the film is a semiconductor with a direct gap open at the Γ point and a positive indirect band gap, leading to nontrivial Z2 topology (ν = 1) peculiar to an intrinsic 2D TI. However, above 4 Bi(111) BLs, the indirect band gap becomes negative, resulting in a semiconductor-semimetal transition due to the overlapping of two bands at the Fermi level around the Γ and M points. It is argued by Liu et al. [12] that Bi layers consisting of 5 to 8 Bi(111) BLs represent a 2D TI situated between two "trivial" metallic surfaces. This means that for the surface considered as an individual 2D system, its Z2 number is trivial (ν = 0). The surface bands make no contribution to the nontrivial Z2 topology and, therefore, these trivial metallic surfaces are not robust and can easily be removed by surface defects or impurities. We found [13] that the Bi layers in the [Bi(2.0, 2.5 nm)–NiFe(0.8 nm)] multilayers, incorporating the nanoisland permalloy layer, exhibit bulk-like semimetallic properties of the electronic band structure, although the surface (Drude) metallic conductivity is absent there (see Fig. 4(d) of Ref. [13]). Indeed, strong magnetic and spatial disorder induced by magnetic FeNi nanoislands, as well as long-range many-body interactions between the magnetic moments of permalloy nanoislands [17], may lead to specific localization of free charge carriers [28]. However, the surface (or interface) conductivity states for the 1.4 nm layer in the Bi–FeNi(1.8 nm) multilayers may be topologically nontrivial, in which case the electrons cannot be backscattered by impurities. Here, the Drude dc limit is 8970±540 Ω−1·cm−1 and the scattering rate γD = 1.5±0.06 eV. We found that the 0.6 nm thick Bi layer exhibits a somewhat different Drude dc limit (6300±540 Ω−1·cm−1) and γD (1.2±0.1 eV) (see Table I and Figure 6), which can be attributed to the discontinuous nanoisland structure of this layer.

Finally, we would like to note that it will be challenging to investigate the dc transport and superconductivity properties of ultrathin Bi films possessing 2D TI surface states following the approach presented in Ref. [29], where subkelvin superconductivity without any external stimuli was discovered in 3D TI Cd3As2 films [30, 31].
V. CONCLUSIONS

In summary, using wide-band (0.5–6.5 eV) spectroscopic ellipsometry, we studied the optical properties of the [Bi(0.6, 1.4, 2.0, 2.5 nm)–NiFe(1.8 nm)]16 MLFs prepared by rf sputtering. The XRD analysis suggested that the 0.6, 1.4, 2.0, and 2.5 nm Bi layers in the studied MLFs correspond to about two, four, six, and eight Bi(012)-type monolayers, respectively.

FIG. 6: (a,b) Parameters of the Drude term (AD and γD) for the Bi (filled symbols) and FeNi (empty symbols) layers in the [Bi(0.6, 1.4, 2.0, 2.5 nm)–FeNi(1.8 nm)] MLF structures.

From the multilayer model simulations of the measured ellipsometric data, we extracted the Bi and FeNi layer dielectric functions. The dielectric functions of the 2.0 and 2.5 nm Bi spacer layers are represented by the Drude resonance due to the surface states and by the low-energy Lorentz band peaking at around 0.3–0.4 eV. The pronounced Lorentz band is characteristic of the semimetallic bulk-like Bi electronic band structure due to the contributions from the interband transitions near the Γ point. We discovered that the 2.0 and 2.5 nm Bi spacer layers can be associated with the semimetallic Bi phase sandwiched between two trivial (topological number ν = 0) metallic layers on the top and bottom surfaces. The contribution from the low-photon-energy Lorentz band is strongly suppressed for the 1.4 and 0.6 nm Bi layers, where the Drude conductivity displays notably improved metallicity. This indicates that the Bi layer consisting of four Bi(012)-type monolayers represents a kind of crossover regarding the contributions from the semimetallic Bi phase and/or surface metallic-like states. Therefore, the properties of Bi layers below four monolayers may be associated with nontrivial topology (topological number ν = 1) peculiar to an intrinsic 2D TI. We expect that Bi layers with a thickness of about 0.9 nm will exhibit the maximal GMR effect of about 20% in the (Bi–FeNi) MLFs; the corresponding Drude dc limit is about 8970±540 Ω−1·cm−1. These states may be protected from backscattering, which makes them promising for spintronic devices and quantum computing.
Acknowledgement

We thank F.A. Pudonin for providing us with the Bi/FeNi multilayer film samples and O. Pacherova for the XRD analysis. We thank A. Muratov for participation in the spectroscopic ellipsometry measurements. This work was supported by the European Structural and Investment Funds and the Czech Ministry of Education, Youth, and Sports (Project No. SOLID21, CZ.02.1.01/0.0/0.0/16_019/0000760).

Declaration of competing interest

The authors declare no conflict of interest.
[1] Bychkov, Y.A.; Rashba, E.I. JETP Lett. 1984, 39, 78.
[2] Golin, S. Phys. Rev. 1968, 166, 643.
[3] Gonze, X.; Michenaud, J.-P.; Vigneron, J.-P. Phys. Rev. B 1990, 41, 11827.
[4] Liu, Y.; Allen, R.E. Phys. Rev. B 1995, 52, 1566.
[5] Hofmann, Ph. Prog. Surf. Sci. 2006, 81, 191.
[6] Yokota, Y.; Takeda, J.; Dang, C.; Han, G.; McCarthy, D.N.; Nagao, T.; Hishita, S.; Kitajima, K.; Katayama, I. Appl. Phys. Lett. 2012, 100, 251605.
[7] Hoffman, C.A.; Meyer, J.R.; Bartoli, F.J. Phys. Rev. B 1993, 48, 11431.
[8] Koroteev, Yu.M.; Bihlmayer, G.; Chulkov, E.V.; Blügel, S. Phys. Rev. B 2008, 77, 045428.
[9] Wada, M.; Murakami, S.; Freimuth, F.; Bihlmayer, G. Phys. Rev. B 2011, 83, 121310(R).
[10] Murakami, S. Phys. Rev. Lett. 2006, 97, 236805.
[11] Fu, L.; Kane, C.L.; Mele, E.J. Phys. Rev. Lett. 2007, 98, 106803.
[12] Liu, Z.; Liu, C.-X.; Wu, Y.-S.; Duan, W.-H.; Liu, F.; Wu, J. Phys. Rev. Lett. 2011, 107, 136805.
[13] Kovaleva, N.N.; Chvostova, D.; Pacherova, O.; Muratov, A.V.; Fekete, L.; Sherstnev, I.A.; Kugel, K.I.; Pudonin, F.A.; Dejneka, A. Appl. Phys. Lett. 2021, 119, 183101.
[14] Sherstnev, I.A. Ph.D. Thesis; P.N. Lebedev Physical Institute: Moscow, Russia, 2014.
[15] Boltaev, A.P.; Pudonin, F.A.; Sherstnev, I.A.; Egorov, D.A. JETP 2017, 125, 465.
[16] Woollam, J.A. VASE Spectroscopic Ellipsometry Data Analysis Software; J.A. Woollam Co.: Lincoln, NE, 2010.
[17] Stupakov, A.; Bagdinov, A.V.; Prokhorov, V.V.; Bagdinova, A.N.; Demikhov, E.I.; Dejneka, A.; Kugel, K.I.; Gorbatsevich, A.A.; Pudonin, F.A.; Kovaleva, N.N. J. Nanomater. 2016, Article ID 3190260.
[18] Kovaleva, N.N.; Chvostova, D.; Bagdinov, A.V.; Petrova, M.G.; Demikhov, E.I.; Pudonin, F.A.; Dejneka, A. Appl. Phys. Lett. 2015, 106, 051907.
[19] Kovaleva, N.; Chvostova, D.; Dejneka, A. Metals 2017, 7, 257.
[20] Palik, E.D. Handbook of Optical Constants of Solids; Elsevier Science: USA, 1991.
[21] Hütten, A.; Mrozek, S.; Heitmann, S.; Hempel, T.; Brückl, H.; Reiss, G. Acta Mater. 1999, 47, 4245.
[22] Mathon, J. Contemporary Physics 1991, 32, 143.
[23] Nagao, T.; Sadowski, J.T.; Saito, M.; Yaginuma, S.; Fujikawa, Y.; Kogure, T.; Ohno, T.; Hasegawa, S.; Sakurai, T. Phys. Rev. Lett. 2004, 93, 105501.
[24] Wang, P.Y.; Jain, A.L. Phys. Rev. B 1970, 2, 2978.
[25] Lenham, A.P.; Treherne, D.M.; Metcalfe, R.J. J. Opt. Soc. Am. 1965, 55, 1072.
[26] Hunderi, O. J. Phys. F 1975, 5, 2214.
[27] Toudert, J.; Serna, R. Opt. Mater. Express 2017, 7, 2299.
[28] Kovaleva, N.N.; Kusmartsev, F.V.; Mekhiya, A.B.; Trunkin, I.N.; Chvostova, D.; Davydov, A.B.; Oveshnikov, L.N.; Pacherova, O.; Sherstnev, I.A.; Kusmartseva, A.; Kugel, K.I.; Dejneka, A.; Pudonin, F.A.; Luo, Y.; Aronzon, B.A. Sci. Rep. 2020, 10, 21172.
[29] Suslov, A.V.; Davydov, A.B.; Oveshnikov, L.N.; Morgun, L.A.; Kugel, K.I.; Zakhvalinskii, V.S.; Pilyuk, E.A.; Kochura, A.V.; Kuzmenko, A.P.; Pudalov, V.M.; Aronzon, B.A. Phys. Rev. B 2019, 99, 094512.
[30] Kochura, A.V.; Zakhvalinskii, V.S.; Htet, A.Z.; Ril', A.I.; Pilyuk, E.A.; Kuz'menko, A.P.; Aronzon, B.A.; Marenkin, S.F. Inorg. Mater. 2019, 55, 879.
[31] Kovaleva, N.; Chvostova, D.; Fekete, L.; Muratov, A. Metals 2020, 10, 1398.
-tE5T4oBgHgl3EQfRw5F/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-tFQT4oBgHgl3EQf7DaV/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f345d7ada0d4b65b920ecd2ac93a3e4568fb6d1aa73438b048baede596c153d5
+size 193255
.gitattributes CHANGED
@@ -15351,3 +15351,74 @@ p9FAT4oBgHgl3EQfex0W/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 p9FAT4oBgHgl3EQfex0W/content/2301.08577v1.pdf filter=lfs diff=lfs merge=lfs -text
 M9FIT4oBgHgl3EQfcitV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 s9E3T4oBgHgl3EQfjgr4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ONAzT4oBgHgl3EQfWfxA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+YdAyT4oBgHgl3EQfWffL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+adFLT4oBgHgl3EQfXC8y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+bNFIT4oBgHgl3EQfmCv-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+v9E5T4oBgHgl3EQfMQ4D/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CtE2T4oBgHgl3EQfSAdG/content/2301.03787v1.pdf filter=lfs diff=lfs merge=lfs -text
+gNAzT4oBgHgl3EQfMftT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+xdE3T4oBgHgl3EQf_gsD/content/2301.04834v1.pdf filter=lfs diff=lfs merge=lfs -text
+gNAzT4oBgHgl3EQfMftT/content/2301.01132v1.pdf filter=lfs diff=lfs merge=lfs -text
+0tFRT4oBgHgl3EQfkzf9/content/2301.13597v1.pdf filter=lfs diff=lfs merge=lfs -text
+99AzT4oBgHgl3EQfg_xe/content/2301.01477v1.pdf filter=lfs diff=lfs merge=lfs -text
+cdFQT4oBgHgl3EQfiTZ3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+LtFLT4oBgHgl3EQfMS8S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mdAzT4oBgHgl3EQfN_tE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+q9E2T4oBgHgl3EQf0wi3/content/2301.04145v1.pdf filter=lfs diff=lfs merge=lfs -text
+PdFJT4oBgHgl3EQfIywm/content/2301.11457v1.pdf filter=lfs diff=lfs merge=lfs -text
+xdE3T4oBgHgl3EQf_gsD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+3dFAT4oBgHgl3EQfERwH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_9E1T4oBgHgl3EQf8wXt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QNE3T4oBgHgl3EQfyAt7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_tAzT4oBgHgl3EQfvv0d/content/2301.01710v1.pdf filter=lfs diff=lfs merge=lfs -text
+ltE2T4oBgHgl3EQfywiY/content/2301.04124v1.pdf filter=lfs diff=lfs merge=lfs -text
+_tAzT4oBgHgl3EQfvv0d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+udE0T4oBgHgl3EQfbgBl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+q9E2T4oBgHgl3EQf0wi3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+x9E3T4oBgHgl3EQfPAm3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+DNE3T4oBgHgl3EQfUwpD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNE3T4oBgHgl3EQfwgvw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+udFKT4oBgHgl3EQf3i5N/content/2301.11928v1.pdf filter=lfs diff=lfs merge=lfs -text
+wtAyT4oBgHgl3EQfavdq/content/2301.00248v1.pdf filter=lfs diff=lfs merge=lfs -text
+pingpong/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+vtAyT4oBgHgl3EQfOPaQ/content/2301.00003v1.pdf filter=lfs diff=lfs merge=lfs -text
+99AzT4oBgHgl3EQfg_xe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QNE3T4oBgHgl3EQfyAt7/content/2301.04716v1.pdf filter=lfs diff=lfs merge=lfs -text
+R9FPT4oBgHgl3EQfqTWc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_9E1T4oBgHgl3EQf8wXt/content/2301.03550v1.pdf filter=lfs diff=lfs merge=lfs -text
+WtE3T4oBgHgl3EQfFgks/content/2301.04305v1.pdf filter=lfs diff=lfs merge=lfs -text
+vtAyT4oBgHgl3EQfOPaQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+itAzT4oBgHgl3EQfM_uk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jNFST4oBgHgl3EQfGzio/content/2301.13723v1.pdf filter=lfs diff=lfs merge=lfs -text
+dtAyT4oBgHgl3EQfwvmK/content/2301.00654v1.pdf filter=lfs diff=lfs merge=lfs -text
+WtE3T4oBgHgl3EQfFgks/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+VNE0T4oBgHgl3EQflwEn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WdE0T4oBgHgl3EQfmAG-/content/2301.02494v1.pdf filter=lfs diff=lfs merge=lfs -text
+WNFRT4oBgHgl3EQf9DhY/content/2301.13686v1.pdf filter=lfs diff=lfs merge=lfs -text
+mNE3T4oBgHgl3EQf6QvR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6NE5T4oBgHgl3EQfPg5N/content/2301.05505v1.pdf filter=lfs diff=lfs merge=lfs -text
+pingpong/content/之江实验室2023年度乒乓球团体赛方案.pdf filter=lfs diff=lfs merge=lfs -text
+XtE2T4oBgHgl3EQfuwhB/content/2301.04083v1.pdf filter=lfs diff=lfs merge=lfs -text
+QNAzT4oBgHgl3EQfW_wb/content/2301.01309v1.pdf filter=lfs diff=lfs merge=lfs -text
+4NFST4oBgHgl3EQfZjjS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wtAyT4oBgHgl3EQfavdq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jNFJT4oBgHgl3EQfXSyN/content/2301.11521v1.pdf filter=lfs diff=lfs merge=lfs -text
+itAzT4oBgHgl3EQfM_uk/content/2301.01142v1.pdf filter=lfs diff=lfs merge=lfs -text
+sNE1T4oBgHgl3EQfjQRT/content/2301.03260v1.pdf filter=lfs diff=lfs merge=lfs -text
+VNE0T4oBgHgl3EQflwEn/content/2301.02489v1.pdf filter=lfs diff=lfs merge=lfs -text
+O9AzT4oBgHgl3EQfIfuL/content/2301.01063v1.pdf filter=lfs diff=lfs merge=lfs -text
+iNA0T4oBgHgl3EQfIP_x/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QNE3T4oBgHgl3EQfCwmU/content/2301.04279v1.pdf filter=lfs diff=lfs merge=lfs -text
+89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf filter=lfs diff=lfs merge=lfs -text
+INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf filter=lfs diff=lfs merge=lfs -text
+ltE2T4oBgHgl3EQfywiY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+-dAyT4oBgHgl3EQfRPaF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+LNE3T4oBgHgl3EQfYAr5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jNFST4oBgHgl3EQfGzio/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+b9E2T4oBgHgl3EQfFgZ2/content/2301.03647v1.pdf filter=lfs diff=lfs merge=lfs -text
+QNAzT4oBgHgl3EQfW_wb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jNFJT4oBgHgl3EQfXSyN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+0tFRT4oBgHgl3EQfkzf9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+INAzT4oBgHgl3EQfjf2_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+89AzT4oBgHgl3EQfgvxw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0NE3T4oBgHgl3EQfmwqR/content/tmp_files/2301.04619v1.pdf.txt ADDED
@@ -0,0 +1,1165 @@
+ TinyHD: Efficient Video Saliency Prediction with Heterogeneous Decoders using Hierarchical Maps Distillation
+ Feiyan Hu(1), Simone Palazzo(2), Federica Proietto Salanitri(2), Giovanni Bellitto(2), Morteza Moradi(2), Concetto Spampinato(2), Kevin McGuinness(1)
+ (1) Insight SFI Research Centre for Data Analytics, Dublin City University, Dublin, Ireland
+ {feiyan.hu, kevin.mcguinness}@dcu.ie
+ (2) PeRCeiVe Lab, University of Catania, Catania, Italy
+ {simone.palazzo, concetto.spampinato}@unict.it
+ 
+ Abstract
+ Video saliency prediction has recently attracted the attention of the research community, as it is an upstream task for several practical applications. However, current solutions are particularly computationally demanding, especially due to the wide usage of spatio-temporal 3D convolutions. We observe that, while different model architectures achieve similar performance on benchmarks, visual variations between the predicted saliency maps remain significant. Inspired by this intuition, we propose a lightweight model that employs multiple simple heterogeneous decoders and adopts several practical approaches to improve accuracy while keeping computational costs low, such as hierarchical multi-map knowledge distillation, multi-output saliency prediction, unlabeled auxiliary datasets and channel reduction with teacher assistant supervision. Our approach achieves saliency prediction accuracy on par with or better than state-of-the-art methods on the DHF1K, UCF-Sports and Hollywood2 benchmarks, while significantly improving the efficiency of the model.
+ 
+ 1. Introduction
+ Video saliency prediction aims at estimating patterns of human attention during free-viewing of dynamic scenes, to emulate the capability of the human visual system to quickly analyze and interpret the surrounding environment. Due to its several practical applications [7, 5, 28, 33, 9, 40, 12, 27], it is an active area of research in computer vision. However, the solution to this problem is not trivial, for several reasons. First, attention mechanisms in the human visual system are not fully understood, so it is not clear how to emulate them. Second, the task requires complex modeling of both visual features and their motion and interaction: an object with striking visual patterns may be overshadowed by a bland element of the scene that starts moving in a peculiar way. Finally, modeling the temporal dimension may become computationally expensive, especially with current deep learning methods based on spatio-temporal 3D convolutions, thus limiting the applicability to low-power devices.
+ 
+ [Figure 1: Measuring prediction similarity among the video saliency prediction models TASED, ViNet and HD2S on the DHF1K validation set. Pairwise similarities (diagonal entries are 100%): CC metric: TASED/ViNet 84%, TASED/HD2S 72%, ViNet/HD2S 75%; SIM metric: TASED/ViNet 73%, TASED/HD2S 57%, ViNet/HD2S 58%.]
+ 
+ Many solutions have been proposed, based on different assumptions on how to capture video saliency. It is interesting to note that, in spite of the remarkably different research directions followed by the variety of works in the literature, top results on video saliency prediction benchmarks are very close [1, 3, 36], suggesting that the predictions of different models are similar. We assessed the validity of this conclusion by comparing three of the best performing methods on the DHF1K dataset [36] (TASED [25], HD2S [1] and ViNet [16]) not in terms of their scores on summary metrics, but in terms of the relative similarity of the predicted saliency maps. To illustrate our findings, Fig. 1 shows pairwise similarities between predicted maps over two common metrics, Linear Correlation Coefficient (CC) and Similarity (SIM). Although the three approaches achieve similar scores on both metrics on DHF1K (between 0.470 and 0.511 for CC, and between 0.361 and 0.406 for SIM), the same metrics computed between each other are relatively low, compared to what one would expect given their similarity to the saliency ground truth. A visual inspection of the saliency maps generated by the methods under comparison confirms this behavior: Fig. 2 shows that it is common to find cases where each approach produces remarkably different saliency maps.
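+ As a concrete reference for how such pairwise scores can be obtained, the following sketch (our illustration, not code from the paper) computes CC and SIM between two saliency maps using their standard definitions; p and q are assumed to be non-negative 2D arrays of equal shape:
+ ```python
+ import numpy as np
+ 
+ def cc(p: np.ndarray, q: np.ndarray) -> float:
+     """Linear Correlation Coefficient between two saliency maps."""
+     p = (p - p.mean()) / (p.std() + 1e-8)
+     q = (q - q.mean()) / (q.std() + 1e-8)
+     return float((p * q).mean())
+ 
+ def sim(p: np.ndarray, q: np.ndarray) -> float:
+     """Similarity metric: histogram intersection of the two maps,
+     each first normalized to sum to one."""
+     p = p / (p.sum() + 1e-8)
+     q = q / (q.sum() + 1e-8)
+     return float(np.minimum(p, q).sum())
+ ```
+ Applying cc and sim to each pair of model outputs, frame by frame, and averaging over a validation set yields pairwise scores of the kind reported in Fig. 1.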
+ Notably, all three methods (TASED, HD2S and ViNet) are encoder-decoder networks and share the same encoder, S3D [38], while employing different decoding strategies: a U-Net-like approach for TASED and ViNet, and a hierarchical map aggregation for HD2S. This suggests that a key factor underlying the differences between current video saliency prediction approaches lies in the way encoded features are processed in the decoding path, leading to models that learn specific (and often exclusive) representations. We strengthen this hypothesis by experimenting with another decoding strategy, exemplified by DLA [39], which combines hierarchical decoding with complex feature interactions: the results, also included in Fig. 2, show yet another saliency prediction pattern, while using the same encoder network, S3D. These results lead us to hypothesize that different model architectures introduce different inductive biases, which make them more suitable to recognize certain patterns than others, thus requiring an increase in model capacity in order to generalize well to multiple saliency dynamics. Indeed, the sizes of the weights of the models in our analysis range between 82 MB and 116 MB, and the size of the top ten models in the DHF1K leaderboard(1) is 238 MB on average.
+ Given these premises, instead of increasing the complexity of a single decoding strategy, it may be more efficient to employ multiple simpler architectures with fewer parameters, relying on each architecture's capability to attend to different salient regions and combining their results. Hence, we propose TinyHD, a lightweight, efficient and heterogeneous multi-decoder architecture for video saliency prediction. The proposed method is inspired by encoder-decoder architectures, but introduces the adoption of heterogeneous decoding strategies in order to reduce the complexity of each decoder, increasing efficiency (the weights of the resulting model take only 16 MB) and improving the accuracy of predictions, as we show in our experiments. Furthermore, along the direction of reducing computational costs while retaining high accuracy, we also introduce a novel knowledge distillation approach based on exploiting a teacher with multiple hierarchical predictions: this allows the model to freely learn its own features, since no explicit conditioning on representations is enforced, while at the same time receiving a supervision signal that encodes information at different layers of abstraction.
+ Experiments confirm that our model can generate high-quality predictions with low computational costs and model size (only 16 MB). We assess the impact of our heterogeneous multi-decoder strategy by carrying out extensive ablation studies and comparing alternative architectures. We also demonstrate the effectiveness of our knowledge distillation strategy, compared to the employment of a non-hierarchical teacher. To summarize our contributions:
+ • We propose a decoding strategy for video saliency prediction which combines heterogeneous decoders to exploit their specific pattern analysis capabilities, while reducing the overall model complexity. To our knowledge, we are the first to propose multiple saliency map outputs from a 3D CNN to improve model efficiency.
+ • We employ a knowledge distillation approach based on a hierarchical teacher, providing saliency maps estimated from different abstraction layers.
+ • Extensive experiments show that our model achieves state-of-the-art performance on the DHF1K benchmark, at lower computational cost than current methods. Ablation studies support the motivations for our decoding and knowledge distillation strategies.
+ (1) https://mmcheng.net/videosal/
+ 2. Related Work
+ The main contributions of the proposed approach consist of a novel heterogeneous multi-decoder scheme, which combines lightweight versions of common decoding strategies, and a multi-objective knowledge distillation approach. In this section, we briefly present the state of the art on these topics.
+ Decoding strategies for video saliency prediction. Among recent state-of-the-art methods for video saliency prediction, leveraging encoder-decoder networks can be considered the mainstream approach; however, several architectural variations have been proposed for feature sharing between encoder and decoder and for output reconstruction. As shown in the taxonomy presented in Fig. 3, a simpler class of approaches employs independent encoder and decoder, with no feature sharing between the two paths. Among these, approaches based on recurrent layers typically model temporal dynamics at the bottleneck of the architecture [18, 37, 22], while non-recurrent architectures model time by means of 3D convolutions [42, 38, 4]. Other approaches employ architectures similar to U-Net, introducing skip connections that encourage feature sharing between encoder and decoder. TASED [25] aggregates spatio-temporal features through the use of auxiliary pooling for reducing the temporal dimension. ViNet [16] integrates S3D features from multiple hierarchical levels by employing trilinear interpolation and 3D convolutions. UNISAL [6] proposes a multi-objective unified framework for both 2D and 3D saliency, with domain-specific modules and a lightweight recurrent architecture to handle temporal dynamics. While single-decoder approaches are common, multi-decoder output integration has recently attracted interest. DVA [35] and HD2S [1] fuse maps predicted by independent decoders operating at different abstraction levels. RecSal [30] predicts multiple saliency maps in a multi-objective training framework. Recent works introduce more complex feature interactions among decoding paths, where high-resolution features are affected by deeper high-level features, as in DLA [39] and TSFP-Net [3]. All of the approaches presented employ either a single-decoder architecture or a homogeneous multi-decoder one, where the differences between decoders lie in the number of layers rather than in their structure. In our work, we propose an architecture which combines heterogeneous decoder structures, in order to better exploit their distinctive saliency prediction properties and thus increase computational efficiency.
+ 
+ [Figure 2: Examples of video saliency maps from state-of-the-art methods (columns: (a) frame, (b) GT, (c) TASED, (d) ViNet, (e) HD2S, (f) DLA). Although they achieve very similar performance on popular metrics, remarkable differences can be seen in the learned saliency patterns.]
+ 
+ [Figure 3: A taxonomy of decoding strategies commonly employed in video saliency prediction. Subfigures (top-left, top-right, bottom-left, bottom-right): independent encoder and decoder, with no feature sharing between the two paths [4, 18, 22, 37, 42]; U-Net-like architecture, with feature sharing between encoder and decoder [6, 16, 19, 25]; Deep Layer Aggregation [39]; hierarchical intermediate map aggregation [1, 30, 35].]
+ 
+ Knowledge distillation for visual saliency prediction. Knowledge distillation [13, 11] is commonly employed to train an efficient student model from a more complex teacher model, with higher accuracy than when training the student directly from dataset labels. Several knowledge distillation approaches have recently been proposed for video saliency prediction. SKD-DVA [20] proposes spatio-temporal knowledge distillation with two teachers and two students, with each pair focusing on either spatial or temporal transfer. SV2T-SS [41] distills corresponding features of teacher and student (implemented as encoder-decoder networks), based on first- and second-order feature statistics transfer. UVA-DVA [10] employs separate spatial and temporal teachers, whose knowledge is transferred to a single student model, which then fuses the resulting features into the final saliency prediction, achieving reasonable accuracy at impressive speed. Leveraging knowledge distillation for video salient object detection is the main theme of the work in [34]. The knowledge distillation setting proposed in our work differs from existing techniques in two main aspects: 1) we define a multi-objective distillation target on saliency maps directly; 2) we employ a hierarchical model as a teacher in order to further capture differences in saliency patterns extracted at multiple scales.
+ 3. Methodology
+ 3.1. Overview
+ The overall architecture of the proposed saliency prediction network with knowledge distillation is shown in Fig. 4. Following the taxonomy introduced in Sect. 2, a shared encoder extracts multi-level features that are then processed by three parallel decoding architectures: decoder 1 (D1) implements hierarchical intermediate map aggregation (inspired by HD2S); decoder 2 (D2) employs a U-Net-like approach; decoder 3 (D3) is based on deep layer aggregation concepts (as in DLA [39]).
+ The hierarchical aggregation decoder (i.e., decoder 1 in Fig. 4) produces four intermediate saliency maps from features extracted at different encoder layers; then, the set of predictions from all decoders is fused into the final prediction. At training time, we compute a supervised loss by comparing the final prediction to the ground-truth map, and a knowledge distillation loss on the final prediction and the intermediate maps extracted by D1 (all losses are based on the Kullback-Leibler divergence between saliency maps; see Sect. 3.3). In order to have a correspondence between the intermediate maps produced by D1 and the teacher maps, we employ HD2S as a teacher, since it naturally and semantically matches the decoder's hierarchical structure.
+ 3.2. Encoder structure
+ Depthwise separable convolutions are widely used for efficient network design, as in MobileNetV2 [31], commonly pre-trained on ImageNet and used as a backbone for lightweight models. In order to adapt it as a 3D video feature extractor, we follow the kernel inflation approach introduced in [2] and already employed for static [14] and dynamic [6] saliency prediction. A 2D convolutional kernel of size Cin × Cout × H × W can be inflated into a 3D kernel of size Cin × Cout × T × H × W by replicating its weights along the temporal dimension T. This simple trick provides a convenient initialization that responds to common spatial patterns and can be gradually adapted to temporal dynamics during training, eliminating the burden of learning basic spatial structures from scratch. Given the inflated MobileNetV2 encoder, we follow the approach in FastSal [14] to extract four blocks of concatenated features from the whole set of layers.
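+ A minimal sketch of this inflation step in PyTorch (our illustration under the stated assumptions; the 1/T rescaling, which keeps the response to a temporally constant input unchanged, follows the I3D recipe [2] and is not spelled out in this paper):
+ ```python
+ import torch.nn as nn
+ 
+ def inflate_conv2d(conv2d: nn.Conv2d, T: int) -> nn.Conv3d:
+     """Inflate a 2D convolution into a 3D one by replicating its
+     kernel T times along a new temporal axis."""
+     conv3d = nn.Conv3d(
+         conv2d.in_channels, conv2d.out_channels,
+         kernel_size=(T, *conv2d.kernel_size),
+         stride=(1, *conv2d.stride),
+         padding=(T // 2, *conv2d.padding),
+         groups=conv2d.groups,  # preserves depthwise separability
+         bias=conv2d.bias is not None,
+     )
+     # (C_out, C_in/groups, H, W) -> (C_out, C_in/groups, T, H, W), scaled by 1/T
+     conv3d.weight.data = conv2d.weight.data.unsqueeze(2).repeat(1, 1, T, 1, 1) / T
+     if conv2d.bias is not None:
+         conv3d.bias.data = conv2d.bias.data.clone()
+     return conv3d
+ ```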
+ 3.2.1 Decoder structure and multiple prediction
+ The set of heterogeneous decoders employed in our model includes D1 (hierarchical map aggregation), D2 (U-Net-like) and D3 (deep layer aggregation). Our realizations of each of these approaches are designed to process the four input streams of features extracted by the encoder. D1 produces four intermediate saliency maps, while D2 and D3 produce one map each. The fusion layer that computes the final output map is implemented as a 1×1 convolution of the predicted maps; a sketch of this step is given below. Architectural details are reported in the supplementary materials. As an additional efficiency consideration, we note that the high computational cost of many state-of-the-art approaches is due to multiple-input/single-output (MISO) prediction, where a sequence of frames is used to predict a single saliency map, usually referring to the last frame. This provides the model with the full context of previous frames, but it also means that, in order to predict N saliency maps (without interpolation), N forward passes are required, with a proportional increase in computational cost. In order to further improve efficiency, we implement a multiple-input/multiple-output (MIMO) scheme for output generation, by designing decoders that predict a number of saliency maps equal to the number of frames provided to the encoder. MIMO decoders can intrinsically make use of the similarity between consecutive saliency maps, and exploit this information to reduce the computational power required to generate the same number of saliency maps as MISO decoders. Of course, the downside is that each frame has a different amount of surrounding context; in our experiments, however, this has little impact on the model's accuracy.
+ 3.3. Knowledge distillation
+ Given the presence of multiple decoders in our model, one of which also produces intermediate saliency maps, choosing a distillation approach to supervise the student's training is not trivial. As illustrated in Fig. 4, we carry out knowledge distillation by extracting intermediate and final outputs from a hierarchical teacher, which supervise the intermediate maps of one student decoder and the final student output. Our design of the distillation process is guided by several observations. First and foremost, it is necessary to provide a training signal at the very output of the model, in order to train the final fusion layer. Second, carrying out distillation at the representation level, by enforcing similarity between teacher and student features, defeats the purpose of having multiple decoders that are meant to recognize their own distinctive saliency patterns and should therefore be free to independently learn their own features. Also, using saliency maps directly ensures that output and target have the same size, so that adaptation layers to match the feature sizes of student and teacher can be avoided. We therefore choose to use the intermediate saliency maps of a hierarchical teacher, HD2S [1], since this makes it possible to affect the model at different depths of the encoder in a natural way, without providing as strong a training signal as internal features would.
+ We formalize our knowledge distillation procedure as follows. Let V be the space of video sequences and S be the space of saliency maps (whether for the entire sequence or for a single frame); let M be a family of models such that each element of M is a function M : V → S^(n+1), which provides n intermediate and one output saliency maps. We thus define a teacher T ∈ M and a student S ∈ M. For simplicity, the notations S_i and T_i will indicate the i-th map generated by, respectively, the student and the teacher; indices from 1 to n will denote intermediate maps, while index n+1 will refer to the final output. Saliency map distance is measured by the Kullback-Leibler (KL) divergence:
+ 
+   L_{KL}(x, y) = \sum_i y_i \log(y_i / x_i),   (1)
+ 
+ with i iterating over the spatial locations of the saliency maps.
+ At each training iteration, we sample a video sequence v ∈ V and its ground-truth saliency s ∈ S. The employed loss function aims at minimizing the KL divergence between student and teacher maps (both intermediate and final) and between the (final) student map and the ground truth:
+ 
+   L = \sum_{i=1}^{n+1} L_{KL}(S_i(v), T_i(v)) + L_{KL}(S_{n+1}(v), s).   (2)
+ 
+ [Figure 4: Overview of the proposed multi-decoder architecture with hierarchical knowledge distillation. Legend: blocks of convolutional layers are used for encoders and decoders (student network structures are detailed in the supplementary material; the teacher network is the same as HD2S); features from all layers of MobileNetV2 are reorganized and aggregated into features from 4 abstraction levels; 4 intermediate maps are generated by student decoder 1 (similar to HD2S), 1 by decoder 2 (U-Net-like) and 1 by decoder 3 (DLA-like); the final prediction is generated by fusing all 6 intermediate maps; the teacher network (HD2S) provides 4 intermediate maps and a final prediction (pseudo-labels); KL divergence losses against the teacher maps and the ground truth are computed and backpropagated through the student network.]
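+ A minimal PyTorch rendering of Eqs. (1) and (2) (our sketch; the flattening and re-normalization of maps into distributions is an assumption on our part, as are all identifier names):
+ ```python
+ import torch
+ 
+ def kl_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
+     """Eq. (1): KL divergence over spatial locations, D_KL(target || pred)."""
+     p = pred.flatten(1)
+     p = p / (p.sum(dim=1, keepdim=True) + eps)
+     t = target.flatten(1)
+     t = t / (t.sum(dim=1, keepdim=True) + eps)
+     return (t * torch.log((t + eps) / (p + eps))).sum(dim=1).mean()
+ 
+ def distill_loss(student_maps, teacher_maps, gt):
+     """Eq. (2): distill all n+1 student maps towards the teacher's,
+     plus a supervised term on the final (last) prediction."""
+     loss = sum(kl_loss(s, t) for s, t in zip(student_maps, teacher_maps))
+     return loss + kl_loss(student_maps[-1], gt)
+ ```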
+ 3.3.1 Training with an auxiliary dataset
+ The usage of unlabeled auxiliary datasets in a knowledge distillation setting has been shown to help boost performance [21, 32, 15]. Following this approach, we introduce a new video distribution W, and extend the loss function with a term that measures the distance between the student's predicted saliency maps and the "pseudo-labels" (which are, in fact, also maps) provided by the teacher. As a result, given a pair of input videos v ∈ V and w ∈ W, the new loss function becomes:
+ 
+   L = \sum_{i=1}^{n+1} L_{KL}(S_i(v), T_i(v)) + \sum_{i=1}^{n+1} L_{KL}(S_i(w), T_i(w)) + L_{KL}(S_{n+1}(v), s).   (3)
+ 3.3.2 Channel reduction with teacher assistant
+ Previous works have shown that, with a suitable network design, it is possible to decrease the number of channels in the encoder's layers, in order to reduce the computational cost without an excessive loss in accuracy [8]. Our channel reduction strategy applies multiple knowledge distillation iterations: at each of them, a new student is initialized by averaging the weights of each pair of consecutive kernels into a new kernel. Although the kernel ordering is essentially random, this approach has been shown to provide a meaningful initialization for the new student. Additionally, we also explore the "teacher assistant" [26] distillation strategy: rather than using the original teacher to perform knowledge distillation on reduced-channel students, we employ the full-capacity student (i.e., before any channel reduction) as the new teacher. By combining channel reduction and teacher assistant, we encourage the model to distill more information while reducing computational cost.
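+ The pair-averaging initialization can be sketched as follows (our reading of the procedure; we assume even channel counts and that input channels are paired in the same way as output channels):
+ ```python
+ import torch
+ 
+ def halve_channels(w: torch.Tensor) -> torch.Tensor:
+     """Initialize a half-width kernel from w of shape (C_out, C_in, *spatial)
+     by averaging consecutive kernel pairs along both channel axes."""
+     c_out, c_in = w.shape[0], w.shape[1]
+     w = w.reshape(c_out // 2, 2, c_in, *w.shape[2:]).mean(dim=1)       # pair outputs
+     w = w.reshape(c_out // 2, c_in // 2, 2, *w.shape[2:]).mean(dim=2)  # pair inputs
+     return w  # shape: (C_out // 2, C_in // 2, *spatial)
+ ```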
+ 4. Experiments
+ 4.1. Datasets and Metrics
+ We conduct experiments on the DHF1K [36], UCF-Sports [29, 24] and Hollywood2 [23, 24] datasets, commonly employed to evaluate video saliency prediction. DHF1K contains 1000 videos split into 600/100/300 for training, validation and test (the test annotations are unreleased); eye fixations were collected from 17 participants in free-viewing experiments. UCF-Sports is a task-driven dataset that includes 150 videos (103 for training, 47 for test) covering 9 sport activities; participants were asked to identify the activity in each video sequence. Hollywood2 includes 1707 videos extracted from 69 movies and categorized into 12 action classes. At data collection, 3 observers were free-viewing, 12 observers were asked to recognize the action, and 4 observers were asked to recognize the scene; 823 videos are used for training and 884 for test. We also employ the Kinetics-400 [17] action recognition benchmark as an auxiliary dataset, used by the teacher to generate additional training inputs with pseudo-labels. For evaluation purposes, we report results in terms of the standard metrics for video saliency prediction [35]: AUC-Judd (AUC-J), AUC-Borji (AUC-B), Linear Correlation Coefficient (CC), Normalized Scanpath Saliency (NSS), and Similarity Metric (SIM).
+ 
+ Table 1: Comparison with the state of the art on the DHF1K and Hollywood2 test sets, in both the MISO and MIMO settings.
+ (a) Prediction accuracy and computational cost on the DHF1K test set. GMACs are estimated for 16 frames, hence the ×16 multiplication for MISO models. Models marked with a * are image saliency models.
+ 
+ | Models   | AUC-J | SIM   | sAUC  | CC    | NSS   | GMACs     | #params |
+ | Multi-input/single-output (MISO) prediction                              |
+ | SalGAN*  | 0.866 | 0.262 | 0.709 | 0.370 | 2.043 | 45.73×16  | 31.92M  |
+ | FastSal* | 0.887 | 0.293 | 0.712 | 0.426 | 2.330 | 2.64×16   | 2.47M   |
+ | 3DSal    | 0.850 | 0.321 | 0.623 | 0.356 | 1.996 | 136.45×16 | 46.15M  |
+ | TASED    | 0.895 | 0.361 | 0.712 | 0.470 | 2.667 | 91.75×16  | 21.26M  |
+ | ViNet    | 0.908 | 0.381 | 0.729 | 0.511 | 2.872 | 115.28×16 | 31.1M   |
+ | HD2S     | 0.908 | 0.406 | 0.700 | 0.503 | 2.812 | 11.08×16  | 29.8M   |
+ | TinyHD-S | 0.909 | 0.396 | 0.714 | 0.505 | 2.921 | 5.57×16   | 3.94M   |
+ | Multi-input/multi-output (MIMO) prediction                               |
+ | SalEMA   | 0.890 | 0.466 | 0.667 | 0.449 | 2.574 | 640.16×1  | 31.79M  |
+ | STRA-Net | 0.895 | 0.355 | 0.663 | 0.458 | 2.558 | 266.01×3  | 168.02M |
+ | UNISAL   | 0.901 | 0.390 | 0.691 | 0.490 | 2.776 | 19.42×1   | 3.71M   |
+ | TinyHD-M | 0.905 | 0.387 | 0.707 | 0.493 | 2.819 | 7.95×1    | 3.92M   |
+ 
+ (b) Prediction accuracy on Hollywood2.
+ 
+ | Models   | AUC-J | SIM   | CC    | NSS   |
+ | Multi-input/single-output prediction     |
+ | ACLNet   | 0.913 | 0.757 | 0.623 | 3.086 |
+ | SalSAC   | 0.931 | 0.529 | 0.670 | 3.356 |
+ | TASED    | 0.918 | 0.507 | 0.646 | 3.302 |
+ | ViNet    | 0.930 | 0.550 | 0.693 | 3.730 |
+ | HD2S     | 0.936 | 0.551 | 0.670 | 3.352 |
+ | TinyHD-S | 0.935 | 0.561 | 0.690 | 3.815 |
+ | Multi-input/multi-output prediction      |
+ | SalEMA   | 0.919 | 0.487 | 0.613 | 3.186 |
+ | STRA-Net | 0.923 | 0.536 | 0.662 | 3.478 |
+ | UNISAL   | 0.934 | 0.542 | 0.673 | 3.901 |
+ | TinyHD-M | 0.934 | 0.553 | 0.686 | 3.744 |
+ 
+ [Figure 5: Examples of video saliency maps predicted by the proposed model, together with the intermediate maps of the individual decoders (rows: frame, GT, output, D1 (1), D1 (2), D1 (3), D1 (4), D2, D3). Values in parentheses index the intermediate saliency maps of decoder D1.]
+ 
+ 4.2. Training procedure
+ Models are trained for 200 epochs using mini-batch stochastic gradient descent, with a mini-batch size of 12. The initial learning rate is 0.01, and it is reduced by a factor of 0.1 at epochs 100, 150 and 180. The input sequence length is 16 frames, spatially resized to 192×256. We carry out data augmentation by means of random horizontal flips; in our experiments, spatial resizing and cropping did not lead to significant benefits. When the teacher assistant strategy is employed for channel reduction, we perform two additional knowledge distillation steps, each time training a new student network whose encoder contains, respectively, half and a quarter of the original number of channels at each encoder layer.
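+ This schedule maps directly onto standard PyTorch components (a sketch; the model argument is a placeholder):
+ ```python
+ import torch
+ from torch.optim.lr_scheduler import MultiStepLR
+ 
+ def make_optimizer(model: torch.nn.Module):
+     # SGD with lr 0.01, decayed by 0.1 at epochs 100, 150 and 180
+     # over a 200-epoch run; call the scheduler's step() once per epoch.
+     opt = torch.optim.SGD(model.parameters(), lr=0.01)
+     return opt, MultiStepLR(opt, milestones=[100, 150, 180], gamma=0.1)
+ ```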
+ 4.3. Performance comparison with state-of-the-art models
+ In these experiments, we report the results of our model in both the MISO and MIMO configurations (TinyHD-S and TinyHD-M, respectively), trained with the auxiliary unlabeled dataset but without the channel reduction based on the teacher assistant strategy (which introduces trade-offs between accuracy and computational cost that will be discussed later). We also report the number of multiply-accumulate operations (MACs) carried out by each method to generate a 16-frame saliency sequence(2). Results on DHF1K are shown in Table 1a. In the MISO configuration, our model is on par with state-of-the-art methods (and even better on NSS), but only requires a fraction of their computational cost. In the MIMO configuration, our method sets a new state of the art, outperforming (on four metrics out of five) also UNISAL, which has a similar number of parameters but is about twice as demanding in terms of GMACs. Fig. 5 presents a few examples of saliency predictions by our model(3). For each example, we also show the intermediate maps provided by each decoder. Qualitatively, our model predicts reasonable saliency regions, sometimes identifying additional elements not included in the ground truth (e.g., the third example). The intermediate maps also exhibit a certain variability, although similar patterns can be found in pairs (e.g., maps 1-2 and maps 3-4 from D1, and the maps from D2 and D3). In general, the highest-level map from D1 (the fourth) mostly determines the output prediction: this is expected, since the corresponding architecture matches the teacher's. However, the fusion layer integrates the information from all intermediate maps, as shown in the last example, where two salient areas identified by the highest-level map from D1 are discarded.
+ (2) Values are computed from official implementations when available, and from our own implementations otherwise.
+ (3) More examples are provided in the supplementary materials, as well as a visual comparison with state-of-the-art models.
+ Tables 1b and 2 report results on Hollywood2 and UCF-Sports. While the model performs very well on the former, especially in the more efficient MIMO setting, ViNet and UNISAL achieve higher accuracy on UCF-Sports. This may be due to the lower performance of the HD2S teacher on that specific dataset, and to the arguable suitability of UCF-Sports as a video saliency prediction benchmark: the vast majority of its videos have fewer than 100 frames, and user fixations are driven by action classification rather than free-viewing saliency [1].
+ 
+ 4.4. Ablation studies
+ In order to experimentally substantiate our architectural and methodological choices, we carry out a set of ablation studies on each component of the model. The results of these experiments are reported on the DHF1K validation set, since the test set is not publicly available. First, we assess the effect of our heterogeneous multi-decoder strategy by evaluating the model's performance under several decoder configurations. We carry out this experiment in the MISO configuration, which achieves higher accuracy, as shown in Table 1a. In order to demonstrate the importance of combining different decoder architectures, Table 3 reports results when using homogeneous decoders in our architecture. Table 3 shows that the heterogeneous approach generally performs better than configurations with a single decoder type, most remarkably on the NSS metric. For the sake of completeness, we also show configurations where a smaller number of homogeneous decoders is employed; these setups are, of course, more computationally efficient, but exhibit lower performance on average on the accuracy metrics.
+ 
+ Table 2: Performance comparison on UCF-Sports in both the MISO and MIMO settings.
+ 
+ | Models   | AUC-J | SIM   | CC    | NSS   |
+ | Multi-input/single-output prediction     |
+ | ACLNet   | 0.897 | 0.406 | 0.510 | 2.567 |
+ | 3DSal    | 0.881 | 0.478 | 0.590 | 2.802 |
+ | TASED    | 0.899 | 0.469 | 0.582 | 2.920 |
+ | ViNet    | 0.924 | 0.522 | 0.673 | 3.620 |
+ | HD2S     | 0.904 | 0.507 | 0.604 | 3.114 |
+ | TinyHD-S | 0.918 | 0.510 | 0.624 | 3.280 |
+ | Multi-input/multi-output prediction      |
+ | SalEMA   | 0.906 | 0.431 | 0.544 | 2.638 |
+ | STRA-Net | 0.910 | 0.479 | 0.593 | 3.018 |
+ | UNISAL   | 0.918 | 0.523 | 0.644 | 3.381 |
+ | TinyHD-M | 0.911 | 0.499 | 0.609 | 3.234 |
+ 
+ Table 3: Performance of our architecture with homogeneous decoders, on the DHF1K validation set. The numbers of parameters of the models with homogeneous decoders are: D1×1 (2.55M), D2×1 (3.57M), D3×1 (2.53M); D1×2 (2.75M), D2×2 (4.78M), D3×2 (2.70M); D1×3 (2.95M), D2×3 (6.00M), D3×3 (2.88M); TinyHD-S (3.94M).
+ 
+ | Decoder  | AUC-J  | AUC-B  | CC     | NSS    | SIM    | GMACs   |
+ | D1×1     | 0.8993 | 0.8210 | 0.4881 | 2.8163 | 0.3939 | 3.55×16 |
+ | D2×1     | 0.9040 | 0.8235 | 0.4837 | 2.7976 | 0.3820 | 2.88×16 |
+ | D3×1     | 0.9034 | 0.8248 | 0.4836 | 2.7851 | 0.3794 | 2.45×16 |
+ | D1×2     | 0.8998 | 0.8195 | 0.4882 | 2.8256 | 0.3928 | 5.45×16 |
+ | D2×2     | 0.9046 | 0.8251 | 0.4855 | 2.8117 | 0.3806 | 4.11×16 |
+ | D3×2     | 0.9046 | 0.8239 | 0.4864 | 2.8095 | 0.3819 | 3.24×16 |
+ | D1×3     | 0.9013 | 0.8253 | 0.4922 | 2.8420 | 0.3924 | 7.35×16 |
+ | D2×3     | 0.9049 | 0.8266 | 0.4847 | 2.8042 | 0.3774 | 5.33×16 |
+ | D3×3     | 0.9047 | 0.8242 | 0.4845 | 2.7967 | 0.3799 | 4.03×16 |
+ | TinyHD-S | 0.9075 | 0.8244 | 0.4945 | 2.8735 | 0.3887 | 5.57×16 |
+ 
+ In the second part of our ablation study, we evaluate the impact of our knowledge distillation strategy. Table 4 reports the results obtained by the proposed model, in the MISO configuration, when trained on ground-truth maps only, and when gradually adding knowledge distillation terms on DHF1K and on Kinetics-400, using HD2S as the teacher. The full loss setting achieves better performance on average; as before, this is most evident on the NSS metric.
+ 
+ Table 4: Impact of loss terms on our model in the MISO configuration, starting from training on ground-truth (GT) maps only, and gradually adding knowledge distillation terms on DHF1K (target dataset, TD) and on Kinetics-400 (auxiliary dataset, AD), using HD2S as the teacher.
+ 
+ | Loss term    | AUC-J  | AUC-B  | CC     | NSS    | SIM    |
+ | GT maps      | 0.9033 | 0.8286 | 0.4864 | 2.7680 | 0.3765 |
+ | + K.D. on TD | 0.9058 | 0.8237 | 0.4875 | 2.8182 | 0.3846 |
+ | + K.D. on AD | 0.9075 | 0.8244 | 0.4945 | 2.8735 | 0.3887 |
+ 
+ 4.5. Channel reduction with teacher assistant
+ Finally, we investigate further reducing computational costs by means of our channel reduction strategy: multiple distillation steps are carried out, with each student progressively halving its number of encoding and decoding features, as described in Sect. 3.3.2. We also evaluate the performance of this approach when distilling from the original teacher (HD2S) and when using the "teacher assistant" technique, with the full-capacity student used as the teacher. Table 5 reports results, in both the MISO and MIMO settings, after one and two reduction steps, respectively resulting in models with half (marked as ×1/2) and a quarter (marked as ×1/4) of the original number of convolutional features (marked as ×1). Rows with "+TA" denote the use of the full-capacity student as the teacher for knowledge distillation, rather than HD2S. As expected, channel reduction introduces a trade-off between retaining the accuracy of the original model and reducing computational costs. As multiply-accumulate operations and model parameters are significantly reduced, accuracy also decreases, most evidently on the NSS and, to a smaller extent, the SIM metrics. It is noteworthy that the configurations employing a teacher assistant outperform their counterparts using HD2S.
+ 
+ Table 5: Performance of the proposed model when employing channel reduction and teacher assistant distillation.
+ (a) Number of parameters of the channel-reduced models, and GMACs for generating 16 output saliency maps.
+ 
+ | Models   | GMACs ×1 | GMACs ×1/2 | GMACs ×1/4 | #params ×1 | #params ×1/2 | #params ×1/4 |
+ | TinyHD-S | 89.12    | 59.52      | 37.44      | 3.94M      | 1.37M        | 513.1k       |
+ | TinyHD-M | 7.95     | 6.92       | 4.06       | 3.92M      | 1.37M        | 515.3k       |
+ 
+ (b) Performance with channel reduction on the DHF1K validation set, in both the MISO and MIMO settings.
+ 
+ | Models        | AUC-J  | AUC-B  | CC     | NSS    | SIM    |
+ | Multi-input/single-output prediction                     |
+ | TinyHD-S ×1   | 0.9075 | 0.8244 | 0.4945 | 2.8735 | 0.3887 |
+ | TinyHD-S ×1/2 | 0.9038 | 0.8331 | 0.4754 | 2.7194 | 0.3641 |
+ |   +TA         | 0.9052 | 0.8330 | 0.4805 | 2.7317 | 0.3684 |
+ | TinyHD-S ×1/4 | 0.9005 | 0.8285 | 0.4560 | 2.5830 | 0.3514 |
+ |   +TA         | 0.9018 | 0.8318 | 0.4667 | 2.6329 | 0.3569 |
+ | Multi-input/multi-output prediction                      |
+ | TinyHD-M ×1   | 0.9050 | 0.8239 | 0.4880 | 2.8178 | 0.3844 |
+ | TinyHD-M ×1/2 | 0.9016 | 0.8272 | 0.4687 | 2.6718 | 0.3612 |
+ |   +TA         | 0.9021 | 0.8307 | 0.4718 | 2.6726 | 0.3630 |
+ | TinyHD-M ×1/4 | 0.8980 | 0.8294 | 0.4487 | 2.5257 | 0.3438 |
+ |   +TA         | 0.8999 | 0.8333 | 0.4564 | 2.5581 | 0.3478 |
+ 
+ 5. Conclusions
+ In this work, starting from the observation that different encoder-decoder architectures recognize specific video saliency patterns, we propose a heterogeneous multi-decoder architecture that leverages simpler versions of state-of-the-art decoding strategies to achieve high prediction accuracy at a fraction of the computational cost. We train our model in a multi-target knowledge distillation setting, where a hierarchical decoder is used as a teacher to supervise a matching internal decoder in our model as well as the output prediction; additionally, we employ semi-supervised learning on an unlabeled auxiliary dataset to further improve model generalization. Our model sets new state-of-the-art performance when employed in a multi-input/multi-output setting, while being significantly more efficient in terms of floating-point operations and number of parameters. We further push the limits of our model by applying a channel reduction procedure through multiple distillation steps, using the full-capacity student as a teacher according to the "teacher assistant" paradigm. In the resulting model, the number of floating-point operations is approximately halved compared to the full-capacity version, and the number of parameters becomes as small as about 500k, taking about 2.4 MB of storage space without compression.
+ 
+ Acknowledgments
+ This publication has been financially supported by: Science Foundation Ireland (SFI) under grant number SFI/12/RC/2289_P2; Regione Sicilia, Italy, RehaStart project (grant identifier: PO FESR 2014/2020, Azione 1.1.5, N. 08ME6201000222, CUP G79J18000610007); University of Catania, Piano della Ricerca di Ateneo, 2020/2022, Linea 2D; MIUR, Italy, Azione 1.2 "Mobilità dei Ricercatori" (grant identifier: Asse I, PON R&I 2014-2020, id. AIM 1889410, CUP: E64I18002520007).
+ References
+ [1] Giovanni Bellitto, Federica Proietto Salanitri, Simone Palazzo, Francesco Rundo, Daniela Giordano, and Concetto Spampinato. Hierarchical domain-adapted feature learning for video saliency prediction. International Journal of Computer Vision, 129(12):3216–3232, 2021.
+ [2] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017.
+ [3] Qinyao Chang and Shiping Zhu. Temporal-spatial feature pyramid for video saliency detection. arXiv preprint arXiv:2105.04213, 2021.
+ [4] Yasser Abdelaziz Dahou Djilali, Mohamed Sayah, Kevin McGuinness, and Noel E O'Connor. 3DSal: An efficient 3D-CNN architecture for video saliency prediction. In VISIGRAPP (4: VISAPP), pages 27–36, 2020.
+ [5] Richard Droste, Yifan Cai, Harshita Sharma, Pierre Chatelain, Aris T Papageorghiou, and J Alison Noble. Towards capturing sonographic experience: cognition-inspired ultrasound video saliency prediction. In Annual Conference on Medical Image Understanding and Analysis, pages 174–186. Springer, 2019.
+ [6] Richard Droste, Jianbo Jiao, and J Alison Noble. Unified image and video saliency modeling. In European Conference on Computer Vision, pages 419–435. Springer, 2020.
+ [7] Jianwu Fang, Dingxin Yan, Jiahuan Qiao, Jianru Xue, and Hongkai Yu. DADA: Driver attention prediction in driving accident scenarios. IEEE Transactions on Intelligent Transportation Systems, 2021.
+ [8] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6202–6211, 2019.
+ [9] João Filipe Ferreira and Jorge Dias. Attentional mechanisms for socially interactive robots: a survey. IEEE Transactions on Autonomous Mental Development, 6(2):110–125, 2014.
+ [10] Kui Fu, Peipei Shi, Yafei Song, Shiming Ge, Xiangju Lu, and Jia Li. Ultrafast video attention prediction with coupled knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 10802–10809, 2020.
+ [11] Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819, 2021.
+ [12] Hadi Hadizadeh and Ivan V Bajić. Saliency-aware video compression. IEEE Transactions on Image Processing, 23(1):19–33, 2013.
+ [13] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015.
+ [14] Feiyan Hu and Kevin McGuinness. FastSal: a computationally efficient network for visual saliency prediction. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 9054–9061. IEEE, 2021.
+ [15] Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, and Koji Yamamoto. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data. arXiv preprint arXiv:2008.06180, 2020.
+ [16] Samyak Jain, Pradeep Yarlagadda, Shreyank Jyoti, Shyamgopal Karthik, Ramanathan Subramanian, and Vineet Gandhi. ViNet: Pushing the limits of visual modality for audio-visual saliency prediction. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3520–3527. IEEE, 2021.
+ [17] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
+ [18] Qiuxia Lai, Wenguan Wang, Hanqiu Sun, and Jianbing Shen. Video saliency prediction using spatiotemporal residual attentive networks. IEEE Transactions on Image Processing, 29:1113–1126, 2019.
+ [19] Hao Li, Fei Qi, and Guangming Shi. A novel spatio-temporal 3D convolutional encoder-decoder network for dynamic saliency prediction. IEEE Access, 9:36328–36341, 2021.
+ [20] Jia Li, Kui Fu, Shengwei Zhao, and Shiming Ge. Spatiotemporal knowledge distillation for efficient estimation of aerial video saliency. IEEE Transactions on Image Processing, 29:1902–1914, 2019.
+ [21] Xuejun Liao, Ya Xue, and Lawrence Carin. Logistic regression with an auxiliary data source. In Proceedings of the 22nd International Conference on Machine Learning, pages 505–512, 2005.
+ [22] Panagiotis Linardos, Eva Mohedano, Juan José Nieto, Noel E. O'Connor, Xavier Giró-i-Nieto, and Kevin McGuinness. Simple vs complex temporal recurrences for video saliency prediction. In 30th British Machine Vision Conference 2019, BMVC 2019, Cardiff, UK, September 9-12, 2019.
+ [23] Marcin Marszalek, Ivan Laptev, and Cordelia Schmid. Actions in context. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2929–2936. IEEE, 2009.
+ [24] Stefan Mathe and Cristian Sminchisescu. Actions in the eye: Dynamic gaze datasets and learnt saliency models for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(7):1408–1424, 2014.
+ [25] Kyle Min and Jason J Corso. TASED-Net: Temporally-aggregating spatial encoder-decoder network for video saliency detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2394–2403, 2019.
+ [26] Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5191–5198, 2020.
+ [27] K. L. Bhanu Moorthy, Moneish Kumar, Ramanathan Subramanian, and Vineet Gandhi. GAZED: Gaze-guided cinematic editing of wide-angle monocular video recordings, pages 1–11. Association for Computing Machinery, New York, NY, USA, 2020.
+ [28] Anne-Flore Perrin, Lu Zhang, and Olivier Le Meur. How well do current saliency prediction models perform on UAVs videos? In International Conference on Computer Analysis of Images and Patterns, pages 311–323. Springer, 2019.
+ [29] Mikel D Rodriguez, Javed Ahmed, and Mubarak Shah. Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.
+ [30] Oindrila Saha and Sandeep Mishra. RecSal: Deep recursive supervision for visual saliency prediction. In 31st British Machine Vision Conference 2020, BMVC 2020, Virtual Event, UK, September 7-10, 2020.
+ [31] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
+ [32] Felix Sattler, Tim Korjakow, Roman Rischke, and Wojciech Samek. FedAUX: Leveraging unlabeled auxiliary data in federated learning. IEEE Transactions on Neural Networks and Learning Systems, 2021.
+ [33] Xiao Sun, Yuxing Hu, Luming Zhang, Yanxiang Chen, Ping Li, Zhao Xie, and Zhenguang Liu. Camera-assisted video saliency prediction and its applications. IEEE Transactions on Cybernetics, 48(9):2520–2530, 2017.
+ [34] Yi Tang, Yuanman Li, and Wenbin Zou. Fast video salient object detection via spatiotemporal knowledge distillation. arXiv preprint arXiv:2010.10027, 2020.
+ [35] Wenguan Wang and Jianbing Shen. Deep visual attention prediction. IEEE Transactions on Image Processing, 27(5):2368–2378, 2017.
+ [36] Wenguan Wang, Jianbing Shen, Jianwen Xie, Ming-Ming Cheng, Haibin Ling, and Ali Borji. Revisiting video saliency prediction in the deep learning era. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
+ [37] Xinyi Wu, Zhenyao Wu, Jinglin Zhang, Lili Ju, and Song Wang. SalSAC: A video saliency prediction model with shuffled attentions and correlation-based ConvLSTM. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12410–12417, 2020.
+ [38] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 305–321, 2018.
+ [39] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2403–2412, 2018.
+ [40] Geng Zhang, Zejian Yuan, Nanning Zheng, Xingdong Sheng, and Tie Liu. Visual saliency based object tracking. In Asian Conference on Computer Vision, pages 193–203. Springer, 2009.
+ [41] Peng Zhang, Li Su, Liang Li, BingKun Bao, Pamela Cosman, GuoRong Li, and Qingming Huang. Training efficient saliency prediction models with knowledge distillation. In Proceedings of the 27th ACM International Conference on Multimedia, pages 512–520, 2019.
+ [42] Wenbin Zou, Shengkai Zhuo, Yi Tang, Shishun Tian, Xia Li, and Chen Xu. STA3D: Spatiotemporally attentive 3D network for video saliency prediction. Pattern Recognition Letters, 147:78–84, 2021.
0NE3T4oBgHgl3EQfmwqR/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0NE4T4oBgHgl3EQfyw06/content/tmp_files/2301.05268v1.pdf.txt ADDED
@@ -0,0 +1,855 @@
 
+ The non-linear perturbation of a black hole by gravitational waves. III. Newman-Penrose constants
+ J Frauendiener(1), A Goodenbour(2) and C Stevens(2)
+ (1) Department of Mathematics and Statistics, University of Otago, Dunedin 9016, New Zealand
+ (2) Department of Mathematics and Statistics, University of Canterbury, Christchurch 8041, New Zealand
+ E-mail: [email protected],
+ 
+ Abstract. In this paper we continue our study of the non-linear response of a Schwarzschild black hole to an ingoing gravitational wave by computing the Newman-Penrose (NP) constants. The NP constants are five complex, supertranslation-invariant quantities defined on null infinity I^+ and, although put forward in the 1960s, they have never been computed in a non-stationary setting. We accomplish this through a numerical implementation of Friedrich's generalized conformal field equations, whose semi-global evolution yields direct access to I^+. Generalizations of the NP constants' integral expressions are made to allow their computation in a more general gauge that better suits the output of a numerical evolution. Canonical methods of fixing inherent degrees of freedom in their definitions are discussed. The NP constants are then computed for a variety of different ingoing wave profiles in axisymmetry, and then with no symmetry assumptions in 3+1, for which all five are non-zero.
+ 
+ Submitted to: Class. Quantum Grav.
+ 
+ 1. Introduction
+ Gravitational waves are a robust prediction of general relativity. The existence of wave solutions to the field equations has been known since the early days of the theory, but there was doubt that wave-like behaviour occurred generically outside of overly-symmetric exact solutions [25]. The way we characterise radiation today emerged out of the work of Bondi [4], Sachs [26], and Newman and Penrose [18, 19, 21]. Penrose's procedure of conformal compactification [22] succinctly encodes the asymptotic fall-off conditions hard-coded by Bondi and Sachs. The conformal boundary, I, emerges as the natural place to define gravitational radiation.
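+ For orientation, the basic construction (standard definitions, summarized here in the conventions of Penrose and Rindler [24]; not specific to this paper) replaces the physical metric by a conformally rescaled, unphysical one and attaches the boundary where the conformal factor vanishes:
+ ```latex
+ % Unphysical metric from the physical metric \tilde{g}_{ab}; null infinity
+ % \mathscr{I} is the locus where the conformal factor \Omega vanishes:
+ g_{ab} = \Omega^{2}\,\tilde{g}_{ab}, \qquad
+ \mathscr{I} = \{\, \Omega = 0,\ \mathrm{d}\Omega \neq 0 \,\}
+ ```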
37
+ This
38
+ picture
39
+ of
40
+ gravitational
41
+ radiation
42
+ owes
43
+ a
44
+ great
45
+ debt
46
+ to
47
+ Maxwell’s
48
+ electromagnetism.
49
+ The isolation of radiative degrees of freedom by Bondi and
50
+ Sachs echoes an earlier analysis of electromagnetic radiation via the Liénard-Wiechert
51
+ potential and Penrose’s conformal compactification is premised on the conformal-
52
+ invariance of zero-rest-mass fields, a fact which was shown for a Maxwell field much
53
+ earlier [1, 6].
54
+ This paper is focused on the explicit calculation of another such import from
55
+ Maxwell electromagnetism, the Kirchhoff integral formula. Generalised to fields of spin-
56
+ s in Minkowski space, it is called the generalised Kirchhoff-d’Adhémar formula and
57
+ relates the value of the field at a point to an integral over an arbitrary smooth cut of its
58
+ (past) light cone. Formally applied to the spin-2 Weyl spinor on the conformal boundary
59
+ I + of an asymptotically flat spacetime, the Kirchhoff-d’Adhémar formula yields a set
60
+ of five complex supertranslation invariant quantities on I + which are ostensibly the
61
+ components of the Weyl spinor at timelike infinity. These are the NP constants [20],
62
+ whose physical interpretation has proved elusive for over half a century.
63
+ Nevertheless, much has been said about these constants in the intervening years.
64
+ In their original paper, Newman and Penrose make the argument that the existence
65
+ of the constants has non-trivial physical significance [20, 24]. It has since been shown
66
+ that the vanishing of the NP constants distinguishes between fundamentally different
67
+ late-time behaviour of self-gravitating waves [16]. They play an important role in the
68
+ early radiative properties of isolated systems close to spatial infinity [17]. Another line
69
+ of analysis has found that the NP constants appear as subleading BMS charges [15].
70
+ However, as far as we are aware, the NP constants have never been explicitly computed
71
+ in a general space-time.
72
+ Explicit numerical computation of quantities at the conformal boundary without
73
+ the use of limiting procedures can be done by employing conformal compactification in
74
+ a numerical scheme. Friedrich’s conformal field equations regularly extend the Einstein
75
+ equations to include the conformal boundary [10–12]. Recently, an initial boundary
76
+ value problem (IBVP) framework for the generalised conformal field equations (GCFE)
77
+ was presented [3]. This framework puts I + within the computational domain, allowing
78
+ for the non-linear perturbation of black hole space-times.
79
+ Because the computational domain includes at least a portion of the future null
80
+
81
+ The non-linear perturbation of a black hole by gravitational waves
82
+ 3
83
+ boundary, quantities defined there can be computed with local differential geometrical
84
+ methods. Most asymptotic quantities, including the NP constants, are defined on a
85
+ 2-dimensional cut of I +, therefore one can see how a quantity evolves along the set of
86
+ successive cuts. However, in the literature, these quantities are often defined in terms of
87
+ a very specific set of coordinates, frame, and conformal factor. These choices are usually
88
+ incompatible with the requirements of the numerical scheme. Therefore, to compute a
89
+ quantity at null infinity with this scheme, it must be written in a conformally invariant
90
+ way.
91
+ The aim of this paper is to use the numerical framework provided by the IBVP
92
+ formulation of the GCFE to compute for the first time, the NP constants explicitly on
93
+ I +. The case considered is the non-linear perturbation of a Schwarzschild space-time
94
+ by gravitational waves. The reader is referred to [3] for details of the numerical scheme
95
+ and checks of correctness such as constraint convergence tests.
96
+ The layout of the paper is as follows: Section 2 summarizes the IBVP framework
97
+ for the GCFE. Section 3 presents the NP constants and proves their supertranslation-
98
+ invariance in the general form required for their computation. Section 4 presents the
99
+ details of aligning the frame of the GCFE with the frame in which the NP constants are
100
+ defined and discusses the details of their calculation. Section 5 presents numerical checks
101
+ of correctness and results for a range of initial wave profiles and Section 6 concludes
102
+ with a brief discussion. We follow conventions of Penrose and Rindler [24] throughout.
2. Overview of the GCFE and its numerical IBVP implementation

We implement Friedrich's generalized conformal field equations analogously to previous papers in this series [8] and here just give a brief overview.

The conformal field equations are a regular extension of the Einstein equations defined on a physical space-time to another, conformally related Lorentzian manifold, related by a conformal factor Θ, where the points at 'infinity' of the physical space-time are given by Θ = 0. Imposing the conformal Gauß gauge on the GCFE [12] yields a system of evolution equations of which most are ordinary differential equations, except those governing the components of the gravitational tensor, which form a symmetric hyperbolic system. These evolution equations are complemented by a set of constraint equations which are preserved by the evolution. The associated IBVP is completed by constraint-preserving boundary conditions [3] which are used to generate fully non-linear gravitational dynamics.

The Schwarzschild space-time of mass m written in isotropic coordinates is again used as the initial space-time. The specific choice of conformal Gauß gauge given by Friedrich [13] is used, by which regular coordinates, frame and conformal factor up to and beyond null infinity can be defined.

Our numerical implementation is capable of general 3+1 dimensional simulations and we use this capability to generate a complete set of non-trivial NP constants going beyond the axisymmetric case.

For all simulations presented here (excluding convergence tests) we use coordinates {t, r, θ, φ}‡. The spatial coordinates are discretized into equidistant points in the intervals r ∈ [m/2, m/2 + 2m], θ ∈ [0, π) and φ ∈ [0, 2π] with 401, 33 and 64 points respectively. The temporal discretization is also equidistant in this study, with timestep given by dt = dr/2, giving a Courant-Friedrichs-Lewy number of 0.5. The MPI-parallelized Python package COFFEE [7] contains all the necessary numerical methods to evolve this initial boundary value problem. The standard explicit fourth-order Runge-Kutta method is used to march in time, Strand's fourth-order summation-by-parts finite difference operator (third order on the boundary) [27] is used to approximate radial derivatives, and the simultaneous-approximation-term method [5] is used to stably impose maximally dissipative boundary conditions. Finally, we use spin-weighted spherical harmonics to allow for fast and accurate angular derivatives through a pseudo-spectral implementation of Penrose's ð-calculus [2]. Regridding is also performed, whereby regions outside of future null infinity are chopped away from the computational domain to maintain a stable evolution. This is also performed inside the black hole to avoid the singularity.

‡ The axisymmetric numerical implementation is analogous to the preceding outline but without the φ-direction and with optimized spin-weighted spherical harmonic transformations and corresponding ð-calculus calculations.
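To make the discretisation just described concrete, the following minimal Python sketch sets up the grid and CFL-limited timestep quoted above, together with a classical fourth-order Runge-Kutta step. All names here are ours and purely illustrative; the production code uses COFFEE's summation-by-parts operators and SAT boundary terms, which are not reproduced.

    import numpy as np

    m = 0.5                                             # initial black hole mass used below
    r = np.linspace(m/2, m/2 + 2*m, 401)                # 401 equidistant radial points
    theta = np.linspace(0.0, np.pi, 33, endpoint=False) # θ ∈ [0, π), 33 points
    phi = np.linspace(0.0, 2*np.pi, 64)                 # φ ∈ [0, 2π], 64 points
    dr = r[1] - r[0]
    dt = dr / 2                                         # CFL number of 0.5, as in the text

    def rk4_step(f, t, u, dt):
        """One step of the classical explicit fourth-order Runge-Kutta method."""
        k1 = f(t, u)
        k2 = f(t + dt/2, u + dt/2 * k1)
        k3 = f(t + dt/2, u + dt/2 * k2)
        k4 = f(t + dt, u + dt * k3)
        return u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)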
3. Newman-Penrose constants

The Kirchhoff-d'Adhémar construction in Minkowski space expresses a solution to the zero-rest-mass field equations at a point P as an integral over an arbitrary smooth cut of the (past) light cone of P [23]. It is a conformally invariant construction and so we may apply it specifically to relate a zero-rest-mass field φ_AB...L at the future (past) timelike infinity of a conformally rescaled spacetime to a cut of its light cone, future (past) null infinity. Because the construction is invariant with respect to the cut of the light cone on which it is evaluated, it gives a set of supertranslation-invariant constants corresponding to the 2s+1 components of the zero-rest-mass field at timelike infinity. In a spacetime with matter, timelike infinity becomes a singular point of the conformally rescaled manifold, but the integrand of the Kirchhoff-d'Adhémar construction, being evaluated on null infinity, remains regular, and so, applied to the spin-2 Weyl spinor, we are left with a set of five complex supertranslation-invariant quantities defined on any asymptotically flat spacetime. These are absolutely conserved in the sense that they remain constant even with non-vanishing news.

The physical set-up of the Kirchhoff-d'Adhémar construction is as follows. Consider a point P with a (past or future) light cone I and an arbitrary smooth 3-dimensional null hypersurface N intersecting I. The intersection has spherical topology and is labeled C. The Kirchhoff-d'Adhémar construction will take the form of an integral evaluated on C whose value is independent of the intersecting null hypersurface N and thus of the specific intersection C. In fact, any smooth cut of the light cone I can be said to have come about by the intersection of I with some null hypersurface N.

The method of proof consists in showing that the Kirchhoff-d'Adhémar integral evaluated on two arbitrary cuts of I, denoted C and C′, gives the same result by treating C − C′ as the oriented boundary of a 3-dimensional section of the light cone I. The generalised Stokes' theorem can be used to relate cut invariance to the vanishing of a related integral on the region between the cuts. This is explained in detail in [20, 24].

At each point of an intersection C there are two distinguished null directions, one along the generators of the light cone I and the other along the intersecting null hypersurface, so it is advantageous to use the GHP formalism [14]. We can choose a spin-frame such that oA points along the intersecting hypersurface, and ιA along I, normalised so that oA ιA = 1.

Formally, the Kirchhoff-d'Adhémar formula reads

    φ[U]|_P = ∮_C U þc φ d²C,    (3.1)

where φ := φAB...L oA oB ... oL and U is a weighted scalar satisfying

    (i) ð̄c U = 0,  and  (ii) þ′c U = 0,    (3.2)

and the derivative operators with a subscript 'c' are conformally weighted operators of the cGHP formalism as introduced in [9].

In Minkowski spacetime, in a gauge where each cut C is represented as a unit 2-sphere, U will be a component of the spinor ιA ιB ... ιL, i.e., one of the 2s+1 spin-weighted spherical harmonics −sYsm with spin-weight −s. In this gauge, these are constant, thus trivially propagating along the light cone.

In a curved spacetime, U is a generalisation of these spin-weighted spherical harmonics to a topological but not necessarily metric sphere. In this general case, there are still 2s+1 independent solutions of (3.2(i)) for U.
4. Calculating the NP constants

In the remainder of this work we focus on the gravitational NP constants, i.e., with φABCD = ψABCD, the conformally rescaled Weyl spinor. Many system variables of the GCFE are components of spinors with respect to a certain spin-frame. In general, this spin-frame, and its associated null tetrad, does not agree with the frame adapted to I that was used in the definition of the NP constants, but we can use null rotations to transform between the GCFE null frame and the frame adapted to I, herein referred to as a Bondi frame§.

A null rotation mixes one component of a spin-frame into another. For example, a null rotation of oA around ιA is given by

    oA → oA + Y ιA,    ιA → ιA,    (4.1)

thus keeping ιA fixed, where Y is a function of the spacetime coordinates. If we denote the Bondi spin-frame by OA and IA, the GCFE frame by oA and ιA, and the corresponding Bondi and GCFE null-tetrad vectors by capital and lowercase letters respectively, then we may transform between frames by two successive null rotations (first fixing oA and then the new ιA) which have the combined form

    OA = oA + Y (ιA + X oA),    IA = ιA + X oA.    (4.2)

The null rotation functions X and Y are determined by the conditions

    ∇a Θ = −A Na,    Ma ∇a t = 0,    (4.3)

where the scaling A is fixed given the conformal factor Θ and the above expression of the adapted spin-frame. These conditions impose that Na points along the null generators of I and that the complex vector Ma lies within the t = const. cuts of I. Appropriately fixing the freedom in how the frame propagates along the timelike conformal geodesics of the conformal Gauß gauge allows one to satisfy the second condition automatically, yielding X = 0. The first condition gives us the value of the null rotation function Y on I+.

The transformation between the GCFE frame and the Bondi frame is then known, and so we may write components with respect to the Bondi frame in terms of components with respect to the GCFE frame, which are known numerically. As a simple example, the third component of the gravitational spinor is written in the Bondi frame as

    ψABCD OA IB IC ID = ψABCD (oA + Y ιA) ιB ιC ιD = ψ3 + Y ψ4.

Both the GCFE and the NP constants are defined with respect to a spin- and boost-covariant formalism, and so a properly weighted expression with respect to one frame results in a properly weighted expression with respect to another. The same process is used to compute the area-form in terms of numerically available quantities.

§ This is not strictly correct, since we are referring here only to a single cut, whereas the standard usage of the term Bondi frame refers to an entire system of cuts parametrised by the retarded Bondi time.
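Since the full Bondi-frame Weyl components are binomial combinations of the GCFE ones under (4.1) with X = 0, they are easy to assemble once Y is known on the cut. The following hedged Python sketch (our own illustrative code, not the authors' implementation) generalises the ψ3 + Yψ4 example; psi is a sequence of the five GCFE-frame components and Y the null rotation function evaluated on the cut.

    from math import comb

    def bondi_psi(psi, Y, k):
        """k-th Weyl component in the cut-adapted Bondi frame, k = 0..4.

        With O^A = o^A + Y i^A and I^A = i^A, each Bondi component is a
        binomial combination of the GCFE components; k = 3 reproduces
        psi[3] + Y*psi[4] from the text.
        """
        n = 4 - k  # number of O^A slots in the contraction
        return sum(comb(n, j) * Y**j * psi[k + j] for j in range(n + 1))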
4.1. Fixing the behaviour of the frame off I+

With the above two null rotations, the frame on I+ is fixed, but we also have some freedom to choose how our frame changes as we move away from I+. The presentation of I+ in the proof of supertranslation invariance makes use of this and takes κ = 0, since the intersecting null hypersurface is foliated by a null geodetic congruence [24]. The Bondi frame so far is only fixed on I+, and in order to achieve κ = 0 we need to enforce that DoA ∝ oA, which means that we need to determine the null rotation function Y away from I+. Suppose we have fixed Y on I+ with the above procedure; then we can get the required result with a third null rotation that becomes the identity on I+,

    oA → ôA = oA + Z ιA,    ιA → ι̂A = ιA,    where Z = O(Θ),    (4.4)

recalling the conformal factor Θ. Under this transformation,

    κ̂ = κ − DZ,    (4.5)

where D := La ∇a, and choosing Z so that κ = DZ on I+ we obtain κ̂ = 0 there.

Although the transformation becomes the identity on I+, we must worry about derivatives of the frame. In the Kirchhoff-d'Adhémar integral, we have a derivative of the form Dφ where φ = φA..L oA oB .. oL (2s indices). Under this null rotation,

    D̂φ̂|_(I+) = Dφ̂    (4.6)
             = D(φA..L (oA + Z ιA) .. (oL + Z ιL))    (4.7)
             = Dφ + 2sκφ1,    (4.8)

since the derivative of any term containing powers of Z higher than one will vanish on I+.
4.2. Computing U

To compute U we must solve the "constraint equation" ð̄cU = 0 on a cut∥ and evolve it along the null generators of I+ with the evolution equation þ′cU = 0. Following methods from our earlier paper [8], we can expand these operators written in the Bondi frame in terms of the numerically implemented, standard operators ð̃, ð̃′, to obtain

    A ð̃U + B ð̃′U + CU = 0.    (4.9)

Expanding the known coefficients A, B, and C, and the unknown function U, in terms of spin-weighted spherical harmonics, and using the well-known relationship for products of spin-weighted spherical harmonics in terms of Clebsch-Gordan coefficients (see [23]), results in a system of homogeneous linear equations for the spectral coefficients of the function U. There are five linearly independent solutions which span the solution space to the constraint equation for U.

∥ This term is justified since, by considering the commutator [þ′c, ð̄c], one can show that any U satisfying the evolution equation þ′cU = 0 will satisfy ð̄cU = 0 on every cut if it satisfies this equation on a single cut. In this sense, the constraint is propagated by the evolution.
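Numerically, the five independent solutions are the null space of the matrix of that homogeneous system. A minimal sketch, assuming the spectral matrix M acting on the vector of coefficients of U has already been assembled (M and the tolerance are illustrative names, not the authors' API):

    import numpy as np

    def nullspace_basis(M, tol=1e-10):
        """Columns spanning the (approximate) null space of M.

        M encodes the homogeneous system M @ u = 0 obtained by expanding
        (4.9) in spin-weighted spherical harmonics; for the NP constants
        one expects exactly five null vectors.
        """
        _, s, vh = np.linalg.svd(M)
        rank = int(np.sum(s > tol * s[0]))
        return vh[rank:].conj().T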
The evolution equation can similarly be written in terms of numerically available quantities as

    A ∂tU + B ð̃U + C ð̃′U + DU = 0,    (4.10)

and may be evolved along I+ by the method of lines given an initial solution to the constraint equation. An adaptive fourth-order Runge-Kutta method is used since the numerical output is not linearly spaced in t.
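As a hedged illustration of this method-of-lines step, the sketch below uses SciPy's adaptive RK45 (which accepts complex-valued state vectors) in place of the authors' adaptive integrator; rhs is assumed to return −(B ð̃U + C ð̃′U + DU)/A in spectral space, built from interpolated coefficients, and all names are ours.

    from scipy.integrate import solve_ivp

    def evolve_U(u0, t_grid, rhs):
        """Propagate the spectral coefficients of U along the generators of I+.

        u0 solves the constraint on the initial cut; t_grid holds the
        (non-uniform) cut times at which U is required.
        """
        sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), u0,
                        t_eval=t_grid, method="RK45",
                        rtol=1e-10, atol=1e-12)
        return sol.y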
4.3. Fixing a basis in the solution space U

It is clear that a solution U of (3.2) leads to another solution αU provided that α is a complex constant on the cut C. More generally, any complex linear combination of solutions will also be a solution. Thus, changing the basis of the solution space U of (3.2) will also change the individual values of the five NP constants. Therefore, these do not, by themselves, carry independent physical information. Only the combination of the values of the integrals together with the knowledge of the basis of U carries the full information.

This means that in order to compare NP constants across different spacetimes we need to make sure that we specify "the same basis" for the solution space for each spacetime. There are several ways to do this: two complicated ones which are also physically relevant, and a third, easier one which is not as physically meaningful but much more pragmatic.

The first idea that comes to mind is to first conformally rescale the metric on the cut to make it into a unit-sphere, and, in a second step, to change the coordinate system by a Möbius transformation so that it becomes a standard polar coordinate system on the unit-sphere. In this situation, the solutions of (3.2(i)) are the standard spin-weighted spherical harmonics Ym := −2Y2m with −2 ≤ m ≤ 2. While the first step is rather straightforward, the second step leads to a Poisson-type equation on the sphere with a δ-like source term which is difficult (but not impossible) to treat numerically. In addition, after the coordinate transformation all quantities must be transformed, which may introduce several numerical errors.

The second way to introduce a basis in U is to make use of the fact that the standard spin-2 spherical harmonics Ym form an irreducible representation of the group SU(2). They each are an eigenvector of an infinitesimal generator with different eigenvalue, and they are obtained one from the other by the action of two ladder operators (very much akin to the angular momentum algebra of quantum mechanics). Fixing one of them as being annihilated by one of the ladder operators, one can generate the others by successive application of the other ladder operator. This fixes the complete basis in terms of the first vector and leaves the freedom of scaling with one complex number. This can be almost fixed by normalising the vectors with respect to an appropriate Hermitian product, leaving the remaining freedom of a single phase.

In principle, this program could be carried out but it is very cumbersome. First, one needs to find the infinitesimal generators of the group action. This leads to a series of elliptic equations to be solved on the sphere. Next, one needs to use these generators to determine the function that is killed by one of the ladder operators, which is again an elliptic equation on the sphere, and then generate the other functions by successive application of the other ladder operator. As an alternative, one could solve the eigenvalue problem for the third operator. Obviously, this procedure is numerically quite involved and prone to inaccuracies due to successive numerical differentiation.

For this reason, we use a third method to fix a "universal" basis of U, and we use the "universal structure" that is available to us, namely our numerical setup, which is the same for every spacetime that we compute. Recall that our method is based on concentric round spheres and that every function we compute can be expanded as a linear combination of spin-weighted spherical harmonics defined with respect to the numerical round spheres. Therefore, we proceed as follows: first, on an initial cut, we compute five linearly independent solutions (uk), k = −2, ..., 2, of (3.2(i)). These have the form

    uk = Σ_(m=−2)^(2) c^m_k Ym + Z_(l>2),    k = −2, ..., 2,

where Z_(l>2) stands for terms with higher values of l. Then a straightforward linear combination of these solutions leads to the universal basis Uk, which is defined by

    Uk = Yk + Z_(l>2),

where Z_(l>2) again stands for higher-l terms. We can interpret this basis as being the deformation of the standard basis provided by the Ym due to the impact of the incoming gravitational wave. If there was no gravitational wave, then the cut would be spherically symmetric and the Uk would agree with the standard basis. The basis, thus defined, is then propagated along I+ using the evolution equation (3.2(ii)). In this process, the form of the Uk will change. This process of fixing a "universal basis" of U leaves no further freedom (except, of course, for the free phase inherent in the definition of the spin-weighted spherical harmonics).
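The linear combination producing the universal basis amounts to inverting the 5x5 block of l = 2 coefficients. A minimal sketch, with our own illustrative names (u_spec holds the spectral coefficients of the five uk, l2_block their l = 2, m = -2..2 sub-matrix):

    import numpy as np

    def universal_basis(u_spec, l2_block):
        """Form U_k = Y_k + (l > 2 terms) from five independent solutions u_k.

        Solving for the combination matrix A with A @ l2_block = identity
        makes the l = 2 part of each new basis vector a single Y_k.
        """
        combo = np.linalg.solve(l2_block.T, np.eye(5))
        return combo.T @ u_spec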
4.4. Integrating the Newman-Penrose integrand

At this stage, all elements of the Newman-Penrose integral (3.1) have expressions in terms of known quantities. Integration can be performed against the basis Uk as defined in 4.3 by simply computing the s = l = m = 0 spectral coefficient of the complete integrand and dividing by 2π. The theory shows that these five complex numbers, obtained on a cut, come out the same independently of which cut was chosen for their evaluation. In the next section we present numerical results that showcase these properties.
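For orientation, a crude stand-in for that projection is a direct quadrature of the integrand over the cut; the production code instead extracts the l = m = 0 coefficient pseudo-spectrally. The sketch below is ours and assumes the integrand has already been sampled on a (theta, phi) grid:

    import numpy as np

    def np_constant(integrand, theta, phi):
        """(1/2π) times the l = m = 0 mode of U * þc(φ) * (area density)."""
        dtheta = theta[1] - theta[0]
        dphi = phi[1] - phi[0]
        integral = np.sum(integrand * np.sin(theta)[:, None]) * dtheta * dphi
        return integral / (2 * np.pi)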
5. Numerical Results

Using the above procedure, the NP constants were computed with data on I+ from a numerically evolved spacetime modelling the non-linear perturbation of a Schwarzschild black hole by an incoming gravitational wave pulse. Because the NP constants are evaluated on each cut of I+, defined as the intersection with a t = const. hypersurface, we obtain five complex numbers for every t.

The initial mass of the black hole is m = 0.5 for all simulations considered.

5.1. Ingoing wave

The ingoing pulse is defined as the choice of free data q0 of the lightlike, ingoing characteristic variable on the outer boundary. This is chosen to be a linear combination of the spin-weighted spherical harmonics 2Y2m for m = 0, 1, 2 and with amplitudes a, b and c respectively. The choices of these amplitudes will vary in the upcoming sections. This gives the wave profile on the outer boundary

    q0(t, θ, φ) = [ 4a √(π/15) 2Y20 − 2b √(5/π) 2Y21 + 2c √(π/5) 2Y22 ] sin⁸(8πt)  for t ≤ 1/8,
    q0(t, θ, φ) = 0  for t > 1/8.    (5.1)
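The profile (5.1) can be evaluated directly once the three spin-weight-2, l = 2 harmonics are available. The sketch below uses explicit expressions in one common convention (Goldberg et al.); signs and phases differ between conventions, so treat both the harmonics and the function names as illustrative rather than the authors' code.

    import numpy as np

    def swsh_2_2m(m, theta, phi):
        """Spin-weight 2, l = 2 spherical harmonics (one common convention)."""
        c, s = np.cos(theta), np.sin(theta)
        if m == 0:
            return 0.25 * np.sqrt(15 / (2 * np.pi)) * s**2
        if m == 1:
            return 0.25 * np.sqrt(5 / np.pi) * s * (1 - c) * np.exp(1j * phi)
        if m == 2:
            return 0.125 * np.sqrt(5 / np.pi) * (1 - c)**2 * np.exp(2j * phi)
        raise ValueError("only m = 0, 1, 2 are used in (5.1)")

    def q0(t, theta, phi, a=0.0, b=0.0, c=0.0):
        """Ingoing pulse (5.1): a smooth sin^8 burst switched off after t = 1/8."""
        if t > 1/8:
            return np.zeros_like(theta + phi, dtype=complex)
        ang = (4*a*np.sqrt(np.pi/15) * swsh_2_2m(0, theta, phi)
               - 2*b*np.sqrt(5/np.pi) * swsh_2_2m(1, theta, phi)
               + 2*c*np.sqrt(np.pi/5) * swsh_2_2m(2, theta, phi))
        return ang * np.sin(8*np.pi*t)**8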
5.2. Checks of correctness

We have demonstrated in several papers [3, 8] that the solutions computed by the GCFE system converge at the correct order for the 2+1 axisymmetric case. Here we show that this is also true in the general case of 3+1 dimensions. We also show that the NP constants converge to constant values on I+.

We use an ingoing wave profile proportional to 2Y22 with a = b = 0, c = i to allow excitation in the φ-direction. Fig. 1 demonstrates convergence in all spatial directions at t = 1.

[Figure 1, panels (a) θ = π/2 and φ = π; (b) r = 0.978 and φ = π; (c) r = 0.978 and θ = π/2 (each panel plots log10 |Error| against r, θ or φ respectively): The imaginary part of a constraint equation from the Bianchi identity when a single spatial coordinate is fixed, at t = 1, for r, θ, φ resolutions of {101, 9, 16}, {201, 17, 32} and {401, 33, 64} (denoted by Res1, Res2 and Res3 respectively). The dashed vertical line represents I+ in the radial plot. The curves from top to bottom correspond to increasing resolution.]
Focusing now on the NP constants, we demonstrate how they approach constant values along I+. Fig. 2 shows a convergence test of the discrepancy from constancy of the single non-vanishing NP constant in axisymmetry, choosing amplitudes a = 1, b = c = 0, along I+ as the spatial resolution is increased. The resolution in r (number of intervals) is denoted rres and corresponds to an equivalently scaled resolution θres along the θ-direction. The coarsest values are rres = 100 and θres = 8, and the resolutions are doubled in successive simulations.

[Figure 2 (plot of log10 Error against conformal time t ∈ [0.8, 1.6] for rres = 100, 200, 400): Convergence of the log10 difference between the magnitude of the NP constant at time t and at the initial cut with increasing spatial resolution.]
5.3. Variable ingoing amplitude

A superficial glance at (3.1) would suggest that, due to the linearity of the integrand and the zero-rest-mass field equations with respect to φ, scaling the amplitude of the ingoing wave would just scale the NP constants linearly. However, this turns out not to be the case, because φ satisfies the Bianchi equation, into which φ enters non-linearly through the connection coefficients of the covariant derivative [23, §5.7]. Hence, it is interesting to numerically probe the scaling of the NP constants as the ingoing wave profile is scaled.

We performed four simulations in axisymmetry with the amplitudes b = c = 0 and a taking the values 1, 2, 5, and 10. These are evolved up to t = 1.77, at which point the system variables start to diverge due to the close 'conformal' proximity to i+ at t ≈ 1.79. Table 1 shows the corresponding single NP constant for each amplitude as well as the relative error from a linear fit through the origin. Fitting the NP constants to the ansatz αa^β yields α ≈ 0.53865 and β ≈ 0.99803. Fig. 3 shows the log10 deviation of the NP constant value from the value on the initial cut for each amplitude as a function of t. The deviation from a linear fit is orders of magnitude greater than the error. This is due to the amplitude of the initial wave profile entering into the field equations non-linearly, resulting in a non-linear relationship between initial amplitude and Newman-Penrose constant.

    Amplitude   1         2         5         10
    NPC         0.53882   1.07638   2.68405   5.36222
    Rel. Err.   0         0.00116   0.00373   0.00482

Table 1: The one non-vanishing NP constant for different ingoing wave amplitudes and the deviation from a linear fit through the origin. This deviation is orders of magnitude larger than the error for each. This is a result of the amplitude entering non-linearly into the field equations.
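The power-law fit quoted above is easy to reproduce from the data in Table 1; the short sketch below (illustrative, using SciPy's curve_fit) recovers values close to those quoted in the text.

    import numpy as np
    from scipy.optimize import curve_fit

    amps = np.array([1.0, 2.0, 5.0, 10.0])
    npc = np.array([0.53882, 1.07638, 2.68405, 5.36222])  # Table 1

    power_law = lambda a, alpha, beta: alpha * a**beta
    (alpha, beta), _ = curve_fit(power_law, amps, npc, p0=(0.5, 1.0))
    print(alpha, beta)  # approximately 0.53865 and 0.99803, as in the text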
[Figure 3 (plot of log10 error against conformal time t ∈ [0.8, 1.6] for a = 1, 2, 5, 10): The log10 difference of the NP constant from the value on an initial cut as a function of conformal time t for a variable amplitude of the initial wave profile, as a measure of deviation from constancy due to error. Cumulative error grows as we integrate along I+.]
5.4. Deviation from axisymmetry

In a general asymptotically flat spacetime there are five complex NP constants, corresponding to the five independent solutions to the equations (3.2). In axisymmetry, these collapse to only one independent solution, when the frame and coordinates also respect the symmetry, because then only the m = 0 modes of a spherical harmonic expansion remain.

We can investigate the collapse of five NP constants into one by using the initial wave profile given by (5.1) and using a = i, b = c = ϵi, where ϵ parametrises a deviation from axisymmetry.

Six simulations were run for this wave profile for ϵ = 0, 1, 2, 3, 4, 5. Fig. 4 shows the magnitudes of the corresponding NP constants. Although we do have access to the full ten real degrees of freedom (five complex degrees of freedom) for each simulation, the trends can be seen in the behaviour of the magnitudes. Fig. 5 shows the same quantities but separated by the value of m of the corresponding U, so that individual trends can be seen. We can see that for ϵ = 0 there is only one non-trivial constant, corresponding to the axisymmetric m = 0 mode, but as non-axisymmetric modes are introduced for ϵ > 0, all five constants take on non-trivial values and grow with ϵ.

[Figure 4 (plot of NPC magnitudes against amplitude ϵ ∈ [0, 5], with curves for m = −2, −1, 0, 1, 2): The magnitudes of the five complex NP constants as a function of a parameter ϵ which breaks axisymmetry in the initial wave profile. For ϵ > 0 all constants are non-zero although most are small.]
6. Discussion

In this paper, we continue our numerical investigation into the non-linear perturbation of a Schwarzschild black hole using an initial boundary value problem for the general conformal field equations. This novel numerical scheme allows us to include I+ within the computational domain and so compute quantities there directly. Thereby, we have computed the NP constants for the first time in a physically realistic spacetime.

[Figure 5 (five panels of NPC magnitude against amplitude ϵ, one for each of m = −2, −1, 0, 1, 2): The magnitudes of the five complex NP constants as a function of a parameter ϵ which breaks axisymmetry in the initial wave profile, split by the value of m of the corresponding U. Note that m is a label on spherical harmonics, not a mass.]

The gauge quantities of the system were fixed by numerical needs, which implies that we were unable to directly use the very specific set of coordinates, frame, and conformal factor typically used when defining quantities at I+. To compute physical quantities such as the NP constants with data from the numerical simulation, we need an explicitly conformally invariant expression for the quantity. However, this concept of conformal invariance is a rather special one, and it might be appropriate to highlight it again here. Physical quantities make reference to the physical metric g̃ab of the spacetime. In our context, the physical metric is represented as g̃ab = Ω⁻² gab, i.e., in terms of another metric in the same conformal class and the conformal factor relating the two. By conformal invariance we do not mean invariance under g̃ab ↦ θ² g̃ab, but rather the invariance under (gab, Ω) ↦ (θ² gab, θΩ), which corresponds to the free choice of the splitting of g̃ab into a conformal and a scale part.

For example, the recent analysis of the Bondi-Sachs energy-momentum in this framework involved generalising the procedure of constructing a basis of translations with respect to which components of the Bondi-Sachs 4-vector may be taken. The standard procedure of choosing the first four spherical harmonics is certainly not conformally invariant in this sense. This led to an invariant characterisation of the Lorentzian metric on the space of BMS translations [9]. Of course, the existence of this metric still leaves the freedom of Lorentz transformations for the choice of the basis.

We run into the same problem when defining a basis for the quantity U which, when integrated against the Newman-Penrose integrand, gives the linearly independent NP constants. Again, it is the solution space which is defined in a conformally invariant way. But in this case there is no obvious inner product that one could use to select a basis. Even if there was one, the basis would still be defined only up to the appropriate (pseudo-)orthogonal transformations. We circumvent the non-uniqueness of the basis in this case by referring to the universal structure that is imposed on the problem by the numerical setup, as explained in Sec. 4.3. This seems to be the best way to ensure comparability across the different space-times that we investigate.
Acknowledgments

Supported by the Marsden Fund Council from Government funding, managed by Royal Society Te Apārangi.

The authors would like to thank L. Escobar for sharing the general form of his SWSH code.

We wish to acknowledge the use of New Zealand eScience Infrastructure (NeSI) high performance computing facilities, consulting support and/or training services as part of this research. New Zealand's national facilities are provided by NeSI and funded jointly by NeSI's collaborator institutions and through the Ministry of Business, Innovation & Employment's Research Infrastructure programme. URL https://www.nesi.org.nz.
References

[1] H. Bateman, "The transformation of the electrodynamical equations", Proc. LMS s2-8, 223-264 (1910).
[2] F. Beyer, L. Escobar, and J. Frauendiener, "Numerical solutions of Einstein's equations for cosmological spacetimes with spatial topology S3 and symmetry group U(1)", Phys. Rev. D 93, 043009 (2016).
[3] F. Beyer, J. Frauendiener, C. Stevens, and B. Whale, "Numerical initial boundary value problem for the generalized conformal field equations", Phys. Rev. D 96, 084020 (2017).
[4] H. Bondi, M. G. van der Burg, and A. W. K. Metzner, "Gravitational waves in general relativity. VII. Waves from axi-symmetric isolated systems", Proc. Roy. Soc. A 269, 21-52 (1962).
[5] M. H. Carpenter, D. Gottlieb, and S. Abarbanel, "Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: methodology and application to high-order compact schemes", J. Comp. Phys. 111, 220-236 (1994).
[6] E. Cunningham, "The principle of relativity in electrodynamics and an extension thereof", Proc. LMS s2-8, 77-98 (1910).
[7] G. Doulis, J. Frauendiener, C. Stevens, and B. Whale, "COFFEE - An MPI-parallelized Python package for the numerical evolution of differential equations", SoftwareX 10, 100283 (2019).
[8] J. Frauendiener and C. Stevens, "The non-linear perturbation of a black hole by gravitational waves. I. The Bondi-Sachs mass loss", Class. Quantum Grav. 38, 194002 (2021).
[9] J. Frauendiener and C. Stevens, "A new look at the Bondi-Sachs energy-momentum", Class. Quantum Grav. 39, 025007 (2022).
[10] H. Friedrich, "On the regular and the asymptotic characteristic initial value problem for Einstein's vacuum field equations", Proc. Roy. Soc. A 375, 169-184 (1981).
[11] H. Friedrich, "The asymptotic characteristic initial value problem for Einstein's vacuum field equations as an initial value problem for a first-order quasilinear symmetric hyperbolic system", Proc. Roy. Soc. A 378, 401-421 (1981).
[12] H. Friedrich, "Einstein equations and conformal structure: Existence of anti-de Sitter-type space-times", J. Geom. Phys. 17, 125-184 (1995).
[13] H. Friedrich, "Conformal geodesics on vacuum space-times", Commun. Math. Phys. 235, 513-543 (2003).
[14] R. P. Geroch, A. Held, and R. Penrose, "A space-time calculus based on pairs of null directions", J. Math. Phys. 14, 874-881 (1973).
[15] H. Godazgar, M. Godazgar, and C. N. Pope, "Subleading BMS charges and fake news near null infinity", J. High Energ. Phys. 2019, 143 (2019).
[16] R. Gómez, J. Winicour, and B. G. Schmidt, "Newman-Penrose constants and the tails of self-gravitating waves", Phys. Rev. D 49, 2828-2836 (1994).
[17] J. A. V. Kroon, "Early radiative properties of the developments of time-symmetric conformally flat initial data", Class. Quantum Grav. 20, L53 (2003).
[18] E. T. Newman and T. W. J. Unti, "Behavior of asymptotically flat empty spaces", J. Math. Phys. 3, 891-901 (1962).
[19] E. T. Newman and R. Penrose, "An approach to gravitational radiation by a method of spin coefficients", J. Math. Phys. 3, 566-578 (1962).
[20] E. T. Newman and R. Penrose, "New conservation laws for zero rest-mass fields in asymptotically flat space-time", Proc. Roy. Soc. A 305, 175-204 (1968).
[21] R. Penrose, "Asymptotic properties of fields and space-times", Phys. Rev. Lett. 10, 66-68 (1963).
[22] R. Penrose, "Zero rest-mass fields including gravitation: asymptotic behaviour", Proc. Roy. Soc. A 284, 159-203 (1965).
[23] R. Penrose and W. Rindler, Spinors and Spacetime: Two-spinor calculus and relativistic fields, Vol. 1 (Cambridge University Press, Cambridge, 1984).
[24] R. Penrose and W. Rindler, Spinors and Spacetime: Spinor and twistor methods in space-time geometry, Vol. 2 (Cambridge University Press, 1986).
[25] N. Rosen, "Plane polarized waves in the general theory of relativity", Phys. Z. Sowjetunion 12, 366-372 (1937).
[26] R. K. Sachs, "Gravitational waves in general relativity. VIII. Waves in asymptotically flat space-time", Proc. Roy. Soc. A 270, 103-126 (1962).
[27] B. Strand, "Summation by parts for finite difference approximations for d/dx", J. Comp. Phys. 110, 47-67 (1994).
0NE4T4oBgHgl3EQfyw06/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0tFRT4oBgHgl3EQfkzf9/content/2301.13597v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b8d07b834a9a0f2578b5f0537410e74eaf4e65fbad64946dc1f01f5b5b07e64
3
+ size 178882
0tFRT4oBgHgl3EQfkzf9/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a7bde347df668af9fa6995ad847071117570c1d9201f6ce45905b87310982bc3
3
+ size 3014701
0tFRT4oBgHgl3EQfkzf9/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f82d04b44e60cef259295806b9cf38e6628b7f74949fd55ac4bd0bcade3341f
3
+ size 99489
1tE0T4oBgHgl3EQfdgCu/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:066768391395de35eef6bc7da478cc8dc98d04a2f78421cf3536c21afb5231bb
3
+ size 192645
1tFAT4oBgHgl3EQfkB3r/content/tmp_files/2301.08609v1.pdf.txt ADDED
@@ -0,0 +1,841 @@
Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach

Niall F. Robertson¹, Albert Akhriev¹, Jiri Vala²,³ and Sergiy Zhuk¹

¹ IBM Quantum, IBM Research Europe - Dublin, IBM Technology Campus, Dublin 15, Ireland
² Maynooth University, Maynooth, Ireland
³ Tyndall National Institute, Cork, Ireland

Abstract

The simulation of quantum spin chains is a promising candidate for the demonstration of quantum advantage. One of the main obstacles to achieving this is the noise that arises from implementing the deep circuits that appear in standard quantum time evolution algorithms. Compiling these deep circuits into shallower ones is thus a key issue that we address in this work. We use a Tensor Network based approach to Approximate Quantum Compiling to produce short depth quantum circuits that simulate the time evolution of the Heisenberg spin chain on up to 100 qubits. Furthermore, we run these short depth circuits on ibmq-mumbai, a 27-qubit device, and show that the accuracy of the measured observables is significantly improved after applying our Tensor Network compilation scheme.

arXiv:2301.08609v1 [quant-ph] 20 Jan 2023
1 Introduction
The simulation of quantum many-body systems is a task of immense scientific interest. The study of quantum dynamics, in particular, allows for the study of thermalisation, many-body localisation, Hubbard model physics and the applicability of field theory to out-of-equilibrium phenomena. In all of these fields there are many open scientific questions whose answers are likely to demand accurate simulation of quantum dynamics. However, the classical computational requirements of a brute-force approach to quantum dynamical simulations scale exponentially in the size of the system. Approximate techniques such as Tensor Networks are thus often called upon. Tensor Networks represent one of the best sets of tools available to simulate time evolution and can also be applied to other problems such as ground state calculations [1] and machine learning [2, 3].

Matrix Product States (MPS) are a particular type of Tensor Network that is particularly suited to describe quantum systems in one dimension. They form a key component of modern implementations of the well known Density Matrix Renormalisation Group (DMRG) algorithm used to find the ground state of local Hamiltonians. The DMRG algorithm was designed many years before [4] it was realised that it could be understood as a variational optimisation algorithm where a Matrix Product State is used as an Ansatz for the ground state [5]. This insight shed light on the reasons behind the spectacular success of DMRG; the ground states of local Hamiltonians are only weakly entangled and so too are Matrix Product States. More precisely, the bipartite entanglement entropy S of the ground state of a local Hamiltonian satisfies an area law, meaning that the entanglement entropy is proportional to the area of the boundary of the two subsystems in the bipartition. In 1D, this means that the entanglement entropy is independent of the system size [6]. This is in contrast to typical states in Hilbert space whose entanglement structures satisfy a volume law. Matrix Product States are also known to satisfy an area law [5] and thus have the same entanglement structure as the ground state by design.

Since the weak entanglement of ground states of local Hamiltonians allows for their efficient storage as Matrix Product States, it is natural to ask if this is also possible for states that are generated by time evolution, as these states are no longer necessarily weakly entangled. It turns out that for many physical systems of interest, entanglement entropy increases linearly until it saturates, at which point an MPS will no longer be an efficient representation of the state. However, if the initial state is weakly entangled then the MPS representation can be used to store the state at early times. A paradigmatic example of this scenario is a quantum quench, whereby a quantum system is initially prepared in the ground state of some local Hamiltonian, the parameters of the Hamiltonian are subsequently changed very rapidly and the system then evolves according to Schrödinger's equation. The TEBD algorithm (Time Evolving Block Decimation) can be used to simulate time evolution after a quantum quench; the state is stored as an MPS and this MPS is updated as a function of time. Despite the success of DMRG, TEBD and other Tensor Network algorithms, these approaches are not without limitations. The memory requirement to store an MPS is characterised by the bond dimension, given by the dimension of the largest matrix used in the description of the state. For constant approximation error ϵ this bond dimension increases exponentially with the entanglement entropy and thus with time. Therefore, for a fixed maximum bond dimension, the error ϵ increases exponentially with time. This limits the applicability of Tensor Network algorithms to short time simulations. A quantum algorithm, however, does not in principle suffer from this issue - the key difference between a quantum and a classical device being the ability to store highly entangled states. A quantum computer therefore has the potential to simulate quantum many-body systems for long times. The accurate simulation of the time evolution of 1D quantum systems is thus a promising route for the demonstration of quantum advantage in the short term. One such quantum algorithm is Trotterisation, where a discrete time step dt is used and the time evolution operator is approximated as a quantum circuit with an error that scales polynomially in dt. The depth of the quantum circuit used in such an approach increases with decreasing dt, leading to a trade-off between the noise arising from using deep circuits and the decreasing accuracy of the approximation when dt is increased. A number of variational quantum algorithms for the simulation of time evolution have therefore been developed that aim to use shallower circuits [7, 8, 9, 10]. Each of these approaches suffers from a number of issues such as convergence, runtime and limited device connectivity. As a result, it has been argued that such variational approaches are not practical for use on near term quantum hardware [11].

One approach that aims to overcome the issue of deep circuits is Approximate Quantum Compiling [12, 13, 14], where one defines a parametric circuit of fixed depth and uses techniques from optimisation to minimise the distance between the parametric circuit and the target circuit of interest - where distance is defined by some carefully chosen metric. In principle, this approach can lead to short depth circuits that implement the target circuit of interest within some error tolerance. In practice, a classical implementation of such an approach [14] is limited to act on a small number of qubits due to the exponential scaling of the Hilbert space with the number of qubits.

Here we develop a new approach to quantum simulation that combines Matrix Product States, Approximate Quantum Compiling and Trotterisation to produce short depth quantum circuits that implement the time evolution operator of the Heisenberg spin chain. This approach is scalable thanks to the immense power of Matrix Product States. Figure 1 shows a schematic of our approach: first we apply Trotterisation classically for the maximum length of time for which we can still store the state as an MPS. We then apply a Matrix Product State implementation of Approximate Quantum Compiling to squeeze the circuit (purple box in the figure) to find a much shallower circuit that still reproduces the same state as Trotterisation, up to some small error in the fidelity. We then use the squeezed circuit as the input for the Trotter circuit, which can now generate a quantum state beyond what can be stored classically.
2 Setup

2.1 The model

We will consider the XXX spin-chain - a paradigmatic model for quantum magnetism - defined by the Hamiltonian:

    H_XXX = − Σ_(i=0)^(L−1) h_(i,i+1) = − Σ_(i=0)^(L−1) ( S^x_i S^x_(i+1) + S^y_i S^y_(i+1) + S^z_i S^z_(i+1) ),    (1)

where S^x, S^y and S^z are written in terms of Pauli matrices as S^x = σ^x/2, S^y = σ^y/2 and S^z = σ^z/2. The Hamiltonian in (1) is a prototypical example of an integrable 1D model and its dynamical behaviour has been studied extensively [15], including on a quantum computer [16].

[Figure 1 (schematic; the compressed region is labelled "Compress with AQC"): Schematic of our approach: Trotterisation is applied classically (purple box) and then a Matrix Product State implementation of Approximate Quantum Compiling is applied to compress the first part of the circuit. Standard Trotterisation is then applied on a quantum device afterwards to simulate longer times, i.e. times which are beyond what is possible classically.]

[Figure 2 (two-qubit circuit with three CNOTs and single-qubit rotations Rz(θ), Rz(π/2), Rz(−π/2), Ry(φ), Ry(λ)): Implementation of the two-site operator e^(i(α σx⊗σx + β σy⊗σy + γ σz⊗σz)) as a quantum circuit. We have the correspondences θ = π/2 − 2γ, φ = 2α − π/2 and λ = π/2 − 2β. The Hamiltonian in (1) corresponds to the case α = β = γ = dt.]
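As a concrete illustration of the decomposition in Figure 2, the hedged Qiskit sketch below builds a three-CNOT circuit for the two-site operator, following the standard Vatan-Williams style construction. The angle correspondences are taken from the caption, but the exact placement of the single-qubit rotations in Figure 2 may differ from this sketch by single-qubit gate ordering and global phase; treat it as illustrative, not as the authors' circuit.

    import numpy as np
    from qiskit import QuantumCircuit

    def two_site_gate(alpha, beta, gamma):
        """Three-CNOT sketch for exp(i(a XX + b YY + c ZZ))."""
        theta = np.pi/2 - 2*gamma   # from the Figure 2 caption
        phi = 2*alpha - np.pi/2
        lam = np.pi/2 - 2*beta
        qc = QuantumCircuit(2)
        qc.rz(-np.pi/2, 1)
        qc.cx(1, 0)
        qc.rz(theta, 0)
        qc.ry(phi, 1)
        qc.cx(0, 1)
        qc.ry(lam, 1)
        qc.cx(1, 0)
        qc.rz(np.pi/2, 0)
        return qc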
The time evolution of a quantum state |ψ(t)⟩ is governed by the Schrödinger equation:

    |ψ(t)⟩ = e^(−i H_XXX t) |ψ(0)⟩,    (2)

where |ψ(0)⟩ is the wavefunction at time t = 0. In this work, we will consider the Néel state, written as |↑↓↑↓ ... ↑↓⟩, where ↑ and ↓ represent up and down spins respectively. The Néel state for n spins is simply implemented on n qubits as |1010...10⟩.

The time evolution operator U(t) ≡ e^(−iHt) can be executed as a quantum circuit in a resource efficient way; we first write the Hamiltonian in (1) as H_XXX = H1 + H2, where H1 = − Σ_(i odd) h_(i,i+1) and H2 = − Σ_(i even) h_(i,i+1). Note that all operators in a given sum commute with all other operators in their respective sums. We then define the Suzuki-Trotter time evolution operator U_trot(dt) in the following way:

    U^(1)_trot(dt) = Π_(j=0)^(L/2−1) U_(2j,2j+1)(dt) Π_(j=1)^(L/2−1) U_(2j−1,2j)(dt) = e^(−i H_XXX dt) + O(dt²),    (3)

where U_(jk)(dt) = e^(−i h_(jk) dt). The exact time evolution operator U(t) is thus approximated by m repeated applications of U_trot(dt = t/m), i.e. U(t) ≈ U^m_trot(dt = t/m). As discussed in [16], each U_(jk)(dt) appearing in (3) can be implemented by the quantum circuit with just three CNOTs as in Figure 2. We can reduce the error in the Trotter formula in equation (3) by using higher order expressions [17]. It turns out that the second order Trotter formula can be implemented on a quantum circuit with only one extra layer in the circuit [16]. We have:

    U^(2)_trot(dt) = Π_(j=0)^(L/2−1) U_(2j,2j+1)(dt/2) Π_(j=1)^(L/2−1) U_(2j−1,2j)(dt) Π_(j=0)^(L/2−1) U_(2j,2j+1)(dt/2) = e^(−i H_XXX dt) + O(dt³),    (4)

which can be implemented on a quantum device by the circuit in Figure 4.
[Figure 3 (brickwork circuit of U(dt) gates on six qubits): First order Trotter circuit acting on six qubits.]

[Figure 4 (brickwork circuit on six qubits, with U(dt/2) gates on the outer even-bond layers and U(dt) gates elsewhere): Second order Trotter circuit acting on six qubits.]
251
+ [Diagram: a chain of tensors A(1) ... A(6) connected by bond indices.]
+ Figure 5: Graphical representation of an MPS. There are two matrices A(i) for each qubit at position i.
+ [Diagram: the tensors A(1) ... A(6) of ⟨ψ1| contracted against the tensors B(1) ... B(6) of |ψ2⟩.]
+ Figure 6: The inner product ⟨ψ1|ψ2⟩ of two Matrix Product States - see equations (11) and (6).
+ 2.2 Matrix Product States
+ An arbitrary quantum state on n qubits can be written in terms of complex variables c_{j1,...,jn}, the number
+ of which scales as 2^n:
+ |ψ⟩ = Σ_{j1,...,jn} c_{j1,...,jn} |j1, ..., jn⟩    (5)
+ where the sum is over all configurations of the binary variables j1, ..., jn. The bipartite entanglement
+ entropy of an arbitrary quantum state picked at random from Hilbert space satisfies a volume law which,
+ as was discussed in the introduction, is distinct from area law entanglement, in which case the entanglement
+ entropy of two regions after the bipartition of the system is proportional to the area of the boundary of
+ the system. A small subset of states in Hilbert space satisfies an area law. The coefficients c_{j1,...,jn} of
+ such states have a certain structure that we can exploit to study them classically. Any state |ψ⟩ can be
+ written in the following way:
+ c_{j1,...,jn} = A^(1)_{j1} · A^(2)_{j2} ⋯ A^(n)_{jn}    (6)
+ where the A^(j) are χ_j × χ_{j+1} dimensional matrices. Quantum states of the form (6) are known as Matrix
+ Product States (MPS). The maximum value of χ_j is referred to as the bond dimension of the MPS. We
+ can represent an MPS graphically as in Figure 5. We associate one matrix A^(i) to each qubit. Note
+ that for each qubit i we have two matrices. We thus have a total of 2n matrices to keep track of. The
+ bond dimension χ_j can be seen as a measure of the entanglement between the two subsystems when a
+ bipartition is made at qubit j. Therefore, states in Hilbert space that satisfy an area law - and therefore
+ have a low bond dimension in their MPS representation - can be efficiently stored as Matrix Product
+ States. States that satisfy a volume law will have a bond dimension that is exponential in the number of
+ qubits. We will consider in this work the non-trivial dynamics governed by equation (2). As discussed in
+ the introduction, the bipartite entanglement entropy of a ground state of a one-dimensional Hamiltonian
+ that has a gap between its ground state and its excited state is independent of the size of the subsystems.
+ The ground state of such a system - and hence the initial state in our setup - can be efficiently stored
+ as an MPS. One can then use an algorithm such as TEBD (Time Evolving Block Decimation) [18] to
+ update the MPS as a function of time to study the dynamics of the system. However, the entanglement
+ entropy of the state increases linearly with time, hence the bond dimension χ that is required to keep
+ the error constant diverges exponentially with time. To simulate for longer times, a quantum computer
+ would be needed. In section 2.3, we will discuss how Matrix Product States can be leveraged to reduce
+ the resource requirements for this simulation problem when implemented on a quantum device.
+ 2.3 Matrix Product States applied to Approximate Quantum Compiling
+ [Diagram: a CNOT from qubit j to qubit k, followed by Ry(θ1), Rz(θ2) on one wire and Ry(θ3), Rx(θ4) on the other.]
+ Figure 7: CNOT block forms the basic building block of our circuit ansatz.
+ [Diagram: four-qubit parameterised circuit with initial rotations Rz(θ1) Ry(θ2) Rz(θ3), ..., Rz(θ10) Ry(θ11) Rz(θ12) followed by brickwork CNOT blocks.]
+ Figure 8: Parameterised circuit inspired by the structure of the first order Trotter circuit in Figure 3.
+ [Diagram: as above, with the CNOT-block pattern of the second order Trotter circuit.]
+ Figure 9: Parameterised circuit inspired by the structure of the second order Trotter circuit in Figure 4.
+ Approximate quantum compiling (AQC) involves the design of a parametric quantum circuit with
+ fixed depth - the parameters are then adjusted to bring it as close as possible to the target, where “close”
+ is defined via some carefully chosen metric, see below. As discussed in [12], one can use so-called CNOT
+ blocks to construct a natural circuit Ansatz. A CNOT block is a CNOT gate followed by single qubit
+ rotations (see Figure 7). A block with a CNOT gate acting on a “control” qubit j and “target” qubit
+ k is written as CU_jk(θ1, θ2, θ3, θ4). For a given hardware connectivity, one can then write down a fully
+ parameterised circuit as:
+ V_ct(θ) = CU_{ct(L)}(θ_{3n+4L−3}, ..., θ_{3n+4L}) ⋯ CU_{ct(1)}(θ_{3n+1}, ..., θ_{3n+4}) · [R_z(θ1) R_y(θ2) R_z(θ3)] ⊗ ⋯ ⊗ [R_z(θ_{3n−2}) R_y(θ_{3n−1}) R_z(θ_{3n})]    (7)
+ The position of the CNOT blocks in the parameterised circuit can be customised to suit the particular
+ target circuit that one is interested in. Here we are interested in finding a circuit that implements the
+ unitary time evolution operator as in equation (2). We thus consider a structure inspired by the first and
+ second-order Trotter circuits in Figures 3 and 4 respectively. Recall that each block U(dt) in Figures 3
+ and 4 represents the 2-qubit sub-circuit with three CNOTs in Figure 2; it is therefore natural to consider
+ a circuit Ansatz with sub-circuits each with three CNOT blocks as in Figures 8 and 9, such that the
+ circuit Ansatz mimics the structure of the first and second order Trotter circuits. In the notation of [14],
+ the parameterised circuits in Figures 8 and 9 correspond to n = 4 qubits, l = 2 layers and b = 3 CNOT
+ blocks in each layer. In both Figure 8 and Figure 9 there are three rotation gates acting on each qubit at
+ the beginning of the circuit. In the examples considered in this work we will take the initial state to be |0⟩
+ - the initial rotation gate Rz(θ) is redundant for these cases but is necessary for more general initial states.
+ One can define the distance between the target and parameterised circuit via a number of different
+ metrics. Here we use a cost function based on the Hilbert-Schmidt test:
+ C^state_hs = 1 − |⟨0|V†(θ)|ψ0⟩|²    (8)
+ The goal of AQC is to tune the parameters θ to minimise the cost function under consideration. Note
+ that here we are considering the application of AQC to state preparation as opposed to full circuit
+ compilation. More precisely, this means that our cost function is designed such that it is minimised when
+ the action of V(θ) on the initial state |0⟩ produces a state that is as close as possible to a target state |ψ0⟩
+ (up to some global phase). This is in contrast to the situation where one starts with some target circuit
+ U and the cost function is designed to bring the full matrix V(θ) as close as possible to U.
+ As pointed out in [19], the gradient of the cost function in (8) vanishes exponentially. This observation
+ led to the distinction between global and local cost functions; local cost functions have only polynomially
+ vanishing gradients in some cases of interest - see [19, 20, 14] for details. As was shown in [14], the Hilbert-
+ Schmidt test - which is a global cost function - can be turned into a local one by adding several “bit-flip”
+ terms, which increases the magnitude of the gradient:
+ C^state_lhs = 1 − |⟨0|V†(θ)|ψ0⟩|² − ((n−1)/n) Σ_{j=1}^{n} |⟨0|X_j V†(θ)|ψ0⟩|² − ((n−2)/n) Σ_{j<k} |⟨0|X_j X_k V†(θ)|ψ0⟩|² − ... − (1/n) Σ_{j<k<l<...} |⟨0|X_j X_k X_l ⋯ V†(θ)|ψ0⟩|²    (9)
+ Convergence of the cost function can be significantly improved by adding these terms, however the
+ computational cost of calculating the gradient becomes prohibitive. It was demonstrated in [14] that this
+ can be overcome by truncating the expression in (9) to get:
+ C^(1)_L(α) = 1 − |⟨0|V†(θ)|ψ0⟩|² − α Σ_{j=1}^{n} |⟨0|X_j V†(θ)|ψ0⟩|²    (10)
+ where α is a parameter that can be tuned throughout the optimisation procedure - a scheme to implement
+ this tuning effectively was demonstrated in [14]. In (10), we have only kept one “bit-flip” term, i.e. we have
+ dropped all terms with more than one NOT operator X_i. As discussed in [14], one can obtain higher
+ order expressions C^(k)_L with more “bit-flip” terms included - doing so induces a larger gradient in the cost
+ function but increases the computational burden.
+ Note that each term in (9) or (10) is an overlap of quantum states, and since the overlap of two MPS
+ can be calculated very efficiently the architecture of Matrix Product States can be leveraged to calculate
+ the cost function and solve the approximate quantum compilation problem for large numbers of qubits.
+ Consider for example two quantum states |ψ1⟩ and |ψ2⟩:
+ |ψ1⟩ = Σ_{j1,...,jn} c^(1)_{j1,...,jn} |j1, ..., jn⟩,   |ψ2⟩ = Σ_{j1,...,jn} c^(2)_{j1,...,jn} |j1, ..., jn⟩    (11)
+ As discussed in section 2.2, for weakly entangled states the coefficients c^(1)_{j1,...,jn} and c^(2)_{j1,...,jn} are not all
+ independent and can be represented efficiently as Matrix Product States - see Figure 5:
+ c^(1)_{j1,...,jn} = A^(1)_{j1} · A^(2)_{j2} ⋯ A^(n)_{jn},   c^(2)_{j1,...,jn} = B^(1)_{j1} · B^(2)_{j2} ⋯ B^(n)_{jn}    (12)
+ We want to calculate the quantity:
+ f(|ψ1⟩, |ψ2⟩) = |⟨ψ1|ψ2⟩|²    (13)
+ The overlap of two MPS, and hence the fidelity f in (13), can be calculated efficiently by “contracting”
+ the Tensor Network shown in Figure 6.
+ 3 Results
+ We now present the results of our simulations of the Schrödinger equation in equation (2) using the
+ second order Trotter formula in equation (4). First let’s define and clarify some notation:
+ • |a1⟩: The state generated by the optimised parametric circuit in Figure 9.
+ • Number of layers l in Ansatz: in Figures 8 and 9 there are l = 2 layers.
+ • |t1⟩: The state generated by the Trotter circuit in Figure 4.
+ • Number of Trotter steps: the analogue of the number of layers in the Ansatz circuits. In Figures 3
+ and 4 there are 3 Trotter steps.
+ • |t1_gt⟩: the “ground truth” generated by classical Tensor Network simulations of deep Trotter
+ circuits, i.e. extremely small time steps. We take dt = 0.04 to generate the ground truth state
+ while |t1⟩ is generated with a time step of dt = 0.4.
+ Figure 10: 50 qubits: fidelities of the parametric circuit and the Trotter circuit with the “ground truth”
+ obtained by Tensor Network simulations. The two circuits are of identical length but the parametric
+ circuit achieves a significantly higher fidelity at late times.
+ Figure 11: 50 qubits: the maximum number of layers in the parametric circuit is 12 while it is 18 for the
+ Trotter circuit. Despite this, the parametric circuit achieves a higher fidelity.
+ All circuits considered here take the form of the second-order Trotter structure. More precisely, in each
+ graph we use the labels “Trotter” and “Ansatz” and these circuits have the structure in Figures 4 and 9
+ respectively. In Figures 10, 11 and 12 we plot fidelity vs evolution time for the 50 qubit XXX Hamiltonian
+ and we compare the result of the Trotter circuit vs the AQC circuit. In Figure 10 we can see that the
+ fidelity of the Trotter circuit decays rapidly while the fidelity of the AQC circuit remains above 0.99.
+ Note that the Trotter circuit and the AQC circuit are of equal depth. In Figures 11 and 12 we compare
+ short depth circuits generated by AQC with deep Trotter circuits. We observe that the AQC circuits
+ can achieve comparable or better fidelities with much shorter depth. In particular, in Figure 12 the two
+ fidelities are almost identical but the AQC circuit is half the depth of the Trotter circuit. We plot the
+ same data for 100 qubits in Figures 13 and 14.
+ [Plots: fidelity (0.90-1.00) of |a1⟩ vs |t1_gt⟩ and |t1⟩ vs |t1_gt⟩ against evolution time (1.2-7.2) for 50 qubits, with the number of layers in the ansatz and the number of Trotter steps indicated.]
+ Figure 12: 50 qubits: the maximum number of layers in the parametric circuit is 9 while it is 18 for the
+ Trotter circuit. Both circuits achieve very similar fidelities despite the parametric circuit being half the
+ depth of the Trotter circuit.
+ Figure 13: 100 qubits: The maximum number of layers in the parametric circuit and the Trotter circuit
+ is 18. It can be seen that the fidelity of the Trotter circuit decays rapidly but that of the parametric
+ circuit remains high.
+ same data for 100 qubits in Figures 13 and 14.
568
+ Now we would like to consider how these results affect the implementation on a real quantum device.
569
+ We consider a 20 qubit spin-chain on the 27 qubit device ibmq-mumbai. First we plot the fidelity results
570
+ for 20 qubits in Figure 15. We ran the resulting parametric circuit on ibmq-mumbai and in Figure 16
571
+ we plot the expectation values ⟨ψ(t)|Sz
572
+ 0|ψ(t)⟩ as obtained from the quantum device using the parametric
573
+ AQC circuit, the Trotter circuit and from a classical Tensor Network simulation. We observe that this
574
+ observable is more accurate when obtained with the AQC circuit due to its reduced depth. Note that
575
+ the difference between the results from the simulation and the results from the quantum device would be
576
+ greatly reduced after applying error mitigation [21, 22]. We have not attempted to apply error mitigation
577
+ to either circuit as this would be outside the scope of this work.
578
+ We expect that, since our Tensor
579
+ Network compilation scheme greatly reduces the noise of the circuit, any error mitigation scheme would
580
+ be enhanced by our approach.
581
+ [Plots: fidelity of |a1⟩ and |t1⟩ against the ground-truth state vs evolution time (1.2-7.2), for 50 and 100 qubits, with the number of layers in the ansatz and the number of Trotter steps indicated.]
+ Figure 14: 100 qubits: The maximum number of layers in the parametric circuit is 12 while it is 18 for
+ the Trotter circuit.
+ Figure 15: The maximum depth of the parametric circuit is half that of the Trotter circuit - there are 9
+ and 18 layers respectively. These 20 qubit circuits were implemented on ibmq-mumbai - see Figure 16.
+ 4 Discussion
+ In this paper we applied Tensor Network methods to Quantum Compiling and demonstrated their efficacy
+ on the 27 qubit device ibmq-mumbai. Our method is similar in spirit to [23] where Matrix Product States
+ were used to prepare the initial state for VQE to find the ground state of some Hamiltonian - here we
+ use Matrix Product States to prepare a short depth quantum circuit that simulates the time evolution of
+ a 1D Hamiltonian. We chose the XXX Hamiltonian in equation (1) because it has been well studied, but
+ we would be particularly interested to apply the compilation methods developed here to non-integrable
+ systems by e.g. adding a random field to the Hamiltonian in (1) and studying phenomena of scientific
+ interest such as many-body localisation.
+ We have shown results of our simulations on up to 100 qubits. In principle we can significantly increase
+ the number of qubits and the length of time to which we apply our MPS compilation scheme; the limiting
+ factor at present seems to be the particular implementation that we apply to SVD and to calculate the
+ gradient. We believe that both of these can be improved significantly, in particular by using an efficient
+ parallel implementation - this is the subject of ongoing work. In our current framework we use the Qiskit
+ MPS package, which is designed for generic situations in which long range connectivity may be required
+ and thus does not take advantage of the short range structure of the circuits in Figures 8 and 9.
+ [Plots: fidelity vs evolution time for 100 and 20 qubits, and ⟨S^z_0⟩ against time t (0.0-4.0) comparing the noiseless simulation with the Trotter and AQC circuits run on ibm_mumbai.]
+ Figure 16: The expectation value of S^z_0 vs time for a chain of 20 qubits as measured on the 27 qubit
+ quantum device ibmq-mumbai. The circuit produced from our MPS implementation of AQC is
+ shallower than the Trotter circuit, and thus produces an expectation value that is much closer to the true
+ value plotted in the blue curve, obtained by classical Tensor Network simulations.
+ and thus does not take advantage of the short range structure of the circuits in Figures 8 and 9.
751
+ 5
752
+ Acknowledgements
753
+ This work was funded by the Disruptive Technologies Innovation Fund (DTIF), by Enterprise Ireland,
754
+ under project number DTIF2019-090 (project QCoIR) and also supported by IBM Quantum.
755
+ 11
756
+
757
+ 0.0 -
758
+ +
759
+ -0.1
760
+ +
761
+ 0.2
762
+ 0.3
763
+ -0.4
764
+ Simulation
765
+ -0.5
766
+ ibm mumbai: Trotter circuit
767
+ +
768
+ ibm mumbai: AQC circuit
769
+ -0.6.
770
+ 0.D
771
+ 0.5
772
+ 15
773
+ 2D
774
+ 25
775
+ 3.D
776
+ 3.5
777
+ 4.D
778
+ tReferences
779
+ [1] Frank Verstraete and J Ignacio Cirac. Matrix product states represent ground states faithfully. Physical Review B, 73(9):094423, 2006.
+ [2] Edwin Stoudenmire and David J Schwab. Supervised learning with tensor networks. Advances in Neural Information Processing Systems, 29, 2016.
+ [3] Tom Vieijra, Laurens Vanderstraeten, and Frank Verstraete. Generative modeling with projected entangled-pair states. arXiv preprint arXiv:2202.08177, 2022.
+ [4] Steven R White. Density-matrix algorithms for quantum renormalization groups. Physical Review B, 48(14):10345, 1993.
+ [5] Ulrich Schollwöck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1):96–192, 2011.
+ [6] Matthew B Hastings. An area law for one-dimensional quantum systems. Journal of Statistical Mechanics: Theory and Experiment, 2007(08):P08024, 2007.
+ [7] Joe Gibbs, Kaitlin Gili, Zoë Holmes, Benjamin Commeau, Andrew Arrasmith, Lukasz Cincio, Patrick J Coles, and Andrew Sornborger. Long-time simulations with high fidelity on quantum hardware. arXiv preprint arXiv:2102.04313, 2021.
+ [8] Cristina Cirstoiu, Zoe Holmes, Joseph Iosue, Lukasz Cincio, Patrick J Coles, and Andrew Sornborger. Variational fast forwarding for quantum simulation beyond the coherence time. npj Quantum Information, 6(1):1–10, 2020.
+ [9] Christa Zoufal, David Sutter, and Stefan Woerner. Error bounds for variational quantum time evolution. arXiv preprint arXiv:2108.00022, 2021.
+ [10] Kishor Bharti and Tobias Haug. Quantum-assisted simulator. Physical Review A, 104(4):042418, 2021.
+ [11] Alexander Miessen, Pauline J Ollitrault, and Ivano Tavernelli. Quantum algorithms for quantum dynamics: a performance study on the spin-boson model. Physical Review Research, 3(4):043212, 2021.
+ [12] Liam Madden and Andrea Simonetto. Best approximate quantum compiling problems. ACM Transactions on Quantum Computing, 3(2):1–29, 2022.
+ [13] Liam Madden, Albert Akhriev, and Andrea Simonetto. Sketching the best approximate quantum compiling problem. arXiv preprint arXiv:2205.04025, 2022.
+ [14] Niall F Robertson, Albert Akhriev, Jiri Vala, and Sergiy Zhuk. Escaping barren plateaus in approximate quantum compiling. arXiv preprint arXiv:2210.09191, 2022.
+ [15] Lorenzo Piroli, Balázs Pozsgay, and Eric Vernier. From the quantum transfer matrix to the quench action: the Loschmidt echo in XXZ Heisenberg spin chains. Journal of Statistical Mechanics: Theory and Experiment, 2017(2):023106, 2017.
+ [16] Adam Smith, MS Kim, Frank Pollmann, and Johannes Knolle. Simulating quantum many-body dynamics on a current digital quantum computer. npj Quantum Information, 5(1):1–13, 2019.
+ [17] David Layden. First-order Trotter error from a second-order perspective. Physical Review Letters, 128(21):210501, 2022.
+ [18] Johannes Hauschild and Frank Pollmann. Efficient numerical simulations with tensor networks: Tensor Network Python (TeNPy). SciPost Physics Lecture Notes, page 005, 2018.
+ [19] Sumeet Khatri, Ryan LaRose, Alexander Poremba, Lukasz Cincio, Andrew T Sornborger, and Patrick J Coles. Quantum-assisted quantum compiling. Quantum, 3:140, 2019.
+ [20] Marco Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J Coles. Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nature Communications, 12(1):1–12, 2021.
+ [21] Kristan Temme, Sergey Bravyi, and Jay M Gambetta. Error mitigation for short-depth quantum circuits. Physical Review Letters, 119(18):180509, 2017.
+ [22] Youngseok Kim, Christopher J Wood, Theodore J Yoder, Seth T Merkel, Jay M Gambetta, Kristan Temme, and Abhinav Kandala. Scalable error mitigation for noisy quantum circuits produces competitive expectation values. arXiv preprint arXiv:2108.09197, 2021.
+ [23] Manuel S Rudolph, Jacob Miller, Jing Chen, Atithi Acharya, and Alejandro Perdomo-Ortiz. Synergy between quantum circuits and tensor networks: Short-cutting the race to practical quantum advantage. arXiv preprint arXiv:2208.13673, 2022.
1tFAT4oBgHgl3EQfkB3r/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,460 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf,len=459
2
+ page_content='Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach Niall F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
3
+ page_content=' Robertson1, Albert Akhriev1, Jiri Vala2,3 and Sergiy Zhuk1 1 IBM Quantum, IBM Research Europe - Dublin, IBM Technology Campus, Dublin 15, Ireland 2 Maynooth University, Maynooth, Ireland 3 Tyndall National Institute, Cork, Ireland Abstract The simulation of quantum spin chains is a promising candidate for the demonstration of quantum advantage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
4
+ page_content=' One of the main obstacles to achieving this is the noise that arises from implementing the deep circuits that appear in standard quantum time evolution algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
5
+ page_content=' Compiling these deep circuits into shallower ones is thus a key issue that we address in this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
6
+ page_content=' We use a Tensor Network based approach to Approximate Quantum Compiling to produce short depth quantum circuits that simulate the time evolution of the Heisenberg spin chain on up to 100 qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
7
+ page_content=' Furthermore, we run these short depth circuits on a ibmq-mumbai - a 27 qubit device - and show that the accuracy of the measured observables is significantly improved after applying our Tensor Network compilation scheme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
8
+ page_content=' 1 Introduction The simulation of quantum many-body systems is a task of immense scientific interest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
9
+ page_content=' The study of quantum dynamics, in particular, allows for the study of thermalisation, many-body localisation, Hub- bard model physics and the applicability of field theory to out-of-equilibrium phenomena.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
10
+ page_content=' In all of these fields there are many open scientific questions whose answers are likely to demand accurate simulation of quantum dynamics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
11
+ page_content=' However, the classical computational requirements of a brute-force approach to quantum dynamical simulations scales exponentially in the size of the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
12
+ page_content=' Approximate techniques such as Tensor Networks are thus often called upon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
13
+ page_content=' Tensor Networks represent one of the best set of tools available to simulate time evolution and can also be applied to other problems such as ground state calculations [1] and machine learning [2, 3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
14
+ page_content=' Matrix Product States (MPS) are a particular type of Tensor Network that are particularly suited to describe quantum systems in one dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
15
+ page_content=' They form a key component of modern implementations of the well known Density Matrix Renormalisation Group (DMRG) algorithm used to find the ground state of local Hamiltonians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
16
+ page_content=' The DMRG algorithm was designed many years before [4] it was realised that it could be understood as a variational optimisation algorithm where a Matrix Product State is used as an Ansatz for the ground state [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
17
+ page_content=' This insight shed light on the reasons behind the spectacular success of DMRG;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
18
+ page_content=' the ground states of local Hamiltonians are only weakly entangled and so too are Matrix Product States.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
19
+ page_content=' More precisely, the bipartite entanglement entropy S of the ground state of a local Hamiltonian satisfies an area law, meaning that the entanglement entropy is proportional to the area of the boundary of the two subsystems in the bipartition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
20
+ page_content=' In 1D, this means that the entanglement entropy is independent of the system size [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
21
+ page_content=' This is in contrast to typical states in Hilbert space whose entanglement structures satisfy a volume law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
22
+ page_content=' Matrix Product States are also known to satisfy an area law [5] and thus have the same entanglement structure as the ground state by design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
23
+ page_content=' Since the weak entanglement of ground states of local Hamiltonians allow for their efficient storage as Matrix Product States, it is natural to ask if this is also possible for states that are generated by time evolution as these states are no longer necessarily weakly entangled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
24
+ page_content=' It turns out that for many physical systems of interest, entanglement entropy increases linearly until it saturates, at which point an MPS 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
25
+ page_content='08609v1 [quant-ph] 20 Jan 2023 will no longer be an efficient representation of the state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
26
+ page_content=' However, if the initial state is weakly entangled then the MPS representation can be used to store the state at early times.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
27
+ page_content=' A paradigmatic example of this scenario is a quantum quench, whereby a quantum system is initially prepared in the ground state of some local Hamiltonian, the parameters of the Hamiltonian are subsequently changed very rapidly and the system then evolves according to Schr¨odinger’s equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
28
+ page_content=' The TEBD algorithm (Time Evolving Block Decimation) can be used to simulate time evolution after a quantum quench;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
29
+ page_content=' the state is stored as an MPS and this MPS is updated as a function of time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
30
+ page_content=' Despite the success of DMRG, TEBD and other Tensor Network algorithms, these approaches are not without limitations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
31
+ page_content=' The memory requirements to store an MPS is characterised by the bond dimension, given by the dimension of the largest matrix used in the description of the state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
32
+ page_content=' For constant approximation error ϵ this bond dimension increases exponentially with the entanglement entropy and thus with time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
33
+ page_content=' Therefore, for a fixed maximum bond dimension, the error ϵ increases exponentially with time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
34
+ page_content=' This limits the applicability of Tensor Network algorithms to short time simulations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
35
+ page_content=' A quantum algorithm however, does not in principle suffer from this issue - the key difference between a quantum and a classical device being the ability to store highly entangled states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
36
+ page_content=' A quantum computer therefore has the potential to simulate quantum many-body systems for long times.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
37
+ page_content=' The accurate simulation of the time evolution of 1D quantum systems is thus a promising route for the demonstration of quantum advantage in the short term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
38
+ page_content=' One such quantum algorithm is Trotterisation, where a discrete time step dt is used and the time evolution operator is approximated as a quantum circuit with an error that scales polynomially in dt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
39
+ page_content=' The depth of the quantum circuit used in such an approach increases with decreasing dt, leading to a trade-off between the noise arising from using deep circuits and the decreasing accuracy of the approximation when dt is increased.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
40
+ page_content=' A number of variational quantum algorithms for the simulation of time evolution have therefore been developed that aim to use shallower circuits [7, 8, 9, 10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
41
+ page_content=' Each of these approaches suffer from a number of issues such as convergence, runtime and limited device connectivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
42
+ page_content=' As a result, it has been argued that such variational approaches are not practical for use on near term quantum hardware [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
43
+ page_content=' One approach that aims to overcome the issue of deep circuits is Approximate Quantum Compiling [12, 13, 14], where one defines a parametric circuit of fixed depth and uses techniques from optimisation to minimise the distance between the parametric circuit and the target circuit of interest - where distance is defined by some carefully chosen metric.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
44
+ page_content=' In principle, this approach can lead to short depth circuits that implement the target circuit of interest within some error tolerance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
45
+ page_content=' In practice, a classical imple- mentation of such an approach [14] is limited to act on a small number of qubits due to the exponential scaling of the Hilbert space with the number of qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
46
+ page_content=' Here we develop a new approach to quantum simulation that combines Matrix Product States, Approx- imate Quantum Compiling and Trotterisation to produce short depth quantum circuits that implement the time evolution operator of the Heisenberg spin chain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
47
+ page_content=' This approach is scalable thanks to the im- mense power of Matrix Product States.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
48
+ page_content=' Figure 1 shows a schematic of our approach: first we apply Trotterisation classically for the maximum length of time for which we can still store the state as an MPS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
49
+ page_content=' We then apply a Matrix Product State implementation of Approximate Quantum Compiling to squeeze the circuit (purple box in the figure) to find a much shallower circuit that still reproduces the same state as Trotterisation, up to some small error in the fidelity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
50
+ page_content=' We then use the squeezed circuit as the input for the Trotter circuit which can now generate a quantum state beyond what can be stored classically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
51
+ page_content=' 2 Setup 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
52
+ page_content='1 The model We will consider the XXX spin-chain - a paradigmatic model for quantum magnetism - defined by the Hamiltonian: HXXX = − L−1 � i=0 hi,i+1 = − L−1 � i=0 � Sx i Sx i+1 + Sy i Sy i+1 + Sz i Sz i+1 � , (1) where Sx, Sy and Sz are written in terms of Pauli matrices as Sx = σx 2 , Sy = σy 2 and Sz = σz 2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
53
+ page_content=' The Hamiltonian in (1) is a prototypical example of an integrable 1D model and its dynamical behaviour has been studied extensively [15], including on a quantum computer [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
54
+ page_content=' The time evolution of a quantum 2 Compress with AQC .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
55
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
56
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
57
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
58
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
59
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
60
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
61
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
62
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
63
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
64
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
65
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
66
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
67
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
68
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
69
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
70
+ page_content=' Figure 1: Schematic of our approach: Trotterisation is applied classically (purple box) and then a Matrix Product State implementation of Approximate Quantum Compiling is applied to compress the first part of the circuit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
71
+ page_content=' Standard Trotterisation is then applied on a quantum device afterwards to simulate longer times, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
72
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
73
+ page_content=' times which are beyond what is possible classically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
74
+ page_content=' Rz(θ) Rz( π 2 ) Rz(− π 2 ) Ry(φ) Ry(λ) Figure 2: Implementation of two site operator ei(ασx⊗σx+βσy⊗σy+γσz⊗σz) as a quantum circuit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
75
+ page_content=' We have the correspondences θ = π 2 − 2γ, φ = 2α − π 2 and λ = π 2 − 2β.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
76
+ page_content=' The Hamiltonian in (1) corresponds to the case α = β = γ = dt state |ψ(t)⟩ is governed by the Schr¨odinger equation: |ψ(t)⟩ = e−iHXXXt |ψ(0)⟩ (2) where |ψ(0)⟩ is the wavefunction at time t = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
77
+ page_content=' In this work, we will consider the N´eel state, written as: |↑↓↑↓ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
78
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
79
+ page_content=' ↑↓⟩ where ↑ and ↓ represent up and down spins respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
80
+ page_content=' The N´eel state for n spins is simply implemented on n qubits as |1010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
81
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
82
+ page_content='10⟩.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
83
+ page_content=' The time evolution operator U(t) ≡ e−iHt can be executed as a quantum circuit in a resource efficient way;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
84
+ page_content=' we first write the Hamiltonian in (1) as HXXX = H1 + H2 where H1 = − � i odd hi,i+1 and H2 = − � i even hi,i+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
85
+ page_content=' Note that all operators in a given sum commute with all other operators in their respective sums.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
86
+ page_content=' We then define the Suzuki-Trotter time evolution operator Utrot(dt) in the following way: U(1) trot(dt) = L/2−1 � j=0 U2j,2j+1(dt) L/2−1 � j=1 U2j−1,2j(dt) = e−iHXXZdt + O(dt2) (3) where Ujk(dt) = e−ihjkdt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
87
+ page_content=' The exact time evolution operator U(t) is thus approximated by m repeated applications of Utrot(dt = t m), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
88
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
89
+ page_content=' U(t) ≈ Um trot(dt = t m).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
90
+ page_content=' As discussed in [16], each Ujk(dt) appearing in (3) can be implemented by the quantum circuit with just three CNOTs as in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
91
+ page_content=' We can reduce the error in the Trotter formula in equation (3) by using higher order expressions [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
92
+ page_content=' It turns out that the second order Trotter formula can be implemented on a quantum circuit with only one extra layer in the circuit [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
93
+ page_content=' We have: U(2) trot(dt) = L/2−1 � j=0 U2j,2j+1 �dt 2 � L/2−1 � j=1 U2j−1,2j (dt) L/2−1 � j=0 U2j,2j+1 �dt 2 � = e−iHXXZdt + O(dt2) (4) which can be implemented on a quantum device by the circuit in Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
94
+ page_content=' 3 U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) U(dt) Figure 3: First order Trotter circuit acting on six qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
95
+ page_content=' U( dt 2 ) U(dt) U(dt) U( dt 2 ) U(dt) U(dt) U(dt) U( dt 2 ) U(dt) U(dt) U( dt 2 ) U(dt) U(dt) U(dt) U( dt 2 ) U(dt) U(dt) U( dt 2 ) Figure 4: Second order Trotter circuit acting on six qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
96
+ page_content=' 4 A(1) A(2) A(3) A(4) A(5) A(6) Figure 5: Graphical representation of an MPS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
97
+ page_content=' There are two matrices A(i) for each qubit at position i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
98
+ page_content=' A(1) A(2) A(3) A(4) A(5) A(6) B(1) B(2) B(3) B(4) B(5) B(6) Figure 6: The inner product ⟨ψ1|ψ2⟩ of two Matrix Product States - see equations (11) and (6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
99
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
100
+ page_content='2 Matrix Product States An arbitrary quantum state on n qubits can be written in terms of complex variables cj1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
101
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
102
+ page_content=',jn, the number of which scales as 2n: |ψ⟩ = � {j1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
103
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
104
+ page_content=',jn} cj1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
105
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
106
+ page_content=',jn |j1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
107
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
108
+ page_content=', jn⟩ (5) where the sum is over all configurations of the binary variables j1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
109
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
110
+ page_content=', jn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
111
+ page_content=' The bipartite entanglement entropy of an arbitrary quantum state picked at random from Hilbert space satisfies a volume law which, as was discussed in the introduction, is distinct from area law entanglement in which case the entanglement entropy of two regions after the bipartition of the system is proportional to the area of the boundary of the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
112
+ page_content=' A small subset of states in Hilbert space satisfies an area law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
113
+ page_content=' The coefficients cj1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
114
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
115
+ page_content=',jn of such states have a certain structure that we can take advantage of to study classically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
116
+ page_content=' Any state |ψ⟩ can be written in the following way: cj1,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
117
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
118
+ page_content=',jn = A(1) j1 · A(2) j2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
119
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
120
+ page_content=' · A(n) jn (6) where the Aj are χj × χj+1 dimensional matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
121
+ page_content=' Quantum states of the form (6) are known as Matrix Product States (MPS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
122
+ page_content=' The maximum value of χj is referred to as the bond dimension of the MPS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
123
+ page_content=' We can represent an MPS graphically as in Figure 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
124
+ page_content=' We associate one matrix A(i) to each qubit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
125
+ page_content=' Note that for each qubit i we have two matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
126
+ page_content=' We thus have a total of 2n matrices to keep track of.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
127
+ page_content=' The bond dimension χj can be seen as a measure of the entanglement between the two subsystems when a bipartition is made at qubit j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
128
+ page_content=' Therefore, states in Hilbert space that satisfy an area law - and therefore have a low bond dimension in their MPS representation - can be efficiently stored as Matrix Product States.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
129
+ page_content=' States that satisfy a volume law will have a bond dimension that is exponential in the number of qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
130
We will consider in this work the non-trivial dynamics governed by equation (2). As discussed in the introduction, the bipartite entanglement entropy of the ground state of a one-dimensional Hamiltonian with a gap between its ground state and first excited state is independent of the size of the subsystems. The ground state of such a system - and hence the initial state in our setup - can be efficiently stored as an MPS. One can then use an algorithm such as TEBD (Time Evolving Block Decimation) [18] to update the MPS as a function of time and thereby study the dynamics of the system. However, the entanglement entropy of the state increases linearly with time, hence the bond dimension χ required to keep the error constant diverges exponentially with time. To simulate for longer times, a quantum computer would be needed. In Section 2.3 we will discuss how Matrix Product States can be leveraged to reduce the resource requirements for this simulation problem when implemented on a quantum device.
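Schematically, a single TEBD update contracts two neighbouring MPS tensors with a two-site gate exp(-i h dt) and splits the result with a truncated SVD. The following is a bare-bones sketch of that step under our own conventions; it ignores the canonical-form bookkeeping that a production implementation such as TeNPy [18] performs.

```python
import numpy as np

def tebd_step(A1, A2, gate, chi_max):
    """One TEBD-style update on tensors A1 (chi_l, 2, chi_m) and
    A2 (chi_m, 2, chi_r); gate has shape (2, 2, 2, 2) with
    gate[a, b, p, q] = <ab|U|pq>."""
    chi_l, chi_r = A1.shape[0], A2.shape[2]
    theta = np.einsum("lpm,mqr->lpqr", A1, A2)          # join the two sites
    theta = np.einsum("abpq,lpqr->labr", gate, theta)   # apply the gate
    u, s, vh = np.linalg.svd(theta.reshape(chi_l * 2, 2 * chi_r),
                             full_matrices=False)
    chi = min(chi_max, int(np.sum(s > 1e-12)))          # truncate the new bond
    s = s[:chi] / np.linalg.norm(s[:chi])               # renormalise the state
    A1_new = u[:, :chi].reshape(chi_l, 2, chi)
    A2_new = (s[:, None] * vh[:chi]).reshape(chi, 2, chi_r)
    return A1_new, A2_new
```

The truncation at chi_max is exactly where the exponential bond-dimension growth described above bites: once the entanglement across the bond exceeds what chi_max can carry, the discarded singular weight, and hence the error, grows with time.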
2.3 Matrix Product States applied to Approximate Quantum Compiling

Figure 7: The CNOT block that forms the basic building block of our circuit ansatz: a CNOT acting on qubits j and k followed by the rotations Ry(θ1), Rz(θ2), Ry(θ3) and Rx(θ4).

Figure 8: Parameterised circuit inspired by the structure of the first order Trotter circuit in Figure 3.

Figure 9: Parameterised circuit inspired by the structure of the second order Trotter circuit in Figure 4.
Approximate quantum compiling (AQC) involves the design of a parametric quantum circuit with fixed depth; the parameters are then adjusted to bring it as close as possible to the target, where "close" is defined via some carefully chosen metric, see below. As discussed in [12], one can use so-called CNOT blocks to construct a natural circuit Ansatz. A CNOT block is a CNOT gate followed by single qubit rotations (see Figure 7). A block with a CNOT gate acting on a "control" qubit j and "target" qubit k is written as CU_{jk}(θ_1, θ_2, θ_3, θ_4). For a given hardware connectivity, one can then write down a fully parameterised circuit as:

V_{ct}(\theta) = CU_{ct(L)}(\theta_{3n+4L-3}, \dots, \theta_{3n+4L}) \cdots CU_{ct(1)}(\theta_{3n+1}, \dots, \theta_{3n+4}) \, [R_z(\theta_1) R_y(\theta_2) R_z(\theta_3)] \otimes \cdots \otimes [R_z(\theta_{3n-2}) R_y(\theta_{3n-1}) R_z(\theta_{3n})] \qquad (7)

The position of the CNOT blocks in the parameterised circuit can be customised to suit the particular target circuit that one is interested in.
Here we are interested in finding a circuit that implements the unitary time evolution operator as in equation (2). We thus consider a structure inspired by the first and second-order Trotter circuits in Figures 3 and 4 respectively. Recall that each block U(dt) in Figures 3 and 4 represents the 2-qubit sub-circuit with three CNOTs in Figure 2; it is therefore natural to consider a circuit Ansatz with sub-circuits each containing three CNOT blocks, as in Figures 8 and 9, such that the circuit Ansatz mimics the structure of the first and second order Trotter circuits. In the notation of [14], the parameterised circuits in Figures 8 and 9 correspond to n = 4 qubits, l = 2 layers and b = 3 CNOT blocks in each layer. In both Figure 8 and Figure 9 there are three rotation gates acting on each qubit at the beginning of the circuit. In the examples considered in this work we will take the initial state to be |0⟩; the initial rotation gate R_z(θ) is then redundant, but it is necessary for more general initial states.
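To make the block structure concrete, here is a minimal Qiskit sketch of this kind of Ansatz. The layout (an initial Rz-Ry-Rz layer on every qubit followed by layers of three-CNOT-block sub-circuits on alternating nearest-neighbour pairs) is our reading of Figures 7-9; the function and parameter names are our own illustrative choices, not the implementation used in this work.

```python
from qiskit.circuit import QuantumCircuit, ParameterVector

def cnot_block(qc, theta, j, k):
    """A CNOT block (Figure 7): CNOT followed by four rotation gates."""
    qc.cx(j, k)
    qc.ry(theta[0], j)
    qc.rz(theta[1], j)
    qc.ry(theta[2], k)
    qc.rx(theta[3], k)

def trotter_inspired_ansatz(n, layers, b=3):
    """Ansatz mimicking the Trotter layout: an initial Rz-Ry-Rz layer,
    then `layers` layers in which each nearest-neighbour pair receives
    a sub-circuit of b CNOT blocks."""
    n_params = 3 * n + 4 * b * (n - 1) * layers
    theta = ParameterVector("theta", n_params)
    qc = QuantumCircuit(n)
    for q in range(n):                        # initial single-qubit layer
        qc.rz(theta[3 * q], q)
        qc.ry(theta[3 * q + 1], q)
        qc.rz(theta[3 * q + 2], q)
    idx = 3 * n
    for _ in range(layers):
        # even pairs first, then odd pairs, echoing the brick-wall Trotter layout
        for j in list(range(0, n - 1, 2)) + list(range(1, n - 1, 2)):
            for _ in range(b):                # three CNOT blocks per pair
                cnot_block(qc, theta[idx:idx + 4], j, j + 1)
                idx += 4
    return qc

qc = trotter_inspired_ansatz(n=4, layers=2)
print(qc.num_parameters)   # 84 for n = 4, l = 2, b = 3
```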
One can define the distance between the target and parameterised circuit via a number of different metrics. Here we use a cost function based on the Hilbert-Schmidt test:

C^{state}_{hs} = 1 - |\langle 0 | V^\dagger(\theta) | \psi_0 \rangle|^2 \qquad (8)

The goal of AQC is to tune the parameters θ to minimise the cost function under consideration. Note that here we are considering the application of AQC to state preparation as opposed to full circuit compilation. More precisely, this means that our cost function is designed such that it is minimised when the action of V(θ) on the initial state |0⟩ produces a state that is as close as possible to a target state |ψ_0⟩ (up to some global phase). This is in contrast to the situation where one starts with some target circuit U and the cost function is designed to bring the full matrix V(θ) as close as possible to U.
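For small systems one can evaluate (8) directly with a statevector simulation. The following sketch is illustrative only - at scale the overlaps are evaluated with MPS, as described below - and assumes an ansatz circuit like the one sketched above. Since ⟨0|V†(θ) is the conjugate of V(θ)|0⟩, the cost is one minus the squared overlap of the prepared state with the target, and the global phase drops out of the modulus.

```python
import numpy as np
from qiskit.quantum_info import Statevector

def hs_state_cost(ansatz, theta_values, psi_target):
    """C_hs = 1 - |<0|V†(θ)|ψ0>|² = 1 - |<ψ(θ)|ψ0>|², with |ψ(θ)> = V(θ)|0>."""
    psi_theta = Statevector.from_instruction(
        ansatz.assign_parameters(theta_values)
    ).data
    overlap = np.vdot(psi_theta, psi_target)   # <ψ(θ)|ψ0>
    return 1.0 - np.abs(overlap) ** 2
```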
As pointed out in [19], the gradient of the cost function in (8) vanishes exponentially. This observation led to the distinction between global and local cost functions; local cost functions have only polynomially vanishing gradients in some cases of interest - see [19, 20, 14] for details. As was shown in [14], the Hilbert-Schmidt test - which is a global cost function - can be turned into a local one by adding several "bit-flip" terms which increase the magnitude of the gradient:

C^{state}_{lhs} = 1 - |\langle 0 | V^\dagger(\theta) | \psi_0 \rangle|^2 - \frac{n-1}{n} \sum_{j=1}^{n} |\langle 0 | X_j V^\dagger(\theta) | \psi_0 \rangle|^2 - \frac{n-2}{n} \sum_{j<k} |\langle 0 | X_j X_k V^\dagger(\theta) | \psi_0 \rangle|^2 - \dots - \frac{1}{n} \sum_{j<k<l<\dots} |\langle 0 | X_j X_k X_l \cdots V^\dagger(\theta) | \psi_0 \rangle|^2 \qquad (9)

Convergence of the cost function can be significantly improved by adding these terms; however, the computational cost of calculating the gradient becomes prohibitive.
It was demonstrated in [14] that this can be overcome by truncating the expression in (9) to get:

C^{(1)}_{L}(\alpha) = 1 - |\langle 0 | V^\dagger(\theta) | \psi_0 \rangle|^2 - \alpha \sum_{j=1}^{n} |\langle 0 | X_j V^\dagger(\theta) | \psi_0 \rangle|^2 \qquad (10)

where α is a parameter that can be tuned throughout the optimisation procedure - a scheme to implement this tuning effectively was demonstrated in [14]. In (10) we have kept only one "bit-flip" term, i.e. we have dropped all terms with more than one NOT operator X_j. As discussed in [14], one can obtain higher order expressions C^{(k)}_{L} with more "bit-flip" terms included - doing so induces a larger gradient in the cost function but increases the computational burden.
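Since ⟨0|X_j is the computational basis state with a single 1 at qubit j, every term in (10) is just the squared modulus of one amplitude of w = V†(θ)|ψ0⟩. The following small-scale sketch fixes the definition; the full-matrix construction is exponential in n and is shown purely for illustration (names are ours).

```python
import numpy as np
from qiskit.quantum_info import Operator

def local_cost_1(ansatz, theta_values, psi_target, alpha):
    """C_L^(1)(α) from eq. (10): the global Hilbert-Schmidt term plus
    one layer of single-bit-flip overlaps |<0|X_j V†(θ)|ψ0>|²."""
    n = ansatz.num_qubits
    V = Operator(ansatz.assign_parameters(theta_values)).data  # 2^n x 2^n
    w = V.conj().T @ psi_target               # w = V†(θ)|ψ0>
    cost = 1.0 - np.abs(w[0]) ** 2            # |<0|V†(θ)|ψ0>|²
    for j in range(n):
        # <0|X_j picks out the basis state with qubit j flipped to 1,
        # i.e. amplitude index 1 << j in Qiskit's little-endian ordering
        cost -= alpha * np.abs(w[1 << j]) ** 2
    return cost
```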
Note that each term in (9) or (10) is an overlap of quantum states, and since the overlap of two MPS can be calculated very efficiently, the architecture of Matrix Product States can be leveraged to calculate the cost function and solve the approximate quantum compilation problem for large numbers of qubits. Consider for example two quantum states |ψ_1⟩ and |ψ_2⟩:

|\psi_1\rangle = \sum_{\{j_1,\dots,j_n\}} c^{(1)}_{j_1,\dots,j_n} |j_1, \dots, j_n\rangle, \qquad |\psi_2\rangle = \sum_{\{j_1,\dots,j_n\}} c^{(2)}_{j_1,\dots,j_n} |j_1, \dots, j_n\rangle \qquad (11)

As discussed in Section 2.2, for weakly entangled states the coefficients c^{(1)}_{j_1,\dots,j_n} and c^{(2)}_{j_1,\dots,j_n} are not all independent and can be represented efficiently as Matrix Product States - see Figure 5:

c^{(1)}_{j_1,\dots,j_n} = A^{(1)}_{j_1} A^{(2)}_{j_2} \cdots A^{(n)}_{j_n}, \qquad c^{(2)}_{j_1,\dots,j_n} = B^{(1)}_{j_1} B^{(2)}_{j_2} \cdots B^{(n)}_{j_n} \qquad (12)

We want to calculate the quantity:

f(|\psi_1\rangle, |\psi_2\rangle) = |\langle \psi_1 | \psi_2 \rangle|^2 \qquad (13)

The overlap of two MPS, and hence the fidelity f in (13), can be calculated efficiently by "contracting" the Tensor Network shown in Figure 6.
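A minimal sketch of that contraction, for tensors in the (chi_left, 2, chi_right) convention of the earlier state_to_mps sketch: one left-to-right sweep costs O(n·χ³) rather than the O(2^n) of a brute-force inner product.

```python
import numpy as np

def mps_overlap(A, B):
    """<ψ1|ψ2> for two MPS given as lists of (chi_l, 2, chi_r) tensors.
    E carries the partially contracted "environment" from left to right."""
    E = np.ones((1, 1), dtype=complex)
    for a, b in zip(A, B):
        # contract E with conj(a) and b over the left bonds (x, y)
        # and the shared physical index p
        E = np.einsum("xy,xpi,ypj->ij", E, a.conj(), b)
    return E[0, 0]

def fidelity(A, B):
    return abs(mps_overlap(A, B)) ** 2

# e.g. fidelity(mps, mps) -> 1.0 for the GHZ MPS built in the earlier sketch
```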
3 Results

We now present the results of our simulations of the Schrödinger equation in equation (2) using the second order Trotter formula in equation (4). First let's define and clarify some notation:

|a1⟩: the state generated by the optimised parametric circuit in Figure 9.

Number of layers l in the Ansatz: in Figures 8 and 9 there are l = 2 layers.

|t1⟩: the state generated by the Trotter circuit in Figure 4.

Number of Trotter steps: the analogue of the number of layers in the Ansatz circuits. In Figures 3 and 4 there are 3 Trotter steps.

|t1_gt⟩: the "ground truth" generated by classical Tensor Network simulations of deep Trotter circuits, i.e. extremely small time steps. We take dt = 0.04 to generate the ground truth state, while |t1⟩ is generated with a time step of dt = 0.4.

Figure 10: 50 qubits: fidelities of the parametric circuit and the Trotter circuit with the "ground truth" obtained by Tensor Network simulations. The two circuits are of identical length but the parametric circuit achieves a significantly higher fidelity at late times.

Figure 11: 50 qubits: the maximum number of layers in the parametric circuit is 12 while it is 18 for the Trotter circuit. Despite this, the parametric circuit achieves a higher fidelity.

All circuits considered here take the form of the second-order Trotter structure. More precisely, in each graph we use the labels "Trotter" and "Ansatz" and these circuits have the structure in Figures 4 and 9 respectively.
In Figures 10, 11 and 12 we plot fidelity vs evolution time for the 50 qubit XXX Hamiltonian and we compare the result of the Trotter circuit vs the AQC circuit. In Figure 10 we can see that the fidelity of the Trotter circuit decays rapidly while the fidelity of the AQC circuit remains above 0.99. Note that the Trotter circuit and the AQC circuit are of equal depth. In Figures 11 and 12 we compare short depth circuits generated by AQC with deep Trotter circuits. We observe that the AQC circuits can achieve comparable or better fidelities with much shorter depth. In particular, in Figure 12 the two fidelities are almost identical but the AQC circuit is half the depth of the Trotter circuit. We plot the same data for 100 qubits in Figures 13 and 14.

Figure 12: 50 qubits: the maximum number of layers in the parametric circuit is 9 while it is 18 for the Trotter circuit. Both circuits achieve very similar fidelities despite the parametric circuit being half the depth of the Trotter circuit.

Figure 13: 100 qubits: the maximum number of layers in both the parametric circuit and the Trotter circuit is 18. It can be seen that the fidelity of the Trotter circuit decays rapidly but that of the parametric circuit remains high.
Now we would like to consider how these results affect the implementation on a real quantum device. We consider a 20 qubit spin-chain on the 27 qubit device ibmq-mumbai. First we plot the fidelity results for 20 qubits in Figure 15. We ran the resulting parametric circuit on ibmq-mumbai and in Figure 16 we plot the expectation values ⟨ψ(t)| S^z_0 |ψ(t)⟩ as obtained from the quantum device using the parametric AQC circuit, the Trotter circuit, and from a classical Tensor Network simulation. We observe that this observable is more accurate when obtained with the AQC circuit due to its reduced depth. Note that the difference between the results from the simulation and the results from the quantum device would be greatly reduced after applying error mitigation [21, 22]. We have not attempted to apply error mitigation to either circuit as this would be outside the scope of this work. We expect that, since our Tensor Network compilation scheme greatly reduces the noise of the circuit, any error mitigation scheme would be enhanced by our approach.
Figure 14: 100 qubits: the maximum number of layers in the parametric circuit is 12 while it is 18 for the Trotter circuit.

Figure 15: The maximum depth of the parametric circuit is half that of the Trotter circuit - there are 9 and 18 layers respectively. These 20 qubit circuits were implemented on ibmq-mumbai - see Figure 16.
4 Discussion

In this paper we applied Tensor Network methods to Quantum Compiling and demonstrated their efficacy on the 27 qubit device ibmq-mumbai. Our method is similar in spirit to [23], where Matrix Product States were used to prepare the initial state for VQE to find the ground state of some Hamiltonian - here we use Matrix Product States to prepare a short depth quantum circuit that simulates the time evolution of a 1D Hamiltonian. We chose the XXX Hamiltonian in equation (1) because it has been well studied, but we would be particularly interested to apply the compilation methods developed here to non-integrable systems, e.g. by adding a random field to the Hamiltonian in (1) and studying phenomena of scientific interest such as many-body localisation. We have shown results of our simulations on up to 100 qubits. In principle we can significantly increase the number of qubits and the length of time to which we apply our MPS compilation scheme; the limiting factor at present seems to be the particular implementation that we apply to SVD and to calculate the gradient. We believe that both of these can be improved significantly, in particular by using an efficient parallel implementation - this is the subject of ongoing work.
In our current framework we use the Qiskit MPS package, which is designed for generic situations in which long range connectivity may be required and thus does not take advantage of the short range structure of the circuits in Figures 8 and 9.

Figure 16: The expectation value of S^z_0 vs time for a chain of 20 qubits as measured on the 27 qubit quantum device ibmq-mumbai. The circuit produced from our MPS implementation of AQC is shallower than the Trotter circuit, and thus produces an expectation value that is much closer to the true value plotted in the blue curve, obtained by classical Tensor Network simulations.
5 Acknowledgements

This work was funded by the Disruptive Technologies Innovation Fund (DTIF), by Enterprise Ireland, under project number DTIF2019-090 (project QCoIR) and also supported by IBM Quantum.
References

[1] Frank Verstraete and J Ignacio Cirac. Matrix product states represent ground states faithfully. Physical Review B, 73(9):094423, 2006.
[2] Edwin Stoudenmire and David J Schwab. Supervised learning with tensor networks. Advances in Neural Information Processing Systems, 29, 2016.
[3] Tom Vieijra, Laurens Vanderstraeten, and Frank Verstraete. Generative modeling with projected entangled-pair states. arXiv preprint arXiv:2202.08177, 2022.
[4] Steven R White. Density-matrix algorithms for quantum renormalization groups. Physical Review B, 48(14):10345, 1993.
[5] Ulrich Schollwöck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1):96–192, 2011.
[6] Matthew B Hastings. An area law for one-dimensional quantum systems. Journal of Statistical Mechanics: Theory and Experiment, 2007(08):P08024, 2007.
[7] Joe Gibbs, Kaitlin Gili, Zoë Holmes, Benjamin Commeau, Andrew Arrasmith, Lukasz Cincio, Patrick J Coles, and Andrew Sornborger. Long-time simulations with high fidelity on quantum hardware. arXiv preprint arXiv:2102.04313, 2021.
[8] Cristina Cirstoiu, Zoe Holmes, Joseph Iosue, Lukasz Cincio, Patrick J Coles, and Andrew Sornborger. Variational fast forwarding for quantum simulation beyond the coherence time. npj Quantum Information, 6(1):1–10, 2020.
[9] Christa Zoufal, David Sutter, and Stefan Woerner. Error bounds for variational quantum time evolution. arXiv preprint arXiv:2108.00022, 2021.
[10] Kishor Bharti and Tobias Haug. Quantum-assisted simulator. Physical Review A, 104(4):042418, 2021.
[11] Alexander Miessen, Pauline J Ollitrault, and Ivano Tavernelli. Quantum algorithms for quantum dynamics: a performance study on the spin-boson model. Physical Review Research, 3(4):043212, 2021.
[12] Liam Madden and Andrea Simonetto. Best approximate quantum compiling problems. ACM Transactions on Quantum Computing, 3(2):1–29, 2022.
[13] Liam Madden, Albert Akhriev, and Andrea Simonetto. Sketching the best approximate quantum compiling problem. arXiv preprint arXiv:2205.04025, 2022.
[14] Niall F Robertson, Albert Akhriev, Jiri Vala, and Sergiy Zhuk. Escaping barren plateaus in approximate quantum compiling. arXiv preprint arXiv:2210.09191, 2022.
[15] Lorenzo Piroli, Balázs Pozsgay, and Eric Vernier. From the quantum transfer matrix to the quench action: the Loschmidt echo in XXZ Heisenberg spin chains. Journal of Statistical Mechanics: Theory and Experiment, 2017(2):023106, 2017.
[16] Adam Smith, MS Kim, Frank Pollmann, and Johannes Knolle. Simulating quantum many-body dynamics on a current digital quantum computer. npj Quantum Information, 5(1):1–13, 2019.
[17] David Layden. First-order Trotter error from a second-order perspective. Physical Review Letters, 128(21):210501, 2022.
[18] Johannes Hauschild and Frank Pollmann. Efficient numerical simulations with tensor networks: Tensor Network Python (TeNPy). SciPost Physics Lecture Notes, page 005, 2018.
[19] Sumeet Khatri, Ryan LaRose, Alexander Poremba, Lukasz Cincio, Andrew T Sornborger, and Patrick J Coles. Quantum-assisted quantum compiling. Quantum, 3:140, 2019.
446
+ page_content=' [20] Marco Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J Coles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
447
+ page_content=' Cost function depen- dent barren plateaus in shallow parametrized quantum circuits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
448
+ page_content=' Nature communications, 12(1):1–12, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
449
+ page_content=' 12 [21] Kristan Temme, Sergey Bravyi, and Jay M Gambetta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
450
+ page_content=' Error mitigation for short-depth quantum circuits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
451
+ page_content=' Physical review letters, 119(18):180509, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
452
+ page_content=' [22] Youngseok Kim, Christopher J Wood, Theodore J Yoder, Seth T Merkel, Jay M Gambetta, Kris- tan Temme, and Abhinav Kandala.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
453
+ page_content=' Scalable error mitigation for noisy quantum circuits produces competitive expectation values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
454
+ page_content=' arXiv preprint arXiv:2108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
455
+ page_content='09197, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
456
+ page_content=' [23] Manuel S Rudolph, Jacob Miller, Jing Chen, Atithi Acharya, and Alejandro Perdomo-Ortiz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
457
+ page_content=' Syn- ergy between quantum circuits and tensor networks: Short-cutting the race to practical quantum advantage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
458
+ page_content=' arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
459
+ page_content='13673, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
460
+ page_content=' 13' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tFAT4oBgHgl3EQfkB3r/content/2301.08609v1.pdf'}
3NE4T4oBgHgl3EQf0Q0i/content/tmp_files/2301.05280v1.pdf.txt ADDED
@@ -0,0 +1,1456 @@
+ Pointwise Bi-Slant Submanifolds in Locally Conformal Kähler Manifolds Immersed as Warped Products*
+ Umar Mohd Khan & Viqar Azam Khan
+ Department of Mathematics, Aligarh Muslim University, Aligarh-202002, India
+ arXiv:2301.05280v1 [math.GM] 11 Jan 2023
+
+ Abstract
+ We study immersions of pointwise bi-slant submanifolds of locally conformal Kähler manifolds as warped products. In particular, we establish a characterisation theorem for a pointwise bi-slant submanifold of a locally conformal Kähler manifold to be immersed as a warped product, and show that a necessary condition is that the Lee vector field B is orthogonal to the second factor and that the warping function \lambda satisfies \mathrm{grad}(\ln\lambda) = \tfrac{1}{2}B^T, where B^T denotes the tangential part of the Lee vector field. We also extend Chen's inequality for the squared length of the second fundamental form to our setting and study the corresponding equality case.
+
+ *Keywords and phrases: warped product submanifolds; locally conformal Kähler manifolds (lcK); pointwise bi-slant submanifolds. 2020 AMS Subject Classification: 53C15, 53C40, 53C42, 53B25.
+
+ 1 Introduction
+ Vaisman introduced locally conformal Kähler (lcK) manifolds as a generalisation of Kähler manifolds [33, 34, 35, 36, 21, 37, 38]. An lcK manifold is a Hermitian manifold that can be covered by open sets on which the lcK metric is conformal to a Kähler metric. LcK manifolds are characterised by the existence of a globally defined closed 1-form \omega, called the Lee form, such that the fundamental 2-form \Omega of the lcK metric satisfies d\Omega = \Omega \wedge \omega. The Lee form and its associated Lee vector field play an important part in the geometry of lcK manifolds.
+ From an extrinsic geometric standpoint, holomorphic and totally real submanifolds are important objects of study in the setting of almost Hermitian manifolds. Bejancu [5, 6] defined CR submanifolds as a generalisation of holomorphic and totally real submanifolds, which were further studied by Chen [11, 12]. Later, Chen [13, 14] extended the class of holomorphic and totally real submanifolds by introducing the notion of slant submanifolds. The concept was further generalised to pointwise slant submanifolds [20] by the same author. The study of CR submanifolds and slant submanifolds was subsequently generalised by several authors to semi-slant submanifolds, hemi-slant submanifolds (also called pseudo-slant submanifolds) and bi-slant submanifolds, in various ambient manifolds.
+ Semi-slant submanifolds in almost Hermitian manifolds were studied by Papaghiuc [28]. Cabrerizo et al. [9, 10] studied semi-slant submanifolds in Sasakian manifolds. Slant and semi-slant submanifolds in almost product Riemannian manifolds were studied in [2, 24, 29]. Hemi-slant submanifolds were also studied in nearly Kenmotsu manifolds [4], LCS-manifolds [3] and locally product Riemannian manifolds [31].
+ Bishop and O'Neill [7], while studying examples of manifolds with negative sectional curvature, defined warped product manifolds by homothetically warping the product metric on a product manifold. Warped products are a natural generalisation of Riemannian products and have found extensive applications in relativity: most notably, the Schwarzschild metric, which describes the gravitational field outside a spherical mass under certain assumptions, and the Robertson-Walker (FLRW) metric are warped product metrics. A natural example of a warped product manifold is a surface of revolution. Hiepko [22] gave a characterisation for a Riemannian manifold to be the warped product of its submanifolds, generalising the de Rham decomposition theorem for product manifolds. Later on, Nölker [27] and Chen [15, 16, 19] initiated the study of the extrinsic geometry of warped product manifolds.
+ Chen [17, 18] initiated the study of CR submanifolds immersed as warped products in Kähler manifolds. He proved that, given any holomorphic submanifold M_T and totally real submanifold M_\perp of a Kähler manifold, every warped product of the form M_T \times_\lambda M_\perp in a Kähler manifold satisfies the inequality
+ \|h\|^2 \ge 2 n_2 \|\mathrm{grad}(\ln\lambda)\|^2,   (1.1)
+ where \lambda is the warping function, n_2 is the dimension of M_\perp, \|h\|^2 is the squared norm of the second fundamental form and \mathrm{grad}(\ln\lambda) is the gradient of \ln\lambda. Bonanzinga and Matsumoto [8, 26, 25] continued the study in the setting of lcK manifolds. Nargis Jamal et al. [23] studied generic warped products in lcK manifolds. Further studies of semi-slant and hemi-slant submanifolds of lcK manifolds were carried out in [1, 30, 32]. Generic submanifolds, CR-submanifolds and pointwise semi-slant submanifolds immersed as warped products in lcK manifolds were studied in [23, 1].
+ We continue this line of study by considering pointwise bi-slant submanifolds in an lcK manifold. In particular, we give characterisation theorems and establish Chen's inequality for the squared norm of the second fundamental form of pointwise bi-slant submanifolds immersed as warped products in an lcK manifold.
+ 2 Preliminaries
+ Definition 2.1. A Hermitian manifold (\tilde{M}^{2n}, J, g) is said to be a locally conformal Kähler (lcK) manifold if there exist an open cover \{U_i\}_{i \in I} of \tilde{M}^{2n} and a family \{f_i\}_{i \in I} of C^\infty functions f_i : U_i \to \mathbb{R} such that, for each i \in I, the metric
+ g_i = e^{-f_i} g|_{U_i}   (2.1)
+ on U_i is a Kähler metric.
+ Given an lcK manifold (\tilde{M}^{2n}, J, g), let U, V denote smooth sections of T\tilde{M}^{2n}. The local 1-forms df_i glue up to a globally defined closed 1-form \omega, called the Lee form, which satisfies
+ d\Omega = \Omega \wedge \omega,   (2.2)
+ where \Omega(U, V) = g(JU, V) is the fundamental 2-form associated to (J, g).
+ Denote by B the vector field metrically equivalent to \omega with respect to g, i.e. \omega(U) = g(B, U); B is called the Lee vector field.
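+ As a concrete illustration (a standard example from the lcK literature, recalled here for context and not taken from the present paper), the Hopf manifolds are compact lcK manifolds admitting no Kähler metric: for fixed \alpha \in \mathbb{C} with |\alpha| > 1, the quotient
+ H^n_\alpha = (\mathbb{C}^n \setminus \{0\}) / \Gamma, \qquad \Gamma = \{ z \mapsto \alpha^k z : k \in \mathbb{Z} \},
+ carries the Hermitian metric induced by g = |z|^{-2} \sum_i dz^i \otimes d\bar{z}^i, which is lcK with parallel Lee form; H^1_\alpha is diffeomorphic to S^1 \times S^3, whose first Betti number is odd, so it cannot carry any Kähler metric.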
+ Let \overline{\nabla} denote the Levi-Civita connection of (\tilde{M}^{2n}, g) and \tilde{\nabla}^i the Levi-Civita connection of the local metric g_i for each i \in I. The \tilde{\nabla}^i glue up to a globally defined torsion-free linear connection \tilde{\nabla} on \tilde{M}^{2n}, given by
+ \tilde{\nabla}_U V = \overline{\nabla}_U V - \tfrac{1}{2}\{\omega(U)V + \omega(V)U - g(U,V)B\}   (2.3)
+ for U, V \in T\tilde{M}^{2n}, and satisfying
+ \tilde{\nabla} g = \omega \otimes g.   (2.4)
+ \tilde{\nabla} is called the Weyl connection of the lcK manifold (\tilde{M}^{2n}, J, g). As the g_i are Kähler metrics, the almost complex structure J is parallel with respect to the Weyl connection, i.e. \tilde{\nabla} J = 0. This gives
+ \overline{\nabla}_U JV = J\overline{\nabla}_U V + \tfrac{1}{2}\{\Theta(V)U - \omega(V)JU - g(U,V)A + \Omega(U,V)B\},   (2.5)
+ where \Theta = \omega \circ J and A = -JB denote the anti-Lee form and the anti-Lee vector field respectively (these are not named explicitly in the extracted text; the identification follows by expanding \tilde{\nabla}J = 0 with (2.3)).
+ Since \omega is a closed form on \tilde{M}^{2n}, we have
+ (\overline{\nabla}_U \omega)V = (\overline{\nabla}_V \omega)U.   (2.6)
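+ The compatibility (2.4) can be checked directly from (2.3) (a short verification, added here for the reader's convenience): for U, V, W \in T\tilde{M}^{2n},
+ (\tilde{\nabla}_U g)(V, W) = U g(V,W) - g(\tilde{\nabla}_U V, W) - g(V, \tilde{\nabla}_U W)
+ = \tfrac{1}{2}\{\omega(U)g(V,W) + \omega(V)g(U,W) - g(U,V)\omega(W)\} + \tfrac{1}{2}\{\omega(U)g(V,W) + \omega(W)g(U,V) - g(U,W)\omega(V)\}
+ = \omega(U)\, g(V, W),
+ where the Levi-Civita terms cancel because \overline{\nabla} g = 0.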
+ Let M^m be a Riemannian manifold isometrically immersed in an lcK manifold (\tilde{M}^{2n}, J, g). Let U, V, W denote smooth sections of TM^m and \xi, \eta smooth sections of T^\perp M^m.
+ The Gauss and Weingarten formulae with respect to the Riemannian connection of \tilde{M}^{2n} are
+ \overline{\nabla}_U V = \nabla_U V + h(U, V),   (2.7)
+ \overline{\nabla}_U \xi = -A_\xi U + \nabla^\perp_U \xi,   (2.8)
+ where h is the second fundamental form, A the shape operator, and \nabla, \nabla^\perp the connections induced by \overline{\nabla} in the tangent and normal bundles of M^m.
+ The Gauss and Weingarten formulae with respect to the Weyl connection of \tilde{M}^{2n} are
+ \tilde{\nabla}_U V = \hat{\nabla}_U V + \tilde{h}(U, V),   (2.9)
+ \tilde{\nabla}_U \xi = -\tilde{A}_\xi U + \tilde{\nabla}^\perp_U \xi,   (2.10)
+ where \tilde{h} is the second fundamental form, \tilde{A} the shape operator, and \hat{\nabla}, \tilde{\nabla}^\perp the connections induced by \tilde{\nabla} in the tangent and normal bundles of M^m.
+ Let H denote the trace of h; H is called the mean curvature vector of M^m in (\tilde{M}^{2n}, J, g) and is a smooth section of T^\perp M^m. We say M^m is totally umbilic in (\tilde{M}^{2n}, J, g) if h(U, V) = g(U, V)H, and totally geodesic if h(U, V) = 0.
+ Let B^T and B^N denote the tangential and normal components of the Lee vector field B. From (2.3) we have the relations
+ \hat{\nabla}_U V = \nabla_U V - \tfrac{1}{2}\{\omega(U)V + \omega(V)U - g(U,V)B^T\},   (2.11)
+ \tilde{h}(U, V) = h(U, V) + \tfrac{1}{2} g(U, V) B^N,   (2.12)
+ \tilde{A}_\xi U = A_\xi U + \tfrac{1}{2}\omega(\xi) U,   (2.13)
+ \tilde{\nabla}^\perp_U \xi = \nabla^\perp_U \xi - \tfrac{1}{2}\omega(U)\xi.   (2.14)
+ Now write
+ JU = PU + FU, \qquad J\xi = t\xi + f\xi,   (2.15)
+ where PU, t\xi are the tangential parts and FU, f\xi the normal parts. Then
+ P^2 + tF = -I, \qquad f^2 + Ft = -I, \qquad FP + fF = 0, \qquad tf + Pt = 0.   (2.16)
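+ The identities (2.16) are simply the tangential and normal parts of J^2 = -\mathrm{Id}; a short check (included here for completeness):
+ -U = J^2 U = J(PU + FU) = P^2 U + FPU + tFU + fFU,
+ and comparing tangential and normal components gives P^2 + tF = -I and FP + fF = 0; the same splitting applied to J^2\xi = -\xi yields tf + Pt = 0 and f^2 + Ft = -I.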
+ Define the covariant derivatives of P, F, t and f (with respect to the induced connections) by
+ (\nabla_U P)V = \nabla_U PV - P\nabla_U V, \qquad (\nabla_U F)V = \nabla^\perp_U FV - F\nabla_U V,
+ (\nabla_U t)\xi = \nabla_U t\xi - t(\nabla^\perp_U \xi), \qquad (\nabla_U f)\xi = \nabla^\perp_U f\xi - f(\nabla^\perp_U \xi).   (2.17)
+ Then, since \tilde{\nabla} J = 0, using (2.11), (2.12), (2.13), (2.14) we obtain
+ (\nabla_U P)V = A_{FV} U + t h(U, V) + \tfrac{1}{2}\{\Theta(V)U - \omega(V)PU + g(PU, V)B^T - g(U, V)A^T\},
+ (\nabla_U F)V = f h(U, V) - h(U, PV) + \tfrac{1}{2}\{g(PU, V)B^N - g(U, V)A^N - \omega(V)FU\},
+ (\nabla_U t)\xi = A_{f\xi} U - P A_\xi U + \tfrac{1}{2}\{g(FU, \xi)B^T - \omega(\xi)PU + \Theta(\xi)U\},
+ (\nabla_U f)\xi = -h(U, t\xi) - F A_\xi U + \tfrac{1}{2}\{g(FU, \xi)B^N - \omega(\xi)FU\}.   (2.18)
+ Bishop and O'Neill [7] defined warped products as follows.
+ Definition 2.2. Let (M_1^{n_1}, g_1) and (M_2^{n_2}, g_2) be Riemannian manifolds and let \pi_1 : M_1 \times M_2 \to M_1 and \pi_2 : M_1 \times M_2 \to M_2 be the canonical projections. Let \lambda : M_1 \to (0, \infty) be a smooth function. The warped product manifold (M, g) = M_1 \times_\lambda M_2 is the manifold M_1 \times M_2 equipped with the Riemannian metric
+ g = \pi_1^\star g_1 + \lambda^2 \pi_2^\star g_2.   (2.19)
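+ A minimal worked example (ours, for illustration): the punctured Euclidean plane is the warped product (0, \infty) \times_\lambda S^1 with warping function \lambda(r) = r, since in polar coordinates the flat metric reads
+ g = dr^2 + r^2\, d\theta^2 = \pi_1^\star(dr^2) + \lambda(r)^2\, \pi_2^\star(d\theta^2),
+ which is exactly of the form (2.19). More generally, a surface of revolution generated by a unit-speed profile curve at distance \lambda(s) from the axis is the warped product I \times_\lambda S^1.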
+ Warped product manifolds generalise the usual Riemannian product of two Riemannian manifolds. In fact, we have the following characterisation theorem.
+ Theorem 2.1 ([22]). Let (M^m, g) be a connected Riemannian manifold equipped with orthogonal, complementary, involutive distributions D_1 and D_2. Suppose further that the leaves of D_1 are totally geodesic and the leaves of D_2 are extrinsic spheres in M^m, that is, totally umbilic submanifolds whose mean curvature vector is parallel in the normal bundle. Then (M^m, g) is locally a warped product M_1 \times_\lambda M_2, where M_1 and M_2 respectively denote the leaves of D_1 and D_2 and \lambda : M_1 \to (0, \infty) is a smooth function such that \mathrm{grad}(\ln\lambda) is the mean curvature vector of M_2 in M.
+ Further, if (M^m, g) is simply connected and complete, then (M^m, g) is globally a warped product.
+ For (M_1^{n_1}, g_1), (M_2^{n_2}, g_2) and (M, g), denote the Levi-Civita connections by \nabla^1, \nabla^2 and \nabla respectively. Given any smooth function \lambda : M_1 \to \mathbb{R}, let \mathrm{grad}(\lambda) denote the lift to (M, g) of the gradient vector field of \lambda.
+ Theorem 2.2 ([22]). Given a warped product manifold (M, g) = M_1 \times_\lambda M_2 of Riemannian manifolds (M_1^{n_1}, g_1) and (M_2^{n_2}, g_2), we have, for all X, Y \in L(M_1) and Z, W \in L(M_2),
+ \nabla_X Y = \nabla^1_X Y,   (2.20)
+ \nabla_X Z = \nabla_Z X = X(\ln\lambda) Z,   (2.21)
+ \nabla_Z W = \nabla^2_Z W - g(Z, W)\,\mathrm{grad}(\ln\lambda).   (2.22)
+ It follows from Theorem 2.2 that H = -\mathrm{grad}(\ln\lambda) is the mean curvature vector of M_2 in M.
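+ Indeed (a one-line verification, added for the reader): for Z, W tangent to a leaf \{p\} \times M_2, equation (2.22) shows that the second fundamental form of the leaf in M is \sigma(Z, W) = -g(Z, W)\,\mathrm{grad}(\ln\lambda), so the leaf is totally umbilic with mean curvature vector -\mathrm{grad}(\ln\lambda); this vector is parallel in the normal bundle of the leaf because \mathrm{grad}(\ln\lambda) is the lift of a vector field on M_1.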
+ 3 Pointwise Bi-Slant Submanifolds of lcK Manifolds
+ Let M^m be a Riemannian manifold isometrically immersed in an lcK manifold (\tilde{M}^{2n}, J, g). M^m is said to be a pointwise bi-slant submanifold if it admits two orthogonal complementary distributions D_{\theta_1} and D_{\theta_2} that are pointwise slant with slant functions \theta_1, \theta_2 \in [0, \pi/2] and \theta_1 \neq \theta_2; that is, P^2 X = -\cos^2\theta_1\, X for every smooth vector field X \in D_{\theta_1}, and P^2 Z = -\cos^2\theta_2\, Z for every smooth vector field Z \in D_{\theta_2}.
+ The tangent and normal bundles of a pointwise bi-slant submanifold admit the orthogonal decompositions
+ TM^m = D_{\theta_1} \oplus D_{\theta_2}, \qquad T^\perp M^m = F D_{\theta_1} \oplus F D_{\theta_2} \oplus \mu,   (3.1)
+ where \mu is the orthogonal complement of F D_{\theta_1} \oplus F D_{\theta_2} in T^\perp M^m and is an invariant subbundle of T^\perp M^m with respect to J. It is easy to observe that, for i = 1, 2,
+ P D_{\theta_i} = D_{\theta_i}, \quad t(F D_{\theta_i}) = D_{\theta_i}, \quad t(\mu) = \{0\}, \quad f(F D_{\theta_i}) = F D_{\theta_i}, \quad f(\mu) = \mu.   (3.2)
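+ For later use we record a standard consequence of the pointwise slant condition (a short derivation, included for convenience): P is skew-symmetric on TM^m, so for a unit vector field X \in D_{\theta_i},
+ g(PX, PX) = -g(P^2 X, X) = \cos^2\theta_i, \qquad g(FX, FX) = g(JX, JX) - g(PX, PX) = \sin^2\theta_i.
+ Hence \sec\theta_i\, PX and \csc\theta_i\, FX are unit fields; this is the origin of the normalising factors \beta_i = \sec\theta_i and \alpha_i = \csc\theta_i appearing in Remark 4.1 below.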
+ Let M^m be a pointwise bi-slant submanifold isometrically immersed in an lcK manifold (\tilde{M}^{2n}, J, g) such that the distributions D_{\theta_1} and D_{\theta_2} are both involutive. Let M_{\theta_1}^{2n_1} and M_{\theta_2}^{2n_2} respectively denote the leaves of D_{\theta_1} and D_{\theta_2}, where 2n_1 = \dim_\mathbb{R} D_{\theta_1} and 2n_2 = \dim_\mathbb{R} D_{\theta_2}. We say M^m is a
+ • mixed totally geodesic pointwise bi-slant submanifold if h(D_{\theta_1}, D_{\theta_2}) = \{0\};
+ • pointwise bi-slant product submanifold if M^m can be expressed locally as M_{\theta_1} \times M_{\theta_2};
+ • pointwise bi-slant warped product submanifold if M^m can be expressed locally as M_{\theta_1} \times_\lambda M_{\theta_2} for some smooth function \lambda : M_{\theta_1} \to (0, \infty).
+ Let X, Y be smooth vector fields in D_{\theta_1} and Z, W smooth vector fields in D_{\theta_2}. Then we have:
+ Theorem 3.1. Let M^m be a pointwise bi-slant submanifold of an lcK manifold \tilde{M}^{2n}. Then:
+ • the slant distribution D_{\theta_1} is involutive if and only if
+ g(A_{FPX} Z - A_{FX} PZ, Y) - g(A_{FPY} Z - A_{FY} PZ, X) = g(\nabla^\perp_X FY, FZ) - g(\nabla^\perp_Y FX, FZ);   (3.3)
+ • the leaves of the slant distribution D_{\theta_1} are totally geodesic in M^m if and only if
+ \omega(D_{\theta_2}) = \{0\} \quad and \quad g(A_{FPX} Z - A_{FX} PZ, Y) + g(\nabla^\perp_Y FX, FZ) = 0;   (3.4)
+ • the leaves of the slant distribution D_{\theta_1} are totally umbilic in M^m if and only if
+ g(A_{FPX} Z - A_{FX} PZ, Y) + g(\nabla^\perp_Y FX, FZ) = \sin^2\theta_1 \{\tfrac{1}{2}\omega(Z) + g(H, Z)\}\, g(X, Y)   (3.5)
+ for some smooth vector field H \in D_{\theta_2}.
+ Proof. From (2.5) and (2.16), we have
+ g(\nabla_X Y, Z) = g(\nabla_X PY + \nabla_X FY - \tfrac{1}{2} g(X,Y) JB - \tfrac{1}{2} g(PX,Y) B, JZ)
+ = -g(J\nabla_X PY, Z) - g(A_{FY} X, PZ) + g(\nabla^\perp_X FY, FZ) - \tfrac{1}{2} g(X,Y) g(B,Z) - \tfrac{1}{2} g(PX,Y) g(B,JZ)
+ = -g(\nabla_X JPY, Z) + \tfrac{1}{2} g(PX,PY) g(B,Z) - g(A_{FY} X, PZ) - \tfrac{1}{2} g(X,Y) g(B,Z) + g(\nabla^\perp_X FY, FZ)
+ = \cos^2\theta_1\, g(\nabla_X Y, Z) + g(A_{FPY} X, Z) + \tfrac{1}{2}\sin^2\theta_1\, g(X,Y) g(B,Z) - g(A_{FY} X, PZ) + g(\nabla^\perp_X FY, FZ),
+ i.e.
+ \sin^2\theta_1 \{ g(\nabla_X Y, Z) + \tfrac{1}{2} g(X,Y) g(B,Z) \} = g(A_{FPY} Z - A_{FY} PZ, X) + g(\nabla^\perp_X FY, FZ).
+ Hence the result follows.
+ Similarly, we have:
+ Theorem 3.2. Let M^m be a pointwise bi-slant submanifold of an lcK manifold \tilde{M}^{2n}. Then:
+ • the slant distribution D_{\theta_2} is involutive if and only if
+ g(A_{FPZ} X - A_{FZ} PX, W) - g(A_{FPW} X - A_{FW} PX, Z) = g(\nabla^\perp_Z FW, FX) - g(\nabla^\perp_W FZ, FX);   (3.6)
+ • the leaves of the slant distribution D_{\theta_2} are totally geodesic in M^m if and only if
+ \omega(D_{\theta_1}) = \{0\} \quad and \quad g(A_{FPZ} X - A_{FZ} PX, W) + g(\nabla^\perp_W FZ, FX) = 0;   (3.7)
+ • the leaves of the slant distribution D_{\theta_2} are totally umbilic in M^m if and only if
+ g(A_{FPZ} X - A_{FZ} PX, W) + g(\nabla^\perp_W FZ, FX) = \sin^2\theta_2 \{\tfrac{1}{2}\omega(X) + g(H, X)\}\, g(Z, W)   (3.8)
+ for some smooth vector field H \in D_{\theta_1}.
+ Notations: Let D_{\theta_1} and D_{\theta_2} be the involutive slant distributions on a pointwise bi-slant submanifold M^m of an lcK manifold \tilde{M}^{2n}, and let M_{\theta_1} and M_{\theta_2} respectively denote their leaves. Then D_{\theta_1}(p, q) = T_{(p,q)}(M_{\theta_1} \times \{q\}) and D_{\theta_2}(p, q) = T_{(p,q)}(\{p\} \times M_{\theta_2}). Let L(M_{\theta_1}) and L(M_{\theta_2}) respectively denote the sets of lifts of vector fields from M_{\theta_1} and M_{\theta_2} to M. Then X \in L(M_{\theta_1}) if and only if X|_{\{p\} \times M_{\theta_2}} is constant for every p \in M_{\theta_1}; similarly, Z \in L(M_{\theta_2}) if and only if Z|_{M_{\theta_1} \times \{q\}} is constant for every q \in M_{\theta_2}. Also, if \pi_{\theta_1} : M_{\theta_1} \times M_{\theta_2} \to M_{\theta_1} and \pi_{\theta_2} : M_{\theta_1} \times M_{\theta_2} \to M_{\theta_2} are the canonical projections, then d\pi_{\theta_1}(L(M_{\theta_1})) = TM_{\theta_1} and d\pi_{\theta_2}(L(M_{\theta_2})) = TM_{\theta_2}. Note that a general vector field in D_{\theta_1} (respectively D_{\theta_2}) need not lie in L(M_{\theta_1}) (respectively L(M_{\theta_2})).
+ From here on we use X, Y to denote smooth vector fields in L(M_{\theta_1}) and Z, W to denote smooth vector fields in L(M_{\theta_2}).
+ 4 Some Lemmas
+ We give the following lemmas, which will be used to prove our main results.
+ Lemma 4.1. Given a pointwise bi-slant warped product submanifold M = M_{\theta_1} \times_\lambda M_{\theta_2} in an lcK manifold (\tilde{M}^{2n}, J, g), we have, for all X, Y \in L(M_{\theta_1}) and Z, W \in L(M_{\theta_2}),
+ g(h(X, Z), FY) = g(h(Y, Z), FX),   (4.1)
+ g(h(X, Z), FW) = g(h(X, W), FZ),   (4.2)
+ g(h(X, Y), FZ) = g(h(X, Z), FY) - \tfrac{1}{2} g(X, Y) g(B, FZ),   (4.3)
+ g(h(Z, W), FX) = g(h(X, Z), FW) - \tfrac{1}{2} g(Z, W) g(B, FX),   (4.4)
+ X(\ln\lambda) = \tfrac{1}{2} g(B, X),   (4.5)
+ g(B, Z) = 0.   (4.6)
+ Proof. For all X, Y \in L(M_{\theta_1}) and Z, W \in L(M_{\theta_2}), using (2.5) and (2.21) we have
+ g(h(X, Z), FW) = g(\nabla_X Z, JW - PW)
+ = -g(J\nabla_X Z, W) - g(\nabla_X Z, PW)
+ = -g(\nabla_X JZ, W) - X(\ln\lambda) g(Z, PW)
+ = -g(\nabla_X PZ, W) - g(\nabla_X FZ, W) - X(\ln\lambda) g(Z, PW)
+ = -X(\ln\lambda) g(PZ, W) + g(A_{FZ} X, W) - X(\ln\lambda) g(Z, PW)
+ = g(h(X, W), FZ),
+ which gives (4.2). Repeating the computation with the derivative taken along Z,
+ g(h(X, Z), FW) = g(\nabla_Z X, JW - PW)
+ = -g(J\nabla_Z X, W) - g(\nabla_Z X, PW)
+ = -g(\nabla_Z JX, W) - \tfrac{1}{2} g(JB, X) g(Z, W) - \tfrac{1}{2} g(B, X) g(JZ, W) - X(\ln\lambda) g(Z, PW)
+ = -PX(\ln\lambda) g(Z, W) + g(A_{FX} Z, W) + \tfrac{1}{2} g(B, JX) g(Z, W) - \tfrac{1}{2} g(B, X) g(PZ, W) - X(\ln\lambda) g(Z, PW).
+ Using (4.2) and comparing the symmetric and skew-symmetric parts in Z and W, we obtain
+ X(\ln\lambda) = \tfrac{1}{2} g(B, X),
+ which gives (4.5), and
+ g(h(X, Z), FW) = g(h(Z, W), FX) - PX(\ln\lambda) g(Z, W) + \tfrac{1}{2} g(B, PX + FX) g(Z, W),
+ which, on substituting from (4.5), gives (4.4). Similarly,
+ g(h(X, Z), FY) = g(\nabla_Z X, JY - PY) = -g(\nabla_Z JX, Y) = -g(\nabla_Z PX, Y) - g(\nabla_Z FX, Y) = g(A_{FX} Z, Y) = g(h(Y, Z), FX),
+ which gives (4.1). Repeating the computation,
+ g(h(X, Z), FY) = g(\nabla_X Z, JY - PY)
+ = -g(\nabla_X JZ, Y) - \tfrac{1}{2} g(JB, Z) g(X, Y) - \tfrac{1}{2} g(B, Z) g(JX, Y)
+ = g(A_{FZ} X, Y) + \tfrac{1}{2} g(B, JZ) g(X, Y) - \tfrac{1}{2} g(B, Z) g(PX, Y).
+ Using (4.1) and comparing the symmetric and skew-symmetric parts in X and Y, we obtain \tfrac{1}{2} g(B, Z) = 0, which gives (4.6), and
+ g(h(X, Z), FY) = g(h(X, Y), FZ) + \tfrac{1}{2} g(B, PZ + FZ) g(X, Y),
+ which, on substituting from (4.6), gives (4.3).
+ From (4.5) and (4.6) we have:
+ Corollary 4.2. Given a pointwise bi-slant warped product submanifold M = M_{\theta_1} \times_\lambda M_{\theta_2} in an lcK manifold (\tilde{M}^{2n}, J, g), the Lee vector field B is orthogonal to the second factor and the warping function \lambda satisfies \mathrm{grad}(\ln\lambda) = \tfrac{1}{2} B^T, where B^T denotes the tangential part of the Lee vector field along M.
+ Remark 4.1. Given a pointwise bi-slant warped product submanifold M_{\theta_1} \times_\lambda M_{\theta_2} of an lcK manifold \tilde{M}^{2n}, let \{X_i, \beta_1 P X_i\}_{i=1}^{n_1} and \{Z_j, \beta_2 P Z_j\}_{j=1}^{n_2} respectively be local orthonormal frames of TM_{\theta_1} and TM_{\theta_2}, where \alpha_i = \csc\theta_i and \beta_i = \sec\theta_i for i = 1, 2. Then a local orthonormal frame of \tilde{M}^{2n} is given by
+ \tilde{X}_i = X_i, \quad \tilde{P}X_i = \beta_1 P X_i, \quad \tilde{Z}_j = Z_j/\lambda, \quad \tilde{P}Z_j = \beta_2 P Z_j/\lambda,
+ \tilde{F}X_i = \alpha_1 F X_i, \quad \tilde{F}PX_i = \alpha_1 \beta_1 F P X_i, \quad \tilde{F}Z_j = \alpha_2 F Z_j/\lambda, \quad \tilde{F}PZ_j = \alpha_2 \beta_2 F P Z_j/\lambda, \quad \tilde{\xi}_k, \; \tilde{J}\xi_k,
+ where
+ \{\tilde{X}_i, \tilde{P}X_i : 1 \le i \le n_1\} is an orthonormal basis of D_{\theta_1},
+ \{\tilde{Z}_j, \tilde{P}Z_j : 1 \le j \le n_2\} is an orthonormal basis of D_{\theta_2},
+ \{\tilde{F}X_i, \tilde{F}PX_i : 1 \le i \le n_1\} is an orthonormal basis of F D_{\theta_1},
+ \{\tilde{F}Z_j, \tilde{F}PZ_j : 1 \le j \le n_2\} is an orthonormal basis of F D_{\theta_2},
+ \{\tilde{\xi}_k, \tilde{J}\xi_k : 1 \le k \le n - 2n_1 - 2n_2\} is an orthonormal basis of \mu.
+ However, while Z_j, \beta_2 P Z_j \in L(M_{\theta_2}), in general \tilde{Z}_j, \tilde{P}Z_j \notin L(M_{\theta_2}), as \lambda is a function on M_{\theta_1}. Also note that
+ J(\tilde{Z}_j) = J(Z_j/\lambda) = PZ_j/\lambda + FZ_j/\lambda = \cos\theta_2\, \tilde{P}Z_j + \sin\theta_2\, \tilde{F}Z_j,
+ J(\tilde{P}Z_j) = J(\sec\theta_2\, PZ_j/\lambda) = \sec\theta_2\, P^2 Z_j/\lambda + \sec\theta_2\, F P Z_j/\lambda = -\cos\theta_2\, \tilde{Z}_j + \sin\theta_2\, \tilde{F}PZ_j.
+ 5 Main Results
+ We first give a characterisation of pointwise bi-slant warped product submanifolds of lcK manifolds.
+ Theorem 5.1. Let M^m be a pointwise bi-slant submanifold of an lcK manifold \tilde{M}^{2n}. Then the following are equivalent:
+ 1. M^m is a pointwise bi-slant warped product submanifold M_{\theta_1} \times_\lambda M_{\theta_2} of \tilde{M}^{2n};
+ 2. for every X, Y \in L(M_{\theta_1}) and Z, W \in L(M_{\theta_2}) we have \omega(D_{\theta_2}) = \{0\},
+ g(A_{FPX} Z - A_{FX} PZ, Y) + g(\nabla^\perp_Y FX, FZ) = 0, and
+ g(A_{FPZ} X - A_{FZ} PX, W) + g(\nabla^\perp_W FZ, FX) = \sin^2\theta_2 \{\tfrac{1}{2}\omega(X) - X(\ln\lambda)\}\, g(Z, W)   (5.1)
+ for some smooth function \lambda : M_{\theta_1} \to (0, \infty);
+ 3. for every X \in L(M_{\theta_1}) and Z \in L(M_{\theta_2}) we have
+ \omega(D_{\theta_2}) = \{0\} \quad and \quad \nabla_X Z = \nabla_Z X = \tfrac{1}{2}\omega(X) Z.   (5.2)
+ In this case the mean curvature vector H of M_{\theta_2} in M^m is
+ H = -\mathrm{grad}(\ln\lambda) = -\tfrac{1}{2} B^T,   (5.3)
+ where B^T is the tangential component of B along M.
+ Proof. (1) \Leftrightarrow (2): This follows from Theorem 3.1, Theorem 3.2 and the fact that \mathrm{grad}(\ln\lambda) \in L(M_{\theta_1}), which implies
+ g(\nabla_Z(\mathrm{grad}(\ln\lambda)), X) = ZX(\ln\lambda) - g(\mathrm{grad}(\ln\lambda), \nabla_Z X) = [Z, X](\ln\lambda) - \nabla_Z X(\ln\lambda) = -\nabla_X Z(\ln\lambda) = g(Z, \nabla_X(\mathrm{grad}(\ln\lambda))) = 0,
+ since Z(\ln\lambda) = 0 and M_{\theta_1} is totally geodesic in M. Also, (5.3) follows from Lemma 4.1 (4.5).
+ (1) \Leftrightarrow (3): Let M = M_{\theta_1} \times_\lambda M_{\theta_2} be a pointwise bi-slant warped product submanifold. Then (5.2) and (5.3) follow from (2.21) and Lemma 4.1 (4.5).
+ Conversely, let M^m be a pointwise bi-slant submanifold of an lcK manifold \tilde{M}^{2n} such that (5.2) holds. Then, for all X, Y \in L(M_{\theta_1}) and Z, W \in L(M_{\theta_2}),
+ g([X, Y], Z) = g(\nabla_X Y - \nabla_Y X, Z) = -g(\nabla_X Z, Y) + g(\nabla_Y Z, X) = 0,
+ which implies D_{\theta_1} is involutive;
+ g(\nabla_X Y, Z) = -g(\nabla_X Z, Y) = 0,
+ which implies the leaves of D_{\theta_1} are totally geodesic in M;
+ g([Z, W], X) = g(\nabla_Z W - \nabla_W Z, X) = -g(\nabla_Z X, W) + g(\nabla_W X, Z) = -\tfrac{1}{2}\omega(X) g(Z, W) + \tfrac{1}{2}\omega(X) g(W, Z) = 0,
+ which implies D_{\theta_2} is involutive;
+ g(\nabla_Z W, X) = -g(\nabla_Z X, W) = -\tfrac{1}{2}\omega(X) g(Z, W) = -\tfrac{1}{2} g(Z, W) g(B^T, X),
+ which implies the leaves of D_{\theta_2} are totally umbilic in M with mean curvature vector -\tfrac{1}{2} B^T; and
+ g(\nabla_Z B^T, X) = \tfrac{1}{2}\,\omega(B^T)\, g(Z, X) = 0,
+ which implies B^T is parallel in the normal bundle of M_{\theta_2} in M.
+ Hence, by Theorem 2.1, M = M_{\theta_1} \times_\lambda M_{\theta_2} is a pointwise bi-slant warped product submanifold.
+ We conclude our study of pointwise bi-slant warped product submanifolds of lcK manifolds by giving an inequality for the squared norm of the second fundamental form.
+ Theorem 5.2. Let M = M_{\theta_1} \times_\lambda M_{\theta_2} be a pointwise bi-slant warped product submanifold in an lcK manifold (\tilde{M}^{2n}, J, g). Then the norm of the second fundamental form satisfies
+ \|h\|^2 \ge \tfrac{n_1}{2}\sin^2\theta_2\, \|B|_{FD_{\theta_2}}\|^2 + \tfrac{n_2}{2}\sin^2\theta_1\, \|B|_{FD_{\theta_1}}\|^2 + \sin\theta_2\, g(H_{D_{\theta_1}}|_{FD_{\theta_2}}, B|_{FD_{\theta_2}}) + \sin\theta_1\, g(H_{D_{\theta_2}}|_{FD_{\theta_1}}, B|_{FD_{\theta_1}}),   (5.4)
+ where 2n_1 = \dim_\mathbb{R} D_{\theta_1}, 2n_2 = \dim_\mathbb{R} D_{\theta_2}, and H_{D_{\theta_1}}, H_{D_{\theta_2}} are respectively the components of the mean curvature vector H of M in \tilde{M}^{2n} obtained by tracing h over D_{\theta_1} and over D_{\theta_2}.
+ If equality holds, then
+ • \mathrm{Image}(h) \subseteq F D_{\theta_1} \oplus F D_{\theta_2}, and
+ • M is minimal in \tilde{M}^{2n} if and only if M is mixed totally geodesic in \tilde{M}^{2n}.
+ Proof. Decomposing h according to (3.1),
+ \|h\|^2 = \|h(D_{\theta_1}, D_{\theta_1})|_{FD_{\theta_1}}\|^2 + \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 + \|h(D_{\theta_2}, D_{\theta_2})|_{FD_{\theta_1}}\|^2
+ + \|h(D_{\theta_1}, D_{\theta_1})|_{FD_{\theta_2}}\|^2 + \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_2}}\|^2 + \|h(D_{\theta_2}, D_{\theta_2})|_{FD_{\theta_2}}\|^2
+ + \|h(D_{\theta_1}, D_{\theta_1})|_{\mu}\|^2 + \|h(D_{\theta_1}, D_{\theta_2})|_{\mu}\|^2 + \|h(D_{\theta_2}, D_{\theta_2})|_{\mu}\|^2.
+ From (4.3) and Remark 4.1 we have
+ g(h(\tilde{X}_i, \tilde{Z}_p), \tilde{F}X_j) = \tfrac{\csc\theta_1}{\lambda}\{ g(h(X_i, X_j), FZ_p) + \tfrac{1}{2}\delta_{ij}\, g(B, FZ_p) \},
+ so that
+ \sin\theta_1\, g(h(\tilde{X}_i, \tilde{Z}_p), \tilde{F}X_j) = \sin\theta_2\, g(h(\tilde{X}_i, \tilde{X}_j), \tilde{F}Z_p) + \tfrac{1}{2}\sin\theta_2\, \delta_{ij}\, g(B, \tilde{F}Z_p).
+ Similarly,
+ \sin\theta_1\, g(h(\tilde{X}_i, \tilde{P}Z_p), \tilde{F}X_j) = \sin\theta_2\, g(h(\tilde{X}_i, \tilde{X}_j), \tilde{F}PZ_p) + \tfrac{1}{2}\sin\theta_2\, \delta_{ij}\, g(B, \tilde{F}PZ_p),
+ \sin\theta_1\, g(h(\tilde{P}X_i, \tilde{Z}_p), \tilde{F}PX_j) = \sin\theta_2\, g(h(\tilde{P}X_i, \tilde{P}X_j), \tilde{F}Z_p) + \tfrac{1}{2}\sin\theta_2\, \delta_{ij}\, g(B, \tilde{F}Z_p),
+ \sin\theta_1\, g(h(\tilde{P}X_i, \tilde{P}Z_p), \tilde{F}PX_j) = \sin\theta_2\, g(h(\tilde{P}X_i, \tilde{P}X_j), \tilde{F}PZ_p) + \tfrac{1}{2}\sin\theta_2\, \delta_{ij}\, g(B, \tilde{F}PZ_p),
+ \sin\theta_1\, g(h(\tilde{P}X_i, \tilde{Z}_p), \tilde{F}X_j) = \sin\theta_2\, g(h(\tilde{P}X_i, \tilde{X}_j), \tilde{F}Z_p),
+ \sin\theta_1\, g(h(\tilde{P}X_i, \tilde{P}Z_p), \tilde{F}X_j) = \sin\theta_2\, g(h(\tilde{P}X_i, \tilde{X}_j), \tilde{F}PZ_p),
+ \sin\theta_1\, g(h(\tilde{X}_i, \tilde{Z}_p), \tilde{F}PX_j) = \sin\theta_2\, g(h(\tilde{X}_i, \tilde{P}X_j), \tilde{F}Z_p),
+ \sin\theta_1\, g(h(\tilde{X}_i, \tilde{P}Z_p), \tilde{F}PX_j) = \sin\theta_2\, g(h(\tilde{X}_i, \tilde{P}X_j), \tilde{F}PZ_p).
+ It follows that
+ \sin^2\theta_1\, \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 = \sin^2\theta_2\, \|h(D_{\theta_1}, D_{\theta_1})|_{FD_{\theta_2}}\|^2 + \tfrac{2n_1}{4}\sin^2\theta_2\, \|B|_{FD_{\theta_2}}\|^2
+ + \sin\theta_2\, g\Big( \textstyle\sum_i \{ h(\tilde{X}_i, \tilde{X}_i) + h(\tilde{P}X_i, \tilde{P}X_i) \}\big|_{FD_{\theta_2}},\; B|_{FD_{\theta_2}} \Big)
+ \ge \tfrac{n_1}{2}\sin^2\theta_2\, \|B|_{FD_{\theta_2}}\|^2 + \sin\theta_2\, g(H_{D_{\theta_1}}|_{FD_{\theta_2}}, B|_{FD_{\theta_2}}).
+ In the same way,
+ \sin^2\theta_2\, \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_2}}\|^2 = \sin^2\theta_1\, \|h(D_{\theta_2}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 + \tfrac{2n_2}{4}\sin^2\theta_1\, \|B|_{FD_{\theta_1}}\|^2
+ + \sin\theta_1\, g\Big( \textstyle\sum_p \{ h(\tilde{Z}_p, \tilde{Z}_p) + h(\tilde{P}Z_p, \tilde{P}Z_p) \}\big|_{FD_{\theta_1}},\; B|_{FD_{\theta_1}} \Big)
+ \ge \tfrac{n_2}{2}\sin^2\theta_1\, \|B|_{FD_{\theta_1}}\|^2 + \sin\theta_1\, g(H_{D_{\theta_2}}|_{FD_{\theta_1}}, B|_{FD_{\theta_1}}).
+ Combining the two estimates gives (5.4).
+ If equality holds in (5.4), then the only possibly non-zero components of \|h\|^2 are \|h(D_{\theta_1}, D_{\theta_1})|_{FD_{\theta_2}}\|^2, \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_1}}\|^2, \|h(D_{\theta_2}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 and \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_2}}\|^2. Moreover, from the calculations above, \|h(D_{\theta_1}, D_{\theta_1})|_{FD_{\theta_2}}\|^2 = 0 if and only if \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 = 0, and \|h(D_{\theta_2}, D_{\theta_2})|_{FD_{\theta_1}}\|^2 = 0 if and only if \|h(D_{\theta_1}, D_{\theta_2})|_{FD_{\theta_2}}\|^2 = 0.
+ Hence, the result follows.
+ 6 Example
+ Consider \mathbb{C}^n = (\mathbb{E}^{2n}, J, g_0), where \mathbb{E}^{2n} is the Euclidean space of dimension 2n with coordinates (x_1, \dots, x_n, y_1, \dots, y_n), equipped with the standard Euclidean metric g_0 and the canonical almost complex structure
+ J(x_1, \dots, x_n, y_1, \dots, y_n) = (-y_1, \dots, -y_n, x_1, \dots, x_n).   (6.1)
+ Then \mathbb{C}^n = (\mathbb{E}^{2n}, J, g_0) is a flat Kähler manifold.
+ We use the following result about pointwise slant immersions.
+ Theorem 6.1 ([20], Proposition 2.2). Given a Kähler manifold (\tilde{M}^{2n}, J, g_0), let M_\theta be a pointwise slant submanifold. Then, for any smooth function f : \tilde{M} \to (0, \infty), M_\theta is again a pointwise slant submanifold of the globally conformal Kähler (gcK) manifold (\tilde{M}^{2n}, J, e^{-f} g_0), with the same slant function.
+ Example 6.1. Let \mathbb{C}^4 = (\mathbb{E}^8, J, g_0) be as defined above. Consider an open subset of \mathbb{E}^4 with u_1 u_2 \neq 1, u_3 u_4 \neq 1, (u_1 - u_2) \in (0, \pi/4) and (u_3 - u_4) \in (\pi/4, \pi/2). Define the 4-dimensional submanifold M of \mathbb{C}^4 by
+ x_1 = u_1 \cos u_2, \quad y_1 = u_1 \sin u_2, \qquad x_2 = u_2 \cos u_1, \quad y_2 = u_2 \sin u_1,
+ x_3 = u_3 \cos u_4, \quad y_3 = u_3 \sin u_4, \qquad x_4 = u_4 \cos u_3, \quad y_4 = u_4 \sin u_3.   (6.2)
+ An orthonormal frame of the tangent bundle TM is
+ X_1 = \tfrac{1}{\sqrt{1 + u_2^2}}\,( \cos u_2\, \partial_{x_1} - u_2 \sin u_1\, \partial_{x_2} + \sin u_2\, \partial_{y_1} + u_2 \cos u_1\, \partial_{y_2} ),
+ X_2 = \tfrac{1}{\sqrt{1 + u_1^2}}\,( -u_1 \sin u_2\, \partial_{x_1} + \cos u_1\, \partial_{x_2} + u_1 \cos u_2\, \partial_{y_1} + \sin u_1\, \partial_{y_2} ),
+ X_3 = \tfrac{1}{\sqrt{1 + u_4^2}}\,( \cos u_4\, \partial_{x_3} - u_4 \sin u_3\, \partial_{x_4} + \sin u_4\, \partial_{y_3} + u_4 \cos u_3\, \partial_{y_4} ),
+ X_4 = \tfrac{1}{\sqrt{1 + u_3^2}}\,( -u_3 \sin u_4\, \partial_{x_3} + \cos u_3\, \partial_{x_4} + u_3 \cos u_4\, \partial_{y_3} + \sin u_3\, \partial_{y_4} ).
+ Then M is a proper pointwise bi-slant submanifold with slant distributions D_{\theta_1} = \mathrm{Span}\{X_1, X_2\} and D_{\theta_2} = \mathrm{Span}\{X_3, X_4\}, and the slant functions are given by
+ \cos^2\theta_1 = \frac{(u_1 u_2 - 1)^2 \cos^2(u_1 - u_2)}{(1 + u_1^2)(1 + u_2^2)}, \qquad \cos^2\theta_2 = \frac{(u_3 u_4 - 1)^2 \cos^2(u_3 - u_4)}{(1 + u_3^2)(1 + u_4^2)}.
+ It is straightforward to check that D_{\theta_1} and D_{\theta_2} are both involutive and totally geodesic in M. Let M_{\theta_1} and M_{\theta_2} be the leaves of D_{\theta_1} and D_{\theta_2} respectively. Then M is the Riemannian product M = M_{\theta_1} \times M_{\theta_2}, and the metric g_M induced on M from \mathbb{C}^4 is
+ g_M = g_1 + g_2,   (6.3)
+ where
+ g_1 = (1 + u_2^2)\, du_1^2 + (1 + u_1^2)\, du_2^2, \qquad g_2 = (1 + u_4^2)\, du_3^2 + (1 + u_3^2)\, du_4^2.   (6.4)
+ Now, for any non-constant positive smooth function f = f(x_1, x_2, y_1, y_2) on \mathbb{C}^4, depending only on the coordinates x_1, x_2, y_1, y_2, consider the Riemannian metric \tilde{g} = e^{-f} g_0, conformal to the standard metric g_0. Then \tilde{M} = (\mathbb{E}^8, J, \tilde{g}) is a globally conformal Kähler manifold, and the metric induced on M from \tilde{M} is the warped product metric
+ \tilde{g}_M = \tilde{g}_1 + e^{-f} g_2,   (6.5)
+ where
+ \tilde{g}_1 = e^{-f} g_1   (6.6)
+ is conformal to g_1 by the choice of f.
+ Hence, from Theorem 6.1, (M, \tilde{g}_M) is a proper pointwise bi-slant warped product submanifold of \tilde{M} = (\mathbb{E}^8, J, \tilde{g}).
+ Also, as f = f(x_1, x_2, y_1, y_2) depends only on the coordinates x_1, x_2, y_1, y_2, it follows from (6.2) that, restricted to the submanifold M, the Lee form \omega of \tilde{M} is given (with Definition 2.1, where g_0 = e^{f}\tilde{g} is the Kähler metric, so that the sign is opposite to the one printed in the extracted text) by
+ \omega = -df = -\Big( \frac{\partial f}{\partial u_1}\, du_1 + \frac{\partial f}{\partial u_2}\, du_2 \Big).   (6.7)
+ Hence the Lee vector field B is orthogonal to D_{\theta_2}, and the warping function \lambda = e^{-f/2}|_M satisfies \mathrm{grad}(\ln\lambda) = -\tfrac{1}{2}\,\mathrm{grad}(f|_M) = \tfrac{1}{2} B^T.
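+ As a quick numerical sanity check (ours, not part of the original text): at a point with u_1 = 0 and u_2 = -\pi/6, the constraints u_1 u_2 \neq 1 and (u_1 - u_2) = \pi/6 \in (0, \pi/4) hold, and
+ \cos^2\theta_1 = \frac{(0 \cdot (-\pi/6) - 1)^2 \cos^2(\pi/6)}{(1 + 0)(1 + \pi^2/36)} = \frac{3/4}{1 + \pi^2/36} \approx 0.59,
+ so \theta_1 \approx 40^\circ, confirming that the slant function is proper (neither 0 nor \pi/2) and genuinely point-dependent.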
+ References
+ [1] Alghamdi, Fatimah; Chen, Bang-Yen; Uddin, Siraj. "Geometry of pointwise semi-slant warped products in locally conformal Kaehler manifolds." Results Math. 76 (2021), no. 4, Paper No. 204, 25 pp. MR4318461
+ [2] Atçeken, Mehmet. "Slant submanifolds of a Riemannian product manifold." Acta Math. Sci. Ser. B (Engl. Ed.) 30 (2010), no. 1, 215–224. MR2658956
+ [3] Atçeken, Mehmet; Hui, Shyamal Kumar. "Slant and pseudo-slant submanifolds in LCS-manifolds." Czechoslovak Math. J. 63(138) (2013), no. 1, 177–190. MR3035505
+ [4] Atçeken, Mehmet; Dirik, Süleyman. "Pseudo-slant submanifolds of a nearly Kenmotsu manifold." Serdica Math. J. 41 (2015), no. 2-3, 243–262. MR3363604
+ [5] Bejancu, Aurel. "CR submanifolds of a Kaehler manifold. I." Proc. Amer. Math. Soc. 69 (1978), no. 1, 135–142. MR0467630
+ [6] Bejancu, Aurel. "CR submanifolds of a Kaehler manifold. II." Trans. Amer. Math. Soc. 250 (1979), 333–345. MR0530059
+ [7] Bishop, R. L.; O'Neill, B. "Manifolds of negative curvature." Trans. Amer. Math. Soc. 145 (1969), 1–49. MR0251664
+ [8] Bonanzinga, Vittoria; Matsumoto, Koji. "Warped product CR-submanifolds in locally conformal Kaehler manifolds." Period. Math. Hungar. 48 (2004), no. 1-2, 207–221. MR2077697
+ [9] Cabrerizo, J. L.; Carriazo, A.; Fernández, L. M.; Fernández, M. "Semi-slant submanifolds of a Sasakian manifold." Geom. Dedicata 78 (1999), no. 2, 183–199. MR1722833
+ [10] Cabrerizo, J. L.; Carriazo, A.; Fernández, L. M.; Fernández, M. "Slant submanifolds in Sasakian manifolds." Glasg. Math. J. 42 (2000), no. 1, 125–138. MR1739684
+ [11] Chen, Bang-Yen. "CR-submanifolds of a Kaehler manifold. I." J. Differential Geometry 16 (1981), no. 2, 305–322. MR0638795
+ [12] Chen, Bang-Yen. "CR-submanifolds of a Kaehler manifold. II." J. Differential Geometry 16 (1981), no. 3, 493–509 (1982). MR0654640
+ [13] Chen, Bang-Yen. "Slant immersions." Bull. Austral. Math. Soc. 41 (1990), no. 1, 135–147. MR1043974
+ [14] Chen, Bang-Yen. "Geometry of slant submanifolds." Katholieke Universiteit Leuven, Louvain, 1990. 123 pp. MR1099374
+ [15] Chen, Bang-Yen. "Twisted product CR-submanifolds in Kaehler manifolds." Tamsui Oxf. J. Math. Sci. 16 (2000), no. 2, 105–121. MR1833002
+ [16] Chen, Bang-Yen. "Complex extensors, warped products and Lagrangian immersions." Soochow J. Math. 26 (2000), no. 1, 1–17. MR1755131
+ [17] Chen, Bang-Yen. "Geometry of warped product CR-submanifolds in Kaehler manifolds." Monatsh. Math. 133 (2001), no. 3, 177–195. MR1861136
+ [18] Chen, Bang-Yen. "Geometry of warped product CR-submanifolds in Kaehler manifolds. II." Monatsh. Math. 134 (2001), no. 2, 103–119. MR1878074
+ [19] Chen, Bang-Yen. "Geometry of warped products as Riemannian submanifolds and related problems." Soochow J. Math. 28 (2002), no. 2, 125–156. MR1897183
+ [20] Chen, Bang-Yen; Garay, Oscar J. "Pointwise slant submanifolds in almost Hermitian manifolds." Turkish J. Math. 36 (2012), no. 4, 630–640. MR2993593
+ [21] Goldberg, Samuel I.; Vaisman, Izu. "On compact locally conformal Kaehler manifolds with nonnegative sectional curvature." Ann. Fac. Sci. Toulouse Math. (5) 2 (1980), no. 2, 117–123. MR0595194
+ [22] Hiepko, Sönke. "Eine innere Kennzeichnung der verzerrten Produkte" (German). Math. Ann. 241 (1979), no. 3, 209–215. MR0535555
+ [23] Jamal, Nargis; Khan, Khalid Ali; Khan, Viqar Azam. "Generic warped product submanifolds of locally conformal Kaehler manifolds." Acta Math. Sci. Ser. B (Engl. Ed.) 30 (2010), no. 5, 1457–1468. MR2778614
+ [24] Li, Hongxia; Liu, Ximin. "Semi-slant submanifolds of a locally product manifold." Georgian Math. J. 12 (2005), no. 2, 273–282. MR2174183
+ [25] Matsumoto, Koji; Bonanzinga, Vittoria. "Doubly warped product CR-submanifolds in a locally conformal Kaehler space form. II." An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.) 53 (2007), suppl. 1, 235–248. MR2522397
+ [26] Matsumoto, Koji; Bonanzinga, Vittoria. "Doubly warped product CR-submanifolds in a locally conformal Kaehler space form." Acta Math. Acad. Paedagog. Nyházi. (N.S.) 24 (2008), no. 1, 93–102. MR2430238
+ [27] Nölker, Stefan. "Isometric immersions of warped products." Differential Geom. Appl. 6 (1996), no. 1, 1–30. MR1384876
+ [28] Papaghiuc, Neculai. "Semi-slant submanifolds of a Kaehlerian manifold." An. Ştiinţ. Univ. Al. I. Cuza Iaşi Secţ. I a Mat. 40 (1994), no. 1, 55–61. MR1328947
+ [29] Sahin, Bayram. "Slant submanifolds of an almost product Riemannian manifold." J. Korean Math. Soc. 43 (2006), no. 4, 717–732. MR2234930
+ [30] Taştan, Hakan Mete; Gerdan, Sibel. "Hemi-slant submanifolds of a locally conformal Kähler manifold." Int. Electron. J. Geom. 8 (2015), no. 2, 46–56. MR3418457
+ [31] Taştan, Hakan Mete; Özdemir, Fatma. "The geometry of hemi-slant submanifolds of a locally product Riemannian manifold." Turkish J. Math. 39 (2015), no. 2, 268–284. MR3311690
+ [32] Taştan, H. M.; Tripathi, M. M. "Semi-slant submanifolds of a locally conformal Kähler manifold." An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.) 62 (2016), no. 2, vol. 1, 337–347. MR3680211
+ [33] Vaisman, Izu. "On locally conformal almost Kähler manifolds." Israel J. Math. 24 (1976), no. 3-4, 338–351. MR0418003
+ [34] Vaisman, Izu. "Holomorphic vector fields on locally conformal Kähler manifolds." An. Ştiinţ. Univ. "Al. I. Cuza" Iaşi Secţ. I a Mat. (N.S.) 24 (1978), no. 2, 357–362. MR0533764
+ [35] Vaisman, Izu. "Locally conformal Kähler manifolds with parallel Lee form." Rend. Mat. (6) 12 (1979), no. 2, 263–284. MR0557668
+ [36] Vaisman, Izu. "A theorem on compact locally conformal Kähler manifolds." Proc. Amer. Math. Soc. 75 (1979), no. 2, 279–283. MR0532151
+ [37] Vaisman, Izu. "On locally and globally conformal Kähler manifolds." Trans. Amer. Math. Soc. 262 (1980), no. 2, 533–542. MR0586733
+ [38] Vaisman, Izu. "Some curvature properties of locally conformal Kähler manifolds." Trans. Amer. Math. Soc. 259 (1980), no. 2, 439–447. MR0567089
3NE4T4oBgHgl3EQf0Q0i/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
3dFAT4oBgHgl3EQfERwH/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb0166223196d7c2fc2844ff8ab16d0c92069cb2fcf2b121e78fae76b97bf78c
3
+ size 6881325
4NFST4oBgHgl3EQfZjjS/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ca57ef941567ba8d238586dfb980ff4d79882ff2600769fbbe44279d8b4fa94
3
+ size 2949165
69AzT4oBgHgl3EQf-P6_/content/tmp_files/2301.01932v1.pdf.txt ADDED
@@ -0,0 +1,689 @@
+ PA-GM: POSITION-AWARE LEARNING OF EMBEDDING NETWORKS FOR DEEP GRAPH MATCHING
+ Dongdong Chen1, Yuxing Dai2, Lichi Zhang1, Zhihong Zhang2, Edwin R. Hancock3
+ 1 Shanghai Jiao Tong University
+ 2 Xiamen University
+ 3 University of York
+ ABSTRACT
+ Graph matching can be formalized as a combinatorial optimization problem, where there are corresponding relationships between pairs of nodes that can be represented as edges. This problem becomes challenging when there are potential ambiguities present due to nodes and edges with high similarity, and there is a need to find accurate results for similar content matching. In this paper, we introduce a novel end-to-end neural network that can map the linear assignment problem into a high-dimensional space augmented with node-level relative position information, which is crucial for improving the method's performance for similar content matching. Our model constructs the anchor set for the relative position of nodes, and then aggregates the feature information of the target node and each anchor node based on a measure of relative position. It then learns the node feature representation by integrating the topological structure and the relative position information, thus realizing the linear assignment between the two graphs. To verify the effectiveness and generalizability of our method, we conduct graph matching experiments, including cross-category matching, on different real-world datasets. Comparisons with different baselines demonstrate the superiority of our method. Our source code is available under https://github.com/anonymous.
+ Index Terms— Graph Matching, Graph Embedding, Deep Neural Network
+ arXiv:2301.01932v1 [cs.CV] 5 Jan 2023
+ 1. INTRODUCTION
+ Graph matching aims to establish the relationship between two or more graphs based on information derived from their nodes and edges [1]. Due to its ability to better express and encode data relationships, graph matching has gained considerable traction in computer vision and related fields. Applications of graph matching techniques include image registration in medical image analysis [2], link analysis in social networks [3] and image extrapolation in computer vision [4]. Current methods for solving the graph matching problem can be divided into two main classes: (a) learning-free and (b) learning-based [5, 6]. In the first case, the critical element is the mathematical method used for obtaining an approximate solution to an intrinsically NP-hard problem. Learning-based methods, on the other hand, aim to improve the solver with a number of techniques, including deep neural networks.
+ Fig. 1. A failure case: the positions of the dog's ears are mismatched between the two images.
+ However, these methods can only compute a similarity score for the whole graph, or rely on inefficient global matching procedures [7]. For example, [8] only considers the embedding of local information for nodes in the graph, leading to a tendency to inconsistently match similar nodes from different regions of the graph, and thus to ambiguities. As illustrated in Fig. 1, a node embedding representation that relies only on local structural information and semantic node information lacks sufficient discrimination to resolve these ambiguities; as a result, it is difficult to distinguish the left and right ears of the dog in the example. Relative positional information is therefore key to graph matching, especially when the graphs have similar semantic and structural content.
+ In this paper, we introduce the idea of position-awareness to solve the above-mentioned problem in graph matching. We first construct the graph as input to the model, with node features extracted from an image, and then extract a collection of anchor points as reference coordinates for each node in the graph. We further propose a corresponding position-aware node embedding algorithm to capture the relative positional information of the nodes in the graph. With the node-wise graph embedding to hand, we identify a node permutation for node-to-node correspondence.
80
+ The main contributions of this paper are as follows:
+ • We propose a novel Position-Aware Learning of Embedding Network
+ for Deep Graph Matching (PA-GM). To the best of our knowledge, no
+ existing methods consider learning an embedding augmented with
+ relative positional information in graph matching tasks, a
+ restriction that hampers their matching accuracy.
+ • We extract the relative positional information of the nodes when
+ constructing the node embedding, instead of using the traditional
+ neighborhood aggregation approach. We fully consider the positional
+ information relevant to the matching of keypoints, which is shown
+ to be effective in our experiments.
+ • The proposed framework is both scalable and flexible, and
+ benefits from the use of a relative position coefficient together
+ with an alignment loss. Experiments show that the model achieves
+ state-of-the-art results on real-world datasets.
+ 2. METHODS
+ In this paper, we intend to resolve the graph matching problem
+ based on the supervised matching of graphs. Specifically, we aim to
+ learn an end-to-end model which can extract graph information and
+ matches from given pair-wise ground-truth correspondences for a set
+ of graphs, and which can be further generalized to unseen graph
+ pairs. The overall pipeline for graph matching using the
+ position-aware embedding network is presented in Fig. 2.
+ 2.1. Problem Definition
+ In this paper, we denote a graph by the triple G = (V, A, X), which
+ consists of a finite set of nodes V, an adjacency matrix A, and a
+ set of node attributes X extracted from images using CNN-based
+ models [9, 10]. We construct a vector v ∈ {0, 1}^{nm×1} to indicate
+ the match of vertices in the source graph Gs = (Vs, As, Xs) and the
+ target graph Gt = (Vt, At, Xt). The vector has elements v_{i,j} = 1
+ if vertex i ∈ Vs is matched to vertex j ∈ Vt, and v_{i,j} = 0
+ otherwise. It is worth noting that all the vertex matches are
+ subject to the one-to-one mapping constraints
+ Σ_{j∈Vt} v_{i,j} = 1 ∀i ∈ Vs and Σ_{i∈Vs} v_{i,j} ≤ 1 ∀j ∈ Vt.
+ Furthermore, we construct a square symmetric positive matrix
+ M ∈ R^{nm×nm} as the affinity matrix, to encode the edge-to-edge
+ affinity between the two graphs in the off-diagonal elements. In
+ this way, the two-graph matching between Gs and Gt can be
+ formulated as an edge-preserving, quadratic assignment programming
+ (QAP) problem [11]:
+
+     argmax_v  v^⊤ M v
+     s.t.  Σ_{j∈Vt} v_{i,j} = 1 ∀i ∈ Vs,
+           Σ_{i∈Vs} v_{i,j} ≤ 1 ∀j ∈ Vt,                          (1)
+
+ where v ∈ {0, 1}^{nm×1} and M ∈ R^{nm×nm}. The goal of graph
+ matching is to establish a correspondence between two attributed
+ graphs which minimizes the sum of the local and geometric costs of
+ assignment between the vertices of the two graphs.
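+ To make the objective in Eq. (1) concrete, the following minimal
+ Python sketch (ours, not part of the original paper; the graph
+ sizes and the random affinity matrix are assumptions) evaluates
+ v^⊤ M v by brute force over all injective assignments of a toy
+ problem. The exponential search space is what motivates learned
+ solvers.
+
+     # Illustrative sketch (not from the paper): brute-force
+     # evaluation of the QAP objective in Eq. (1) for tiny graphs.
+     import itertools
+     import numpy as np
+
+     n, m = 4, 5                    # |Vs| = n source, |Vt| = m target
+     rng = np.random.default_rng(0)
+     M = rng.random((n * m, n * m)) # toy affinity between candidates
+     M = (M + M.T) / 2              # symmetrize the affinity matrix
+
+     def objective(assign):
+         """assign[i] = j: source node i is matched to target j."""
+         v = np.zeros(n * m)
+         for i, j in enumerate(assign):
+             v[i * m + j] = 1.0     # one-hot encoding of the matching
+         return v @ M @ v           # v^T M v from Eq. (1)
+
+     # Injective assignments Vs -> Vt satisfy both constraints.
+     best = max(itertools.permutations(range(m), n), key=objective)
+     print("best assignment:", best, "score:", objective(best))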
+ 2.2. Position-Aware Node Embedding
+ Pixels at neighboring positions in an image convey similar semantic
+ information, and we therefore need to distinguish them to resolve
+ matching ambiguities. Here, we refer to the nodes used as reference
+ positions as anchors, and propose an effective strategy that
+ constructs an anchor set P consisting of all nodes in the graph,
+ denoted by P = V, which serves as a stable reference for all nodes.
+ To combine information from the nodes and the anchors, we design
+ the information aggregation mechanism shown in Fig. 3.
+ Considering that anchors at different relative distances transmit
+ different amounts of effective information to a node, we first
+ compute a relative position coefficient q_{v,u} between a pair of
+ nodes v and u as:
+
+     q_{v,u} = e^{−d_{v,u}} / Σ_{i∈V} e^{−d_{v,i}},                (2)
+
+ where d_{v,u} is the shortest path distance between the two nodes.
+ In practice, to reduce the time complexity of computing shortest
+ paths and to limit interference from anchor points that are distant
+ from the node under consideration, we demand that the shortest path
+ distance does not exceed a ceiling value r; otherwise, the distance
+ is treated as infinite, so the corresponding coefficient vanishes.
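+ As a concrete illustration of Eq. (2), the Python sketch below
+ (ours, not from the paper) computes the coefficients with
+ breadth-first search, capping distances at the ceiling r so that
+ distant anchors receive zero weight; the adjacency-list input
+ format and the value r = 3 are assumptions.
+
+     # Illustrative sketch (not from the paper): relative position
+     # coefficients q[v, u] from Eq. (2) with distances capped at r.
+     from collections import deque
+     import numpy as np
+
+     def position_coefficients(adj, r=3):
+         """adj: list of neighbor lists; rows of q sum to 1."""
+         n = len(adj)
+         d = np.full((n, n), np.inf)   # beyond r, distances stay inf
+         for s in range(n):            # BFS from every node (P = V)
+             d[s, s] = 0.0
+             queue = deque([s])
+             while queue:
+                 x = queue.popleft()
+                 if d[s, x] >= r:      # stop expanding past ceiling r
+                     continue
+                 for y in adj[x]:
+                     if d[s, y] == np.inf:
+                         d[s, y] = d[s, x] + 1
+                         queue.append(y)
+         w = np.exp(-d)                # e^{-inf} = 0: far anchors vanish
+         return w / w.sum(axis=1, keepdims=True)
+
+     # Toy 5-node path graph: coefficients decay with hop distance.
+     adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
+     print(position_coefficients(adj).round(3))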
+ We then compute the information aggregation function I(v, u)^(l)
+ for the l-th layer between nodes v and u as:
+
+     I(v, u)^(l) = q_{v,u} CONCAT(h_v^(l−1), h_u^(l−1)),           (3)
+
+ where h_v^(l−1) and h_u^(l−1) are the feature representation and
+ the position representation in the (l−1)-th layer, combined through
+ the message aggregation function. We further aggregate the
+ information from all node–anchor pairs to obtain a new
+ representation in a high-dimensional space using a non-linear
+ transformation. This hidden representation can be computed using
+ the update
+
+     h_v^(l) = σ(AGG({I(v, u)^(l) | ∀u ∈ V}) W^(l)),               (4)
+ where AGG is typically a permutation-invariant function (e.g.,
+ sum), and W^(l) is a learnable weight vector for the l-th layer.
+ Fig. 2. Overview of the end-to-end position embedding networks for
+ deep graph matching. The blue source graph Gs and the green target
+ graph Gt are passed through the two position-aware node embedding
+ branches to extract high-level node-wise graph feature
+ representations.
+ Fig. 3. The position-aware embedding first aggregates the message
+ of each target node h_v and each anchor point h_{u_i} in the anchor
+ set via the aggregation function I(v, u), based on the relative
+ position coefficients q(v, u). The representation matrix is then
+ further aggregated using the learnable function AGG and finally
+ transformed in a non-linear way to obtain the feature output h_v
+ for the target node in the current layer.
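+ The following minimal Python sketch (ours, not from the paper)
+ implements one layer of Eqs. (3)–(4) with sum aggregation; the
+ feature dimensions, the choice of ReLU for σ, and the uniform
+ coefficients in the demo are assumptions.
+
+     # Illustrative sketch (not from the paper): one position-aware
+     # embedding layer implementing Eqs. (3)-(4).
+     import numpy as np
+
+     def embedding_layer(H, Q, W):
+         """H: (n, F) node features; Q: (n, n) coefficients from
+         Eq. (2); W: (2F, F') weights. Returns (n, F') features."""
+         n = H.shape[0]
+         out = np.zeros((n, W.shape[1]))
+         for v in range(n):
+             # Eq. (3): q_{v,u} * CONCAT(h_v, h_u) for every anchor u
+             pair = np.concatenate(
+                 [np.repeat(H[v][None, :], n, axis=0), H], axis=1)
+             msgs = Q[v][:, None] * pair
+             agg = msgs.sum(axis=0)          # AGG: invariant sum
+             out[v] = np.maximum(agg @ W, 0) # Eq. (4): sigma = ReLU
+         return out
+
+     rng = np.random.default_rng(0)
+     H = rng.random((5, 8))                  # 5 nodes, 8-dim features
+     W = rng.random((16, 4)) * 0.1           # 2F = 16 -> F' = 4
+     Q = np.full((5, 5), 0.2)                # uniform coefficients
+     print(embedding_layer(H, Q, W).shape)   # (5, 4)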
+ 2.3. Graph Feature Matching
+ Utilizing the proposed position-aware node embedding, our scheme
+ encodes each node, together with its position information, into a
+ high-level embedding space. In this way, we can simplify the
+ second-order affinity matrix of paired graphs to be a linear one
+ with position-aware learning of the embedding. With the final
+ hidden representations of the source graph Hs ∈ R^{n×F′} and the
+ target graph Ht ∈ R^{m×F′} to hand, we can obtain a soft
+ correspondence between these graphs as a node-wise affinity matrix
+ through the inner product of the embeddings of the two graphs being
+ matched. Specifically, to satisfy the condition that the source
+ graph maps injectively onto the target graph, we apply the Sinkhorn
+ normalization [12] to obtain a rectangular doubly-stochastic
+ correspondence matrix:
+
+     S = sinkhorn(Hs Ht^⊤).                                        (5)
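+ A minimal Python sketch of Eq. (5) follows (ours, not from the
+ paper): alternating row and column normalization of the
+ exponentiated inner-product scores. The iteration count and the
+ exponentiation of raw scores are assumptions, and for rectangular
+ matrices the row and column constraints are satisfied only
+ approximately.
+
+     # Illustrative sketch (not from the paper): Sinkhorn
+     # normalization of the inner-product affinities in Eq. (5).
+     import numpy as np
+
+     def sinkhorn(scores, n_iters=20, eps=1e-8):
+         """Alternately normalize rows and columns of exp(scores)."""
+         S = np.exp(scores)                            # positive entries
+         for _ in range(n_iters):
+             S = S / (S.sum(axis=1, keepdims=True) + eps)  # rows
+             S = S / (S.sum(axis=0, keepdims=True) + eps)  # columns
+         return S
+
+     rng = np.random.default_rng(0)
+     Hs, Ht = rng.random((4, 6)), rng.random((5, 6))   # n=4, m=5
+     S = sinkhorn(Hs @ Ht.T)                 # soft correspondence
+     print(S.shape, S.sum(axis=1).round(2))  # rows approx. sum to 1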
+ We further employ the cross-entropy loss function as the
+ permutation loss between the predicted permutation matrix and the
+ ground truth:
+
+     L = − Σ_{i∈Vs, j∈Vt} [ S^gt_{i,j} log S_{i,j}
+           + (1 − S^gt_{i,j}) log(1 − S_{i,j}) ],                  (6)
+
+ where S^gt is the ground truth permutation matrix. Consequently,
+ the cross-entropy loss based on linear assignments can be learned
+ end-to-end no matter the number of nodes and edges in the graph.
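+ For concreteness, the Python sketch below (ours, not from the
+ paper) evaluates Eq. (6) for a toy prediction; the clipping
+ constant for numerical stability is an assumption.
+
+     # Illustrative sketch (not from the paper): permutation
+     # cross-entropy loss of Eq. (6).
+     import numpy as np
+
+     def permutation_loss(S, S_gt, eps=1e-9):
+         """Binary cross-entropy summed over all node pairs."""
+         S = np.clip(S, eps, 1 - eps)        # numerical stability
+         return -np.sum(S_gt * np.log(S)
+                        + (1 - S_gt) * np.log(1 - S))
+
+     S_gt = np.eye(4)                        # toy ground truth
+     S = np.full((4, 4), 0.1) + 0.6 * np.eye(4)  # imperfect prediction
+     print(round(permutation_loss(S, S_gt), 3))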
+ 3. EXPERIMENTS
+ 3.1. Experimental Setting
+ Dataset. We use the Willow ObjectClass dataset [13]
+ (https://www.di.ens.fr/willow/research/graphlearning/), which
+ consists of 9963 images in total and is used as a benchmark by many
+ methods to measure the accuracy of image classification and
+ recognition algorithms. We also use the IMC-PT-SparseGM [14]
+ dataset, containing 16 object categories and 25061 images gathered
+ from 16 tourist attractions around the world.
+ Implementation Details. The experiments are conducted using two
+ GeForce GTX 1080 Ti GPUs. We employ a batch size of 8 in training
+ and evaluate the results of our method after 2000 epochs for each
+ iteration. We employ the Adam [15] optimizer to train our models
+ with a learning rate of 1×10^−4. To overcome the over-smoothing
+ problem common to graph neural network models, we adopt a two-layer
+ graph embedding and restrict the degree of smoothing in our
+ experiments.
+ Fig. 4. Confusion matrix analysis of cross-category generalization
+ on the Willow ObjectClass dataset. The y-axis represents the
+ categories used for training the model, and the x-axis the
+ categories used for testing. Panels (a) PIA-GM, (b) PCA-GM, and (c)
+ IPCA-GM compare the generalization ability of the baselines in
+ matching accuracy; (d) shows the method proposed in this paper
+ (PA-GM).
+ Table 1. Matching accuracy (%) on the Willow ObjectClass.
+ Method         Car    Duck   Face    M-bike  W-bottle  Mean
+ GMN [17]       67.90  76.70   99.80  69.20   83.10     79.34
+ NHGM [18]      86.50  72.20   99.90  79.30   89.40     85.50
+ CIE-H [19]     82.20  81.20  100.00  90.00   97.60     90.20
+ PIA-GM [8]     88.60  87.00  100.00  70.30   87.80     86.74
+ PCA-GM [8]     87.60  83.60  100.00  77.60   88.40     87.44
+ IPCA-GM [16]   90.40  88.60  100.00  83.00   88.30     90.06
+ PA-GM (ours)   92.70  91.30  100.00  84.50   93.80     92.46
+ Table 2. Matching accuracy (%) on the IMC-PT-SparseGM.
+ Method          Reichstag  Sacre coeur  St peters square  Mean
+ CIE-H [19]      42.24      28.47        30.78             33.83
+ PIA-GM [8]      71.46      41.31        42.64             51.80
+ PCA-GM [8]      69.38      39.86        42.10             50.40
+ IPCA-GM [16]    72.96      43.80        44.93             53.89
+ GANN-GM [20]    76.02      44.15        50.49             56.89
+ PA-GM (ours)    96.28      75.93        81.66             84.63
+ Metrics. We evaluate the graph matching capacity of the proposed
+ method using the matching accuracy, which is defined by
+
+     accuracy = Σ_{i,j} AND(S_{i,j}, S^gt_{i,j}) / N,
+
+ following [8, 16]. Note that S_{i,j} is the element of the
+ predicted permutation matrix representing the correspondence
+ between node i and node j from the two different graphs. Similarly,
+ S^gt_{i,j} is the corresponding ground-truth entry, and N is the
+ number of matching node pairs.
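+ The Python sketch below (ours, not from the paper) computes this
+ metric after discretizing the soft correspondence matrix S with a
+ row-wise argmax; the discretization step is an assumption, since it
+ is not specified here.
+
+     # Illustrative sketch (not from the paper): matching accuracy
+     # as defined above, with a row-wise argmax discretization.
+     import numpy as np
+
+     def matching_accuracy(S, S_gt):
+         """Fraction of ground-truth node pairs recovered."""
+         pred = np.zeros_like(S_gt)
+         pred[np.arange(S.shape[0]), S.argmax(axis=1)] = 1
+         return np.logical_and(pred, S_gt).sum() / S_gt.sum()
+
+     S_gt = np.eye(4)
+     S = np.array([[.7, .1, .1, .1], [.1, .6, .2, .1],
+                   [.2, .1, .6, .1], [.1, .2, .1, .6]])
+     print(matching_accuracy(S, S_gt))   # 1.0 on this toy example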
+ 3.2. Graph Matching Results on Real-world Datasets
+ Table 1 and Table 2 report the overall comparison of graph matching
+ accuracy. The CIE-H method excels on the M-bike and W-bottle
+ classes of the Willow ObjectClass dataset, while our method scores
+ highest on the remaining categories as well as in average accuracy.
+ By successfully incorporating position information, our model
+ achieves excellent average accuracy on both the Willow ObjectClass
+ and IMC-PT-SparseGM datasets. Based on these results, we conclude
+ that our method performs well on graph matching compared with
+ existing methods.
+ 3.3. Cross-category Generalization Study
+ To assess the robustness and generalization ability of our method
+ across different object categories, we further conduct a
+ cross-category generalization study. Specifically, we train on each
+ of the five classes separately, and then test the resulting model
+ on all five classes. Comparing the panels of Fig. 4, it is clear
+ that the generalization ability of the model framework that
+ incorporates position information is superior to that of the
+ alternative methods. In addition, the matching accuracy on face
+ data is generally rather high. This may be related to the
+ relatively simple backgrounds of these images and the lower
+ interference from noise.
+ 4. CONCLUSION
+ In this paper, we propose a novel deep learning method for graph
+ matching based on node-wise, position-aware embeddings of the
+ graphs being matched. In our experiments, the proposed method is
+ compared with alternative methods to demonstrate its robustness and
+ effectiveness, and comparisons with existing methods on real-world
+ datasets demonstrate its state-of-the-art performance. In future
+ work, we will explore different relative position strategies to
+ further improve graph matching.
+ 5. REFERENCES
+ [1] Steven Gold and Anand Rangarajan, “A graduated assignment
+ algorithm for graph matching,” IEEE Transactions on Pattern
+ Analysis and Machine Intelligence, vol. 18, no. 4, pp. 377–388,
+ 1996.
+ [2] Kexin Deng, Jie Tian, Jian Zheng, Xing Zhang, Xiaoqian Dai, and
+ Min Xu, “Retinal fundus image registration via vascular structure
+ graph matching,” International Journal of Biomedical Imaging, vol.
+ 2010, 2010.
+ [3] Jiawei Zhang and S Yu Philip, “Multiple anonymized social
+ networks alignment,” in 2015 IEEE International Conference on Data
+ Mining. IEEE, 2015, pp. 599–608.
+ [4] Miao Wang, Yu-Kun Lai, Yuan Liang, Ralph R Martin, and Shi-Min
+ Hu, “Biggerpicture: data-driven image extrapolation using graph
+ matching,” ACM Transactions on Graphics, vol. 33, no. 6, 2014.
+ [5] Deepti Pachauri, Risi Kondor, and Vikas Singh, “Solving the
+ multi-way matching problem by permutation synchronization,” in
+ Advances in Neural Information Processing Systems, 2013, pp.
+ 1860–1868.
+ [6] Junchi Yan, Shuang Yang, and Edwin R Hancock, “Learning for
+ graph matching and related combinatorial optimization problems,” in
+ Proceedings of the Twenty-Ninth International Joint Conference on
+ Artificial Intelligence, IJCAI-20, 2020, pp. 4988–4996.
+ [7] Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang,
+ and Dong Yu, “Cross-lingual knowledge graph alignment via graph
+ matching neural network,” in Proceedings of the 57th Annual Meeting
+ of the Association for Computational Linguistics, 2019.
+ [8] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Learning
+ combinatorial embedding networks for deep graph matching,” in IEEE
+ International Conference on Computer Vision, 2019, pp. 3056–3065.
+ [9] Karen Simonyan and Andrew Zisserman, “Very deep convolutional
+ networks for large-scale image recognition,” in 3rd International
+ Conference on Learning Representations, 2015.
+ [10] Hussam Qassim, Abhishek Verma, and David Feinzimer,
+ “Compressed residual-VGG16 CNN model for big data places image
+ recognition,” in 2018 IEEE 8th Annual Computing and Communication
+ Workshop and Conference (CCWC). IEEE, 2018, pp. 169–175.
+ [11] Feng Zhou and Fernando De la Torre, “Factorized graph
+ matching,” in 2012 IEEE Conference on Computer Vision and Pattern
+ Recognition. IEEE, 2012, pp. 127–134.
+ [12] Richard Sinkhorn and Paul Knopp, “Concerning nonnegative
+ matrices and doubly stochastic matrices,” Pacific Journal of
+ Mathematics, vol. 21, no. 2, pp. 343–348, 1967.
+ [13] Minsu Cho, Karteek Alahari, and Jean Ponce, “Learning graphs
+ to match,” in International Conference on Computer Vision, 2013,
+ pp. 25–32.
+ [14] Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas,
+ Pascal Fua, Kwang Moo Yi, and Eduard Trulls, “Image matching across
+ wide baselines: From paper to practice,” International Journal of
+ Computer Vision, pp. 517–547, 2021.
+ [15] Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic
+ optimization,” in International Conference on Learning
+ Representations, 2015.
+ [16] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Combinatorial
+ learning of robust deep graph matching: an embedding based
+ approach,” IEEE Transactions on Pattern Analysis and Machine
+ Intelligence, 2020.
+ [17] A. Zanfir and C. Sminchisescu, “Deep learning of graph
+ matching,” in IEEE Conference on Computer Vision and Pattern
+ Recognition, 2018, pp. 2684–2693.
+ [18] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Neural graph
+ matching network: Learning Lawler's quadratic assignment problem
+ with extension to hypergraph and multiple-graph matching,” IEEE
+ Transactions on Pattern Analysis and Machine Intelligence, 2021.
+ [19] Tianshu Yu, Runzhong Wang, Junchi Yan, and Baoxin Li,
+ “Learning deep graph matching with channel-independent embedding
+ and Hungarian attention,” in International Conference on Learning
+ Representations, 2020.
+ [20] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Graduated
+ assignment for joint multi-graph matching and clustering with
+ application to unsupervised graph matching network learning,” in
+ NeurIPS, 2020.
69AzT4oBgHgl3EQf-P6_/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,371 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf,len=370
2
+ page_content='PA-GM: POSITION-AWARE LEARNING OF EMBEDDING NETWORKS FOR DEEP GRAPH MATCHING Dongdong Chen1, Yuxing Dai2, Lichi Zhang1, Zhihong Zhang2, Edwin R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
3
+ page_content=' Hancock3 1 Shanghai Jiao Tong University 2 Xiamen University 3 University of York ABSTRACT Graph matching can be formalized as a combinatorial opti- mization problem, where there are corresponding relation- ships between pairs of nodes that can be represented as edges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
4
+ page_content=' This problem becomes challenging when there are potential ambiguities present due to nodes and edges with high simi- larity, and there is a need to find accurate results for similar content matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
5
+ page_content=' In this paper, we introduce a novel end-to- end neural network that can map the linear assignment prob- lem into a high-dimensional space augmented with node-level relative position information, which is crucial for improving the method’s performance for similar content matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
6
+ page_content=' Our model constructs the anchor set for the relative position of nodes and then aggregates the feature information of the tar- get node and each anchor node based on a measure of rela- tive position.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
7
+ page_content=' It then learns the node feature representation by integrating the topological structure and the relative position information, thus realizing the linear assignment between the two graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
8
+ page_content=' To verify the effectiveness and generalizability of our method, we conduct graph matching experiments, includ- ing cross-category matching, on different real-world datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
9
+ page_content=' Comparisons with different baselines demonstrate the supe- riority of our method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
10
+ page_content=' Our source code is available under https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
11
+ page_content='com/anonymous.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
12
+ page_content=' Index Terms— Graph Matching, Graph Embedding, Deep Neural Network 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
13
+ page_content=' INTRODUCTION Graph matching aims to establish the relationship between two or more graphs based on information derived from their nodes and edges [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
14
+ page_content=' Due to its ability to better express and encode data relationships, graph matching has gained considerable traction in computer vision and related fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
15
+ page_content=' Applications of graph matching techniques include image registration in medical image analysis [2], link analysis in social networks [3] and image extrapolation in computer vision[4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
16
+ page_content=' Current methods for solving the graph matching problem can be divided into two main classes: a) learning- free or b) learning-based [5, 6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
17
+ page_content=' In the first case, the critical Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
18
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
19
+ page_content=' A failed example of mismatching the position of the animal dog’s ears in the two images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
20
+ page_content=' element is the mathematical method used for obtaining an approximate solution to an intrinsically NP-hard problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
21
+ page_content=' On the other hand, learning-based methods aim to improve the solver with a number of methods including deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
22
+ page_content=' However, these methods can only compute a similarity score for the whole graph, or rely on inefficient global match- ing procedures[7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
23
+ page_content=' For example, [8] only considers the em- bedding of local information for nodes in the graph, leading to a tendency to inconsistently match similar nodes from dif- ferent regions of the graph, and thus resulted to have ambigu- ities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
24
+ page_content=' As is illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
25
+ page_content=' 1, it shows that the node em- bedding representation, which relies only on local structural information and semantic node information, lacks sufficient discrimination to effectively resolve these ambiguities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
26
+ page_content=' As a result, it is difficult to distinguish the left and right ears of the dog in the example.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
27
+ page_content=' From this, relative positional infor- mation is the key to the matching of diagrams, especially in cases where the graphs have similar semantic and structural content.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
28
+ page_content=' In this paper, we introduce the idea of position-awareness to solve the above-mentioned problem in graph matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
29
+ page_content=' We first construct the graph as input to the model with node fea- tures extracted from an image, and then extract a collection of anchor points as reference coordinates for each node in the graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
30
+ page_content=' We further propose a corresponding position-aware node embedding algorithm to capture the relative positional arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
31
+ page_content='01932v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
32
+ page_content='CV] 5 Jan 2023 Xinformation of the nodes in the graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
33
+ page_content=' With the node-wise graph embedding to hand, we identify a node permutation for node-to-node correspondence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
34
+ page_content=' The main contributions of this paper are as follows: We propose a novel Position-Aware Learning of Em- bedding Network for Deep Graph Matching (PA-GM).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
35
+ page_content=' To the best of our knowledge, there are no methods that consider the learning of an embedding augmented with relative positional information in graph matching tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
36
+ page_content=' This restriction hampers their applications in terms of matching accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
37
+ page_content=' We extract the relative positional information for the nodes in constructing the node embedding instead of the traditional neighborhood aggregation approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
38
+ page_content=' We fully consider the positional information relevant to the matching of keypoints, which is proved to be effective in our experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
39
+ page_content=' The proposed framework is both scalable and flexible, and benefits from the use of a relative position coef- ficient together with an alignment loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
40
+ page_content=' Experiments show that the model achieves state-of-the-art results on real-world datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
41
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
42
+ page_content=' METHODS In this paper, we intend to resolve the graph matching prob- lem based on the supervised matching of graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
43
+ page_content=' Specifi- cally, we aim to learn an end-to-end model which can extract graph information and their matches through given pair-wise ground-truth correspondences for a set of graphs, and which can be further generalized to unseen graph pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
44
+ page_content=' The overall pipeline for graph matching using the position-aware embed- ding network is presented in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
45
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
46
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
47
+ page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
48
+ page_content=' Problem Definition In this paper, we denote a graph by the triple G = (V, A, X) which consists of a finite set of nodes V , an adjacency ma- trix A, and a set of node attributes X extracted from im- ages using CNN-based models[9, 10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
49
+ page_content=' We construct a vec- tor v ∈ {0, 1}nm×1 to indicate the match of vertices in the source graph Gs = (Vs, As, Xs) and the target graph Gt = (Vt, At, Xt).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
50
+ page_content=' The vector has elements vi,j = 1 if vertex i ∈ Vs is matched to vertex j ∈ Vt, and vi,j = 0 if other- wise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
51
+ page_content=' It is worth noting that all the vertex matches are subject to one-to-one mapping constraints � j∈Vt vi,j = 1 ∀i ∈ Vs and � i∈Vs vi,j ≤ 1 ∀j ∈ Vt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
52
+ page_content=' Furthermore, we construct a square symmetric positive matrix M ∈ Rnm×nm as the affin- ity matrix, to encode the edge-to-edge affinity between two graphs in the off-diagonal elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
53
+ page_content=' In this way, the two- graph matching between Gs and Gt can be formulated as an edge-preserving, quadratic assignment programming (QAP) problem [11]: argmax v v⊤Mv s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
54
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
55
+ page_content=' � j∈Vt vi,j = 1 ∀i ∈ Vs, � i∈Vs vi,j ≤ 1 ∀j ∈ Vt, (1) where v ∈ {0, 1}nm×1 and M ∈ Rnm×nm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
56
+ page_content=' The goal of graph matching is to establish a correspondence between two attributed graphs, which minimizes the sum of local and ge- ometric costs of assignment between the vertices of the two graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
57
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
58
+ page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
59
+ page_content=' Position-Aware Node Embedding Pixels at neighboring positions in the image convey similar semantic information, therefore we need to distinguish them to resolve matching ambiguities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
60
+ page_content=' Here, we refer to the nodes used as reference positions named as anchors and propose an effective strategy to construct an anchor set P consisting of all nodes in the graph, denoted by P = V , serving as a sta- ble reference for all nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
61
+ page_content=' To combine information from the nodes and the anchors, we design the information aggregation mechanism as is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
62
+ page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
63
+ page_content=' Considering that the amount of effective information transmission to the nodes by the anchors with different rela- tive distances is different, we first compute a relative position coefficient qv,u between a pair of nodes v and u as: qv,u = e−dv,u � i∈V e−dv,i , (2) where dv,u is the shortest path distance between the two nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
64
+ page_content=' In practice, to effectively reduce the time complexity of calculating the shortest path and also to reduce interfer- ence from those anchor points that are distant from the node under consideration, we demand that the maximum shortest path distance does not exceed the ceiling value r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
65
+ page_content=' Otherwise, the value of the coefficient is set to infinity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
66
+ page_content=' We continue to compute the information aggregation function I(v, u)(l) for the l-th layer between node v and u as: I(v, u)(l) = qv,uCONCAT(h(l−1) v , h(l−1) u ), (3) where h(l−1) v and h(l−1) u are the feature representation and po- sition representation in the (l − 1)-th layer, and are combined through the message aggregation function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
67
+ page_content=' We further aggre- gate the information from all node-anchor pairs to obtaining a new representation in a high dimensional space using the non- linear variation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
68
+ page_content=' This hidden representation can be computed using the update h(l) v = σ(AGG(I(v, u)(l)| ∀u ∈ V )W (l)), (4) Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
69
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
70
+ page_content=' Overview of the end-to-end position embedding networks for deep graph matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
71
+ page_content=' The blue source graph Gs and the green target graph Gt are extracted with the node-wise graph feature representation in high-level through the two frameworks of position-aware node embedding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
72
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
73
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
74
+ page_content=' The position-aware embedding first aggregates the message of each target node hv and each anchor point hui in the anchor set via the aggregation function I(v, u) based on the relative position coefficients q(v, u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
75
+ page_content=' The representation matrix is then further aggregated using the learnable function AGG and is finally transformed in a non-linear way to obtain the feature output hv for the target node in the current layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
76
+ page_content=' where AGG is typically a permutation-invariant function (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
77
+ page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
78
+ page_content=', sum), and W (l) is a learnable weight vector for the l-th layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
79
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
80
+ page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
81
+ page_content=' Graph Feature Matching Utilizing the proposed position-aware node embedding, the proposed scheme encodes each node with position informa- tion into a high-level embedding space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
82
+ page_content=' In this way, we can simplify the second-order affinity matrix of paired graphs to a be a linear one with position-aware learning of the embed- ding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
83
+ page_content=' With the final hidden representation of the source graph Hs ∈ Rn∗F ′ and the target graph Ht ∈ Rm∗F ′ to hand, we can obtain a soft correspondence between these graphs with a node-wise affinity matrix through the inner product of the embeddings of the two graphs being matched.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
84
+ page_content=' Specifically, to satisfy the condition that the original graph maps injectively onto the target graph, we apply the Sinkhorn normalization [12] to obtain rectangular doubly-stochastic correspondence matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
85
+ page_content=' S = sinkhorn(HsHT t ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
86
+ page_content=' (5) We further employ the cross-entropy loss function as the permutation loss between the predicted permutation matrix and the ground truth: L = − � i∈Vs,j∈Vt � Sgt i,j log Si,j + � 1 − Sgt i,j � log (1 − Si,j) � , (6) where Sgt is the ground truth permutation matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
87
+ page_content=' Conse- quently, the cross-entropy loss based on linear allocations can be learned end-to-end no matter the number of nodes and edges in the graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
88
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
89
+ page_content=' EXPERIMENTS 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
90
+ page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
91
+ page_content=' Experimental Setting Dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
92
+ page_content=' We use the Willow Object Class [13], consisting of 9963 images in total and is used as a benchmark by many methods to measure the accuracy of image classification and recognition algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
93
+ page_content=' And IMC-PT-SparseGM [14] datasets, containing 16 object categories and 25061 images, which gather from 16 tourist attractions around the world.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
94
+ page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
95
+ page_content='di.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
96
+ page_content='ens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
97
+ page_content='fr/willow/research/graphlearning/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
98
+ page_content=' Implementation Details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
99
+ page_content=' The experiments are conducted using two GeForce GTX 1080 Ti GPUs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
100
+ page_content=' We employ a batch size of 8 in training and evaluate the results of our method af- ter 2000 epochs for each iteration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
101
+ page_content=' We employ the Adam [15] hs: hidden representation of source graph ht: hidden representation of target graph Position-Aware Node embedding Iteration 3 Ground truth ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
102
+ page_content=' (6) Update ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
103
+ page_content=' ① Invariant {1,2,3, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
104
+ page_content=', 8] Gs Anchor Matrix Anchor-set Embedding layer Hs HT ZAND Accuracy Position-Aware Node embedding Graph Feature Matching ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
105
+ page_content=' Iteration 4-3 ①2 4-3 6 ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
106
+ page_content=' ① Update Invariant {1,2,3, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
107
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
108
+ page_content=',8] Gt : Anchor-set Anchor Matrix Embedding layerq(v, hu?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
109
+ page_content=' AGG q(v, u3) W hv v,u1~p ,up) (V, qFig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
110
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
111
+ page_content=' Confusion matrix analysis of cross-category generalizations using the Willow ObjectClass dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
112
+ page_content=' The y-axis repre- sents categories of images used for training the model, and the x-axis represents categories of samples used to test.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
113
+ page_content=' Among them, (a), (b), and (c) are the comparison of the generalization ability of the aforementioned baseline in matching accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
114
+ page_content=' (d) represents the method proposed in this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
115
+ page_content=' Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
116
+ page_content=' Matching accuracy (%) on the Willow ObjectClass.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
117
+ page_content=' Method Car Duck Face M-bike W-bottle Mean GMN[17] 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
118
+ page_content='90 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
119
+ page_content='70 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
120
+ page_content='80 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
121
+ page_content='20 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
122
+ page_content='10 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
123
+ page_content='34 NHGM[18] 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
124
+ page_content='50 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
125
+ page_content='20 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
126
+ page_content='90 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
127
+ page_content='30 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
128
+ page_content='40 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
129
+ page_content='50 CIE-H[19] 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
130
+ page_content='20 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
131
+ page_content='20 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
132
+ page_content='00 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
133
+ page_content='00 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
134
+ page_content='60 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
135
+ page_content='20 PIA-GM[8] 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
136
+ page_content='60 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
137
+ page_content='00 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
138
+ page_content='00 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
139
+ page_content='30 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
140
+ page_content='(Table 1, continued: matching accuracy (%) on the Willow ObjectClass dataset; the first row below completes the row begun in the previous chunk.)
+ …80  86.74
+ PCA-GM[8]    87.60  83.60  100.00  77.60  88.40  | mean 87.44
+ IPCA-GM[16]  90.40  88.60  100.00  83.00  88.30  | mean 90.06
+ PA-GM (ours) 92.70  91.30  100.00  84.50  93.80  | mean 92.46' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
+ page_content=' Table 2. Matching accuracy (%) on the IMC-PT-SparseGM.
+ Method        Reichstag  Sacre coeur  St peters square  Mean
+ CIE-H[19]     42.24      28.47        30.78             33.83
+ PIA-GM[8]     71.46      41.31        42.64             51.80
+ PCA-GM[8]     69.38      39.86        42.10             50.40
+ IPCA-GM[16]   72.96      43.80        44.93             53.89
+ GANN-GM[20]   76.02      44.15        50.49             56.89
+ PA-GM (ours)  96.28      75.93        81.66             84.63' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
+ page_content=' optimizer to train our models with a learning rate of 1×10−4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
186
+ page_content=' To overcome the over-smoothing problem common to graph neural network models, we adopt a two-layer graph embedding and restrict the degree of smoothing in our experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
187
+ page_content=' Metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
188
+ page_content=' We evaluate the graph matching capacity of the proposed method using the matching accuracy, which is defined by accuracy = Σ_{i,j} AND(S_{i,j}, S^{gt}_{i,j}) / N, following [8, 16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
189
+ page_content=' Note that S_{i,j} is the element of the predicted permutation matrix representing the correspondence matching of node i and node j from two different graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
190
+ page_content=' Similarly, S^{gt}_{i,j} is the correspondence of the ground truth between two nodes, and N is the number of matching node pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
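+ (Illustrative sketch, not from the paper: the accuracy metric above in a few lines of Python; the 3-node matrices are hypothetical.)
+ import numpy as np
+
+ def matching_accuracy(S, S_gt):
+     # S, S_gt: (n, n) 0/1 permutation matrices; S[i, j] = 1 matches node i of
+     # the first graph to node j of the second graph
+     N = S_gt.sum()                  # number of ground-truth node pairs
+     correct = np.logical_and(S == 1, S_gt == 1).sum()
+     return correct / N
+
+ S = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])   # prediction (1 of 3 correct)
+ S_gt = np.eye(3, dtype=int)                        # ground truth
+ print(matching_accuracy(S, S_gt))                  # 0.333...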
191
+ page_content=' 3.2. Graph Matching Results on Real-world Datasets. Table 1 and Table 2 report the overall comparison of the performance results for graph matching accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
194
+ page_content=' The CIE-H method excels on the M-bike and W-bottle classes of the Willow ObjectClass dataset, while our method scores highest in the remaining categories and in average accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
195
+ page_content=' By successfully incorporating the position information, our model achieves excellent average accuracy on the Willow ObjectClass and IMC-PT-SparseGM datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
196
+ page_content=' Based on these results, it can be concluded that our method performs well on graph matching compared with existing methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
197
+ page_content=' 3.3. Cross-category Generalization Study. To assess the robustness and generalization ability of our method across different object categories, we further conduct a cross-category generalization study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
+ page_content=' Specifically, we train on each of the five classes separately, and then test the generalization ability of each resulting model on all five classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
201
+ page_content=' Comparing the elements of Fig. 4, it is clear that the generalization ability of the model framework that incorporates position information is superior to that of the alternative methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
+ page_content=' In addition, the matching accuracy on the face data is generally rather high, which may be related to the relatively simple background of the data and the weaker interference from noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
205
+ page_content=' 4. CONCLUSION. In this paper, we propose a novel deep learning method for graph matching based on node-wise embeddings between graphs that incorporate position-aware node embeddings. In the experiments, the proposed method is compared with alternative methods to demonstrate its robustness and effectiveness. Comparing our methodology to existing methods on real-world datasets demonstrates its state-of-the-art performance. Further improvements in graph matching will be pursued in future work by exploring different relative-position strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
210
+ page_content=' [Fig. 4, reconstructed caption; the per-cell numbers in the original extraction are figure residue and are omitted. Four cross-category accuracy matrices are shown: (a) PIA-GM, (b) PCA-GM, (c) IPCA-GM, (d) PA-GM, each over the five Willow ObjectClass categories (Winebottle, Motorbike, Face, Duck, Car) as training/testing classes, with per-panel diagonal and overall mean accuracy annotations.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
326
+ page_content=' 5. REFERENCES
+ [1] Steven Gold and Anand Rangarajan, “A graduated assignment algorithm for graph matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 4, pp. 377–388, 1996.
+ [2] Kexin Deng, Jie Tian, Jian Zheng, Xing Zhang, Xiaoqian Dai, and Min Xu, “Retinal fundus image registration via vascular structure graph matching,” International Journal of Biomedical Imaging, vol. 2010, 2010.
+ [3] Jiawei Zhang and Philip S. Yu, “Multiple anonymized social networks alignment,” in 2015 IEEE International Conference on Data Mining. IEEE, 2015, pp. 599–608.
+ [4] Miao Wang, Yu-Kun Lai, Yuan Liang, Ralph R. Martin, and Shi-Min Hu, “Biggerpicture: Data-driven image extrapolation using graph matching,” ACM Transactions on Graphics, vol. 33, no. 6, 2014.
+ [5] Deepti Pachauri, Risi Kondor, and Vikas Singh, “Solving the multi-way matching problem by permutation synchronization,” in Advances in Neural Information Processing Systems, 2013, pp. 1860–1868.
+ [6] Junchi Yan, Shuang Yang, and Edwin R. Hancock, “Learning for graph matching and related combinatorial optimization problems,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, 2020, pp. 4988–4996.
+ [7] Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu, “Cross-lingual knowledge graph alignment via graph matching neural network,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
+ [8] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Learning combinatorial embedding networks for deep graph matching,” in IEEE International Conference on Computer Vision, 2019, pp. 3056–3065.
+ [9] Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” in 3rd International Conference on Learning Representations, 2015.
+ [10] Hussam Qassim, Abhishek Verma, and David Feinzimer, “Compressed residual-VGG16 CNN model for big data places image recognition,” in 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2018, pp. 169–175.
+ [11] Feng Zhou and Fernando De la Torre, “Factorized graph matching,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 127–134.
+ [12] Richard Sinkhorn and Paul Knopp, “Concerning nonnegative matrices and doubly stochastic matrices,” Pacific Journal of Mathematics, vol. 21, no. 2, pp. 343–348, 1967.
+ [13] Minsu Cho, Karteek Alahari, and Jean Ponce, “Learning graphs to match,” in International Conference on Computer Vision, 2013, pp. 25–32.
+ [14] Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas, Pascal Fua, Kwang Moo Yi, and Eduard Trulls, “Image matching across wide baselines: From paper to practice,” International Journal of Computer Vision, pp. 517–547, 2021.
+ [15] Diederik P. Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations, 2015.
+ [16] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Combinatorial learning of robust deep graph matching: An embedding based approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
+ [17] A. Zanfir and C. Sminchisescu, “Deep learning of graph matching,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2684–2693.
+ [18] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Neural graph matching network: Learning Lawler’s quadratic assignment problem with extension to hypergraph and multiple-graph matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
+ [19] Tianshu Yu, Runzhong Wang, Junchi Yan, and Baoxin Li, “Learning deep graph matching with channel-independent embedding and Hungarian attention,” in International Conference on Learning Representations, 2020.
+ [20] Runzhong Wang, Junchi Yan, and Xiaokang Yang, “Graduated assignment for joint multi-graph matching and clustering with application to unsupervised graph matching network learning,” in NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/69AzT4oBgHgl3EQf-P6_/content/2301.01932v1.pdf'}
6NE5T4oBgHgl3EQfPg5N/content/2301.05505v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0a3e156d8946597fb76fb6c562ff589ce21d28e3caa33b418d4483c05a22b45
3
+ size 410692
6NE5T4oBgHgl3EQfPg5N/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e2ebe165ff093b4d5f77505313fac75a9e9c617428425f6a54e04e485883845
3
+ size 111493
7dE1T4oBgHgl3EQfTgPr/content/tmp_files/2301.03080v1.pdf.txt ADDED
@@ -0,0 +1,2589 @@
1
+ arXiv:2301.03080v1 [math.NA] 8 Jan 2023
2
+ Perron-Frobenius operator filter for stochastic dynamical systems
3
+ Ningxin Liu∗
4
+ Lijian Jiang†
5
+ Abstract
6
+ The filtering problems are derived from a sequential minimization of a quadratic function
7
+ representing a compromise between model and data.
8
+ In this paper, we use the Perron-
9
+ Frobenius operator for stochastic processes to develop a Perron-Frobenius operator filter. The
10
+ proposed method belongs to Bayesian filtering and works for non-Gaussian distributions for
11
+ nonlinear stochastic dynamical systems. The recursion of the filtering can be characterized
12
+ by the composition of the Perron-Frobenius operator and the likelihood operator.
13
+ This gives a
14
+ significant connection between the Perron-Frobenius operator and Bayesian filtering. We
15
+ numerically realize the recursion by approximating the Perron-Frobenius operator with
16
+ Ulam’s method. In this way, the posterior measure is represented by a convex combination
17
+ of the indicator functions in Ulam’s method.
18
+ To get a low-rank approximation of the
19
+ Perron-Frobenius operator filter, we take a spectral decomposition for the posterior measure
20
+ by using the eigenfunctions of the discretized Perron-Frobenius operator. A convergence
21
+ analysis is carried out and shows that the Perron-Frobenius operator filter achieves a higher
22
+ convergence rate than the particle filter, which uses Dirac measures for the posterior. The
23
+ proposed method is explored for the data assimilation of stochastic dynamical systems.
24
+ A few numerical examples are presented to illustrate the advantage of the Perron-Frobenius
25
+ operator filter over the particle filter and the extended Kalman filter.
26
+ keywords: Perron-Frobenius operator, Bayesian filtering, stochastic dynamical systems,
27
+ particle filter
28
+ 1 Introduction
30
+ In recent years, the operator-based approach has been extensively exploited to analyze dy-
31
+ namical systems. The two primary candidates of the approach are Perron-Frobenius operator
32
+ and its dual operator, Koopman operator. Many data-driven methods have been developed
33
+ for numerical approximation of these operators. The two operators characterize the dynamical
+ system’s behavior from different perspectives. The Koopman operator is
35
+ used to study the evolution of observations, while Perron-Frobenius operator (PFO) charac-
36
+ terizes the transition of densities. Therefore, the PFO deals with the system’s uncertainties
37
+ ∗School of Mathematical Sciences, Tongji University, Shanghai 200092, China. ([email protected]).
38
+ †School of Mathematical Sciences, Tongji University, Shanghai 200092, China. ([email protected]).
+ 1
49
+
50
+ in the form of probability density functions of the state. In practice, it determines an abso-
51
+ lutely continuous probability measure preserved by a given measurable transformation on a
52
+ measure space.
53
+ The Perron-Frobenius operator has been widely used to characterize the global asymp-
54
+ totic behavior of dynamical systems derived from many different domains such as fluid dy-
55
+ namics [1], molecular dynamics [2], meteorology and atmospheric sciences [3], and to estimate
56
+ invariant sets or metastable sets with a toolbox like in [4]. It is of great interest to study the
57
+ invariant density of Perron-Frobenius operator [5] and design efficient numerical approaches.
58
+ Then one can apply ergodic theorems to the statistical properties of deterministic dynamical
59
+ systems.
60
+ Since PFO is able to transport density of a Markov process, its approximation is necessary
61
+ for numerical model transition probability of the Markov process. Many different numerical
62
+ methods [6], such as Ulam’s method and Petrov-Galerkin method, are proposed for approx-
63
+ imation of the Perron-Frobenius operator.
64
+ As the PFO operates on infinite-dimensional
65
+ spaces, it is natural to project it onto a finite-dimensional subspace spanned by suitable
66
+ basis functions. The projection is usually accomplished by Galerkin methods with weak ap-
67
+ proximation. It was originally proposed by Ulam [7], who suggested that one can study the
68
+ discrete Perron-Frobenius operator on the finite-dimensional subspace L1 of indicator func-
69
+ tions according to a finite partition of the region. Convergence analysis of Ulam’s method is
70
+ discussed in many literatures [8, 9].
71
+ The classical filtering problems in stochastic processes are investigated in [10, 11]. In the
72
+ paper, the models of filtering problems are considered with discrete-time and continuous-
73
+ time stochastic processes defined by the solutions of SDEs, which can model a majority of
74
+ stochastic dynamical systems in the real world. The filtering methods have been widely used
75
+ for geophysical applications, such as oceanography [12], oil recovery [13], atmospheric science,
76
+ and weather forecast [14]. Remarkably, as one of the filtering methods, Kalman filter [15] has
77
+ been well-known for low-dimensional engineering applications in linear Gaussian models, and
78
+ it has been also developed and utilized in many other fields [16–18]. For nonlinear problems,
79
+ the classical filters, such as 3DVAR [19], Extended Kalman filter [10] and Ensemble Kalman
80
+ filter [20], usually invoke a Gaussian ansatz.
81
+ They are often used in the scenarios with
82
+ small noisy observation and high dimensional spaces. However, these extensions rely on the
83
+ invocations of Gaussian assumption. As a sequential Monte Carlo method, particle filter [21]
84
+ is able to work well for the nonlinear and non-Gaussian filtering problems. It can be proved
85
+ to estimate true posterior filtering problems in the limit of large number of particles.
86
+ Although the particle filter (PF) can treat the nonlinear and non-Gaussian filtering prob-
87
+ lems, it has some limitations in practice. First of all, particle filter handles well in low-
88
+ dimensional systems, but it may occur degeneracy [22] when the systems have very large
89
+ scale. It means that the maximum of the weights associated with the sample ensemble con-
90
+ verges to one as the system dimension tends to infinity. To avoid degeneracy, it requires
91
+ a great number of particles that scales exponentially with the system dimension. This is a
92
+ manifestation of the curse of dimensionality. Resampling, adding jitter and localisation are
93
+ introduced to circumvent this curse of dimensionality and get the accurate estimation of high-
94
+ dimensional probability density functions (PDFs). The particle filter also does not perform
95
+ well in geophysical applications of data assimilation, because the data in these application
96
+ have strongly constraints on particle location [23]. This impacts on the filtering performance.
97
+ 2
98
+
99
+ Besides, the prior knowledge of the transition probability density functions in particle filter
100
+ is necessary to be known, and the efficient sampling methods such as acceptance-rejection
101
+ method and Markov chain Monte Carlo, need to be used for complicated density functions.
102
+ The sampling is particularly a challenge in high dimensional spaces. To overcome these diffi-
103
+ culties, we propose a Perron-Frobenius operator filter (PFOF), which does not use particles
104
+ and any sampling methods. The information of prior probability density is not required in
105
+ the method, which needs data information instead. The method works well in nonlinear and
106
+ non-Gaussian models.
107
+ In this paper, we propose PFOF to treat nonlinear stochastic filtering problems. The
108
+ method transfer filtering distribution with the Perron-Frobenius operator. For filtering prob-
109
+ lems, the update of filtering distribution involves two steps: predication and analysis. In
110
+ prediction, the density is transported with a transition density function given by a Markov
111
+ chain of the underlying system. In analysis, the density is corrected by Bayes’ rule under
112
+ the likelihood function given by observations. Thus, the update of filtering distribution can
113
+ be expressed as a composition of PFO and likelihood functions. In the simulation process,
114
+ Ulam’ method is used to discretize the PFO and project it onto a finite-dimensional space
115
+ of indicator functions. Hence the filtering density is also projected onto the subspace and
116
+ is represented by a linear convex combination of the indicator functions. The recursion of
117
+ filtering distribution is then expressed by a linear map of weights vectors associated with
118
+ the basis functions via the PFO and likelihood function. For the high dimensional problems,
119
+ we propose a low-rank PFOF (lr-PFOF) using a spectral decomposition. To this end, we
120
+ first use the eigenfunctions of the discretized PFO to represent the spectral decomposition of
121
+ the density functions. Then we make a truncation of the decomposition and use the eigen-
122
+ functions corresponding to the first few dominant eigenvalues. This can improve the online
123
+ assimilation efficiency. The idea of PFO is extended to the continuous-time filtering prob-
124
+ lems. In these problems, Zakai equation characterizes the transition of filtering density. We
125
+ utilize the approximation of the Perron-Frobenius operator to compute the Zaikai equation
126
+ and obtain the posterior density functions of the continuous-time filtering problems.
127
+ We compare the proposed method with the particle filter. For PFOF, we give an error
128
+ estimate in the total-variance distance between the approximated posterior measure and the
129
+ truth posterior. The estimate implies that PFOF achieves a convergence rate O( 1
130
+ N ), which
131
+ is faster than particle filters with the same number N of basis functions. Our numerical
132
+ results show that PFOF also renders better accuracy than that of extended Kalman filter.
133
+ The rest of the paper is organized as follows. In Section 2, we express the Bayesian
134
+ filter in terms of the Perron-Frobenius operator. In Section 3, we present the recursion of
135
+ the filtering empirical measure with an approximated Perron-Frobenius operator. Then we
136
+ derive PFOF as well as lr-PFOF, and analyze an error estimate subsequently.
137
+ PFOF is
138
+ also extended to the continuous-time filtering problems. A comprehensive comparison with
139
+ particle filter is give in Section 4. A few numerical results of stochastic filtering problems
140
+ are given in Section 5. Section 6 concludes the paper in a summary.
141
+ 3
142
+
143
+ 2
144
+ Preliminaries
145
+ We give a background review on Perron-Frobenius operator [24] (PFO) and Bayesian filter
146
+ in this section. The Perron-Frobenius operator transports the distributions over state space
147
+ and describes the stochastic behavior of the dynamical systems. The framework of Bayesian
148
+ filter is introduced and summarized as a recursive formula with PFO.
149
+ 2.1
150
+ Perron-Frobenius operator
151
+ Let X be a metric space, B the Borel-σ-algebra on X, and Φ : X → X a nonsingular
152
+ transformation. Let M denote the space of all finite measures on (X, B) and µ is a finite
153
+ measure. The phase space is defined on a measure space (X, B, µ). The Perron-Frobenius
154
+ operator P : M → M is a linear and infinite-dimensional operator defined by
155
+ Pµ(A) = µ(Φ−1(A)),
156
+ ∀A ∈ B.
157
+ (2.1)
158
+ The PFO is a linear, positive and non-expansive operator, and hence a Markov operator.
159
+ We can also track the action on distributions in the function space. In the paper, we denote
160
+ L1(X) := L1(X, B, µ). Let f ∈ L1(X) be the probability density function (PDF) of a X-
161
+ valued random variable x. Since Φ is a nonsingular with respect to µ, there is a g ∈ L1(X)
162
+ satisfying
163
+
164
+ Φ−1(A) fdµ =
165
+
166
+ A gdµ for all A ∈ B. Then g is the function characterizing the
167
+ distribution of Φ(x). The mapping f �→ g, defined uniquely by a linear operator P : L1(X) →
168
+ L1(X) :
169
+
170
+ A
171
+ Pf dµ =
172
+
173
+ Φ−1(A)
174
+ f dµ,
175
+ ∀A ∈ B.
176
+ (2.2)
177
+ The operator P is called the Perron-Frobenius operator. With the definition (2.1) and (2.2),
178
+ we make the connection between probability density function and the measure associated
179
+ with the PFO. When f is a probability density function with respect to an absolutely con-
180
+ tinuous probability measure µ ∈ M(X), g is another PDF with respect to the absolutely
181
+ continuous probability measure µ ◦ Φ−1. In addition, the measure µ ∈ M(X) is an invariant
182
+ measure of P when Pµ = µ holds.
183
+ Let Ψ : R+ × X → X be a nonsingular flow map for a deterministic continuous-time
184
+ dynamical system. Then Ψτ : X → X is nonsingular for each τ ∈ R+. The transfer operator
185
+ Pτ : L1(X) → L1(X) is time-dependent and has an analogous definition to (2.2), such that
186
+
187
+ A
188
+ Pτf dµ =
189
+
190
+ Ψ−1
191
+ τ
192
+ (A)
193
+ f dµ.
194
+ The {Pτ}τ≥0 forms a semigroup of the Perron-Frobenius operators. We note that {Pτ}τ≥0
195
+ has an infinitesimal generator AP F by Hille-Yosida Theorem.
196
+ Let us consider the Perron-Frobenius operator in stochastic dynamic systems and the
197
+ infinitesimal generator of PFO associated to the stochastic solution semiflow induced by
198
+ a stochastic dynamical equation (SDE). Let b : X → X and σ : X → X be smooth
199
+ time-invariant functions. Suppose that a stochastic process xt is the solution to the time-
200
+ homogeneous stochastic differential equation:
201
+ dxt = b(xt)dt + σ(xt)dWt,
202
+ x(t0) ∼ ρ0,
203
+ (2.3)
204
+ 4
205
+
206
+ where Wt is a standard Brownian motion. In this case, the distribution of the stochastic
207
+ process xt can be described by a semigroup of Perron-Frobenius operators {Pτ}τ>0 on L1(X).
208
+ The generator of {Pτ}τ>0 is a second-order differential operator on X. The PDE defined by
209
+ the generator describes the evolution of the probability density of xt.
210
+ Suppose that Φ is the mapping of the stochastic dynamical system (2.3) and Φ(x) is an
211
+ X-valued random variable over the probability space (X, B, µ). Given a stochastic transition
212
+ function pτ : X × B → [0, 1] induced by Φ, we consider probability measure µ translated
213
+ with a linear operator defined in terms of the transition function pτ(x, ·). The stochastic
214
+ PFO [25] Pτ : M → M is defined by
215
+ Pτµ(A) =
216
+
217
+ X
218
+ pτ(x, A) dµ(x),
219
+ ∀A ∈ B.
220
+ (2.4)
221
+ If pτ(x, ·) is absolutely continuous to µ for all x ∈ X, there exists a nonnegative transition
222
+ density function qτ : X × X → R with qτ(x, ·) ∈ L1(X) and
223
+ P(xt+τ ∈ A|xt = x) =
224
+
225
+ A
226
+ qτ(x, y)dµ(y),
227
+ ∀A ∈ B.
228
+ The transition density function is the infinite-dimensional counterpart of the transition ma-
229
+ trix for a Markov chain. Now we define the stochastic PFO associated with transition density.
230
+ If f ∈ L1(X) is a probability density function, the Perron-Frobenius semigroup of operators
231
+ Pτ : L1(X) → L1(X), τ > 0, is defined by
232
+ Pτf(y) =
233
+
234
+ X
235
+ qτ(x, y)f(x) dµ(x).
236
+ The PFO Pτ defined here translates the probability density function of xt with time. Let ρ
237
+ be a probability density. The infinitesimal generator AP F of Pτ is given by
238
+ AP Fρ = −∇ · (bρ) + 1
239
+ 2∇ · ∇ · (σσTρ).
240
+ We assume that �ρ : [0, ∞) × X → [0, ∞) is the probability density function of the solution
241
+ xt in (2.3) and ρ0 is the density function of the initial condition x0.
242
+ Then �ρ solves the
243
+ Fokker-Planck equation,
244
+
245
+
246
+
247
+ ∂�ρ
248
+ ∂t = −∇ · (b�ρ) + 1
249
+ 2∇ · ∇ · (σσT �ρ),
250
+ (t, x) ∈ (0, ∞) × X,
251
+ �ρ(0, x) = ρ0(x).
252
+ If the phase space X is compact and b ∈ C3(X, X), the equation has a unique solution,
253
+ which is given by
254
+ �ρ(t, x) = Ptρ0(x).
255
+ 2.2
256
+ Bayesian filter
257
+ In this section, we present the framework of Bayesian filter in discrete time from the per-
258
+ spective of Bayes’ rule.
259
+ In filtering problems, a state model and an observation model
260
+ 5
261
+
262
+ are combined to estimate the posterior distribution, which is a conditional distribution
263
+ of the state given by observation. Let us consider the dynamical model governed by the
264
+ flow Ψ ∈ C(Rn, Rn) with noisy observations y = {yj}j∈Z+ depending on the function
265
+ h(x) : Rn → Rp:
266
+
267
+ xj+1 = Ψ(xj) + ξj, j ∈ N, x0 ∼ ρ0,
268
+ yj+1 = h(xj+1) + ηj+1, j ∈ N,
269
+ (2.5)
270
+ where ξ := {ξj}j∈N is an i.i.d. sequence with ξj ∼ N(0, Σ) and η := {ηj}j∈Z+ is an i.i.d.
271
+ sequence with ηj ∼ N(0, R). Let Yj = {yl}j
272
+ l=1 denote the data up to time tj. The filtering
273
+ problem aims to determine the posterior PDF p(xj|Yj) of the random variable xj|Yj and the
274
+ sequential updating of the PDF as the data increases. The Bayesian filtering involves two
275
+ steps: prediction and analysis. It provides a derivation of p(xj+1|Yj+1) from p(xj|Yj). The
276
+ prediction is concerned with the map p(xj|Yj) �→ p(xj+1|Yj) and the analysis derives the map
277
+ p(xj+1|Yj) �→ p(xj+1|Yj+1) by Bayes’s formula.
278
+ Prediction
279
+ p(xj+1|Yj) =
280
+
281
+ Rn p(xj+1|Yj, xj)p(xj|Yj)dxj
282
+ =
283
+
284
+ Rn p(xj+1|xj)p(xj|Yj)dxj.
285
+ (2.6)
286
+ Note that p(xj+1|Yj, xj) = p(xj+1|xj), because Yj provides indirect information about deter-
287
+ mining xj+1. Since p(xj+1|xj) is specified by the underlying model (2.5) and
288
+ p(xj+1|xj) ∝ exp(−1
289
+ 2|Σ− 1
290
+ 2(xj+1 − Ψ(xj))|2),
291
+ (2.7)
292
+ the prediction provides the map from p(xj|Yj) to p(xj+1|Yj). Let �µjbe the prior probability
293
+ measure corresponding to the density p(xj|Yj−1) and µj be the posterior probability measure
294
+ on corresponding to the density p(xj|Yj). The stochastic process {xj, j ∈ N} of (2.5) is a
295
+ Markov chain with the transition kernel p(·, ·) determined by p(xj, xj+1) = p(xj+1|xj). Then
296
+ we can rewrite (2.6) as
297
+ �µj+1(·) = (Pµj)(·) :=
298
+
299
+ Rn p(xj, ·)µj(dxj) =
300
+
301
+ Rn p(xj, ·)dµ(xj)1.
302
+ (2.8)
303
+ In particular, the operator P coincides with the Perron-Frobenius operator defined in (2.4).
304
+ Analysis
305
+ p(xj+1|Yj+1) = p(xj+1|Yj, yj+1)
306
+ = p(yj+1|xj+1, Yj)p(xj+1|Yj)
307
+ p(yj+1|Yj)
308
+ = p(yj+1|xj+1)p(xj+1|Yj)
309
+ p(yj+1|Yj)
310
+ .
311
+ (2.9)
312
+ Note that p(yj+1|xj+1, Yj) = p(yj+1|xj+1) and Bayes’s formula is used in the second equality.
313
+ The likelihood function p(yj+1|xj+1) is determined by the observation model: p(yj+1|xj+1) ∝
314
+ 1Refer to [26], if the function f ∈ L1(X) on a measure space (X, B, µ) is said to be µ integrable, we have
315
+
316
+ fdµ =
317
+
318
+ f(x)µ(dx) =
319
+
320
+ f(x)dµ(x).
321
+ 6
322
+
323
+ exp(−1
324
+ 2|R− 1
325
+ 2(yj+1 − h(xj+1))|2). Let
326
+ gj(xj+1) := exp(−1
327
+ 2|R− 1
328
+ 2(yj+1 − h(xj+1))|2).
329
+ (2.10)
330
+ The analysis provides a map from p(xj+1|Yj) to p(xj+1|Yj+1), so we can represent the update
331
+ of the measure µj+1(·) by
332
+ µj+1(·) = (Lj�µj+1)(·) :=
333
+ gj(xj+1)�µj+1(·)
334
+
335
+ Rn gj(xj+1)�µj+1(·),
336
+ (2.11)
337
+ where the likelihood operator Lj is defined by
338
+ (Ljµ)(dx) =
339
+ gj(x)µ(dx)
340
+
341
+ Rn gj(x)µ(dx).
342
+ (2.12)
343
+ In general, the prediction and analysis provide the mapping from µj to µj+1. The prediction
344
+ maps µj to �µj+1 through the Perron-Frobenius operator P, while the analysis maps �µj+1 to
345
+ µj+1 through the likelihood operator Lj. Then we represent the µj+1 using formulas (2.8)
346
+ and (2.11), and summarize Bayesian filtering as
347
+ µj+1 = LjPµj.
348
+ (2.13)
349
+ The µ0 is assumed to be a known initial probability measure. We note that P does not
350
+ depend on j, because the prediction step is governed by the same Markov process at each
351
+ j. However, Lj depends on j because the different observations are used to compute the
352
+ likelihood at each j. In this way, the evolution of µj processes through a linear operator P
353
+ and a nonlinear operator Lj. The approximation of µj can be achieved by the numerical
354
+ iteration of (2.13).
355
+ 3
356
+ Bayesian filter in terms of Perron-Frobenius opera-
357
+ tor
358
+ It is noted that the Perron-Frobenius operator translates a probability density function
359
+ with time according to the flow of the dynamics. We extend the idea to filtering problems
360
+ to represent the transition of the posterior probability density function, i.e., the filtering
361
+ distribution.
362
+ Therefore, we propose a filtering method: Perron-Frobenius operator filter
363
+ (PFOF). In the proposed method, the density function is projected onto an approximation
364
+ subspace spanned by indicator functions. The fluctuation of the density function, which is
365
+ approximated by weights vector, is transferred by PFO and likelihood operator. Moreover,
366
+ we present a low-rank Perron-Frobenius operator filter (lr-PFOF), which is a modified version
367
+ of the PFOF.
368
+ 3.1
369
+ Perron-Frobenius operator filter
370
+ The iteration (2.13) is helpful to design a filter method. According to definition (2.8), the
371
+ operator P in the iteration is Perron-Frobenius operator corresponding to the flow Ψ of the
372
+ 7
373
+
374
+ model (2.5). Based on the idea, we propose a Perron-Frobenius operator filter, which utilizes
375
+ Ulam’s method [7] to approximate operator P in the iteration. In PFOF, we simply use P
376
+ for Pτ because the discrete time steps of the state model keep the same. In this manner, the
377
+ iteration of filtering distribution of PFOF becomes
378
+ µN
379
+ j+1 = LjPNµN
380
+ j ,
381
+ µN
382
+ 0 = µ0,
383
+ (3.14)
384
+ where PN calculated by the Ulam’s method is an approximation of P. Ulam’s method is
385
+ a Galerkin projection method to discretize the Perron-Frobenius operator. We first give a
386
+ discretisation of the phase space. Let B = {B1, · · · , BN} ⊂ B be a finite number of measure
387
+ boxes and a disjoint partition of phase space X.
388
+ The indicator function is a piecewise
389
+ constant function and is defined by
390
+ 1Bi(x) =
391
+
392
+ 1,
393
+ if x ∈ Bi,
394
+ 0,
395
+ otherwise.
396
+ (3.15)
397
+ Ulam proposed to use the space of a family of indicator functions {1B1, · · · , 1BN} as the
398
+ approximation space for the PFO. We define the projection space VN := span{1B1, · · · , 1BN}.
399
+ The VN ∈ L1(X) is regarded as an approximation subspace of L1(X). For each ρ ≥ 0 in
400
+ L1(X), we define the operator πN : L1(X) → VN by
401
+ πNρ =
402
+ N
403
+
404
+ i=1
405
+ ω(i)1Bi,
406
+ where
407
+ ω(i) :=
408
+
409
+ Bi ρ dµ
410
+ µ(Bi) .
411
+ (3.16)
412
+ Then πN is the projection onto VN. Due to the projection, we define the discretized PFO
413
+ PN as
414
+ PN = πN ◦ P.
415
+ We can represent the linear map PN|V 1
416
+ N : V 1
417
+ N → V 1
418
+ N, where V 1
419
+ N :=
420
+
421
+ f ∈ VN :
422
+
423
+ |f|dµ = 1
424
+
425
+ by
426
+ a matrix P N = (P N
427
+ ij ) ∈ RN×N whose entries P N
428
+ ij =
429
+ 1
430
+ µ(Bi)
431
+
432
+ Bi P1Bjdµ. The entries characterizes
433
+ the transition probability from the box Bi to box Bj under the flow map Ψ. We can show
434
+ P N
435
+ ij = µ(Bi ∩ Ψ−1(Bj))
436
+ µ(Bi)
437
+ .
438
+ (3.17)
439
+ A Markov chain for Ψ arises as the discretization PN of the PFO, and the Markov chain has
440
+ transition matrix P N. So the Ulam’s method can be described either in terms of the operator
441
+ PN or the matrix P N. By the projection πN, the density ρ can be expressed as a vector
442
+ W = [ω(1), · · · , ω(N)], where ω(i) is the weight of the basis function 1Bi. Since the entries P N
443
+ ij
444
+ represent the transition probability from Bi to Bj, they can be estimated by a Monte-Carlo
445
+ method, which gives a numerical realization of Ulam’s method [6]. We randomly choose a
446
+ large number of points {xl
447
+ i}n
448
+ l=1 in each Bi and count the number of times Ψ(xl
449
+ i) contained in
450
+ box Bj. Then P N
451
+ ij is calculated by
452
+ P N
453
+ ij ≈ P N
454
+ n,ij = 1
455
+ n
456
+ n
457
+
458
+ l=1
459
+ 1Bj(Ψ(xl
460
+ i)).
461
+ (3.18)
462
+ 8
463
+
464
+ The Monte-Carlo method is used as an approximation to the integrals (3.17). The conver-
465
+ gence of the Ulam’s method depends on the choice of the partition of the region and the
466
+ number of points. Based on indicator basis functions, we denote the the empirical density
467
+ in the PFOF with respect to the measure µN
468
+ j as
469
+ ρN
470
+ j (x) =
471
+ N
472
+
473
+ i=1
474
+ ω(i)
475
+ j 1Bi(x),
476
+ (3.19)
477
+ where 1Bi(·) is the indicator function defined by (3.15) and j represents the index of time
478
+ sequence.
479
+ In this way, the density ρN
480
+ j
481
+ can be represented by the vector of the weights
482
+ Wj = [ω(1)
483
+ j , · · · , ω(N)
484
+ j
485
+ ]. Suppose that Wj = [ω(1)
486
+ j , · · · , ω(N)
487
+ j
488
+ ] and Wj+1 = [ω(1)
489
+ j+1, · · · , ω(N)
490
+ j+1] are
491
+ separately the weights of πNρj and πNρj+1 := πNPρj. When the region is evenly divided,
492
+ the evolution of density functions becomes a Markov transition equation of the weights:
493
+ Wj+1 = WjP N,
494
+ (3.20)
495
+ where P N is the matrix form of discretized PFO. We consider the projection of the Pρj, i.e.,
496
+ πNPρj =
497
+ N
498
+
499
+ i=1
500
+ ω(i)
501
+ j+11Bi.
502
+ (3.21)
503
+ In addition, note that
504
+ πNPρj = πNP
505
+ N
506
+
507
+ i=1
508
+ ω(i)
509
+ j 1Bi =
510
+ N
511
+
512
+ i=1
513
+ ω(i)
514
+ j πN(P1Bi).
515
+ We denote πN(P1Bi) = �N
516
+ k=1 cik1Bk, where
517
+ cik =
518
+
519
+ X P(1Bi)1Bkdx
520
+ µ(Bk)
521
+ =
522
+
523
+ Bk P(1Bi)dx
524
+ µ(Bk)
525
+ =
526
+
527
+ Ψ−1(Bk) 1Bidx
528
+ µ(Bk)
529
+ = µ(Bi ∩ Ψ−1(Bk))
530
+ µ(Bk)
531
+ .
532
+ Then we have
533
+ πNPρj =
534
+ N
535
+
536
+ i=1
537
+ ω(i)
538
+ j
539
+ N
540
+
541
+ k=1
542
+ cik1Bk =
543
+ N
544
+
545
+ k=1
546
+ N
547
+
548
+ i=1
549
+ ω(i)
550
+ j cik1Bk
551
+ =
552
+ N
553
+
554
+ k=1
555
+ N
556
+
557
+ i=1
558
+ µ(Bi)
559
+ µ(Bk)ω(i)
560
+ j P N
561
+ ik 1Bk.
562
+ Comparing to (3.21), we get
563
+ ω(k)
564
+ j+1 =
565
+ N
566
+
567
+ i=1
568
+ µ(Bi)
569
+ µ(Bk)ω(i)
570
+ j P N
571
+ ik .
572
+ (3.22)
573
+ 9
574
+
575
+ Thus, if we give a uniform partition of the X, i.e., µ(Bi) = µ(Bj), ∀i, j ∈ N, then we get
576
+ the result (3.20). With the expression, we design the following prediction step and analysis
577
+ step to approximate the posterior distribution p(xj+1|Yj+1).
578
+ Prediction In this step, we give a set of boxes {B1, · · · , BN} ⊂ B, which is a uniform
579
+ partition of X, and denote the mass point of each box as x(i), i = 1, · · · , N. Define �
580
+ Wj =
581
+ [�ω(1)
582
+ j , · · · , �ω(N)
583
+ j
584
+ ] as the prior weight vector and Wj = [ω(1)
585
+ j , · · · , ω(N)
586
+ j
587
+ ] as the posterior weight
588
+ vector. In equation (2.8), we note that the prior density p(xj+1|Yj) is computed under the
589
+ linear operator P. To discretize the formula �µj+1 = Pµj, we build a map between the weights
590
+ of the density function,
591
+
592
+ Wj+1 = WjP N.
593
+ The formula contains the prior information of the underlying system (2.5) because the PFO
594
+ in the formula is defined by the transition kernel p of the system. With the basis functions
595
+ 1Bi(·), the empirical prior measure is given by
596
+ �µN
597
+ j+1 =
598
+ N
599
+
600
+ i=1
601
+ �ω(i)
602
+ j+11Bi(dx).
603
+ Analysis In this step, we derive the posterior measure µN
604
+ j+1. To achieve this, we apply
605
+ Bayes’s formula (2.9) on weights and update the weights by
606
+ ω(i)
607
+ j+1 = �ω(i)
608
+ j+1/(
609
+ N
610
+
611
+ n=1
612
+ �ω(n)
613
+ j+1),
614
+ �ω(i)
615
+ j+1 = gj(x(i))�ω(i)
616
+ j+1,
617
+ (3.23)
618
+ where gj(x) given by (2.10) denotes the likelihood function as before. Then the µN
619
+ j+1 approx-
620
+ imated by the indicator measure is given by
621
+ µN
622
+ j+1 =
623
+ N
624
+
625
+ i=1
626
+ ω(i)
627
+ j+11Bi(dx).
628
+ Note that we choose the mass point x(i) of each box Bi to calculate gj(x), i.e., the likelihood
629
+ function. It is a reasonable choice to approximate the likelihood function of the points in
630
+ the Bi. In both prediction step and analysis step, they only evolve weights {ω(i)
631
+ j }N
632
+ i=1 into
633
+ {ω(i)
634
+ j+1}N
635
+ i=1 via {�ω(i)
636
+ j+1}N
637
+ i=1, and provide a transform from µN
638
+ j to µN
639
+ j+1. The complete procedure is
640
+ summarized in Algorithm 2, named Perron-Frobenius operator filter. The algorithm consists
641
+ of two phases: offline phase to compute P N by Ulam’s method, and online phase to update
642
+ the approximation of the posterior measure. Besides, the standard Ulam’s method becomes
643
+ inefficient in high-dimensional dynamical systems due to the curse of dimensionality. For
644
+ this case, we may use the sparse Ulam method [27] instead. It constructs an optimal ap-
645
+ proximation subspace and costs less computational effort than the standard Ulam’s method
646
+ when a certain accuracy is imposed.
647
+ 10
648
+
649
+ Algorithm 1 Perron-Frobenius operator filter
650
+ Offline:
651
+ Compute P N by Ulam’s method
652
+ Online:
653
+ 1: Set j = 0 and µN
654
+ 0 (dx0) = µ0(dx0), compute ω(i)
655
+ 0 =
656
+
657
+ Bi µ0dx0
658
+ µ(Bi)
659
+ 2: Let Wj = [ω(1)
660
+ j , · · · , ω(N)
661
+ j
662
+ ], compute �
663
+ Wj+1 = WjP N
664
+ 3: Define �µN
665
+ j+1 = �N
666
+ i=1 �ω(i)
667
+ j+11Bi(x)
668
+ 4: Denote ω(i)
669
+ j+1 by (3.23), define µN
670
+ j+1 = �N
671
+ i=1 ω(i)
672
+ j+11Bi(x)
673
+ 5: j+1→ j
674
+ 6: Go to step 2
675
+ 3.2
676
+ Analysis of error estimate
677
+ We analyze the error estimate of the Perron-Frobenius operator filter in this section to
678
+ explore the factors, which determine convergence of the algorithm. Define a total-variation
679
+ distance d(·, ·) between two probability measures µ and ν as follows:
680
+ d(µ, ν) = 1
681
+ 2sup|f|∞≤1|Eµ(f) − Eν(f)|,
682
+ where Eµ(f) =
683
+
684
+ X f(x)µ(dx) for f ∈ L1(X) and |f|∞ = supx|f(x)|. The distance d(·, ·) can
685
+ also be characterized by the L1 norm of the difference between the two PDFs ρ and ρ′, which
686
+ correspond to the measure µ and ν, respectively, i.e.,
687
+ d(µ, ν) = 1
688
+ 2
689
+
690
+ X
691
+ |ρ(x) − ρ′(x)|dx.
692
+ (3.24)
693
+ The distance induces a metric. To estimate the error, we recall the iteration (3.14) and see
694
+ that the approximation error of the probability comes from the operator PN. To do this, we
695
+ need the following lemmas.
696
+ Lemma 3.1. (Theorem 4.8 in [23]) Suppose that P is the Perron-Frobenius operator defined
697
+ in (2.8). Let µ and ν be two arbitrary probability measures. Then
698
+ d(Pµ, Pν) ≤ d(µ, ν).
699
+ Lemma 3.2. (Lemma 4.9 in [23]) Let gj be the likelihood function defined by (2.10) and the
700
+ operator Lj defined by (2.12). Assume that there exists κ ∈ (0, 1] such that for all x ∈ X
701
+ and j ∈ N,
702
+ κ ≤ gj(x) ≤ κ−1.
703
+ (3.25)
704
+ Then we have
705
+ d(Ljµ, Ljν) ≤ 2κ−2d(µ, ν).
706
+ Lemma 3.3. (Theorem 2.4.1 in [28]) Let Ca be discrete Lipschitz cone defined as
707
+ Ca = {φ : φ(x)
708
+ φ(y) ≤ ea|x−y|, ∀x, y ∈ R}.
709
+ 11
710
+
711
+ For each N > 0, the πN given by (3.16) denotes the projection of L1 onto VN. Then for any
712
+ function f ∈ Ca,
713
+ ∥f − πNf∥L1 ≤ (ea/N − 1)∥f∥L1.
714
+ By the above three lemmas, we analyze total-variance distance between the approximate
715
+ measure µN
716
+ J and the true measure µJ and give the following theorem.
717
+ Theorem 3.4. If gj(x) satisfies the condition (3.25) and the probability density ρN
718
+ j of the
719
+ measure µN
720
+ j satisfies PρN
721
+ j ∈ Ca, ∀j ∈ N, then
722
+ d(µN
723
+ J , µJ) ≤
724
+ J
725
+
726
+ j=1
727
+ (2κ−2)j ea − 1
728
+ 2N
729
+ .
730
+ Proof. From the formula (2.13) and (3.14), we apply the triangle inequality to the distance
731
+ d(µN
732
+ j+1, µj+1) and get
733
+ d(µN
734
+ j+1, µj+1) = d(LjPNµN
735
+ j , LjPµj)
736
+ ≤ d(LjPNµN
737
+ j , LjPµN
738
+ j ) + d(LjPµN
739
+ j , LjPµj).
740
+ According to Lemma 3.2 and Lemma 3.1, it follows that
741
+ d(µN
742
+ j+1, µj+1) ≤ 2k−2�
743
+ d(PNµN
744
+ j , PµN
745
+ j ) + d(PµN
746
+ j , Pµj)
747
+
748
+ ≤ 2k−2�
749
+ d(PNµN
750
+ j , PµN
751
+ j ) + d(µN
752
+ j , µj)
753
+
754
+ ,
755
+ (3.26)
756
+ Let us consider d(PNµN
757
+ j , PµN
758
+ j ). Suppose that ρ′
759
+ j+1 is density function associated with the
760
+ measure PµN
761
+ j . Let ρN
762
+ j and ρN
763
+ j+1 be the density functions of µN
764
+ j and µN
765
+ j+1, respectively. By
766
+ the definition of total-variance distance in (3.24), we have
767
+ d(PNµN
768
+ j , PµN
769
+ j ) = 1
770
+ 2
771
+
772
+ X
773
+ |ρN
774
+ j+1(x) − ρ′
775
+ j+1(x)|dx
776
+ = 1
777
+ 2
778
+
779
+ X
780
+ |PNρN
781
+ j (x) − PρN
782
+ j (x)|dx
783
+ = 1
784
+ 2
785
+
786
+ X
787
+ |πN ◦ PρN
788
+ j (x) − PρN
789
+ j (x)|dx,
790
+ where we have used the equation PN = πN ◦ P in the last equality. Since PρN
791
+ j (x) ∈ Ca, we
792
+ use Lemma 3.3 and get
793
+ d(PNµN
794
+ j , PµN
795
+ j ) ≤ 1
796
+ 2(ea/N − 1)
797
+
798
+ 1
799
+ 2N (ea − 1).
800
+ (3.27)
801
+ With the fact that µN
802
+ 0
803
+ = µ0, we combine (3.27) with (3.26) and repeat the iterating to
804
+ complete the proof.
805
+ Theorem 3.4 estimates the online error of the PFOF algorithm.
806
+ Since the Perron-
807
+ Frobenious operator is numerically approximated offline by the matrix form P N
808
+ n given by
809
+ (3.18), we will analyze the offline error generated by the approximation. Each coefficient of
810
+ P N
811
+ n is computed by the Monte-Carlo approximation of (3.17) using a set of the sampling
812
+ points {xl
813
+ i}n
814
+ l=1. We show that the matrix P N
815
+ n converge to the matrix P N.
816
+ 12
817
+
818
+ Proposition 3.5. If the matrix P N
819
+ n is defined by (3.18) and P N is defined by (3.17), then
820
+ the following convergence in distribution holds,
821
+ √n((P N
822
+ n )ij − (P N)ij)
823
+ D
824
+ −−−→
825
+ n→∞ N (0, σn,N
826
+ ij
827
+ ),
828
+ (3.28)
829
+ where
830
+ (σn,N
831
+ ij
832
+ )2 =
833
+
834
+ X
835
+ (1Ψ−1
836
+ τ (Bj) · 1Bi)2dµ − (
837
+
838
+ X
839
+ 1Ψ−1
840
+ τ (Bj) · 1Bidµ)2,
841
+ (3.29)
842
+ and N (0, σn,N
843
+ ij
844
+ ) is the normal distribution with the mean 0 and standard deviation σn,N
845
+ i,j .
846
+ Proof. Note that the entries of P N
847
+ n are given by
848
+ P N
849
+ n,ij = 1
850
+ n
851
+ n
852
+
853
+ l=1
854
+ 1Bj(Ψτ(xl
855
+ i)),
856
+ which is the Monte-Carlo approximation of
857
+ P N
858
+ ij =
859
+
860
+ X 1Ψ−1
861
+ τ (Bj) · 1Bidµ
862
+
863
+ X 1Bidµ
864
+ ,
865
+ with sampling points xl
866
+ i drawn independently and uniformly from the box Bi. The denomi-
867
+ nator
868
+
869
+ X 1Bidµ normalizes the entries P N
870
+ ij so that P N becomes a right stochastic matrix, with
871
+ each row summing to 1. The convergence result (3.28) follows directly from the convergence
872
+ of Monte-Carlo integration [29].
873
+ Proposition 3.5 indicates that there exits a constant CN(Ψτ, α) determined by the stan-
874
+ dard deviation σn,N
875
+ ij
876
+ with a given confidence rate α ∈ [0, 1) such that for m large enough,
877
+ the following estimate holds with probability at least α:
878
+ ∥ (P N
879
+ n )ij − (P N)ij ∥∞≤ CN(Ψτ, α)n− 1
880
+ 2.
881
+ (3.30)
882
+ The result shows that the convergence of P N
883
+ n to P N is in O(n− 1
884
+ 2).
885
+ 3.3
886
+ A low-rank Perron-Frobenius operator filter
887
+ In PFOF, we note that the number of blocks increases exponentially with respect to dimen-
888
+ sions, resulting in the number of basis functions growing rapidly. Therefore, we propose
889
+ a low-rank approximation, formed by eigenfunctions of the Perron-Frobenius operator, to
890
+ represent the density. This approach can effectively reduce the number of the required basis
891
+ functions. Let ρ still be a probability density function of the dynamical system governed by
892
+ Ψ. Then it can be written as a linear combination of the independent eigenfunctions ϕi of
893
+ P. So
894
+ ρ(x, t) =
895
+
896
+
897
+ i=1
898
+ aiϕi(x),
899
+ ai ∈ C.
900
+ 13
901
+
902
+ Suppose that λi is the eigenvalue corresponding to the eigenfunction ϕi of P, then
903
+ Pρ(x, t) =
904
+
905
+
906
+ i=1
907
+ λiaiϕi(x).
908
+ Actually, the eigenfunction of the discretized PFO can be determined by the following propo-
909
+ sition.
910
+ Proposition 3.6. Let B = {B1, · · · , BN} ⊂ B be a uniform partition of the phase space
911
+ X. If ξ is the left eigenvector of P N corresponding to the eigenvalue λ, then λ is also the
912
+ eigenvalue of the restricted operator πNP with the eigenfunction ϕ ≜ ξTU, where U =
913
+ [1B1(x), · · · , 1BN(x)]T.
914
+ Proof. Let ϕ = �
915
+ i ξ(i)1Bi. From Eq. (3.21) and Eq. (3.22),
916
+ πNPϕ =
917
+ N
918
+
919
+ j
920
+ N
921
+
922
+ i
923
+ ξ(i)P N
924
+ ij 1Bj.
925
+ Since ξP N = λP N, i.e., λξ(j) = �
926
+ i ξ(i)P N
927
+ ij , ∀j ∈ N, we get
928
+ πNPϕ =
929
+ N
930
+
931
+ j
932
+ λξ(j)1Bj = λϕ.
933
+ Thus, λ is also an eigenvalue of the restricted operator πNP with eigenfunction ϕ.
934
+ In order to obtain the spectral expansion of the density function ρj := ρ(x, tj), we de-
935
+ fine the matrix ϕ = [ϕ1, · · · , ϕN]T, where {ϕi}N
936
+ i=1 are the eigenfunctions with respect to
937
+ eigenvalues {λi}N
938
+ i=1, with |λ1| = 1 ≥ |λ2| ≥ · · · ≥ |λN| ≥ 0. Let ρ0 = W0U and
939
+ Ξ =
940
+
941
+ 
942
+ ξT
943
+ 1
944
+ ξT
945
+ 2...
946
+ ξT
947
+ N
948
+
949
+  .
950
+ Then the eigenfunction is denoted as ϕ = ΞU and the density function of ρ1 is given by
951
+ ρ1 = πNPρ0 = πNPW0U = πNPW0Ξ−1ϕ = ΛW0Ξ−1ϕ =
952
+ N
953
+
954
+ i=1
955
+ λiϕivi,
956
+ (3.31)
957
+ where Λ is a diagonal eigenvalue matrix for πNP and vi is the column vector of the matrix
958
+ V = W0Ξ−1. The first r major eigenvalues and their corresponding eigenfunctions are used
959
+ to approximate the density function. If the formula (3.31) is truncated by r < N, then the
960
+ low-rank model of ρ1 has the form of
961
+ ρ1 =
962
+ r
963
+
964
+ i=1
965
+ λiϕivi.
966
+ 14
967
+
968
+ In this way, the low-rank model of density functions at time tj is given by
969
+ ρj =
970
+ r
971
+
972
+ i=1
973
+ λj
974
+ iϕivi,
975
+ j ∈ N.
976
+ Let us denote the low-rank approximation of Perron-Frobenius operator as ρj = �Pρj−1 ≜
977
+ �r
978
+ i=1 λiϕivj−1,i, where vj−1,i is the column vector of the matrix Vj−1 = Wj−1Ξ−1. We apply
979
+ �P in the Bayesian filter to obtain the low-rank Perron-Frobenius operator filter (lr-PFOF),
980
+ in which the probability measure satisfies the recursive formula
981
+ µN
982
+ j+1 = Lj �PµN
983
+ j ,
984
+ µN
985
+ 0 = µ0.
986
+ To describe the following prediction and analysis steps, we first calculate the weak approxi-
987
+ mation P N and get the eigenvalues Λ and left eigenvectors Ξ of P N.
988
+ Prediction In this step, we give a model decomposition of the prior density p(xj+1|Yj).
989
+ First, the Wj satisfying p(xj|Yj) = WjU is obtained from the previous analysis step. Next,
990
+ compute the matrix Vj = WjΞ−1 and
991
+ �ρj+1 = p(xj+1|Yj) =
992
+ r
993
+
994
+ i=1
995
+ λiϕivj,i.
996
+ Analysis In this step, we derive the posterior density p(xj+1|Yj+1) via Bayes’s formula.
997
+ Multiply p(xj+1|Yj) by likelihood function gj and have
998
+ ρj+1 = p(xj+1|Yj+1) ∝
999
+ r
1000
+
1001
+ i=1
1002
+ λiϕivj,igj.
1003
+ To normalize ρj+1, we rewrite the �ρj+1. Since ϕi = ξT
1004
+ i U = �N
1005
+ k=1 ξ(k)
1006
+ i 1Bk, we get
1007
+ �ρj+1 =
1008
+ r
1009
+
1010
+ i=1
1011
+ λi
1012
+ N
1013
+
1014
+ k=1
1015
+ ξ(k)
1016
+ i 1Bkvj,i =
1017
+ N
1018
+
1019
+ k=1
1020
+
1021
+ r
1022
+
1023
+ i=1
1024
+ λiξ(k)
1025
+ i vj,i
1026
+
1027
+ 1Bk.
1028
+ Then we multiply by gj(x) and make a normalization to the weights of 1Bk, such that
1029
+ ω(k)
1030
+ j+1 = �ω(k)
1031
+ j+1/(
1032
+ N
1033
+
1034
+ n=1
1035
+ �ω(n)
1036
+ j+1),
1037
+ �ω(k)
1038
+ j+1 =
1039
+ r
1040
+
1041
+ i=1
1042
+ λiξ(k)
1043
+ i vj,igj(x(k)),
1044
+ (3.32)
1045
+ where x(k) is still the mass point of each box Bk. The posterior density becomes
1046
+ ρj+1 =
1047
+ N
1048
+
1049
+ k=1
1050
+ ω(k)
1051
+ j+11Bk = Wj+1U.
1052
+ Remark 3.1. Note that the complex eigenvalues and eigenvectors may appear in the eigen-
1053
+ decomposition of the matrix P N. When the stationary distribution �π of the system satisfies
1054
+ detailed balance, a symmetrization method is designed in [30] to solve the problem. Since
1055
+ 15
1056
+
1057
+ Algorithm 2 low-rank Perron-Frobenius operator filter
1058
+ Offline:
1059
+ Compute P N and its eigenvalue Λ and left eigenvector Ξ. Give the eigenfunction ϕ = ΞU.
1060
+ Online:
1061
+ 1: Set j = 0 and ρ0 = W0U, compute ω(i)
1062
+ 0 =
1063
+
1064
+ Bi µ0dx0
1065
+ µ(Bi)
1066
+ 2: Denote Vj = WjΞ−1, compute �ρj+1 = �r
1067
+ i=1 λiϕivj,i
1068
+ 3: Define gj by (2.10), give ρj+1 ∝ �r
1069
+ i=1 λiϕivj,igj
1070
+ 4: Normalize weights by (3.32) and obtain Wj+1, let ρj+1 = �N
1071
+ k=1 ω(k)
1072
+ j+11Bk.
1073
+ 5: j+1→ j
1074
+ 6: Go to step 2
1075
+ ρ0, ρ1, · · · can be seen as a Markov chain with transition matrix P N, we suppose that P N
1076
+ satisfies detailed balance with respect to �π, i.e.,
1077
+ �πiP N
1078
+ ij = �πjP N
1079
+ ji ,
1080
+ ∀i, j ∈ N.
1081
+ Then P N can be symmetrized by a similarity transformation
1082
+ S = �ΛP N �Λ−1,
1083
+ where �Λ =
1084
+
1085
+ 
1086
+ √�π1
1087
+ √�π2
1088
+ ...
1089
+ √�πN
1090
+
1091
+  .
1092
+ Here the S is a symmetric matrix and this can be easily checked by detailed balance equation.
1093
+ It is known that S has a full set of real eigenvalues αj ∈ R and an orthogonal set of
1094
+ eigenvectors wj. Therefore, P N has the same eigenvalues αj and real left eigenvectors
1095
+ ψj = �Λwj.
1096
+ 3.4
1097
+ Extension to continuous-time filtering problems
1098
+ In this subsection, we consider a continuous-time filtering problem, where the state model
1099
+ and observation are by the following SDEs,
1100
+ dx
1101
+ dt = f(x) +
1102
+
1103
+ Σc
1104
+ dWt
1105
+ dt ,
1106
+ x(t0) ∼ N (m0, C0),
1107
+ (3.33)
1108
+ dz
1109
+ dt = h(x) +
1110
+
1111
+ Rc
1112
+ dWt
1113
+ dt ,
1114
+ z(0) = 0.
1115
+ (3.34)
1116
+ Here Σc is the covariance of model error and Rc is the covariance of observation error. Sup-
1117
+ pose that the posterior measure µt governed by the continuous-time problem has Lebesgue
1118
+ density ρ(·, t) : Rn �→ R+ for a fixed t. Let ρ(x, t) = r(x, t)/
1119
+
1120
+ Rn r(x, t)dx, where r is the
1121
+ unnormalized density. For a positive definite symmetric matrix A ∈ Rp×p, we define the
1122
+ weighted inner product ⟨·, ·⟩A = ⟨A− 1
1123
+ 2·, A− 1
1124
+ 2·⟩ on the space L2([0, T]; Rp).
1125
+ The resulting
1126
+ 16
1127
+
1128
+ norm | · |A = |A− 1
1129
+ 2 · |. In the continuous filtering problem, our interest is to find the dis-
1130
+ tribution of the random variable x(t)|{z(s)}s∈[0,t] as the time t increases. Zakai stochastic
1131
+ partial differential equation (SPDE) is a well-known equation whose solution characterizes
1132
+ the unnormalized density of posterior distribution [35]. The Zaikai equation has the form of
1133
+ ∂r
1134
+ ∂t = AP Fr + r
1135
+
1136
+ h, dz
1137
+ dt
1138
+
1139
+ Rc
1140
+ .
1141
+ (3.35)
1142
+ The partial differential operator AP F generates a continuous Perron-Frobenius semigroup
1143
+ {Pt, t ≥ 0}. Let {Qt
1144
+ s, 0 ≤ s ≤ t} be the stochastic semigroup [31] associated with with the
1145
+ following SDE
1146
+ dr′
1147
+ dt = r′
1148
+
1149
+ h, dz
1150
+ dt
1151
+
1152
+ Rc
1153
+ .
1154
+ (3.36)
1155
+ Then the Zakai equation (3.35) can be approximated by the following Trotter-like product
1156
+ formula
1157
+ rj+1 = Q
1158
+ tj+1
1159
+ tj
1160
+ Pτrj,
1161
+ (3.37)
1162
+ where τ = tj+1 − tj, ∀j ∈ N. For the fixed τ, Pτ is still denoted by P. By the reference [31],
1163
+ the Q
1164
+ tj+1
1165
+ tj
1166
+ describes the solution of the equation (3.36), i.e.,
1167
+ Q
1168
+ tj+1
1169
+ tj
1170
+ r(x) = exp
1171
+
1172
+ ⟨h(x), zj+1 − zj⟩Rc − τ
1173
+ 2|h(x)|2
1174
+ Rc
1175
+
1176
+ r(x).
1177
+ With the discrete scheme (3.37), we utilize the Perron-Frobenius operator to solve Zakai
1178
+ equation, rather than using Fokker-Planck operator AP F. Thus, we discretize P by Ulam’s
1179
+ method and project the density function onto VN. Let P N be the discretization of P. Let
1180
+ Wj and Wj+1 be the weights vectors with respect to πNrj and πNrj+1. Denote gc
1181
+ j(x) =
1182
+ exp
1183
+
1184
+ ⟨h(x), zj+1 − zj⟩Rc − τ
1185
+ 2|h(x)|2
1186
+ Rc
1187
+
1188
+ and
1189
+ Gj =
1190
+
1191
+ 
1192
+ gc
1193
+ j(x(1))
1194
+ gc
1195
+ j(x(2))
1196
+ ...
1197
+ gc
1198
+ j(x(N))
1199
+
1200
+  ,
1201
+ where x(i) is the mass point of Bi. Then the transition of density functions turns into a map
1202
+ of the weights,
1203
+ Wj+1 = Gj ⊙
1204
+
1205
+ WjP N�
1206
+ .
1207
+ Here ⊙ denotes Hadamard product. In this case, the PFO is extended to the continuous-time
1208
+ filtering problem to estimate the posterior density function.
1209
+ 4
1210
+ Comparison with particle filter
1211
+ Particle filter (PF) [32, 33] is an important filtering method to sequentially approximate the
1212
+ true posterior filtering distribution p(xj|Yj) in the limit of a large number of particles. In
1213
+ 17
1214
+
1215
+ practice, we approximate the probability density by a combination of locations of particles
1216
+ and weights associated with Dirac functions. Particle filter proceeds by varying the weights
1217
+ and determining the particle Dirac measures. It is able to take care of non-Gaussian and non-
1218
+ linear models. In this section, we will compare the computational accuracy and differences
1219
+ between PFOF and PF.
1220
+ Accordingly, we define µN
1221
+ j as the posterior empirical measure on RN approximating truth
1222
+ posterior probability measure µj and define �µN
1223
+ j
1224
+ on RN as the approximation of the prior
1225
+ probability measure �µj. Let
1226
+ µj ≈ µN
1227
+ j :=
1228
+ N
1229
+
1230
+ n=1
1231
+ ω(n)
1232
+ j δx(n)
1233
+ j ,
1234
+ �µj+1 ≈ �µN
1235
+ j+1 :=
1236
+ N
1237
+
1238
+ n=1
1239
+ �ω(n)
1240
+ j+1δ�x(n)
1241
+ j+1,
1242
+ where x(n)
1243
+ j
1244
+ and �x(n)
1245
+ j+1 are particle positions, and ω(n)
1246
+ j
1247
+ > 0, �ω(n)
1248
+ j+1 > 0 are the associated
1249
+ weights satisfying �N
1250
+ n=1 ω(n)
1251
+ j
1252
+ = 1, �N
1253
+ n=1 �ω(n)
1254
+ j+1 = 1. The empirical distribution is completely
1255
+ determined by particle positions and weights. The objective of particle filter is to calculate
1256
+ the update {x(n)
1257
+ j , ω(n)
1258
+ j } → {�x(n)
1259
+ j+1, �ω(n)
1260
+ j+1} and {�x(n)
1261
+ j+1, �ω(n)
1262
+ j+1} → {x(n)
1263
+ j+1, ω(n)
1264
+ j+1}, which define the
1265
+ prediction step and analysis step, respectively. Monte-Carlo sampling is used to determine
1266
+ particle positions in the prediction and Bayesian rule is used to update of the weights in the
1267
+ analysis.
1268
+ Prediction In this step, the prediction phase is approximated by the Markov chain
1269
+ {Ψ(xj)}j∈N with transition kernel p(xj, xj+1) = p(xj+1|xj). We draw �x(n)
1270
+ j+1 from the kernel p
1271
+ started from x(n)
1272
+ j , i.e., �x(n)
1273
+ j+1 ∼ p(x(n)
1274
+ j , ·). We keep the weights unchanged so that �ω(n)
1275
+ j+1 = ω(n)
1276
+ j ,
1277
+ and obtain the prior probability measure
1278
+ �µN
1279
+ j+1 =
1280
+ N
1281
+
1282
+ n=1
1283
+ ω(n)
1284
+ j δ�x(n)
1285
+ j+1.
1286
+ Analysis In this step, we apply Bayes’s formula to approximate the posterior probability
1287
+ measure. To do this, we fix the position of the particles and update the weights. With the
1288
+ definition of gj(x) in (2.10), we have the empirical posterior distribution
1289
+ µN
1290
+ j+1 =
1291
+ N
1292
+
1293
+ n=1
1294
+ ω(n)
1295
+ j+1δ�x(n)
1296
+ j+1,
1297
+ where
1298
+ ω(n)
1299
+ j+1 = �ω(n)
1300
+ j+1/(
1301
+ N
1302
+
1303
+ n=1
1304
+ �ω(n)
1305
+ j+1),
1306
+ �ω(n)
1307
+ j+1 = gj(�x(n)
1308
+ j+1)ωn
1309
+ j .
1310
+ (4.38)
1311
+ The first equation in (4.38) is a normalization. Sequential Importance Resampling (SIR)
1312
+ particle filter is a basic particle filter and shown in Algorithm 1.
1313
+ A resampling step is
1314
+ introduced in the algorithm. In this way, we can deal with the initial measure µ0 when it
1315
+ is not a combination of Dirac functions. We can also deal with the case when some of the
1316
+ particle weights are close to 1. The algorithm shows that each particle moves according
1317
+ 18
1318
+
1319
+ to the underlying model and is reweighted according to the likelihood. By the iteration of
1320
+ Bayesian filtering , we rewrite the particle filter approximated by the form
1321
+ µN
1322
+ j+1 = LjSNPµN
1323
+ j ,
1324
+ µN
1325
+ 0 = µ0,
1326
+ (4.39)
1327
+ where the operator SN is defined as follows:
1328
+ (SNµ)(dx) = 1
1329
+ N
1330
+ N
1331
+
1332
+ n=1
1333
+ δx(n)(dx),
1334
+ x(n) ∼ µ
1335
+ i.i.d..
1336
+ Algorithm 3 Sequential Importance Resampling particle filter
1337
+ 1: Set j = 0 and µN
1338
+ 0 (dx0) = µ0(dx0)
1339
+ 2: Draw x(n)
1340
+ j
1341
+ ∼ µN
1342
+ j , n = 1, · · · , N
1343
+ 3: Set ω(n)
1344
+ j
1345
+ = 1/N, n = 1, · · · , N, redefine µN
1346
+ j := �N
1347
+ n=1 ω(n)
1348
+ j δx(n)
1349
+ j
1350
+ 4: Draw �x(n)
1351
+ j+1 ∼ p(x(n)
1352
+ j , ·)
1353
+ 5: Define ω(n)
1354
+ j+1 by (3.23) and µN
1355
+ j+1 = �N
1356
+ n=1 ω(n)
1357
+ j+1δ�x(n)
1358
+ j+1
1359
+ 6: j+1→ j
1360
+ 7: Go to step 2
1361
+ By (4.39), we find that the randomness for the probability measure is caused by the sam-
1362
+ pling operator SN and the convergence of particle filter depends on the number of particles.
1363
+ The particle filter does recover the truth posterior distribution as the number of particles
1364
+ tends to infinity [34]. The following theorem gives a convergence result for PF.
1365
+ Theorem 4.1. (Theorem 4.5 in [23]) Let m be the number of particles and µm
1366
+ j the approx-
1367
+ imation measure in SIR particle filter. Assume that κ ∈ (0, 1] is the constant defined in
1368
+ Lemma 3.2, then the total-variance distance between µm
1369
+ J and µJ is estimated by
1370
+ d(µm
1371
+ J , µJ) ≤
1372
+ J
1373
+
1374
+ j=1
1375
+ (2κ−2)j 1
1376
+ √m.
1377
+ (4.40)
1378
+ Let J be fixed in Theorem 3.4 and Theorem 4.1. We find that the convergence rate of
1379
+ particle filter depends on the number of particles m. Similarly, the convergence of PFOF
1380
+ is determined by the number of blocks N used in the Ulam’s method. When N = m, i.e.,
1381
+ the same number of basis functions in the two methods, the rate of convergence is O( 1
1382
+ N )
1383
+ in PFOF and O(
1384
+ 1
1385
+ √m) in SIR particle filter. The analysis shows that PFOF converges faster
1386
+ than the particle filter.
1387
+ Sampling from high-dimensional and complex transition kernels is difficult to realize in
1388
+ PF. The PFOF avoids the sampling and uses a data-driven approximation instead, which
1389
+ requires short-term path simulations rather than the form of transition density. Particle
1390
+ degeneracy is also a significant issue. As the number of effective particles decreases gradually,
1391
+ the efficiency of the particle filter becomes worse.
1392
+ It is known that particle filter is inefficient for high-dimensional models because of degen-
1393
+ eracy. So the accurate estimate of posterior PDF requires a great number of particles that
1394
+ 19
1395
+
1396
+ scales exponentially with the size of the system. In addition to resampling, adding jitter and
1397
+ localisation are effective modifications to solve the problem. The PFOF also has the “curse
1398
+ of dimensionality” problem in high dimensions as the partition scale expansion. One solu-
1399
+ tion to circumvent this problem is the sparse Ulam method. The low-rank Perron-Frobenius
1400
+ operator filter can enhance the efficiency of filtering problems.
1401
+ 5
1402
+ Numerical results
1403
+ In this section, we present some numerical examples for filtering problems using the pro-
1404
+ posed PFOF. The system dynamics is unknown and some observations are given in the
1405
+ filtering problems. The PFOF and lr-PFOF are implemented to estimate posterior PDFs
1406
+ of the stochastic filtering problems. In Section 5.1, we consider an Ornstein-Uhlenbeck (O-
1407
+ U) process to identify the Gaussian PDF of the system and estimate its posterior PDFs
1408
+ with observations known.
1409
+ In Section 5.2, we consider a nonlinear filtering problem gov-
1410
+ erned by Bene˘s SDE, and estimate the non-Gaussian posterior PDFs. In Section 5.3, we
1411
+ consider a continuous-time filtering problem, which is a classical chaotic system Lorenz’63
1412
+ model with observations, to model posterior density of the state. We compare the proposed
1413
+ PFOF/lr-PFOF with particle filter and Extended Kalmn filter (ExKF). Numerical results
1414
+ show that PFOF achieves a better posterior PDF estimates than PF, and a more accurate
1415
+ state estimates than ExKF.
1416
+ 5.1
1417
+ O-U process
1418
+ Let us consider an O-U process, which is a one-dimensional linear dynamical system,
1419
+ dxt = −λxtdt + dWt,
1420
+ x(t0) ∼ N (m0, C0),
1421
+ where λ > 0 and Wt is a standard Brownian motion. We now consider a state-space model
1422
+ formed by a discretization of the O-U process and the discrete observations of the state as
1423
+ follows,
1424
+
1425
+ x(tk+1) = exp(−λ∆tk)x(tk) + qk,
1426
+ qk ∼ N (0, Σk),
1427
+ y(tk) = Hx(tk) + rk,
1428
+ rk ∼ N (0, R),
1429
+ (5.41)
1430
+ where Σk = exp(−2λ∆tk), H = I and R = σ2. The parameters are given by λ = 1/2,
1431
+ m0 = 2, C0 = 0.1 and σ = 1. To apply PFOF, we compute Perron-Frobenius operator Pτ
1432
+ using Ulam’s method with time step τ = 0.1 and obtain an approximation form P N
1433
+ τ ∈ RN×N
1434
+ of Pτ. We take the phase space of xt is [−6, 6] and divide it into N = 100 grids, and each
1435
+ interval [zk, zk+1], k = 0, · · · , N − 1, defines a box Bk. We define an indicator function
1436
+ 1Bk(x) on each Bk and randomly choose n = 100 sample points in the box to calculate P N
1437
+ τ .
1438
+ Given initial Gaussian distribution N (2, 0.1), we rewrite µ0 as a vector W0, which denotes
1439
+ the coefficients of µN
1440
+ 0 . The P N
1441
+ τ
1442
+ acts on the weight vector to estimate probability value of xt
1443
+ on each Bk, i.e., P(xt ∈ Bk), t = qτ, q = 0, 1, 2 · · ·. Thus, we get the discrete probability
1444
+ density function (PDF) of xt at t. The simulation PDFs at different times are shown in the
1445
+ left column of Figure 5.1. By the figure, we see that the PDFs estimated by PFO are close
1446
+ 20
1447
+
1448
+ -6
1449
+ -4
1450
+ -2
1451
+ 0
1452
+ 2
1453
+ 4
1454
+ 6
1455
+ t=2.5
1456
+ 0
1457
+ 0.5
1458
+ 1
1459
+ 1.5prior PDF by P-F operator
1460
+ Truth
1461
+ PDF by PFO
1462
+ -6
1463
+ -4
1464
+ -2
1465
+ 0
1466
+ 2
1467
+ 4
1468
+ 6
1469
+ 0
1470
+ 0.5
1471
+ 1
1472
+ 1.5 empirical posterior PDF
1473
+ Truth
1474
+ PFOF
1475
+ -6
1476
+ -4
1477
+ -2
1478
+ 0
1479
+ 2
1480
+ 4
1481
+ 6
1482
+ 0
1483
+ 0.5
1484
+ 1
1485
+ 1.5histogram of posterior PDF
1486
+ Truth
1487
+ Particle filter
1488
+ -6
1489
+ -4
1490
+ -2
1491
+ 0
1492
+ 2
1493
+ 4
1494
+ 6
1495
+ t=5
1496
+ 0
1497
+ 0.5
1498
+ 1
1499
+ 1.5
1500
+ Truth
1501
+ PDF by PFO
1502
+ -6
1503
+ -4
1504
+ -2
1505
+ 0
1506
+ 2
1507
+ 4
1508
+ 6
1509
+ 0
1510
+ 0.5
1511
+ 1
1512
+ 1.5
1513
+ Truth
1514
+ PFOF
1515
+ -6
1516
+ -4
1517
+ -2
1518
+ 0
1519
+ 2
1520
+ 4
1521
+ 6
1522
+ 0
1523
+ 0.5
1524
+ 1
1525
+ 1.5
1526
+ Truth
1527
+ Particle filter
1528
+ -6
1529
+ -4
1530
+ -2
1531
+ 0
1532
+ 2
1533
+ 4
1534
+ 6
1535
+ t=10
1536
+ 0
1537
+ 0.5
1538
+ 1
1539
+ 1.5
1540
+ Truth
1541
+ PDF by PFO
1542
+ -6
1543
+ -4
1544
+ -2
1545
+ 0
1546
+ 2
1547
+ 4
1548
+ 6
1549
+ 0
1550
+ 0.5
1551
+ 1
1552
+ 1.5
1553
+ Truth
1554
+ PFOF
1555
+ -6
1556
+ -4
1557
+ -2
1558
+ 0
1559
+ 2
1560
+ 4
1561
+ 6
1562
+ 0
1563
+ 0.5
1564
+ 1
1565
+ 1.5
1566
+ Truth
1567
+ Particle filter
1568
+ Figure 5.1: The prior PDF estimated by PFO (left column), posterior PDF by PFOF (middle column)
1569
+ and posterior PDF by particle filter (right column) at different times.
1570
+ to the truth. By this way, the PDF is computed without solving Fokker-Planck equation
1571
+ and the estimation of PDF is actually the prior density in the model (5.41).
1572
+ Then we compute posterior probability density of the state-space model (5.41). We set
1573
+ N = 500 and n = 100. The posterior probability density function is estimated by Algorithm
1574
+ 1 and the results are displayed in the middle column of Figure 5.1. From Figure 5.1, we find
1575
+ that the empirical posterior PDFs estimated by PFOF are close to the Gaussian posterior
1576
+ densities. To make comparison with PFOF, the particle filter is also used for the filtering
1577
+ problem. In the particle filter, 500 particles are drawn randomly to generate Dirac measure
1578
+ and construct empirical measure. Thus, the number of basis functions is equal to each other
1579
+ in the two methods. Figure 5.1 clearly shows that the empirical PDF calculated by PFOF
1580
+ is more accurate than that by PF. The numerical results support Theorem 3.4 and Theorem
1581
+ 4.1.
1582
+ 5.2
1583
+ Bene˘s-Daum filter
1584
+ In this subsection, we apply PFOF to a nonlinear filtering problem, whose state-space model
1585
+ is defined by the Bene˘s stochastic difference equation,
1586
+ dxt = tanh(xt)dt + dWt,
1587
+ (5.42)
1588
+ 21
1589
+
1590
+ with initial condition x0 = 0. Refer to [36], the probability density function of the equation
1591
+ (5.42) is given by
1592
+ p(x(t)) =
1593
+ 1
1594
+
1595
+ 2πt
1596
+ cosh(x(t))
1597
+ cosh(x0) exp
1598
+
1599
+ − t
1600
+ 2
1601
+
1602
+ exp
1603
+
1604
+ − 1
1605
+ 2t(x(t) − x0)
1606
+
1607
+ .
1608
+ We take the phase space [−15, 15] and uniformly divide it into 100 (N = 100) grids [zk, zk+1], k =
1609
+ 0, ..., N − 1, each of which corresponds to a box Bk. The Ulam’s method is used to approxi-
1610
+ mate PFO. The time step is set as τ = 0.5 and the number of random sample points m = 400.
1611
+ The predicted PDF of xt at t = 1, t = 2.5 and t = 5 are shown in Figure 5.2. The PDFs are
1612
+ separately estimated by discretized PFO matrix P N ∈ R100×100 and low-rank approximation
1613
+ of PFO with truncation r = 30. We see that PDFs at t = 2.5 and t = 5 have two modes and
1614
+ the PFO can fairly approximate the two modes.
1615
+ -20
1616
+ -10
1617
+ 0
1618
+ 10
1619
+ 20
1620
+ 0
1621
+ 0.05
1622
+ 0.1
1623
+ 0.15
1624
+ 0.2
1625
+ 0.25
1626
+ 0.3
1627
+ 0.35
1628
+ 0.4
1629
+ t=1
1630
+ true PDF
1631
+ PDF by PFO
1632
+ PDF by lr-PFO, r=30
1633
+ -20
1634
+ -10
1635
+ 0
1636
+ 10
1637
+ 20
1638
+ 0
1639
+ 0.02
1640
+ 0.04
1641
+ 0.06
1642
+ 0.08
1643
+ 0.1
1644
+ 0.12
1645
+ 0.14
1646
+ 0.16
1647
+ 0.18
1648
+ 0.2
1649
+ t=2.5
1650
+ true PDF
1651
+ PDF by PFO
1652
+ PDF by lr-PFO, r=30
1653
+ -20
1654
+ -10
1655
+ 0
1656
+ 10
1657
+ 20
1658
+ 0
1659
+ 0.05
1660
+ 0.1
1661
+ 0.15
1662
+ t=5
1663
+ true PDF
1664
+ PDF by PFO
1665
+ PDF by lr-PFO, r=30
1666
+ Figure 5.2: The PDF estimated by PFO and low-rank model at different times.
1667
+ First we want to calculate the truth posterior filtering distribution of the model (5.42)
1668
+ subject to observation. In this example, the observation model satisfies
1669
+ p(yk|x(tk)) = N (yk|x(tk), σ2).
1670
+ (5.43)
1671
+ According to [36] (Chapter 10.5), the transition density of the Bene˘s SDE is given by
1672
+ p(x(tk)|x(tk−1)) =
1673
+ 1
1674
+ √2π∆tk−1
1675
+ cosh(x(tk))
1676
+ cosh(x(tk−1))exp(−1
1677
+ 2∆tk−1)×exp
1678
+
1679
+
1680
+ 1
1681
+ 2∆tk−1
1682
+ (x(tk)−x(tk−1))2
1683
+
1684
+ ,
1685
+ where ∆tk−1 = tk − tk−1. If we assume that the filtering solution at time tk−1 is of the form
1686
+ p(x(tk−1)|y1:k−1) ∝ cosh(x(tk−1))exp
1687
+
1688
+
1689
+ 1
1690
+ 2Pk−1
1691
+ (x(tk−1) − mk−1)2
1692
+
1693
+ for given mk−1 and Pk−1. Then we use the Chapman-Kolmogorov equation and give the
1694
+ prior density
1695
+ p
1696
+
1697
+ x(tk)|y1:k−1
1698
+
1699
+ ∝ cosh
1700
+
1701
+ x(tk)
1702
+
1703
+ exp
1704
+
1705
+
1706
+ 1
1707
+ 2P −
1708
+ k
1709
+ (x(tk) − m−
1710
+ k )2
1711
+
1712
+ ,
1713
+ 22
1714
+
1715
+ where
1716
+ m−
1717
+ k = mk−1,
1718
+ P −
1719
+ k = Pk−1 + ∆tk−1.
1720
+ The m−
1721
+ k and P −
1722
+ k
1723
+ are sufficient statistics representing prior density functions.
1724
+ By Bayes’
1725
+ formula, the posterior density of x(tk) is given by
1726
+ p
1727
+
1728
+ x(tk)|y1:k
1729
+
1730
+ ∝ cosh
1731
+
1732
+ x(tk)
1733
+
1734
+ exp
1735
+
1736
+
1737
+ 1
1738
+ 2Pk
1739
+
1740
+ x(tk) − mk
1741
+ �2
1742
+
1743
+ ,
1744
+ (5.44)
1745
+ where the equations of parameters mk and Pk in the posterior density satisfy
1746
+ mk = m−
1747
+ k +
1748
+
1749
+ P −
1750
+ k
1751
+ P −
1752
+ k + σ2
1753
+
1754
+ (yk − m−
1755
+ k ),
1756
+ P −
1757
+ k = Pk−1 + ∆tk−1.
1758
+ Thus, the reference posterior distribution is defined by (5.44).
1759
+ To apply PFOF to the nonlinear filtering problem, we make a finer division of the phase
1760
+ interval [−15, 15] to obtain 400 boxes. Besides, we choose enough sample points in Ulam’s
1761
+ method to reduce error of Monte-Carlo as much as possible. The observations yk are arti-
1762
+ ficially obtained by simulating the underlying model (5.42) and adding noise according to
1763
+ (5.43), where σ = 1. The observable interval is [0, 5] with a time step ∆tk = 0.1. The
1764
+ initial distribution for the filtering process is chosen to be m0 = 0, P0 = 2. Particularly, we
1765
+ also use the particle filter as a comparison. In the prediction, we are not allowed to draw
1766
+ sample points directly because of a complex transition probability density function. We use
1767
+ Acceptance-Rejection method to resolve the issue. We first show the results of posterior
1768
+ mean estimated by PFOF and lr-PFOF (r=40) in Figure 5.3, together with truth and ob-
1769
+ servations. The mean is obtained by averaging the posterior distribution of PFOF/lr-PFOF
1770
+ and it is close to the truth as the figure shows.
1771
+ The posterior densities estimated by PFOF, lr-PFOF and particle filter are shown in
1772
+ Figure 5.4 together with the truth. The truncation parameters in lr-PFOF are separately
1773
+ set as r = 10, r = 20 and r = 40. The estimation accuracy of lr-PFOF gradually improves as
1774
+ the number of truncation basis functions increases, and achieves almost the same as PFOF
1775
+ when r = 40 < N = 400. Although the number of basis functions is the same in both
1776
+ PFOF and particle filter, there exit clear difference between the two methods. The results
1777
+ show that the accuracy of PFOF is higher than that of particle filter in the non-Gaussian
1778
+ and nonlinear filtering problem. This further confirms the theoretical analysis in Section
1779
+ 3. As shown in Table 1, both PFOF and lr-PFOF use less CPU-time than SIR particle
1780
+ filter does. Actually, the CPU-time in particle filter is mainly from Acceptance-Rejection
1781
+ sampling. From the table, it can be seen that lr-PFOF can reduce online computation time
1782
+ comparing to PFOF.
1783
+ 23
1784
+
1785
+ 0
1786
+ 0.5
1787
+ 1
1788
+ 1.5
1789
+ 2
1790
+ 2.5
1791
+ 3
1792
+ 3.5
1793
+ 4
1794
+ 4.5
1795
+ 5
1796
+ -3
1797
+ -2
1798
+ -1
1799
+ 0
1800
+ 1
1801
+ 2
1802
+ 3
1803
+ 4
1804
+ 5
1805
+ 6
1806
+ Truth
1807
+ PFOF-mean
1808
+ lr-PFOF-mean,r=40
1809
+ observation
1810
+ Figure 5.3: The mean estimated by PFOF and lr-PFOF.
1811
+ -15
1812
+ -10
1813
+ -5
1814
+ 0
1815
+ 5
1816
+ 10
1817
+ 15
1818
+ t=1
1819
+ 0
1820
+ 0.2
1821
+ 0.4
1822
+ 0.6
1823
+ 0.8
1824
+ 1
1825
+ 1.2
1826
+ empirical posterior PDF
1827
+ Truth
1828
+ lr-PFOF,r=10
1829
+ r=20
1830
+ r=40
1831
+ -15
1832
+ -10
1833
+ -5
1834
+ 0
1835
+ 5
1836
+ 10
1837
+ 15
1838
+ 0
1839
+ 0.2
1840
+ 0.4
1841
+ 0.6
1842
+ 0.8
1843
+ 1
1844
+ 1.2
1845
+ empirical posterior PDF
1846
+ Truth
1847
+ PFOF
1848
+ -15
1849
+ -10
1850
+ -5
1851
+ 0
1852
+ 5
1853
+ 10
1854
+ 15
1855
+ 0
1856
+ 0.2
1857
+ 0.4
1858
+ 0.6
1859
+ 0.8
1860
+ 1
1861
+ 1.2
1862
+ histogram of posterior PDF
1863
+ Truth
1864
+ Particle filter
1865
+ -15
1866
+ -10
1867
+ -5
1868
+ 0
1869
+ 5
1870
+ 10
1871
+ 15
1872
+ t=2.5
1873
+ 0
1874
+ 0.2
1875
+ 0.4
1876
+ 0.6
1877
+ 0.8
1878
+ 1
1879
+ 1.2
1880
+ Truth
1881
+ lr-PFOF,r=10
1882
+ r=20
1883
+ r=40
1884
+ -15
1885
+ -10
1886
+ -5
1887
+ 0
1888
+ 5
1889
+ 10
1890
+ 15
1891
+ 0
1892
+ 0.2
1893
+ 0.4
1894
+ 0.6
1895
+ 0.8
1896
+ 1
1897
+ 1.2
1898
+ Truth
1899
+ PFOF
1900
+ -15
1901
+ -10
1902
+ -5
1903
+ 0
1904
+ 5
1905
+ 10
1906
+ 15
1907
+ 0
1908
+ 0.2
1909
+ 0.4
1910
+ 0.6
1911
+ 0.8
1912
+ 1
1913
+ 1.2
1914
+ Truth
1915
+ Particle filter
1916
+ -15
1917
+ -10
1918
+ -5
1919
+ 0
1920
+ 5
1921
+ 10
1922
+ 15
1923
+ t=5
1924
+ 0
1925
+ 0.2
1926
+ 0.4
1927
+ 0.6
1928
+ 0.8
1929
+ 1
1930
+ 1.2
1931
+ Truth
1932
+ lr-PFOF,r=10
1933
+ r=20
1934
+ r=40
1935
+ -15
1936
+ -10
1937
+ -5
1938
+ 0
1939
+ 5
1940
+ 10
1941
+ 15
1942
+ 0
1943
+ 0.2
1944
+ 0.4
1945
+ 0.6
1946
+ 0.8
1947
+ 1
1948
+ 1.2
1949
+ Truth
1950
+ PFOF
1951
+ -15
1952
+ -10
1953
+ -5
1954
+ 0
1955
+ 5
1956
+ 10
1957
+ 15
1958
+ 0
1959
+ 0.2
1960
+ 0.4
1961
+ 0.6
1962
+ 0.8
1963
+ 1
1964
+ 1.2
1965
+ Truth
1966
+ Particle filter
1967
+ Figure 5.4: The posterior PDF by lr-PFOF (left column), PFOF (middle column) and particle filter (right
1968
+ column) at different times.
1969
+ Table 1: CPU-time (seconds) for posterior PDF with different methods.
1970
+ Methods
1971
+ PFOF
1972
+ lr-PFOF (r=10)
1973
+ lr-PFOF (r=20)
1974
+ lr-PFOF (r=40)
1975
+ particle filter
1976
+ offline
1977
+ 0.1599
1978
+ 0.2536
1979
+ 0.2649
1980
+ 0.2689
1981
+ 6906.9486
1982
+ online
1983
+ 0.0673
1984
+ 0.0150
1985
+ 0.0431
1986
+ 0.0641
1987
+ 24
1988
+
1989
+ 5.3
1990
+ Lorenz’63 model
1991
+ Lorenz developed a mathematical model for atmospheric convection in 1963. The Lorenz’63
1992
+ model is the simplest continuous-time system to exhibit sensitivity to initial conditions and
1993
+ chaos, and it is popular example used for data assimilation. For some parameters and initial
1994
+ conditions, the system may perform a chaotic behaviour. The model consists of three coupled
1995
+ nonlinear ordinary differential equations with the solution v = (v1, v2, v3) ∈ R3. We consider
1996
+ the Lorenz’63 model with additive white noise,
1997
+
1998
+
1999
+
2000
+
2001
+
2002
+
2003
+
2004
+
2005
+
2006
+
2007
+
2008
+
2009
+
2010
+
2011
+
2012
+
2013
+
2014
+
2015
+
2016
+ dv1
2017
+ dt = a(v2 − v1) + σ1
2018
+ dW1
2019
+ dt
2020
+ dv2
2021
+ dt = −av1 − v2 − v1v3 + σ2
2022
+ dW2
2023
+ dt
2024
+ dv3
2025
+ dt = v1v2 − bv3 − b(r + a) + σ3
2026
+ dW3
2027
+ dt
2028
+ v(0) ∼ N (m0, C0),
2029
+ where Wj are Brownian motions assumed to be independent. We use the classical parameter
2030
+ values (a, b, r) = (10, 8
2031
+ 3, 28) and set σ1 = σ2 = σ3 = 2. The initial mean m0 is given by
2032
+ (0, 0, 0) and covariance matrix is an identity matrix I3 ∈ R3×3. We give the continuous
2033
+ observation z(t), which is governed by a SDE
2034
+
2035
+
2036
+
2037
+ dz
2038
+ dt = h(v) + γ dWz
2039
+ dt
2040
+ z(0) = 0,
2041
+ with γ = 0.2.
2042
+ The purpose of this example is to explore the performance of PFOF in
2043
+ continuous-time filtering problems. We compare the assimilation results based on Perron-
2044
+ Frobenius operator and continuous-time Extended Kalman filter. The posterior means es-
2045
+ timated by the two methods are shown in Figure 5.5 and Figure 5.7.
2046
+ The two figures
2047
+ are corresponding to different observations h(v) = Hv, where the former is determined by
2048
+ H = [0, 1, 0] and the latter is determined by H = [0, 0, 1]. In particular, we find that the
2049
+ choice of observations in Lorenz models is quite influential, especially for ExKF. The stability
2050
+ of ExKF significantly depends on the observation. Because the insufficient observations may
2051
+ keep the filter away from the truth and cause significant model error, and it may easily lead
2052
+ to the numerical instability once the deviation occurs. However, the results of reconstruc-
2053
+ tion by PFOF much less affected by observation model, so the method shows much better
2054
+ robustness than ExKF.
2055
+ Figure 5.6 shows the consequence of mean-square error with v2 or v3 as the different
2056
+ observation. For ExKF, we find that there is a large error in estimating mean by ExKF
2057
+ when the third component v3 is observed. To better visualize the results, we compare the
2058
+ trajectories of mean obtained by PFOF and ExKF in Figure 5.8 together with truth. We
2059
+ find the trajectory mean of PFOF agrees with the truth more than the the trajectory mean
2060
+ of ExKF.
2061
+ For v3 as an observation, the one-dimensional and two-dimensional marginal probability
2062
+ distributions are displayed in Figure 5.9. The figure aims to intuitively describe distribution
2063
+ of the single value and correlation of the different components.
2064
+ As shown in the figure,
2065
+ 25
2066
+
2067
+ t
2068
+ 0
2069
+ 0.2
2070
+ 0.4
2071
+ 0.6
2072
+ 0.8
2073
+ v1
2074
+ -15
2075
+ -10
2076
+ -5
2077
+ 0
2078
+ 5
2079
+ 10
2080
+ 15
2081
+ 20
2082
+ 25
2083
+ truth
2084
+ PFOF-mean
2085
+ ExKF-mean
2086
+ standard deviation of ExKF
2087
+ t
2088
+ 0
2089
+ 0.2
2090
+ 0.4
2091
+ 0.6
2092
+ 0.8
2093
+ v2
2094
+ -20
2095
+ -15
2096
+ -10
2097
+ -5
2098
+ 0
2099
+ 5
2100
+ 10
2101
+ 15
2102
+ 20
2103
+ 25
2104
+ 30
2105
+ truth
2106
+ PFOF-mean
2107
+ ExKF-mean
2108
+ standard deviation of ExKF
2109
+ t
2110
+ 0
2111
+ 0.2
2112
+ 0.4
2113
+ 0.6
2114
+ 0.8
2115
+ v3
2116
+ -30
2117
+ -25
2118
+ -20
2119
+ -15
2120
+ -10
2121
+ -5
2122
+ 0
2123
+ 5
2124
+ 10
2125
+ truth
2126
+ PFOF-mean
2127
+ ExKF-mean
2128
+ standard deviation of ExKF
2129
+ Figure 5.5: The posterior mean of each component by ExKF and PFOF in Lorenz’63 model with continuous
2130
+ observation. The component v2 is observed.
2131
+ t
2132
+ 0
2133
+ 0.2
2134
+ 0.4
2135
+ 0.6
2136
+ 0.8
2137
+ MSE
2138
+ 0
2139
+ 50
2140
+ 100
2141
+ 150
2142
+ 200
2143
+ 250
2144
+ ExKF, v2 observed
2145
+ t
2146
+ 0
2147
+ 0.2
2148
+ 0.4
2149
+ 0.6
2150
+ 0.8
2151
+ MSE
2152
+ 0
2153
+ 10
2154
+ 20
2155
+ 30
2156
+ 40
2157
+ 50
2158
+ 60
2159
+ 70
2160
+ 80
2161
+ 90
2162
+ 100
2163
+ PFOF, v2 observed
2164
+ t
2165
+ 0
2166
+ 0.2
2167
+ 0.4
2168
+ 0.6
2169
+ 0.8
2170
+ MSE
2171
+ 0
2172
+ 200
2173
+ 400
2174
+ 600
2175
+ 800
2176
+ 1000
2177
+ 1200
2178
+ 1400
2179
+ 1600
2180
+ 1800
2181
+ ExKF, v3 observed
2182
+ t
2183
+ 0
2184
+ 0.2
2185
+ 0.4
2186
+ 0.6
2187
+ 0.8
2188
+ MSE
2189
+ 0
2190
+ 10
2191
+ 20
2192
+ 30
2193
+ 40
2194
+ 50
2195
+ 60
2196
+ 70
2197
+ 80
2198
+ 90
2199
+ 100
2200
+ PFOF, v3 observed
2201
+ Figure 5.6: The mean-square error ∥v(t) − m(t)∥2
2202
+ 2 of filters.
2203
+ 26
2204
+
2205
+ [Figure 5.7 plot: components v1, v2 and v3 versus t ∈ [0, 0.8], each panel comparing the
+ truth, the PFOF mean, and the ExKF mean with its standard deviation band.]
+ Figure 5.7: The posterior mean of each component by ExKF and PFOF in Lorenz’63 model with continuous
+ observation. The component v3 is observed.
+ [Figure 5.8 plot: three-dimensional trajectories in the (v1, v2, v3) space comparing the
+ truth with the mean trajectories of PFOF and ExKF.]
+ Figure 5.8: The trajectories of the means by PFOF and ExKF.
+ As shown in Figure 5.9, the one-dimensional marginal distributions of the observed
+ component are closer to Gaussian distributions than those of the other two components.
+ This reflects the fact that when a component is used as the observation, its mean estimate
+ is more accurate than those of the unobserved components.
+ [Figure 5.9 plot: 1-D and 2-D posterior marginal densities of the components v1, v2 and
+ v3, shown at t = 0.25, t = 0.5 and t = 1.]
+ Figure 5.9: 1-D and 2-D posterior marginal probability density functions of v.
+ The results above show that PFOF has a higher accuracy for state estimates than ExKF
+ in this chaotic nonlinear system. It can also provide estimates of the probability density
+ functions, giving more information about the state in the probabilistic sense.
+ 6 Conclusions
+ A new filtering method was proposed to estimate the filtering distribution of the state
+ within the framework of the Perron-Frobenius operator (PFO). We formulated filtering
+ problems for discrete and continuous stochastic dynamical systems and applied the PFO to
+ propagate the posterior probability density function. The finite-dimensional approximation
+ of the PFO was realized by Ulam’s method, which provides a Galerkin projection space
+ spanned by indicator functions. With Ulam’s method, the posterior PDF was discretized
+ and expressed through the weights of the basis functions. The evolution of the PDF then
+ became the transition of the weight vectors, which were iterated by the PFO and the
+ likelihood function. This procedure is called the Perron-Frobenius operator filter (PFOF).
+ Thus, the empirical PDF was determined by a convex combination of indicator functions.
+ We gave an error estimate of the proposed method and proved that its accuracy is higher
+ than that of particle filters. Furthermore, a low-rank Perron-Frobenius operator filter was
+ presented to approximate density functions via spectral decomposition, realized through the
+ eigendecomposition of the discretized PFO. Finally, the proposed method was implemented
+ for three stochastic filtering problems: a linear discrete system, a nonlinear discrete system
+ and a nonlinear continuous chaotic system. The numerical results showed that the proposed
+ method has better accuracy and robustness than particle filters and ExKF.
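+ To make the discretization step concrete, the following is a minimal Python sketch of
+ Ulam’s method for a one-dimensional map on [0, 1] (our illustration, not the paper’s
+ implementation): the domain is partitioned into indicator cells, sample points are pushed
+ through the dynamics, and the resulting row-stochastic matrix is the discretized PFO that
+ transports the weight vectors.
+ import numpy as np
+
+ def ulam_matrix(f, n_bins=200, samples_per_bin=1000, seed=0):
+     # Discretized Perron-Frobenius operator of f: [0, 1] -> [0, 1]
+     # on a uniform partition of n_bins indicator cells.
+     rng = np.random.default_rng(seed)
+     edges = np.linspace(0.0, 1.0, n_bins + 1)
+     P = np.zeros((n_bins, n_bins))
+     for i in range(n_bins):
+         x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
+         y = np.clip(f(x), 0.0, np.nextafter(1.0, 0.0))
+         j = (y * n_bins).astype(int)
+         P[i] = np.bincount(j, minlength=n_bins) / samples_per_bin
+     return P  # P[i, j]: fraction of cell i mapped by f into cell j
+
+ # evolve the weights of the indicator basis under the logistic map
+ P = ulam_matrix(lambda x: 4.0 * x * (1.0 - x))
+ w = np.full(200, 1.0 / 200)  # uniform initial density
+ for _ in range(50):
+     w = w @ P                # one application of the discretized PFO
+     w /= w.sum()             # renormalize to a probability vector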
+ Acknowledgement: L. Jiang acknowledges the support of NSFC 12271408 and the
+ Fundamental Research Funds for the Central Universities.
+
7dE1T4oBgHgl3EQfTgPr/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a2d69492de9caee0b1a8e2f7e92d38304686c725e76ac79cb1c70cf1805a41c
+ size 213067
89AzT4oBgHgl3EQfgvxw/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c6c8ad047a9d4f86bd215a38b7c2b58bb7a0ed5a776e7c6a1d7ee4955783faa
+ size 2490413
89AzT4oBgHgl3EQfgvxw/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f646b8fc30eaf304c49e67870806416e8fda0595607dc0c3f87a4dfe68c278bb
+ size 109680
99AzT4oBgHgl3EQfg_xe/content/2301.01477v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7eb0c127736175395f38d3d440403aeb23b740e86b57a8724ead5d92ffd4a52f
+ size 310713
99AzT4oBgHgl3EQfg_xe/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2bc20330493ee85a1df5a0f9dcc29d45deb8cf44bebb9ec564bb451ecb89fc7
+ size 7798829
99AzT4oBgHgl3EQfg_xe/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca067bd0fb1ecc444dc1348a9faf77c47851b7c56a71d467bc6d5d5c7b250e21
+ size 242336
9tFRT4oBgHgl3EQfqzeA/content/tmp_files/2301.13618v1.pdf.txt ADDED
@@ -0,0 +1,1648 @@
1
+ Scheduling Inference Workloads on Distributed Edge Clusters with
+ Reinforcement Learning
+ Gabriele Castellano†∗, Juan-José Nieto‡§, Jordi Luque‡, Ferrán Diego‡, Carlos Segura‡,
+ Diego Perino‡, Flavio Esposito†, Fulvio Risso∗, Aravindh Raman‡
+ †Computer Science, Saint Louis University, USA
+ ∗Computer and Control Engineering, Politecnico di Torino, Italy
+ §Universitat Politècnica de Catalunya, Spain
+ ‡Telefónica Research, Spain
+ Email: †{gabriele.castellano}@slu.edu, ‡{jordi.luque}@telefonica.com
10
+ Abstract—Many real-time applications (e.g., Augmented/Vir-
11
+ tual Reality, cognitive assistance) rely on Deep Neural Networks
12
+ (DNNs) to process inference tasks. Edge computing is considered
13
+ a key infrastructure to deploy such applications, as moving
14
+ computation close to the data sources enables us to meet stringent
15
+ latency and throughput requirements. However, the constrained
16
+ nature of edge networks poses several additional challenges to
17
+ the management of inference workloads: edge clusters can not
18
+ provide unlimited processing power to DNN models, and often
19
+ a trade-off between network and processing time should be
20
+ considered when it comes to end-to-end delay requirements. In
21
+ this paper, we focus on the problem of scheduling inference
22
+ queries on DNN models in edge networks at short timescales
23
+ (i.e., few milliseconds). By means of simulations, we analyze
24
+ several policies in the realistic network settings and workloads
25
+ of a large ISP, highlighting the need for a dynamic scheduling
26
+ policy that can adapt to network conditions and workloads.
27
+ We therefore design ASET, a Reinforcement Learning based
28
+ scheduling algorithm able to adapt its decisions according to
29
+ the system conditions. Our results show that ASET effectively
30
+ provides the best performance compared to static policies when
31
+ scheduling over a distributed pool of edge resources.
32
+ I. INTRODUCTION
33
+ In the last years, we have witnessed the growing popularity
34
+ of applications leveraging Deep Neural Networks (DNNs),
35
+ from Augmented/Virtual Reality (AR/VR) to cognitive assis-
36
+ tance or video surveillance. The DNN model training process
37
+ typically does not have strict latency constraints and it is
38
+ performed offline in well-provisioned centralized data-centers
39
+ or in a distributed fashion via, e.g., federated learning [1].
40
+ Differently, the DNN inference task is usually performed
41
+ online with constraints in terms of accuracy, throughput, and
42
+ latency, which may significantly differ across applications. For
43
+ instance, services like cognitive assistance require high accu-
44
+ racy but may tolerate few hundreds of milliseconds latency,
45
+ while others, like self-driving cars, have more stringent latency
46
+ needs (i.e., tens of milliseconds).
47
+ Providing an inference service requires to address several
48
+ challenges to meet this diverse set of application constraints,
49
+ e.g., the selection of the appropriate variant of the model
50
+ to be used (programming framework, compiler optimization,
51
+ batching size, etc.), the processing unit to leverage for the
52
+ * Gabriele Castellano and Juan-José Nieto contributed equally to this work,
+ carried out during their internships at the Telefónica Research team in Spring 2020.
54
+ inference (e.g., GPU, CPU, TPU), and the nodes and resources
55
+ (e.g., memory, computing) to be allocated to every application.
56
+ This requires management at different timescale. On a short
57
+ timescale (i.e, milliseconds), a scheduler is in charge of
58
+ selecting the appropriate computing instance for every new
59
+ incoming request to meet its application requirements. This in-
60
+ cludes not only the selection of the computation node but also
61
+ the appropriate model variant and computation technology. On
62
+ a longer timescale (i.e., seconds, minutes), an orchestrator
63
+ selects the proper model variants to deploy, optimizes their
64
+ placement across the nodes, and allocates the appropriate
65
+ resources to them. Recent work [2]–[5] focused on data cen-
66
+ ters and proposed DNN inference workload management for
67
+ such environments. Further, commercial solutions have been
68
+ deployed in recent years [6]–[8] by major cloud providers.
69
+ Edge computing is considered a key enabler to deploy
70
+ DNN-based applications with stringent delay or bandwidth
71
+ requirements, as it moves computation capabilities closer to
72
+ end-users with respect to centralized cloud platforms. This
73
+ is especially the case for users connected via mobile access
74
+ (e.g. 5G). However, realizing DNN inference at the edge
75
+ poses several additional challenges. Edge infrastructures are
76
+ indeed complex networks composed of several layers with
77
+ heterogeneous limited resources and different latencies to end
78
+ users [9]. Due to the less availability of resources at edge,
79
+ multiple inference models of different capacities should be
80
+ considered, and end-to-end delay requirements may lead to
81
+ considering a trade-off between network delay and processing
82
+ time. This differs from centralized cloud platforms, which
83
+ usually feature large pools of uniform hardware available
84
+ in a single location where DNN models can be scaled up
85
+ almost indefinitely. For these reasons, the optimal selection
86
+ of inference models while scheduling real-time requests at
87
+ Edge is still a challenging task. Recent work combined edge
88
+ computing and deep learning [10], with a focus on scheduling
89
+ requests to minimize end-to-end delay [11] or maximize
90
+ accuracy [12]. However, none of the existing work analyzes
91
+ inference workload optimization taking into account different
92
+ application constraints in realistic edge network settings.
93
+ In this paper, we focus on the problem of scheduling DNN
94
+ inference requests taking into account not only accuracy (i.e.,
95
+ model selection) but also throughput and latency constraints
96
+ arXiv:2301.13618v1 [cs.LG] 31 Jan 2023
97
+
98
+ under realistic edge deployment settings. First, we model our
99
+ distributed edge inference system and provide a definition of
100
+ the scheduling problem (Section III), also proposing several
101
+ baseline static scheduling policies both original and from
102
+ literature. From evaluating static policies on a realistic network
103
+ topology, we observe that a policy that always performs better
104
+ does not exist, as different applications may benefit differently
105
+ from each scheduling strategy. Based on the insights derived
106
+ by this analysis we propose ASET1 (Adaptive Scheduling
107
+ of Edge Tasks), an adaptive scheduling algorithm based on
108
+ Reinforcement Learning (Section IV), which dynamically fol-
109
+ lows system conditions and apps requirements optimizing its
110
+ decisions accordingly. We evaluate ASET simulating three
111
+ topologies based on the realistic network of a large ISP
112
+ and using a pool of reference edge applications (Section V).
113
+ Our findings show that, while some static policies are well
114
+ suited to optimize workloads on cloud-based topologies, ASET
115
+ improves performance over any static policy when resources
116
+ are distributed across the edge network, effectively increasing
117
+ the percentage of successfully handled queries.
118
+ II. RELATED WORK
119
+ The provisioning of on-demand inference services has been
120
+ investigated in several recent works.
121
+ Inference scheduling in data centers. Most of the existing so-
122
+ lutions address the common scenario where inference queries
123
+ have to be scheduled over the resources of a Data Center. Some
124
+ of the main production systems are Tensorflow Serving [6],
125
+ Azure ML [7], and Cloud ML [8]. Most scientific works
126
+ focused on proposing algorithms and strategies to improve
127
+ the performance and ease of use of such cloud inference sys-
128
+ tems. [2] and [3] address the problem of scheduling Directed
129
+ Acyclic Graph (DAGs) tasks with the objective of improving
130
+ the throughput; GrandSLAm [2] relies on a prediction model
131
+ that estimates job duration, while [3] proposes an efficient
132
+ RL approach to select the number of servers to allocate
133
+ for a given job. Being oriented to a Cloud infrastructure,
134
+ none of them takes into account network latency between
135
+ the servers and their heterogeneity. In [13] a Model Master
136
+ manages the dynamic allocation of DNN models across the
137
+ servers of a heterogeneous data center based on Azure ML,
138
+ and proposes a protocol among servers to forward queries to
139
+ the correct destination. Clipper [4] provides a generalization
140
+ of TensorFlow Serving [6] to enable the usage of different
141
+ frameworks. One of the most complete solutions is provided
142
+ by INFaaS [5], which focuses on ease of use, providing
143
+ transparent scheduling of incoming queries over available
144
+ model variants, and autoscaling of deployed models based
145
+ on load thresholds. However, all the previous works address
146
+ the scheduling problem only from the boundaries of a data
147
+ center, considering neither (i) network latency, thus becoming
148
+ no suitable in scenarios with real-time constraints, nor (ii)
149
+ resource constrained clusters, thus failing to address situations
150
+ where workers cannot be indefinitely scaled up/out.
151
+ 1In ancient Egyptian mythology, Aset was a major goddess said to have
152
+ power over fate itself.
153
+ Inference offloading. Another related set of works concerns
154
+ offloading, with a focus on the end-devices. While offloading
155
+ has been widely studied in the literature [14], [15], the specific
156
+ use case of DNN workload introduces additional degrees of
157
+ freedom (e.g., model variant selection and configuration) that
158
+ can be exploited for improving optimization over the mere
159
+ selection of the task placement. Some recent works [16]–
160
+ [18] provides intelligent offloading techniques for DNN tasks.
161
+ DeepDecision [17] addresses the problem in the particular case
162
+ of a single device running a single application; queries are
163
+ scheduled among a series of local small models providing
164
+ different performance/requirements trade-off, and one remote
165
+ model, which provides the best performance. On the other
166
+ hand, LinkShare [18] focuses on the orthogonal problem of
167
+ ordering the offloaded requests from multiple apps on the
168
+ same device, with the main constraint of network bandwidth.
169
+ MCDNN [16] proposes a scheduler to handle queries from
170
+ multiple applications on the same device, deciding (i) the
171
+ model variant to be used and (ii) whether to offload the
172
+ inference task or not, seeking average accuracy maximization.
173
+ Such decisions are taken considering constraints such as
174
+ latency requirements, device energy, cloud monetary budget.
175
+ Inference and edge computing. Fewer and more recent are
176
+ the trends that combine DNN with edge computing [10], with
177
+ the aim of overcoming scalability and latency limitations of
178
+ cloud computing. The use of edge computing brings additional
179
+ challenges deriving from the high resource requirements of
180
+ DNN based tasks on less powerful edge compute resources.
181
+ Despite some issues have been addressed in recent works [11],
182
+ [12], [19], [20], edge-oriented solutions for inference systems
183
+ are still largely embryonic compared to data center solutions,
184
+ with many open challenges. CloudPath [19] focuses on the
185
+ problem of data distribution on a hierarchical continuum of
186
+ computing resources between edge and cloud. In [20], authors
187
+ propose an approach to schedule DAGs across multiple edge
188
+ servers, seeking minimization of end-to-end latency. However,
189
+ the proposed algorithm assumes the possibility to indefinitely
190
+ allocate new edge servers when needed, with no geographical
191
+ restrictions, thus not addressing the problem of constrained
192
+ resources at the edge. Other works [11], [12] study the problem
193
+ of processing data streams from scattered devices, exploiting
194
+ the geographically distributed edge/cloud clusters. In particu-
195
+ lar, VideoEdge [12] assumes a deployment of cameras gener-
196
+ ating a known set of video streams, on which various DNN
197
+ tasks should be performed. The proposed approach decides
198
+ globally the cluster where each stream should be processed,
199
+ as well as the model variant to employ and its configuration,
200
+ considering computation and network bandwidth as constraints
201
+ and seeking accuracy maximization. However, neither pro-
202
+ cessing nor network latencies are taken as constraints, thus
203
+ making this approach not suitable for interactive or critical
204
+ scenarios (e.g., virtual reality, autonomous driving, and more).
205
+ A similar use case is analyzed in [11], which focuses on
206
+ minimizing the end-to-end latency processing data flowing
207
+ from the edge to the cloud. However, it only considers the
208
+ problem of task allocation, missing the possibility to optimize
209
+
210
+ properly selecting model variants and their configurations.
211
+ To the best of our knowledge, none of the existing works
212
+ on inference serving systems addresses the problem simul-
213
+ taneously considering (i) end-to-end latency, accuracy, and
214
+ throughput constraints, (ii) edge-cloud computing and multi-
215
+ cluster deployment, (iii) real-time job dispatching, (iv) opti-
216
+ mization on model variant selection.
217
+ III. SCHEDULING IN EDGE-CLOUD INFRASTRUCTURE
218
+ In this section, we formally define the problem of schedul-
219
+ ing inference tasks on a distributed edge-cloud infrastructure.
220
+ Additionally, we describe a set of static scheduling policies
221
+ (both original and from literature), that we then use in Sec-
222
+ tion IV as a baseline for our dynamic scheduling approach.
223
+ A. System modeling
224
+ Applications and data-streaming sources. We consider a set
225
+ of sources (e.g., end users, IoT devices, vehicles) running
226
+ a variety of applications (e.g., virtual reality, autonomous
227
+ driving) each relying on one or more DNN inference tasks.
228
+ Every application generates queries to be processed, i.e., each
229
+ query represents the request to perform a specific inference
230
+ task j ∈ J (e.g., object detection, speech recognition) on
231
+ a given input (e.g., a video frame), where J is the set of
232
+ inference tasks supported by the system. Since applications
233
+ often require more than one query to be processed, we treat
234
+ sequential queries as streams (e.g., all the frames captured by
235
+ an AR headset). Therefore, each query q belongs to a stream
236
+ i ∈ I, being I the entire set of streams currently served by
237
+ the system. Every query of a stream has a set of requirements
238
+ such as a maximum end-to-end delay Di, and a minimum
239
+ required accuracy Ai. Additionally, every stream i has a data
240
+ rate ρi, that is the number of queries submitted each second
241
+ (e.g., frame rate), and every query of stream i has an input of
242
+ size ζi (e.g., frame size). Note that all queries of a stream are
243
+ for the same task j ∈ J with the same requirements.
244
+ DNN Models and Variants. Every inference task j can be
245
+ served using a Deep Neural Network model m among the
246
+ set of M^j models that are trained for task j. Therefore, the
+ system provides a total of N_m = Σ_{j∈J} |M^j| DNN models.
249
+ Take object detection as an example application. A model m
250
+ represents a particular Neural Network architecture with pre-
251
+ trained weights (e.g., yolo-v3, ssd-mobilenet-v1), and features
252
+ a given accuracy Am (mean average precision - mAP). A
253
+ model m can be deployed and run through different setups
254
+ and underlying hardware (e.g., SSD Mobilenet v1 on (i)
255
+ Tensorflow-GPU with batch size 8, or on (ii) Opencv-CPU
256
+ batch size 1 and 2 replicas, and more), thus obtaining a set
257
+ V m of different model variants. A model variant v features
258
+ a given processing delay Dv, throughput capacity Cv (i.e.,
259
+ the maximum number of queries it can process per second),
260
+ and resource usage rv ∈ Rk
261
+ + (e.g., in terms of CPU, system
262
+ memory and GPU memory). Note that the processing delay
263
+ may vary based on the size ζi ∈ R+ of the input data, thus it
264
+ is a function Dv : R+ → R+; with Dv we refer to the time
265
+ needed to process the maximum input size supported by the
+ model (analogous considerations hold for the capacity Cv).
+ [Fig. 1 diagram: streams of queries (task: object detection; constraints: latency, accuracy;
+ stream: size, rate) reach a Tasks Scheduler, which dispatches them to model variants (task:
+ object detection; specs: mAP, processing time, load capacity; load: current QPS) deployed
+ across clusters.]
+ Fig. 1: The scheduler dispatches streams of queries on available model variants
+ based on their constraints and geographical position of clusters.
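+ For concreteness, the entities above can be sketched as plain records; the Python below is
+ our illustration (names and fields are ours, not the paper’s implementation):
+ from dataclasses import dataclass
+
+ @dataclass
+ class Stream:             # one stream i of inference queries
+     task: str             # task j, e.g. "object-detection"
+     max_delay: float      # D_i, end-to-end delay budget (s)
+     min_accuracy: float   # A_i, minimum required accuracy
+     rate: float           # rho_i, queries per second
+     input_size: float     # zeta_i, size of one query input
+     bit_rate: float       # b_i, transmission cost per unit of input size
+
+ @dataclass
+ class ModelVariant:       # one deployment v of a model m
+     task: str             # task implemented by the parent model
+     accuracy: float       # A_m of the parent model
+     proc_delay: float     # D_v at the largest supported input
+     capacity: float       # C_v, maximum queries per second
+     load: float = 0.0     # L_vn(t), current load on this worker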
335
+ Network topology and computing clusters. We consider a
336
+ geographically distributed cloud-edge infrastructure composed
337
+ by Nν computing clusters (e.g., a centralized data center, a
338
+ telco regional cloud, an eNodeB) typically organized in a
339
+ hierarchical topology. Each cluster potentially provides dif-
340
+ ferent resources. We denote cn ∈ Rk
341
+ + the overall capacity of
342
+ cluster n, with cnk representing the amount of resource k ∈ N
343
+ available on cluster n. Examples of resources include CPU,
344
+ system memory and GPU memory.
345
+ Model variants are deployed at different computing clus-
346
+ ters consuming a different amount of resources. On a long
347
+ timescale (i.e., seconds, minutes), an orchestrator selects the
348
+ appropriate set of model variants to deploy, optimizes their
349
+ placement across the clusters, and allocates the appropriate
350
+ resources. Finally, stream sources are connected to a small
351
+ cluster at the lower layer of the hierarchy. This can be either
352
+ the antenna/eNodeB in case of cellular communication or the
353
+ home gateway in the fixed access case. Queries need to be
354
+ scheduled for processing across model variants available at
355
+ different computing clusters to meet application requirements
356
+ on a short timescale (i.e., tens of milliseconds). In the fol-
357
+ lowing, we provide a definition of the scheduling problem we
358
+ tackle in this paper.
359
+ B. Scheduling problem definition
360
+ We assume a scheduler is located at the nearest compute
361
+ cluster available to existing stream sources, i.e., antenna/eN-
362
+ odeB or the home gateway/central office in the fixed access
363
+ case. It follows every stream source is served by a scheduler
364
+ s among Ns different ones (one per each lower layer cluster).
365
+ Each scheduler s has a given average network delay d^s_n
+ towards each cluster n; we also model the associated delay
+ deviation as σ^s_n. Note that an additional access delay from the
370
+ stream source to the scheduler has to be taken into account
371
+ (e.g., the radio latency between a device and the nearest 5G
372
+ antenna). We denote δi the additional access delay that affects
373
+ stream i. Every scheduler is aware of each model variant v
374
+ currently available on each cluster n, each with its current
375
+ load Lvn(t) (measured in terms of incoming queries per
376
+ second2). Based on the current conditions of the available
+ model variants, for every stream i it serves, a scheduler s
+ decides which model variant v on which cluster n should be
+ used to process stream i.
+ 2Each stream i contributes to the total load as a fraction η^i_v of its data
+ rate ρi (named fractional load), based on the stream data size ζi.
385
+ When scheduling a stream i to the proper model variant/-
386
+ cluster, the scheduler takes into account application require-
387
+ ments. Specifically, it considers the stream data size ζi, its
388
+ data rate ρi, its bit rate bi, the maximum tolerated end-to-end
389
+ delay Di and the minimum required accuracy Ai, satisfying
390
+ the following constraints:
391
+ (i) the selected model variant v is a valid implementation of
+ task j required by i,
+ v ∈ V^m ∧ m ∈ M^j;   (1)
+ (ii) the load capacity of the chosen model variant is not
+ exceeded,
+ L_vn(t) + η^i_v ρ_i ≤ C_v,   (2)
+ being η^i_v the fractional load of stream i for model variant v;
+ (iii) the sum of expected network delay and processing time
+ does not exceed the maximum tolerated delay,
+ 2(δ_i + d^s_n + 2σ^s_n) + b_i ζ_i + D_v(ζ_i) ≤ D_i,   (3)
+ where the first addendum is the round-trip propagation time,
+ the second is the transmission delay for one query, and the
+ third is the time needed to process the query;
+ (iv) the selected model provides an adequate accuracy,
+ A_m ≥ A_i.   (4)
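+ Constraints (1)-(4) translate directly into a per-binding feasibility test; a hedged sketch
+ using the illustrative records above (frac_load stands for the fractional load η^i_v and is
+ left abstract here):
+ def feasible(stream, variant, d_n, sigma_n, access_delay, frac_load):
+     if variant.task != stream.task:                      # (1) right task
+         return False
+     if variant.load + frac_load * stream.rate > variant.capacity:
+         return False                                     # (2) capacity
+     e2e = (2 * (access_delay + d_n + 2 * sigma_n)        # round-trip time
+            + stream.bit_rate * stream.input_size         # transmission
+            + variant.proc_delay)                         # processing
+     if e2e > stream.max_delay:                           # (3) delay budget
+         return False
+     return variant.accuracy >= stream.min_accuracy       # (4) accuracy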
414
+ A graphical representation of the scheduling problem is
415
+ depicted in Figure 1, while a scheduling policy can be formally
416
+ defined as follows.
417
+ Definition 1. (scheduling policy). Let us consider a stream i to
418
+ be processed through a task j on an edge-cloud infrastructure
419
+ that features a set of V m compatible model variants over Nν
420
+ clusters (|N| = N_ν). A scheduling policy is any function
+ β : I → V^m × N   (5)
423
+ that binds stream i to a feasible model variant v ∈ V m de-
424
+ ployed on cluster n ∈ N, so that constraints at Equations (1),
425
+ (2), (3), and (4) are satisfied.
426
+ Note that, as requests are handled in real-time, scheduler
427
+ decisions should be taken in an amount of time that is
428
+ negligible compared to the stream latency requirements.
429
+ Scheduling performance metrics and objectives. Based on
430
+ the scheduling decisions, in a given time instant t the stream
431
+ i will feature a reject ratio q^R_i(t) ∈ [0, 1], i.e., the fraction
+ of queries from stream i that have not been processed by the
+ system because of resource unavailability, and a failure ratio
+ q^F_i(t) ∈ [0, 1], i.e., the fraction of queries that have been served
437
+ violating one or more application requirements (i.e., delivered
438
+ out of maximum tolerated delay).
439
+ The goal of the scheduler is typically to maximize, over
440
+ time, the fraction of queries that are served successfully, i.e.,
441
+ to minimize the sum of reject ratio and failure ratio.
442
+ C. Static scheduling policies
443
+ Several policies have been proposed for static scheduling
444
+ of inference tasks on edge clusters [9], [21]. In this work we
445
+ consider the following ones (both original and from literature):
446
+ 1) closest: bind stream i to any feasible model variant v∗ lo-
+ cated on the cluster n∗ that features the lowest network latency
+ to the serving scheduler s, i.e., n∗ = arg min_{n∈N} (d^s_n + 2σ^s_n).
451
+ This policy may lead to the early saturation of smaller clusters
452
+ at the very edge, as they are always preferred [22].
453
+ 2) load balancing: bind the input stream to model variant v∗
454
+ on cluster n∗ such that (v∗, n∗) = arg min_{(v,n)∈V^m×N} L_vn(t).
455
+ This policy can bring huge performance gains compared to
456
+ closest [22]; however, it may lead to unfair allocation when
457
+ latency-sensitive applications are in the minority.
458
+ 3) farthest: bind stream i to any feasible model variant v∗
459
+ located on the cluster n∗ with the highest (still feasible) net-
460
+ work latency, i.e., n∗ = arg max_{n∈N} (d^s_n + 2σ^s_n). As opposed
463
+ to closest, this policy preserves smaller clusters at the very
464
+ edge for those apps that really need them [23]; however, it is
465
+ highly affected by the unreliability of network delay for long
466
+ distance communications.
467
+ 4) cheaper: bind stream i to model variant v∗ on clus-
468
+ ter n∗ such that the expected end-to-end delay (round-
+ trip and processing time) is maximized, i.e., (v∗, n∗) =
+ arg max_{(v,n)∈V^m×N} (2(d^s_n + 2σ^s_n) + D_v(ζ_i)). We designed this
473
+ policy as an improvement over farthest, as it additionally tries
474
+ to preserve the most performing model variants.
475
+ 5) random-proportional latency: bind stream i to model
476
+ variant v on cluster n with probability proportional to
+ 1/(2(d^s_n + 2σ^s_n) + D_v(ζ_i)). This guarantees that, over a large enough number of
+ streams, bindings are inversely proportional to end-to-end delays [21].
481
+ 6) random-proportional load: bind stream i to model variant
482
+ v on cluster n with probability proportional to Cv/Lvn(t). This guarantees
483
+ that, on a large enough number of streams, bindings are
484
+ proportional to the capacity of each model variant.
485
+ 7) least impedance: bind stream i to model variant v∗ on
486
+ cluster n∗ such that end-to-end latency to s is minimized, i.e.,
487
+ (v∗, n∗) = arg min_{(v,n)∈V^m×N} (2(d^s_n + 2σ^s_n) + D_v(ζ_i)) [21].
490
+ This greedy policy leads to the best performance when the
491
+ overall load is low, but may suffer from a high rejection rate
492
+ once the closest and fastest model variants are saturated.
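+ In code, the greedy policies above reduce to simple arg-min/arg-max scans over the feasible
+ (variant, cluster) pairs; a sketch (assuming each candidate pair carries the cluster’s
+ measured delay statistics d and sigma, which are our illustrative names):
+ def least_impedance(feasible_pairs):
+     # minimize expected end-to-end latency: 2(d + 2*sigma) + D_v
+     return min(feasible_pairs,
+                key=lambda p: 2 * (p.cluster.d + 2 * p.cluster.sigma)
+                              + p.variant.proc_delay)
+
+ def farthest(feasible_pairs):
+     # highest still-feasible network latency, preserving edge clusters
+     return max(feasible_pairs,
+                key=lambda p: p.cluster.d + 2 * p.cluster.sigma)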
493
+ Our experiments (Section V) show that, for a heterogeneous
494
+ pool of applications, a policy that always performs better
495
+ than the others does not exist: different applications may
+ benefit differently from each scheduling strategy, and the
+ physical topology and the particular stream arrivals can also be
+ determinant. Based on these findings, in the next section we
499
+ propose ASET, an algorithm for Adaptive Scheduling of Edge
500
+ Tasks that leverages Reinforcement Learning to optimize its
501
+ decisions dynamically based on the current system conditions.
502
+ IV. ASET SCHEDULING ALGORITHM
503
+ Our adaptive scheduling approach aims to learn the optimal
504
+ policy depending on current system conditions, e.g., current
505
+ applications, network topology, and stream arrivals that vary
506
+ over time. Due to the lack of labeled data, the optimal
+ policy learning is formulated as a Reinforcement Learning
+ (RL) problem; hence, an intelligent agent tries to learn the
+ optimal policy selection strategy according to the observed
+ state of the environment. This is accomplished by an RL
+ policy that estimates a probability distribution of each possible
+ action (policy selection) that cumulatively maximizes a reward
+ (typically maximizing the fraction of queries that are served
+ successfully), as shown in Figure 2.
+ [Fig. 2 diagram: the ASET RL Agent exchanges STATE, ACTION, and REWARD signals
+ with the cloud-edge infrastructure that serves the incoming workloads.]
+ Fig. 2: Algorithm overview. State St, sampled from the environment, is
+ forwarded through the agent DNN, which outputs action At; performing At
+ on the environment contributes to reward rt+1 obtained at the next step.
536
+ Let us consider a learner and decision-maker called the
537
+ agent, and an environment that is the external world that the
538
+ agent interacts with at discrete time steps t. Given St ∈ S,
539
+ where S is the set of possible states of the environment, the
540
+ agent can select an action At ∈ A(St), standing for the set of
541
+ available actions in state St. The agent receives an observation
542
+ of the environment St at time t and, one step later, a numerical
543
+ reward, rt+1 ∈ R ⊂ R, and it jointly determines the action
544
+ At to perform, which, in part, yields to next state St+1.
545
+ Definition 2. (stochastic reinforcement learning policy). An
546
+ RL policy πφ, where φ ∈ Rd denotes policy parameters, is
547
+ any function or algorithm that determines and maps the next
548
+ action to take by an agent. A stochastic RL policy, additionally,
549
+ estimates a probability distribution over actions that an agent
550
+ can take at a given state:
551
+ πφ : A × S → [0, 1],   (6)
+ πφ(a|s) := P(take action a | given state s).
556
+ Overall, the goal of the proposed adaptive scheduling is to
557
+ learn an optimal sequence of static network scheduling policies
558
+ that maximizes the percentage of successfully dispatched
559
+ streams. At a T seconds rate, the RL-based scheduler sam-
560
+ ples the environment by collecting a variety of observations
561
+ from the edge-cloud infrastructure, e.g., responses and loads,
562
+ building up the current state St of the environment. Then,
563
+ the agent evaluates a discrete set A of actions and chooses
564
+ an action At ∈ A, where A stands in this work for the set
565
+ of available network scheduling policies β. Note that the set
566
+ of actions does not depend on the state itself, thus the sets
567
+ A(St) = A are the same (Section III-C). Therefore, every time
568
+ that the agent takes an action At, the state of the environment
569
+ St is observed and a reward score rt+1 is used as feedback
570
+ information to improve the policy selection, see Figure 2. In
571
+ this work, these rewards are defined as a linear combination of
572
+ the ratio of “failed” queries and the ratio of queries that have
573
+ been “rejected” for lack of available resources (Section IV-C).
+ [Fig. 3 plots: percentage of success queries versus time (s) for the policies load-balancing,
+ rp-load, least-impedance, closest, farthest, cheaper, rp-latency, random, and ASET, with the
+ policy switches chosen by ASET annotated; panel (a) cloud-based topology, panel (b)
+ edge-based topology.]
+ Fig. 3: The ASET RL agent infers the optimal policy sequence based on the
+ system conditions, seeking an optimal binding between workloads and model
+ variants that maximizes the percentage of success queries. Plots show two
+ runs on a cloud-based topology and on an edge-based one (see Section V).
636
+ The particular policy βt, selected by the agent at time t, is
637
+ used to dispatch all incoming streams during the subsequent
638
+ time window [t, t + T]. Therefore, given the corresponding
639
+ states sequence S = [S0, ST , S2T , ..., SkT ] with k ∈ N, the re-
640
+ sulting overall scheduling policy β(S) = [β0, βT , β2T , ..., βkT ]
641
+ dynamically maps, with the corresponding baseline policies βt,
642
+ a stream i to a model variant v and its deployment on cluster
643
+ n. From now, and for the sake of simplicity, we will refer as π
644
+ to the policy learned by the ASET agent (Definition 2), which
645
+ leads to a particular static policy sequence β(S). It corresponds
646
+ to any function employed to estimate the optimal sequence of
647
+ actions that the agent should perform at each time window
648
+ [t, t+T] and given a state St, β(S) = [A0, AT , A2T , ..., AkT ].
649
+ The intuition of this behavior is provided in Figure 3. Note that
650
+ each of the static scheduling policies from Section III-C cor-
651
+ responds to a deterministic agent that always returns the same
652
+ action At independently of the system state; whereas the pol-
653
+ icy π learned by the ASET agent can be seen as a meta-policy
654
+ (or as a policy of baseline scheduling strategies) that also
655
+ satisfies the constraints from Equations (1), (2), (3), and (4).
656
+ A. Deep Q-Learning policy optimization
657
+ Our RL agent has to cope with a discrete set of actions, with
658
+ A ⊂ N. This is often modeled in literature as a stochastic
659
+ process with no memory, which is a Markov Decision Pro-
660
+ cess [24] (MDP). In this work, our MDP defined by tuples
661
+ (S, A, T , R, γ) represents states comprised of partial obser-
662
+ vations from the system. Nonetheless, the model parameters
663
+ of such MDP are unknown, i.e., the transition probabilities
664
+ T (s′|a, s) and the rewards R(s′|a, s) of taking the action
665
+ At = a and moving from state St = s to state St+1 = s′.
666
+ Note that the ASET agent should experience each transition
667
+ among states at least once, or even multiple times to get a
668
+ reliable estimation of both transition and cumulative rewards.
669
+ At each step t = kT, with k ∈ N, the RL agent can choose
670
+ one of several possible scheduling policy-actions, βt ≡ At.
671
+ The transition probability T (s′|a, s) depends in part on the
672
+ chosen action, and, additionally, from some positive or neg-
673
+ ative reward that may be returned by every state transition,
674
+ named return of actions. Overall, our objective is to find a
675
+ strategy, i.e., a policy π mapping to a sequence β(S), that
676
+ maximizes the expected return G(t) of rewards over time.
677
+
678
+ Thus, G(t) is defined in terms of the cumulative weighted
679
+ rewards along with states and given the corresponding optimal
680
+ sequence of actions to take in the future:
681
+ G(t) = Σ_{τ=0}^{H} γ^τ r_τ,   γ ∈ [0, 1],   (7)
688
+ where rτ = R(s′|a, s) is the reward at time step τ due to
689
+ corresponding state transition (s, s′), γ is a weighting factor
690
+ that reduces the contribution of long-term rewards, usually
691
+ known as the discount factor, and time H is the last time
692
+ step within a training episode (see Section IV-C for further
693
+ details). Therefore, the RL agent’s target policy is
694
+ π∗(a|s) = arg max_{πφ} E_{t∗∼πφ} {G(t)},   (8)
698
+ which translates the scheduler state into a distribution over
699
+ actions, see Definition 2. Note that the expectation is computed
700
+ over the distribution of trajectories t∗ = (s0, a0, s1, ...).
701
+ In Q-Learning, the optimal pair values (s, a), i.e., those
702
+ yielding to the sequence of optimal actions, are generally
703
+ called Quality-Values (Q-Values) and noted as Q∗(s, a) [25].
704
+ They correspond to the sum of weighted rewards that the RL
705
+ agent can expect on average after performing action a on state
706
+ s. It is also known as the expected return of actions,
707
+ Q(s, a) = E_{t∗∼πφ} {G_t | S_t = s, A_t = a}.   (9)
709
+ Bellman [24] showed that if an agent’s trajectory follows
710
+ the highest Q-Values, then its policy is optimal and leads
711
+ to the highest G(t) as well. Bellman also reported that an
712
+ accurate estimate of Q-Values can be found recursively by
713
+ using the Bellman Optimality Equation, also known as the
714
+ Value Iteration algorithm. In fact, Q-Learning is an adaptation
715
+ of Bellman’s value iteration algorithm, where a policy is
716
+ implicitly, or off-line, learned by following trajectories yield-
717
+ ing to the highest Q-Values [25]. It is usually computed by
718
+ dynamic programming and assumes that the optimal value of
719
+ state St = s is equal to the reward it will get on average,
720
+ after taking one optimal action a and adding the expected
721
+ optimal value of all possible next states along the future path of
722
+ decisions, that is Q(s, a) = E_π {r + γ max_{a′} Q(s′, a′) | s, a}.
723
+ Equation (9) turns out into the following iteration algorithm,
724
+ which converges to the optimal Q∗(s, a),
725
+ Q_{k+1}(s, a) ← Σ_{s′} T(s, a, s′) [ r + γ max_{a′} Q_k(s′, a′) ],   (10)
735
+ for all s′ ∈ S, a′ ∈ A and k ∈ N as iteration step. For
736
+ simplicity, we set the transition probability matrix T to all
737
+ elements equal to 1, allowing initial transitions among all seen
738
+ states. Once Q-Values are estimated, the optimal policy π∗
739
+ for the RL agent corresponds to choosing the action that has the
740
+ highest Q-Values: π∗(a|s) = arg maxπ Qπ(s, a), for all s ∈ S
741
+ and a ∈ A ≡ β static policies in Section III-C.
742
+ However, the previous algorithm does not scale to large MDPs
743
+ with a large number of states. A solution is to approximate
744
+ the optimal Q∗(s, a) using a Deep Neural Network, named
745
+ Deep Q-Network (DQN) [26], to get an estimate Q(s, a; φ) ≈
746
+ Q∗(s, a), where φ stands for the parameters of the DQN
747
+ model, see line 16 in Algorithm 1. Using a DQN for
+ approximate Q-Learning is known as Deep Q-Learning.
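+ In practice, one Deep Q-Learning step regresses Q(s, a; φ) onto the bootstrapped target
+ r + γ max_a′ Q(s′, a′; φ); a minimal PyTorch sketch over a replay batch (our illustration,
+ not the authors’ code):
+ import torch
+ import torch.nn.functional as F
+
+ def dqn_update(q_net, optimizer, batch, gamma=0.99):
+     s, a, r, s_next = batch                   # tensors from the replay buffer
+     q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a; phi)
+     with torch.no_grad():                     # bootstrapped Bellman target C
+         target = r + gamma * q_net(s_next).max(dim=1).values
+     loss = F.mse_loss(q_sa, target)           # mean of (C_i - Q(s_i, a_i))^2
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()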
749
+ B. State Encoding
750
+ We model the state in a continuous fashion, representing
751
+ the environment in a given time t as a set of some particular
752
+ features sampled from the system and averaged along a time
753
+ window of size T. Features are evaluated separately for each
754
+ available worker w ∈ W,3 and are as follows: (i) the number
755
+ |Iw| of streams currently served by worker w, being Iw =
756
+ {i ∈ I| β(i) = (v, n)}; (ii) the current throughput Rw(t) of
757
+ worker w, in terms of responses delivered at the time instant
758
+ t; (iii) the current load Lw(t), measured in terms queries per
759
+ second normalized on input size (as defined in Section III-B);
760
+ (iv) number of incoming instant queries grouped by stream
761
+ characteristics, e.g., queries of all streams that require end-to-
762
+ end delay within a given range [δ1, δ2[ and features a data rate
763
+ in the interval [ρ4, +∞[, i.e., Σ_{i∈I1,4} ρi, where I1,4 = {i ∈
+ I | Di ∈ [δ1, δ2[ ∧ ρi ∈ [ρ4, +∞[}. In particular, we consider
766
+ a partition 0 = δ0 < δ1 < δ2 < ... < δNδ−1 of R+ with Nδ
767
+ delay intervals, and a second partition 0 = ρ0 < ρ1 < ρ2 <
768
+ ... < ρNρ−1 of R+ with Nρ input-rate intervals, evaluating
769
+ Nδ · Nρ different sum of instant queries, that is one feature
770
+ for each combination of the two partitioning sets. The features
771
+ defined so far constitute a vector as,
772
+ S_{w,t} = ( |I_w|, R_w(t), L_w(t), Σ_{i∈I_{0,0}} ρ_i, . . . , Σ_{i∈I_{Nδ−1,Nρ−1}} ρ_i ),   (11)
+ where S_{w,t} ∈ R_+^{3+Nδ·Nρ}. Therefore, the complete state S is
+ modeled as a three-dimensional vector in R_+^{(3+Nδ·Nρ)×|W|×T},
789
+ that is, each feature in (11) is first evaluated for each available
790
+ worker (each model variant on each node), and then for
791
+ each time instant within the considered time window. For
792
+ instance, vector Sw stores, for worker w, all the features in
+ (11) evaluated at every time instant t−T +1, t−T +2, ..., t
+ within the time window [t − T + 1, t]. From now on, we refer to the
+ state vector encoding simply as s, or st when referring to the state
+ of a specific time window.
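+ A sketch of how such a state tensor can be assembled (our illustration; samples[w] holds,
+ for worker w, one tuple of features per second of the window):
+ import numpy as np
+
+ def encode_state(workers, samples, n_delay_bins, n_rate_bins):
+     T = len(samples[workers[0]])             # time steps in the window
+     n_feat = 3 + n_delay_bins * n_rate_bins  # features per worker and second
+     S = np.zeros((n_feat, len(workers), T))
+     for wi, w in enumerate(workers):
+         for t, (n_streams, thr, load, buckets) in enumerate(samples[w]):
+             S[0, wi, t] = n_streams          # |I_w|
+             S[1, wi, t] = thr                # R_w(t)
+             S[2, wi, t] = load               # L_w(t)
+             S[3:, wi, t] = buckets           # bucketed sums of rho_i
+     return S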
797
+ C. Training
798
+ The proposed RL scheduling agent is trained over a series
799
+ of episodes that resemble various scenarios. Each episode
800
+ corresponds to a different workload execution with given
801
+ parameters, e.g. requirements from tasks, number of clients
802
+ per minute (λ) or the seed value (ζ) for random number
803
+ generation (RNG), and is concluded when the percentage
804
+ of success queries, q^S_t, falls below a given threshold θ or
806
+ when a timeout H is reached. This allows us to speed up
807
+ the training by terminating unsuccessful or steady episodes
808
+ quickly. At every time step t, a reward rt scores the rate of
809
+ successful queries, see Algorithm 1 at lines 9-10, where qF
810
+ t
811
+ 3A worker is a model variant instance v running on a particular cluster n,
812
+ therefore we can assume index w = n · v + v.
813
+ Algorithm 1 ASET training procedure
+ 1: initialize replay buffer D = ∅ as a ring of size Nr and DQN parameters φ0
+ 2: initialize training RNG seeds ζ ∈ [0, ..., P − 1]
+ 3: for episode k ∈ [0, ..., M] do
+ 4:     sample random seed ζ ∼ U(P)
+ 5:     initialize πk(a|s) = ϵ U(β) + (1 − ϵ) δ(a = arg max_a Q(s, a; φk))
+ 6:     for step t ∈ [0, ..., H] do
+ 7:         sample at ∼ πk(a|s)
+ 8:         sample next state st+1 from the edge network and reward rt
+ 9:         if ϕ > 1 − (qF_t + qR_t) then rt = −(qF_t + qR_t)
+ 10:        else rt = ψ
+ 11:        D ← D ∪ {(st, at, st+1, rt)} with circular replacement
+ 12:        if qS_t ≤ θ then t ← H, ending this episode
+ 13:    φk,0 ← φk
+ 14:    for gradient descent step g ∈ [0, ..., G − 1] do
+ 15:        sample batch B ⊂ D with idx ∼ U(Nr)
+ 16:        compute Ct ← rt + γ max_{at+1} Q(st+1, at+1; φk)
+ 17:        compute loss L = Σ_i (Ci − Q(si, ai; φk))²
+ 18:        update network φk,g+1 ← φk,g − α ∇φk,g L
+ 19:    update DQN parameters φk+1 ← φk,G
+ Here qF_t is the ratio of “failed” queries, i.e., those delivered in violation of one or more constraints (e.g., beyond the tolerated delay), and qR_t is the ratio of queries “rejected” by the system for lack of resources, both normalized over the corresponding time window. ψ is a penalty inversely proportional to the episode active time; it ensures that short but bad action trajectories do not reach higher returns than optimal ones. Note that the DQN is used to minimize the target loss L (see lines 14-18) with the Adam optimizer and learning rate α. It takes gradient steps on the Bellman error objective L (see factor C at line 16 and Eq. (10)) concurrently with data collection from the replay buffer [27], for efficient off-line learning. This is a common hybrid approach to implementing Q-Learning [25], [28], [29]. Additionally, we employ an ϵ-greedy exploration policy (see line 5), with the parameter ϵ updated dynamically. The architecture of our DQN consists of a stack of convolutional layers that extracts temporal correlations from the state tensor S. This feature-extraction part is composed of three convolutional layers with 4x4 kernels along the time and feature dimensions, each followed by ReLU activation and max-pooling. Finally, two linear layers squeeze the dimension from 256 down to as many outputs as there are static policies β.
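+ For concreteness, the inner loop of Algorithm 1 (lines 14-18) can be sketched in Python with PyTorch as follows. This is a minimal sketch, not the authors' implementation; the network object, buffer layout (actions stored as int64 tensors), and hyperparameter defaults are our own assumptions:
+
+ import random
+ import torch
+ import torch.nn.functional as F
+
+ def dqn_update(q_net, optimizer, replay, batch_size=32, gamma=0.99):
+     # Line 15: sample a uniform mini-batch from the ring replay buffer.
+     batch = random.sample(replay, batch_size)
+     s, a, s_next, r = (torch.stack(x) for x in zip(*batch))
+     # Line 16: Bellman target C_t = r_t + gamma * max_a' Q(s_{t+1}, a'; phi_k).
+     with torch.no_grad():
+         target = r + gamma * q_net(s_next).max(dim=1).values
+     # Line 17: squared Bellman error over the batch.
+     q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
+     loss = F.mse_loss(q_sa, target)
+     # Line 18: gradient step phi <- phi - alpha * grad(L), here via Adam.
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()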
+ V. PERFORMANCE EVALUATION
+ We evaluate ASET using a prototype implementation of an edge inference system, which will be released upon acceptance of the paper. We first use our prototype to run small-scale experiments aimed at profiling some representative models and their variants (results not shown). We then use this profiling data to run large-scale experiments on a simulated setup, comparing the performance of ASET to that of static scheduling policies.
+ A. Evaluation settings
+ System Prototype. Our prototype implements the edge inference system functionalities described in Section III. On each cluster, a Master deploys workers and routes streams between them and remote clients; each Worker runs in a Docker container and implements a pipeline that processes queries in FIFO order from different streams, based on the model variant batch size; a Monitoring agent on each cluster collects statistics on model variant usage and performance, used (i) to build a catalog of model variants and (ii) to provide each Scheduler with aggregated observations on the system state. We use this prototype to profile variants of pre-trained inference models with respect to their resource usage and performance (see below).
+ Simulation Setup. To evaluate our approach at a large scale, we set up a simulated environment where each worker simulates the inference task based on the profiling information available for its model variant. Empty responses are generated for each batch of queries after simulating a processing delay (drawn from a normal distribution). Additionally, we simulate the network delay between stream sources and destination clusters (see below for considerations on the network topologies), as well as the transmission delay. Apart from the simulated workers, all other system components are deployed using their prototype implementation; the system therefore operates on a realistic timescale.
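+ The per-batch simulation step might look like the following minimal Python sketch; the distribution parameters and names are illustrative assumptions, since the paper only states that processing delays follow a normal distribution fitted from profiling:
+
+ import random
+
+ def simulate_batch(profile, batch):
+     # profile: profiling info for this model variant, e.g. mean and standard
+     # deviation of the measured processing delay at the configured batch size.
+     delay = max(0.0, random.gauss(profile["mean_s"], profile["std_s"]))
+     # Empty responses stand in for the real inference output.
+     return delay, [None] * len(batch)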
+ Network topology. We leverage the network topology of a large ISP to assess scheduling performance under realistic settings. Specifically, our simulated environment is a cloud-to-edge topology with clusters of different sizes deployed hierarchically. To preserve ISP confidentiality, we only report a high-level summary of the topology, latency, and hardware distribution characteristics. Similarly to the tiered topologies of [30], [31], our topology can provide clusters with computation capabilities at different layers: network access (e.g., antennas, home gateways), central offices (multiple layers), operator data center, and remote cloud (third parties). Specifically, we focus on three scenarios: (i) dc-cloud, where resources are deployed at the ISP data center and remote cloud only; (ii) co-dc-cloud, where resources are deployed at central offices, the operator data center, and the remote cloud; (iii) full-edge, where clusters are deployed at all the layers mentioned above. Note that we limit the simulations to the portion of the full ISP topology serving 1,000 antennas, and scale resources appropriately (see below). For the evaluation, we assume a 5G radio access technology with antennas deployed similarly to LTE. Network/transmission delays range from a few milliseconds to reach the eNodeBs behind the antennas, to the order of ten milliseconds for central offices and ISP data centers, and a few tens of milliseconds for the remote cloud.
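+ Under these assumptions, the network-delay component of the simulator can be read as a per-tier lookup, sketched below; the numeric ranges are illustrative placeholders mirroring the summary above, not the confidential ISP values:
+
+ import random
+
+ TIER_DELAY_MS = {            # illustrative ranges only
+     "antenna": (1, 5),       # few milliseconds to reach the eNodeB
+     "central_office": (5, 15),
+     "isp_datacenter": (5, 15),
+     "cloud": (20, 40),       # few tens of milliseconds
+ }
+
+ def network_delay_ms(tier):
+     low, high = TIER_DELAY_MS[tier]
+     return random.uniform(low, high)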
+ Requests workload. Requests are generated following a Poisson distribution. Each generator spawns on average λ clients per minute, which query the scheduler of a given geographical area (antenna). Once spawned, each client requests the processing of a stream with randomized characteristics in terms of frame rate, required end-to-end latency, required model accuracy, frame size, and stream duration. To capture realistic query characteristics, we modeled the metrics of generated streams according to the reference edge applications in Table I. In our settings, a generator with λ = 60 brings a load of almost 1,000 queries per second on the serving antenna.
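+ A minimal sketch of such a generator in Python (the names are placeholders; Table I provides the per-application stream attributes):
+
+ import random
+
+ def client_arrivals(lam_per_min, horizon_s):
+     # Poisson process: exponential inter-arrival times, rate lam/60 per second.
+     t = 0.0
+     while t < horizon_s:
+         t += random.expovariate(lam_per_min / 60.0)
+         if t < horizon_s:
+             yield t  # spawn a client requesting one randomized stream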
+ TABLE I: Characteristics of reference applications [32], [33].
+ Edge app            | Tolerated delay | Frame rate | Stream duration | Required accuracy
+ Pool                | 95 ms           | 5 FPS      | 5-10 s          | 10 mAP
+ Workout Assistant   | 300 ms          | 2 FPS      | 90 s            | 10 mAP
+ Ping-pong           | 150 ms          | 15-20 FPS  | 20-40 s         | 15 mAP
+ Face Assistant      | 370 ms          | 5 FPS      | 1-5 s           | 30 mAP
+ Lego/Draw/Sandwich  | 600 ms          | 10-15 FPS  | 60 s            | 25 mAP
+ Gaming              | 20-30 ms        | 25 FPS     | 10-30 m         | 35 mAP
+ Connected Cars      | 150 ms          | 10-15 FPS  | 15-30 m         | 40 mAP
+ Tele-Robots         | 25-35 ms        | 10 FPS     | 5 m             | 40 mAP
+ Remote-driving      | 20-30 ms        | 20 FPS     | 15-30 m         | 50 mAP
+ Interactive AR/VR   | 30-50 ms        | 25 FPS     | 30-60 s         | 35 mAP
+
+ Fig. 4: Success percentage for different apps on the full-edge topology, comparing the static policies (load-balancing, rp-load, least-impedance, closest, farthest, cheaper, rp-latency) with ASET: (a) average values for clients rate λ = 60; (b) average values for episodes with dynamic clients rates. [plots omitted]
+
+ Computing clusters and model variants. We assume a given reference hardware distribution across clusters, with computing capabilities increasing from the access network to the cloud. Specifically, the access network can be equipped with an 8-16 core machine, 16 GB of memory, and a small TPU; central offices can host on the order of tens of servers (32-64 CPUs, 128-256 GB, and a few GPUs); ISP data centers can host hundreds of servers; for the centralized cloud we assume unlimited resources. In our evaluation, we focus on DNN models for the object detection task, as it is one of the most challenging and computation-intensive inference services [34], [35]. Using our prototype we profiled the MobileNet-SSD, Yolo-v3, and Tinyyolo-v2 models [36], [37], with CPU and GPU variants at different batch sizes, scaling the allocated resources and the number of replicas (this set of results is not shown for lack of space). We use the profiled information to run our simulations on top of the three topologies described above. On each cluster, workers have been scaled in the number of replicas up to resource saturation.
+ B. Experimental Results
+ We compare the performance of the baseline policies described in Section III-C, distinguishing results for the different applications of Table I. As a performance metric we consider the percentage of queries that are successfully processed by the system while satisfying the application QoS requirements. Figure 4a shows the results of multiple runs with λ = 60.
+ Fig. 5: Performance of ASET compared with static policies for (a,b) the dc-cloud topology and (c,d) the co-dc-cloud topology: (a) time average on dc-cloud for λ = 60; (b) different clients rates on dc-cloud; (c) time average on co-dc-cloud for λ = 60; (d) different clients rates on co-dc-cloud. Each panel compares load-balancing, rp-load, least-impedance, closest, farthest, cheaper, rp-latency, random, and ASET; axes are time (s) or # of streams per minute (Poisson λ) vs. % of success queries. [plots omitted]
+ The results suggest that there is no one-size-fits-all policy, as different applications may benefit differently from each policy. Varying the rate of stream requests at the antenna (Figure 4b) further increases the uncertainty of relying on any single policy. In the following, we compare the performance of the ASET RL scheduling approach against the static policies, evaluating the benefits it can introduce in the various scenarios. We trained three different versions of ASET (one for each topology). In particular, we sample the state using a time window T = 25 seconds, and we experimentally chose an episode timeout of 8 minutes to avoid steady states in the network. Although we evaluate on multiple clients rates, our agent has been trained only on episodes with λ = 60.
+ Cloud deployment. When all the available resources are located in a few centralized clusters, the various static policies show only small differences in performance, and a dynamic approach has little room for improvement. Results for the dc-cloud topology are shown in Figures 5a-b. In particular, Figure 5a plots, for every moment of the simulation (time axis), the percentage of queries that are handled successfully, averaging multiple runs with different workloads. The graph shows that, for this topology, ASET does not improve over static policies, and it even performs worse for higher lambdas (Figure 5b). Figures 5c-d show that moving some resources to Central Offices (co-dc-cloud topology) makes a huge difference: in general, all the policies achieve a higher success ratio in this configuration (Figure 5c), as they can exploit the additional lower-latency spots, and the higher level of distribution gives ASET a certain margin for improvement. Figure 5d shows that ASET introduces some improvement over all the baselines for every lambda, despite being trained only for λ = 60.
+ Edge deployment. The results so far suggest that a good distribution of computing resources is a key factor in improving over static scheduling policies.
+ Fig. 6: Performance of ASET compared with static policies for the full-edge topology: (a) queries handled successfully; (b) different clients rates on full-edge; (c) queries delivered with QoS violations; (d) queries rejected for lack of resources. Panels (a), (c), and (d) show averages of multiple runs with λ = 60. [plots omitted]
+ As shown in Figure 6, the benefits of using a dynamic scheduling approach become more concrete in a full-edge topology, where resources are better distributed over multiple smaller clusters in different locations. In fact, Figure 6a shows that the dynamic approach of ASET achieves a consistent improvement over every static policy, with a higher success ratio over time. In particular, Figures 6c-d show that, while maintaining the same rejection rate as the best static policy, ASET effectively reduces the number of queries that are handled in violation of one or more QoS requirements. Moreover, Figure 6b shows that an ASET agent trained only for λ = 60 also generalizes to different request rates, even supporting a load of more than 1,600 queries per second (λ = 100) on a single antenna.
+ Dynamic input rate. We performed additional experiments to evaluate how the system behaves in dynamic situations where the request rate varies over time. For this purpose, we set up dynamic runs in which the lambda value changes every 150 seconds: a first pattern simulates a particularly fast variation, with values of 20, 60, and 100 clients per minute (Figure 7a); a second pattern simulates a steadier scenario in which the request rate first moves from 60 to 40 clients per minute, then drops to 20, and finally slowly climbs back to 60 (Figure 7b). As in previous plots, the outcomes for this set of experiments are shown as averages over time across multiple runs (Figure 7). The results in both figures show that a dynamic request arrival introduces an even bigger margin for improvement, which ASET effectively exploits, reaching the highest percentage of queries handled successfully. This is particularly evident when the variation in client arrivals is faster and larger (Figure 7a). These results suggest that, while some of the static policies may achieve decent performance when the system load is stable, they struggle in more dynamic scenarios. In such situations, an adaptive algorithm such as ASET is more suitable, as it can learn how to best optimize the system under different conditions.
+ 100
1357
+ 200
1358
+ 300
1359
+ 400
1360
+ time (s)
1361
+ 40
1362
+ 50
1363
+ 60
1364
+ 70
1365
+ 80
1366
+ 90
1367
+ 100
1368
+ % of success queries
1369
+ load-balancing
1370
+ rp-load
1371
+ least-impedance
1372
+ closest
1373
+ farthest
1374
+ cheaper
1375
+ rp-latency
1376
+ random
1377
+ ASET
1378
+ (a) Burst variation of λ from 20 to 100.
1379
+ 0
1380
+ 100
1381
+ 200
1382
+ 300
1383
+ 400
1384
+ time (s)
1385
+ 40
1386
+ 50
1387
+ 60
1388
+ 70
1389
+ 80
1390
+ 90
1391
+ 100
1392
+ % of success queries
1393
+ load-balancing
1394
+ rp-load
1395
+ least-impedance
1396
+ closest
1397
+ farthest
1398
+ cheaper
1399
+ rp-latency
1400
+ random
1401
+ ASET
1402
+ (b) Steady variation of λ between 60 and 20.
1403
+ Fig. 7: Performance of ASET varying the requests rate over time with two
1404
+ different load variation patterns (full-edge topology).
1405
+ 0
1406
+ 2
1407
+ 4
1408
+ 6
1409
+ 8
1410
+ time (s)
1411
+ 0.0
1412
+ 0.2
1413
+ 0.4
1414
+ 0.6
1415
+ 0.8
1416
+ 1.0
1417
+ CDF
1418
+ λ = 60
1419
+ λ = 80
1420
+ λ = 100
1421
+ switching policy delay
1422
+ requests interval
1423
+ (a) Cumulative distribution of delay.
1424
+ 0
1425
+ 100
1426
+ 200
1427
+ 300
1428
+ 400
1429
+ 500
1430
+ 600
1431
+ 700
1432
+ training episodes
1433
+ 0
1434
+ 20
1435
+ 40
1436
+ 60
1437
+ 80
1438
+ 100
1439
+ testing % of success queries
1440
+ topology
1441
+ full-edge
1442
+ co-dc-cloud
1443
+ dc-cloud
1444
+ (b) Learning curve.
1445
+ Fig. 8: (a) Delay for switching policy compared with requests arrival intervals.
1446
+ (b) Learning curve while training ASET on different topologies.
1447
+ Moreover, the results suggest that ASET training generalizes well, as the algorithm performs well under previously unseen dynamic conditions.
+ Training and applicability. Figure 8a shows the cumulative distribution of the time needed by ASET to infer a switching decision from the current policy to the one best suited to the current system conditions (non-dashed line). The switching delay is compared with the distributions of the intervals between subsequent requests for different lambdas. As shown, even for very large client loads (100 clients connected to the antenna), the time interval between two stream arrivals is typically on the scale of seconds or hundreds of milliseconds, while the delay for switching between policies is one order of magnitude smaller. Finally, Figure 8b shows the learning curve of ASET for different topologies under continuous stream request arrivals. The figure shows that ASET quickly reaches a certain level of performance in the first training iterations (before 100 episodes), independently of the topology complexity, leaving room for further improvement in subsequent episodes based on the margin left by the topology itself.
+ VI. CONCLUSIONS
+ This paper proposes ASET, an adaptive algorithm based on Reinforcement Learning for scheduling inference workloads at the network edge. ASET addresses the problem of exploiting scattered clusters of resources to serve inference queries from multiple edge applications (e.g., AR/VR, cognitive assistance). We model an edge inference system where queries from different access networks are processed across a multitude of distributed processing locations. The constrained nature of the edge network introduces a trade-off between network delay and processing time across the various available DNN models. In such a scenario, ASET optimizes the binding between inference stream requests and the available DL models across the network, maximizing throughput while ensuring that all requirements on inference accuracy and end-to-end delay are satisfied. We evaluated our approach on the realistic network topology of a large ISP, considering a heterogeneous pool of edge applications. Our findings show that ASET effectively improves performance over static policies when resources are deployed across the whole edge-cloud infrastructure.
+ REFERENCES
+ [1] J. Konecny, H. B. McMahan, F. Yu, P. Richtarik, A. Theertha Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” in 29th Conference on Neural Information Processing Systems (NIPS), 2016.
+ [2] R. S. Kannan, L. Subramanian, A. Raju, J. Ahn, J. Mars, and L. Tang, “GrandSLAm: Guaranteeing SLAs for jobs in microservices execution frameworks,” in Proceedings of the Fourteenth EuroSys Conference 2019. ACM, 2019, p. 34.
+ [3] H. Mao, M. Schwarzkopf, S. B. Venkatakrishnan, Z. Meng, and M. Alizadeh, “Learning scheduling algorithms for data processing clusters,” in Proceedings of the ACM Special Interest Group on Data Communication, 2019, pp. 270–288.
+ [4] D. Crankshaw, X. Wang, G. Zhou, M. J. Franklin, J. E. Gonzalez, and I. Stoica, “Clipper: A low-latency online prediction serving system,” in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), 2017, pp. 613–627.
+ [5] F. Romero, Q. Li, N. J. Yadwadkar, and C. Kozyrakis, “INFaaS: Managed & model-less inference serving,” arXiv preprint arXiv:1905.13348, 2019.
+ [6] C. Olston, N. Fiedel, K. Gorovoy, J. Harmsen, L. Lao, F. Li, V. Rajashekhar, S. Ramesh, and J. Soyke, “TensorFlow-Serving: Flexible, high-performance ML serving,” arXiv preprint arXiv:1712.06139, 2017.
+ [7] D. Chappell, “Introducing Azure Machine Learning,” A guide for technical professionals, sponsored by Microsoft Corporation, 2015.
+ [8] “AI Platform of Google Cloud,” https://cloud.google.com/ai-platform.
+ [9] P. Mach and Z. Becvar, “Mobile edge computing: A survey on architecture and computation offloading,” IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628–1656, 2017.
+ [10] J. Chen and X. Ran, “Deep learning with edge computing: A review,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1655–1674, 2019.
+ [11] R. Ghosh and Y. Simmhan, “Distributed scheduling of event analytics across edge and cloud,” ACM Transactions on Cyber-Physical Systems, vol. 2, no. 4, p. 24, 2018.
+ [12] C.-C. Hung, G. Ananthanarayanan, P. Bodik, L. Golubchik, M. Yu, P. Bahl, and M. Philipose, “VideoEdge: Processing camera streams using hierarchical clusters,” in 2018 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 115–131.
+ [13] J. Soifer, J. Li, M. Li, J. Zhu, Y. Li, Y. He, E. Zheng, A. Oltean, M. Mosyak, C. Barnes et al., “Deep learning inference service at Microsoft,” in 2019 USENIX Conference on Operational Machine Learning (OpML 19), 2019, pp. 15–17.
+ [14] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl, “MAUI: Making smartphones last longer with code offload,” in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services. ACM, 2010, pp. 49–62.
+ [15] M.-R. Ra, A. Sheth, L. Mummert, P. Pillai, D. Wetherall, and R. Govindan, “Odessa: Enabling interactive perception applications on mobile devices,” in Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services. ACM, 2011, pp. 43–56.
+ [16] S. Han, H. Shen, M. Philipose, S. Agarwal, A. Wolman, and A. Krishnamurthy, “MCDNN: An approximation-based execution framework for deep stream processing under resource constraints,” in Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2016, pp. 123–136.
+ [17] X. Ran, H. Chen, X. Zhu, Z. Liu, and J. Chen, “DeepDecision: A mobile deep learning framework for edge video analytics,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, 2018, pp. 1421–1429.
+ [18] B. Hu and W. Hu, “LinkShare: Device-centric control for concurrent and continuous mobile-cloud interactions,” in Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 2019, pp. 15–29.
+ [19] S. H. Mortazavi, M. Salehe, C. S. Gomes, C. Phillips, and E. de Lara, “CloudPath: A multi-tier cloud computing framework,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing. ACM, 2017, p. 20.
+ [20] S. Khare, H. Sun, J. Gascon-Samson, K. Zhang, A. Gokhale, Y. Barve, A. Bhattacharjee, and X. Koutsoukos, “Linearize, predict and place: Minimizing the makespan for edge-based stream processing of directed acyclic graphs,” in Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 2019, pp. 1–14.
+ [21] C. Cicconetti, M. Conti, and A. Passarella, “An architectural framework for serverless edge computing: Design and emulation tools,” in 2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). IEEE, 2018, pp. 48–55.
+ [22] M. Jia, J. Cao, and W. Liang, “Optimal cloudlet placement and user to cloudlet allocation in wireless metropolitan area networks,” IEEE Transactions on Cloud Computing, vol. 5, no. 4, pp. 725–737, 2015.
+ [23] J. Long, M. Dong, K. Ota, and A. Liu, “A green TDMA scheduling algorithm for prolonging lifetime in wireless sensor networks,” IEEE Systems Journal, vol. 11, no. 2, pp. 868–877, 2015.
+ [24] R. Bellman, “A Markovian decision process,” Journal of Mathematics and Mechanics, pp. 679–684, 1957.
+ [25] S. Levine, A. Kumar, G. Tucker, and J. Fu, “Offline reinforcement learning: Tutorial, review, and perspectives on open problems,” arXiv preprint arXiv:2005.01643, 2020.
+ [26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller, “Playing Atari with deep reinforcement learning,” CoRR, vol. abs/1312.5602, 2013. [Online]. Available: http://arxiv.org/abs/1312.5602
+ [27] L.-J. Lin, “Self-improving reactive agents based on reinforcement learning, planning and teaching,” Machine Learning, 1992, pp. 293–321.
+ [28] C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3, pp. 279–292, 1992. [Online]. Available: https://doi.org/10.1007/BF00992698
+ [29] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
+ [30] L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications. IEEE, 2016, pp. 1–9.
+ [31] A. Ceselli, M. Premoli, and S. Secci, “Mobile edge cloud network design optimization,” IEEE/ACM Transactions on Networking, vol. 25, no. 3, pp. 1818–1831, 2017.
+ [32] Z. Chen, W. Hu, J. Wang, S. Zhao, B. Amos, G. Wu, K. Ha, K. Elgazzar, P. Pillai, R. Klatzky et al., “An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing, 2017, pp. 1–14.
+ [33] A. Cartas, M. Kocour, A. Raman, I. Leontiadis, J. Luque, N. Sastry, J. Nuñez-Martinez, D. Perino, and C. Segura, “A reality check on inference at mobile networks edge,” in Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking, 2019, pp. 54–59.
+ [34] L. Jiao, F. Zhang, F. Liu, S. Yang, L. Li, Z. Feng, and R. Qu, “A survey of deep learning-based object detection,” IEEE Access, vol. 7, pp. 128837–128868, 2019.
+ [35] A. Srivastava, D. Nguyen, S. Aggarwal, A. Luckow, E. Duffy, K. Kennedy, M. Ziolkowski, and A. Apon, “Performance and memory trade-offs of deep learning object detection in fast streaming high-definition images,” in 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018, pp. 3915–3924.
+ [36] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European Conference on Computer Vision. Springer, 2016, pp. 21–37.
+ [37] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
9tFRT4oBgHgl3EQfqzeA/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ANAyT4oBgHgl3EQf3_og/content/tmp_files/2301.00777v1.pdf.txt ADDED
@@ -0,0 +1,1400 @@
+ Non-Invertible Symmetries in Supergravity
+ Eduardo García-Valdecasas
+ 1 Jefferson Physical Laboratory, Harvard University, Cambridge, MA 02138, USA
+ 2 Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, 28049 Madrid, Spain
+ E-mail: [email protected]
+ Abstract: Non-invertible symmetries have been extensively studied in quantum field theories in recent years. In this note we initiate their study in supergravity. We find infinite families of non-invertible defects in 11d and 10d Type II supergravities. These operators display a rich action on different probe branes. We comment on how these symmetries are removed in the UV completion, M-theory and Type II String Theory, and how their existence strengthens the link between the absence of global symmetries in Quantum Gravity and the Completeness Hypothesis.
+ arXiv:2301.00777v1 [hep-th] 2 Jan 2023
+
+ Contents
+ 1 Introduction
+ 2 Review of QFT examples
+ 3 Non-invertible symmetries in Supergravity
+   3.1 11d Supergravity
+   3.2 Type IIA Supergravity
+   3.3 Type IIB Supergravity
+ 4 A different approach
+ 5 Discussion
+ A The 7d ZN TQFT
+ B Explicit construction of the non-invertible defects by half gauging
+ 1 Introduction
+ Symmetry is one of the basic principles of contemporary physics, and of quantum field theory in particular. A modern definition of symmetry, pioneered in [1], identifies symmetry with the topological sector of the theory. This is precisely what makes it a universal notion that helps classify quantum field theories and is robust under smooth deformations such as the RG flow. In more detail, the symmetry of a theory is given by the set of all topological operators together with their fusion rules. In the most general definition one allows for topological operators of any codimension and for fusion rules that may not obey a group law. The usual symmetries, such as baryon number, are generated by topological operators of codimension 1, Ug(Σd−1), hence acting on the Hilbert space, which obey the group-law fusion Ug1(Σd−1) × Ug2(Σd−1) = Ug1·g2(Σd−1). Allowing for topological operators of different codimensions gives rise to higher-form symmetries; a prototypical example is the 1-form symmetry of free Maxwell theory acting on Wilson loops. Allowing for non-group laws brings non-invertible symmetries into the game. These are generated by topological operators that need not have an inverse, hence the name.
+ In this work we will be interested in p-form symmetries generated by codimension p + 1 operators, for various p, obeying non-invertible fusion rules.
+ While non-invertible symmetries may look exotic at first, in recent years it has been understood that they are essentially ubiquitous. In fact, theories as simple as free Maxwell theory in 4d are plagued with non-invertible symmetry operators. For a partial list of these recent developments see [2–38]; for a partial list of earlier results in 2d see [39–50]. While these symmetries have been extensively studied in quantum field theory, they are almost uncharted territory in supergravity.1 In this work we make a first, if superficial, approach to these new lands by studying some non-invertible defects in 11d and 10d Type IIA and Type IIB supergravity. The core idea is that Chern-Simons terms, which are usually believed to imply explicit breaking of the higher-form symmetries of the gauge fields [51], are often just a signal of their non-invertibility. Let us illustrate the argument, already formulated in [21] building on earlier work [16, 17]. In the absence of Chern-Simons terms, gauge theories typically have equations of motion and Bianchi identities of the form
+ d ⋆ F = 0,   dF = 0    (1.1)
+ These equations imply the existence of higher-form electric and magnetic symmetries. Adding Chern-Simons terms generically spoils these symmetries, particularly the electric one, as the equation of motion now takes the form of a non-conservation equation, for instance:
+ d ⋆ F = F ∧ F    (1.2)
+ However, as we discuss in more detail in the following section, in many cases the non-conservation is mild enough (the right-hand side vanishes in trivial topology) that one can still define topological operators implementing the symmetry. The price to pay is that one needs to dress the operator with topological degrees of freedom that generate a non-invertible fusion rule. Given that supergravity theories are plagued with Chern-Simons terms,2 one expects a very rich set of non-invertible symmetries! These symmetries naturally act on probe branes.
+ An important lesson from Quantum Gravity is that exact global symmetries are incompatible with it [52–59]. This implies that all the symmetries described in this work must be broken by the UV completion, be it M-theory or Type II String Theory. As we will see, this is easily achieved by the presence of dynamical branes, in analogy to what happens with less exotic symmetries [60]. For further discussions of non-invertible symmetries and Quantum Gravity see [2, 3, 12].
+ 1 See [38] for a noteworthy exception.
+ 2 Many times these Chern-Simons terms are actually absorbed into modified Bianchi identities.
+ The remainder of this note is organised as follows. In section 2 we review non-invertible symmetries in quantum field theories with Chern-Simons terms. In section 3 we present infinite families of non-invertible defects for 11d supergravity and the 10d Type II supergravities, and study their action on probe branes in some detail. In section 4 we elaborate on a different approach to the same question: we find a different set of topological operators and comment on their connection to the former ones. We conclude in section 5 with comments on our results, their implications, and an outlook on future work. We leave for Appendices A and B the explicit construction of the TQFTs that are used in the main text to build the general topological operators.
+ 2 Review of QFT examples
+ Chern-Simons terms typically turn the equations of motion for gauge fields into non-conservation equations for the currents they carry. Until recently, it was believed that these non-conservation equations implied the explicit breaking of the symmetry obtained by integrating the no-longer-conserved current. However, when the non-conservation is mild enough, one may construct topological operators for the symmetry by appropriately dressing them with gapped degrees of freedom. More concretely, consider a non-conservation equation for a U(1) current Jp of the form
+ d ⋆ Jp = Gd−p+1    (2.1)
+ where Gd−p+1 is a wedge product of gauge field strengths. If Gd−p+1 is locally exact, Gd−p+1 = dKd−p, and its integral vanishes on manifolds of trivial topology,3 one can safely improve the current,
+ d(⋆Jp − Kd−p) = 0    (2.2)
+ This improved current is conserved but may not be gauge invariant. The integrated charge of such a current is known as a Page charge [61], and we can use it to write an operator
+ exp( 2πiα ∫_{Σd−p} (⋆Jp − Kd−p) )    (2.3)
+ For simple enough spacetime manifolds (trivial topology) this operator is actually topological and well defined. In fact, in most cases4 a new operator can be introduced with
+ 3 With trivial topology we refer to manifolds homeomorphic to R4 or S4.
+ 4 An exception is the case of Gd−p+1 being a single field strength, as in 3d Chern-Simons theory or 4d BF theory. For these theories, however, ∫_{M2} Gd−p+1 = ∫_{M2} F ̸= 0, so our assumptions are not satisfied.
+ Kd−p modified so as to be topological and gauge invariant on arbitrary manifolds. Let us consider a particularly simple example, 5d Chern-Simons Maxwell theory [21],
+ S = 2π ∫ ( −(1/2) F2 ∧ ⋆F2 + (1/6) A1 ∧ F2 ∧ F2 )    (2.4)
+ Note that we have chosen to stick to the conventions of the supergravity literature. In particular, we have chosen field strengths to have integer periods, ∫ F ∈ Z. In the absence of a Chern-Simons term, this theory would have U(1)^(1)_e × U(1)^(2)_m electric and magnetic symmetries with currents je = F2, jm = ⋆F2, conserved thanks to the equation of motion and the Bianchi identity, respectively. However, the Chern-Simons term seems to break the electric symmetry: the equation of motion becomes
+ d ⋆ F2 = (1/2) F2 ∧ F2    (2.5)
+ This equation implies that F2 is no longer conserved, so the naive operator
+ Uα(Σ3) = exp( 2πiα ∫_{Σ3} ⋆F2 )    (2.6)
+ is no longer topological. However, F2 ∧ F2 is locally exact and, since there is no U(1) instanton number on S4, its integral vanishes on manifolds of trivial topology.5 We may then improve the current,
+ d( ⋆F2 − (1/2) A1 ∧ F2 ) = 0    (2.7)
+ which is conserved but not gauge invariant. The corresponding topological operator, valid only for simple enough topology, is
+ ˜Uα(Σ3) = exp( 2πiα ∫_{Σ3} ( ⋆F2 − (1/2) A1 ∧ F2 ) )    (2.8)
+ While this operator is not valid for arbitrary topology, if α ∈ [0, 1) is rational one can define an improved operator which is topological and gauge invariant for arbitrary Σ3. For the particular case of α = 1/N, with N ∈ Z, the corresponding operator is
+ D1/N(Σ3) = ∫ Dc1 |_{Σ3} exp( 2πi ∫_{Σ3} ( ⋆F2/N + (N/2) c1 ∧ dc1 − c1 ∧ F2 ) )    (2.9)
+ where c1 is a U(1) gauge field localized on the topological defect. That this operator is the gauge-invariant version of (2.8) can be morally seen by integrating out c1 = A1/N.
+ 5 This statement is sensitive to the UV completion of the quantum field theory at hand. For instance, there are stringy instantons on single D-branes in String Theory [62] and there are also U(1) instantons in non-commutative spacetimes [63]. We thank Iñaki García-Etxebarria for raising this point.
+ Figure 1: A deformation of the naive operator Uα(Σ3) generates an anomalous phase in the region swept by it. [figure omitted]
+ While this integration is not a correct equation for A1 and c1, which are both U(1) gauge fields, it gives the correct result. For more details, including an explicit construction of the defect using higher half-space gauging, see [21]. For general α = p/N, p, N ∈ Z, a good topological operator can also be written in terms of the minimal A^{N,p} TQFT, as in Equation (A.3). For more details on A^{N,p}, see [64]. What we have done is to dress the naive operator (2.6) in such a way that the new degrees of freedom cancel the anomalous phase it generates as it is deformed,
+ Uα(Σ′3) = Uα(Σ3) e^{2πi (α/2) ∫_{Σ4} F2 ∧ F2}    (2.10)
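+ To make the origin of this phase explicit, note the one-line check, using Stokes' theorem and the equation of motion (2.5), with Σ4 the region swept between the two slices, ∂Σ4 = Σ′3 − Σ3:
+ Uα(Σ′3) / Uα(Σ3) = exp( 2πiα ∫_{Σ4} d ⋆ F2 ) = exp( 2πiα ∫_{Σ4} (1/2) F2 ∧ F2 ),
+ which is precisely the anomalous phase cancelled by the dressing in (2.9).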
+ For a pictorial representation see Figure 1. The upshot is that the rational α = p/N subgroup of the U(1) electric 1-form symmetry survives and is generated by (A.3). The price to pay is that the symmetry becomes non-invertible, as can be seen by explicitly computing the fusion rules of the topological operators. These operators act on Wilson lines as the naive operators would have, but they also act on 't Hooft surfaces by attaching a fractional flux to them. The action is completely analogous to what we will describe for 11d supergravity in section 3.1. We denote these non-invertible symmetries with rational-valued parameters by Γ^(1)_Q. The above construction generalizes to mixed Chern-Simons terms [21]. Consider for instance
+ S = 2π ∫ ( −(1/2) F2 ∧ ⋆F2 − (1/2) G2 ∧ ⋆G2 + (1/2) C1 ∧ F2 ∧ F2 )    (2.11)
+ where F2 = dA1 and G2 = dC1. In the absence of the Chern-Simons coupling, this theory has electric and magnetic symmetries
+ U(1)^(1)_{e,A} × U(1)^(2)_{m,A} × U(1)^(1)_{e,C} × U(1)^(2)_{m,C}    (2.12)
+ Yet again, the Chern-Simons coupling spoils the conservation equations of the electric symmetries, which become
+ d ⋆ F2 = G2 ∧ F2,   d ⋆ G2 = (1/2) F2 ∧ F2    (2.13)
+ In the same vein as before, these two currents can be improved to Page currents. In turn, topological operators corresponding to the Page charges can be appropriately defined. For α = 1/N these operators can be written as
+ D^G_{1/N}(Σ3) = ∫ Dc1 |_{Σ3} exp( 2πi ∫_{Σ3} ( ⋆G2/N + (N/2) c1 ∧ dc1 − c1 ∧ F2 ) )    (2.14)
+ D^F_{1/N}(Σ3) = ∫ Dc1 Dv1 |_{Σ3} exp( 2πi ∫_{Σ3} ( ⋆F2/N + N v1 ∧ dc1 − c1 ∧ G2 − v1 ∧ F2 ) )    (2.15)
+ These two constructions can once again be extended to any rational α = p/N by using the A^{N,p} theories. The upshot is that the electric symmetries are turned non-invertible by the Chern-Simons terms, and the true symmetry of the theory is
+ Γ^(1)_{Q,A} × U(1)^(2)_{m,A} × Γ^(1)_{Q,C} × U(1)^(2)_{m,C}    (2.16)
+ In the remainder of this note we explore similar constructions in supergravity, where Chern-Simons terms are ubiquitous.
+ 3 Non-invertible symmetries in Supergravity
+ 3.1 11d Supergravity
+ The construction of the non-invertible defects in the previous examples is only possible thanks to the existence of a particular 3d TQFT with the appropriate 1-form symmetry and anomaly. For would-be U(1) actions with phase α = 2π/N the full topological operator is (2.14), and the TQFT that is stacked on top of the naive operator (2.6) takes the form
+ A^{N,1}_3(Σ3) = ∫ Dc1 |_{Σ3} exp( 2πi ∫_{Σ3} ( (N/2) c1 ∧ dc1 − c1 ∧ F2 ) )    (3.1)
+ where c1 is an auxiliary U(1) gauge field living on Σ3 and F2 is the magnetic current in the bulk that needs to be gauged to obtain the defect. This TQFT, and its generalizations A^{N,p}_3(Σ3), are called fractional quantum Hall (FQHE) states, and are particular to 3d. In 7d an analogous FQHE construction can be made,6 where one can write the following TQFT:
+ A^{N,1}_7(Σ7) = ∫ Dc3 |_{Σ7} exp( 2πi ∫_{Σ7} ( (N/2) c3 ∧ dc3 − c3 ∧ F4 ) )    (3.2)
+ As we now argue, this TQFT precisely arises in 11d supergravity. Consider its action,
+ S = (1/(2κ11²)) ∫_{Σ11} ( √−g R − (1/2) F4 ∧ ⋆F4 − (1/6) A3 ∧ F4 ∧ F4 )    (3.3)
+ 6 See, for instance, [65].
+ where 2κ11² = (2π)^{−1} (2π lp)^9. If the Chern-Simons coupling is turned off, this theory has two symmetries, U(1)^(3)_e × U(1)^(6)_m, with currents je = F4 and jm = ⋆F4, conserved thanks to the equation of motion and the Bianchi identity of F4. As discussed in [60], it also has a Chern-Weil symmetry with current F4 ∧ F4, which will not be relevant for our purposes. Once one includes the CS interaction, the conservation equation of U(1)^(3)_e is modified to
+ d ⋆ F4 = (1/2) F4 ∧ F4    (3.4)
+ Hence the CS term explicitly breaks U(1)^(3)_e and gauges the Chern-Weil current. By now the game we must play is clear: this current fulfills the conditions to be improved to a U(1) Page current,
+ d( ⋆F4 − (1/2) A3 ∧ F4 ) = 0    (3.5)
+ which is conserved but not gauge invariant. The naive operator, valid only for trivial topology, is
+ Uα(Σ7) = exp( 2πiα ∫_{Σ7} ( ⋆11F4 − (1/2) A3 ∧ F4 ) )    (3.6)
+ Writing the corresponding good topological operator is straightforward. For α = 1/N ∈ U(1) we just need to stack the 7d FQHE theory. The resulting operator is
+ D1/N(Σ7) = ∫ Dc3 |_{Σ7} exp( 2πi ∫_{Σ7} ( ⋆11F4/N + (N/2) c3 ∧ dc3 − c3 ∧ F4 ) )    (3.7)
+ In fact, we can make use of the A^{(N,p)}_7[b4] theory defined in Appendix A to build a topological operator for any α ∈ [0, 1) of the form α = p/N, with p, N ∈ Z,
+ Dp/N(Σ7) = exp( (2πip/N) ∫_{Σ7} ⋆11F4 ) × A^{(N,p)}_7[F4/N]    (3.8)
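+ As a heuristic check, mirroring the c1 = A1/N remark below (2.9) (this step is morally, not literally, valid for U(1) gauge fields): the equation of motion of c3 in (3.7) sets N dc3 = F4, i.e. c3 = A3/N, and substituting back gives
+ (N/2) c3 ∧ dc3 − c3 ∧ F4 → (1/(2N)) A3 ∧ F4 − (1/N) A3 ∧ F4 = −(1/(2N)) A3 ∧ F4,
+ so that D1/N(Σ7) reduces to exp( (2πi/N) ∫_{Σ7} ( ⋆11F4 − (1/2) A3 ∧ F4 ) ), the naive operator (3.6) with α = 1/N.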
+ These defects can be explicitly constructed by higher gauging the magnetic U(1)^(6)_m symmetry, as detailed in Appendix B. The upshot of the discussion above is that, contrary to expectations, the electric U(1)^(3)_e symmetry is not completely broken: a non-invertible rational discrete subgroup remains. The symmetries associated to F4 in M-theory are then
+ Γ^(3)_Q × U(1)^(6)_m    (3.9)
+ Of course, we expect these two symmetries to be broken by the UV completion of 11d supergravity. This is indeed the case in M-theory, as the inclusion of dynamical M2- and M5-branes breaks them explicitly.
+ In fact, given that one needs to gauge the magnetic symmetry to build the electric defects, the presence of dynamical M5-branes alone is enough to break both symmetries. A consequence is that Γ^(3)_Q must be broken at an energy scale lower than or equal to that of U(1)^(6)_m.
+ Let us now study more carefully the action of the topological defect (3.8) on electric and magnetic probes, which we call probe M2- and M5-branes for obvious reasons. On probe M2-branes, which source ⋆F4, the action is invertible and given by the first term in (3.8); it is indistinguishable from the one expected for the electric 3-form symmetry. The second term in (3.8) can detect magnetic charges, i.e., sources for F4. In order to find the precise action, consider the construction of the topological defect on Σ7 via higher gauging of the magnetic symmetry, as explained in Appendix B. Take 11d spacetime to be R11 with coordinates x1, ..., x11, such that Σ7 spans x1, ..., x7 and the gauging is defined for x8 > 0, so that Σ8 = R7 × R^{>0}_{x8}. Denote by H^{F4}_m a 6-dimensional source of m units of F4 flux, which could be a stack of probe M5-branes, spanning directions x5,6,7,9,10,11. If we displace H^{F4}_m from x8 < 0 to x8 > 0 it enters the region where the magnetic symmetry is gauged and stops being gauge invariant. In more detail, the 3-dimensional part of the probe M5 worldvolume that lies inside the submanifold where the symmetry is gauged, Σ3 ≡ R³_{x5,x6,x7} = WV(M5) ∩ Σ8, transforms under b4 → b4 + dΛ3 gauge transformations by picking up a phase,
+ H^{F4}_m → H^{F4}_m e^{2πim ∫_{Σ3} Λ3},   for x8 > 0    (3.10)
+ Hence, in the x8 > 0 region the gauge-invariant object is
+ H^{F4}_m e^{−2πim ∫_{Σ4} b4} = H^{F4}_m e^{(2πipm/N) ∫_{Σ4} F4}    (3.11)
+ where Σ4 is such that ∂Σ4 = Σ3. To write the right-hand side we have used the equation of motion for b4 from Appendix B, mod N. We conclude that the defect acts on the magnetic source by attaching a fractional F4 flux along Σ4, as depicted in Fig. 2,
+ Dp/N(Σ7) H^{F4}_m = H^{F4}_m e^{(2πipm/N) ∫_{Σ4} F4}    (3.12)
+ An important consequence of this action is that, if m ̸= 0 mod N, the symmetry defect annihilates the magnetic source, as argued in [66]. Consider again Figure 2. The idea is that, since Dp/N(Σ7) is topological, it may be shrunk to a point and removed, giving rise to a topological endpoint for V(Σ4)^{pm/N} ≡ e^{(2πipm/N) ∫_{Σ4} F4}. This endpoint is local, and its existence implies that V(Σ4)^{pm/N} can develop a hole and disappear. However, V(Σ4)^{pm/N} is a generator of the magnetic symmetry U(1)^(6)_m, which acts faithfully, so it had better be that it cannot just go away! The only solution is that correlation functions with this endpoint give zero. We conclude that the topological operator annihilates H^{F4}_m sources unless
+ pm/N ∈ Z    (3.13)
+ Figure 2: A magnetic source for F4 crosses the topological defect and picks up a fractional F4 flux attached to it. [figure omitted]
+ However, if m = 0 mod N, V(Σ4)^{pm/N} becomes trivial except at the boundary of Σ4, and the action of the topological operator leaves an operator insertion on Σ3. We leave the detailed study of this junction for future work.
+ 3.2 Type IIA Supergravity
+ The existence of non-invertible symmetries in 11d supergravity suggests the existence of appropriate counterparts in Type IIA supergravity, which arises upon dimensional reduction on a circle. We will not attempt a direct dimensional reduction of the symmetries in this work; we study instead the non-invertible symmetries directly in the 10d formulation. Consider the low-energy action of Type IIA string theory in 10d [67],
+ S_IIA = (1/(2κ²)) ∫_{M10} √−g [ e^{−2Φ} ( R + 4|dΦ|² − (1/2)|H3|² ) − (1/2)|F2|² − (1/2)| ˜F4|² ] − (1/(2κ²)) ∫_{M10} (1/2) B2 ∧ F4 ∧ F4    (3.14)
+ where ˜F4 = dA3 + A1 ∧ H3 is invariant under the gauge transformation (A1, A3) → (A1 + dλ0, A3 − λ0 ∧ H3). The equations of motion for F2, ˜F4 and H3 are
+ d ⋆ F2 = H3 ∧ ⋆ ˜F4    (3.15)
+ d ⋆ ˜F4 = H3 ∧ F4 = H3 ∧ ˜F4    (3.16)
+ d ⋆ H3 = (1/2) ˜F4 ∧ ˜F4 + F2 ∧ ⋆ ˜F4    (3.17)
+ The Bianchi identities are
+ dH3 = 0,    dF2 = 0,    dF̃4 = F2 ∧ H3.                                          (3.18)
+ The Bianchi identities for F2, H3 imply that there are two conserved currents, ⋆H3 and
+ ⋆F2. These generate two magnetic symmetries U(1)^{(6)}_m × U(1)^{(7)}_m which are well
+ understood, see for instance [60]. The remaining equations can be rewritten by
+ introducing Hodge dual field strengths F̃p = ⋆F̃_{10−p} as
+ dF̃4 = F2 ∧ H3,    dF̃6 = F̃4 ∧ H3,    dF̃8 = F̃6 ∧ H3,                           (3.19)
+ d⋆H3 = (1/2) F̃4 ∧ F̃4 + F2 ∧ F̃6.                                               (3.20)
+ The newly introduced gauge invariant field strengths are defined as F̃p = dA_{p−1} +
+ A_{p−3} ∧ H3. From the equations above we see that there are three candidates for
+ non-invertible symmetries. Indeed, the right-hand sides in equations 3.19 can be shown
+ to be locally exact, with integrals that vanish on manifolds with trivial topology, so
+ we may write the following conserved but gauge variant Page currents,
+ d( F̃4 − A1 ∧ H3 ) = 0,                                                          (3.21)
+ d( F̃6 − A3 ∧ H3 ) = 0,                                                          (3.22)
+ d( F̃8 − A5 ∧ H3 ) = 0.                                                          (3.23)
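+ For instance, closure of the first Page current follows in one line from the Bianchi
+ identities (a short check we add for clarity), using F2 = dA1 and dH3 = 0:
+   d( F̃4 − A1 ∧ H3 ) = dF̃4 − dA1 ∧ H3 = F2 ∧ H3 − F2 ∧ H3 = 0.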
+ At this point the strategy seems clear: we can dress the would-be topological operators
+ with some gapped degrees of freedom to obtain good topological operators. However, for
+ eqs. (3.22) and (3.23) things are not so simple. Let us focus first on eq. (3.21). We
+ consider a U(1) transformation with α = 1/N and, in analogy to 2.15, propose the
+ following operator,
+ D^{F4}_{1/N}(Σ4) = ∫ Dc1 Dv2 |_{Σ4} exp[ 2πi ∫_{Σ4} ( F̃4/N + N v2 ∧ dc1 − c1 ∧ H3 − v2 ∧ dA1 ) ].   (3.24)
+ This operator is gauge invariant and can be built explicitly by gauging a Z^{(6)}_N × Z^{(7)}_N
+ subgroup of the magnetic symmetry U(1)^{(6)}_m × U(1)^{(7)}_m with currents ⋆H3 and ⋆F2
+ on a manifold Σ5 such that ∂Σ5 = Σ4, as shown in Appendix B. In fact, as discussed
+ there, the operator can be built for arbitrary α = p/N in a similar way, such that it is
+ explicitly topological and gauge invariant. We hence conclude that the would-be magnetic
+ U(1)^{(5)}_m symmetry of F̃4, which is broken by the modified Bianchi identity,
+ becomes a non-invertible Γ^{(5)}_Q symmetry. Let us describe how the topological
+ operator acts on the different probe branes. Since this discussion is a straightforward
+ extension of the 11d supergravity case, we will be brief. A related in-depth discussion
+ for the case of axion electrodynamics is presented in [16, 68]. The defect acts
+ invertibly on probe D4-branes, as the naive magnetic symmetry of F̃4 would, but it acts
+ non-invertibly on probe NS5- and D6-branes. In particular, these probes are not gauge
+ invariant if they intersect the auxiliary higher-gauging submanifold, and a flux needs
+ to be attached to them. The upshot is that the topological defect with α = p/N
+ annihilates the probe NS5- or D6-branes if their charge m does not satisfy the relation
+ pm/N ∈ Z.                                                                        (3.25)
+ If the relation above is satisfied, the action of the topological defect on the probe
+ NS5- or D6-brane leaves behind an operator stuck in its worldvolume, of dimension 1 for
+ the NS5-brane and 2 for the D6-brane. While the study of these junctions is very
+ interesting, it goes beyond the scope of this work and we leave it for the future. We
+ expect that anomaly inflow from the bulk will impose the existence of certain
+ worldvolume degrees of freedom on the branes, which will in turn be used to furnish the
+ appropriate operators.
+ Consider now the Page currents in eqs. (3.22) and (3.23). As we now explain, the
+ would-be topological operators analogous to 3.24 cannot be built by higher gauging,
+ because there is a higher anomaly. For 3.22, for instance, one could propose the
+ following operator,
+ D^{F6}_{1/N}(Σ6) = ∫ Dc3 Dv2 |_{Σ6} exp[ 2πi ∫_{Σ6} ( F̃6/N + N v2 ∧ dc3 − c3 ∧ H3 − v2 ∧ dA3 ) ].   (3.26)
+ One immediately notices, however, that the last term is not invariant under (A1, A3) →
+ (A1 + dλ0, A3 − λ0 ∧ H3) gauge transformations. One can gain further insight into this
+ issue by explicitly introducing ZN background gauge fields for ⋆H3 and ⋆dA3 on Σ7 such
+ that ∂Σ7 = Σ6. If correct, this higher gauging should generate 3.26. There is, at first
+ sight, nothing wrong with gauging the currents ⋆H3 and ⋆dA3, since they are both
+ conserved; the gauging is implemented by adding the following terms to the action(7),
+ δS = 2πi ∫_{Σ7} ( N p′ b3 ∧ b4 + N b3 ∧ dĉ3 + N b4 ∧ dv̂2 − b4 ∧ H3 − b3 ∧ dA3 ).   (3.27)
+ The problem is that, under an A1 gauge transformation, the last term picks up an
+ anomalous piece,
+ 2πi ∫_{Σ7} b3 ∧ dλ0 ∧ H3.                                                        (3.28)
+ We conclude that there is a higher anomaly(8), encoded by inflow as 2πi ∫ b3 ∧ dA1 ∧ H3,
+ which precludes the higher gauging of the symmetry associated with the current ⋆dA3. A
+ similar statement holds for ⋆dA5. A different way of phrasing the problem is by noticing
+ that there is no magnetic symmetry associated to the would-be currents ⋆dA3 or ⋆dA5: we
+ do not have the necessary symmetries for the higher gauging procedure. An open
+ possibility is to gauge the 5-form non-invertible magnetic symmetry for F̃4 that we just
+ built. This would amount to performing an explicit sum over the non-invertible defects
+ on Σ7, instead of the coupling to the background field b3. We would in turn be able to
+ do it for F̃8 by using the newly found operators. It is not clear, however, how to
+ perform this gauging, which we leave for future work. In section 4 we comment on how
+ this problem seems to be avoided in the alternative approach presented in [36, 38].
+ (7) For more details on this procedure see appendix B.
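+ Explicitly (our one-line expansion of the statement above): under A1 → A1 + dλ0 the
+ potential A3 shifts as A3 → A3 − λ0 ∧ H3, so dA3 → dA3 − dλ0 ∧ H3, and the last term
+ of (3.27) varies as
+   −b3 ∧ dA3 → −b3 ∧ dA3 + b3 ∧ dλ0 ∧ H3,
+ reproducing the anomalous piece (3.28).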
+ 3.3 Type IIB Supergravity
+ The discussion above translates almost verbatim to Type IIB supergravity, which has the
+ following action,
+ S_IIB = (1/2κ²) ∫_{M10} √−g [ e^{−2Φ} ( R + 4|dΦ|² − (1/2)|H3|² ) − (1/2)|F1|² − (1/2)|F̃3|² − (1/2)|F̃5|² ]
+         − (1/4κ²) ∫_{M10} A4 ∧ H3 ∧ F3,                                         (3.29)
+ where the gauge invariant field strengths F̃3 = F3 − A0 ∧ H3 and F̃5 = F5 − (1/2) A2 ∧ H3
+ + (1/2) B2 ∧ F3 have been introduced. This action needs to be supplemented with the
+ self-duality condition for F̃5: F̃5 = ⋆F̃5. The equations of motion are:
+ d⋆H3 = −F̃5 ∧ F̃3 + F1 ∧ ⋆F̃3,                                                   (3.30)
+ d⋆F1 = −H3 ∧ ⋆F̃3,                                                               (3.31)
+ d⋆F̃3 = F̃5 ∧ H3,                                                                (3.32)
+ d⋆F̃5 = −F̃3 ∧ H3.                                                               (3.33)
+ (8) This anomaly is in fact inherited from a standard anomaly in the bulk theory, given
+ by inflow as ∼ B6 ∧ dA1 ∧ H3, B6 being the background coupled to the would-be current
+ ⋆dA3.
+ The Bianchi identities are
+ dH3 = 0,                                                                         (3.34)
+ dF1 = 0,                                                                         (3.35)
+ dF̃3 = −F1 ∧ H3,                                                                 (3.36)
+ dF̃5 = −F̃3 ∧ H3.                                                                (3.37)
+ Note that 3.33 and 3.37 are the same equation. There are two conserved currents, ⋆H3
+ and ⋆F1, which generate a U(1)^{(6)}_m × U(1)^{(8)}_m magnetic symmetry. Armed with the
+ experience from the previous section, we recognise that only the modified Bianchi
+ identity dF̃3 = −F1 ∧ H3 can give rise to a non-invertible symmetry(9). The
+ corresponding topological operator is
+ D^{F3}_{1/N}(Σ3) = ∫ Dc0 Dv2 |_{Σ3} exp[ 2πi ∫_{Σ3} ( F̃3/N − N v2 ∧ dc0 + c0 ∧ H3 + v2 ∧ dA0 ) ],   (3.38)
+ which can be checked to correspond to the integral of the corresponding Page current
+ upon naive integration of c0 = A0/N and v2 = −B2/N. Once again we conclude that the
+ U(1)^{(5)}_m with non-conserved current ⋆F̃3 is not completely broken: a non-invertible
+ Γ^{(5)}_Q remains. The discussion regarding the action of this operator on the
+ different probe objects of the theory is completely analogous to the one for Type IIA,
+ so we spare the reader the repetition. Let us just mention that it acts invertibly on
+ D5-branes and non-invertibly on NS5- and D7-branes.
+ 4 A different approach
+ We have so far seen how to explicitly build non-invertible topological defects in 11d
+ and 10d supergravity. In each of those cases, a would-be broken U(1) symmetry is still
+ realized by more exotic topological operators. The price to pay is that the topological
+ operators only exist for rational angles α = p/N, and their fusion rules become
+ non-invertible. As we have discussed, the existence of the topological operators is
+ intimately tied to the existence of precise higher form symmetries, a discrete subgroup
+ of which can be gauged in a particular way. In this section we compare our approach to
+ the one pioneered in [36, 38] and applied to supergravity in [38]. The idea is elegant
+ and simple; consider for instance the Page charge associated to 3.21,
+ U_α(Σ4) = exp[ 2πiα ∫_{Σ4} ( F̃4 − A1 ∧ H3 ) ].                                  (4.1)
+ This operator is topological but not gauge invariant under A1 → A1 + dλ0. In the
+ preceding sections we saw that a gauge invariant version of 4.1 can be written for
+ α = p/N by stacking an appropriate TQFT. A simpler approach is to introduce a
+ Stueckelberg-like field that restores gauge invariance. A compact scalar θ, transforming
+ under A1 gauge transformations as θ → θ + λ0, does the job,
+ U′_α(Σ4) = ∫ Dθ |_{Σ4} exp[ 2πiα ∫_{Σ4} ( F̃4 − (A1 − dθ) ∧ H3 ) ].              (4.2)
+ (9) Similar comments as in the previous section apply here. One may wonder whether
+ other modified Bianchi identities can give rise to non-invertible symmetries upon
+ gauging the other non-invertible symmetries.
+ An immediate advantage of this approach is that α is not limited to be rational, and
+ the whole U(1) symmetry is unbroken. Still, this U(1) can be seen to act non-invertibly
+ on sources of H3 flux. Consider the following setup: Σ4 = S3 × S1 with m units of H3
+ flux, ∫_{S3} H3 = m, which could be sourced by m NS5-branes, for instance. We wish to
+ evaluate the path integral with the insertion of the topological operator,
+ ⟨U′_α(S3 × S1)⟩ = ⟨ ∫ Dθ |_{Σ4} exp[ 2πiα ∫_{Σ4} ( F̃4 − (A1 − dθ) ∧ H3 ) ] ⟩.   (4.3)
+ We can evaluate explicitly the term ∼ dθ ∧ H3 by taking H3 to be a background:
+ ∫ Dθ exp[ 2πiα ∫_{S3×S1} dθ ∧ H3 ] = Σ_{ω∈Z} e^{2πiαmω},                         (4.4)
+ where we have traded the integral over θ for a sum over its periods along S1. The
+ resulting sum is a delta function that is non-zero only for
+ αm ∈ Z.                                                                          (4.5)
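+ Spelling this out (an elementary identity we include for completeness): on the winding
+ sector with ∫_{S1} dθ = ω ∈ Z and ∫_{S3} H3 = m, the phase reduces to e^{2πiαmω}, and
+   Σ_{ω∈Z} e^{2πiαmω} = Σ_{n∈Z} δ(αm − n),
+ which vanishes unless αm ∈ Z, as stated in (4.5).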
+ An important remark is that this operator does not act on objects that only source dA1.
+ Indeed, if we consider a similar setup as before but with Σ4 = S2 × S1 × S1 and m units
+ of dA1 flux on S2, we find ⟨U′_α(S2 × S1 × S1)⟩ = 1. This is in sharp contrast with the
+ topological operator that we built previously, 3.24, which acts non-invertibly both on
+ sources of H3 and of dA1. In order to better understand the connection between the two
+ approaches, let us define yet another operator,
+ Ûα(Σ4) = ∫ Dθ Dc1 |_{Σ4} exp[ 2πiα ∫_{Σ4} ( F̃4 − (1/2)(A1 − dθ) ∧ H3 − (1/2)(B2 − dc1) ∧ dA1 ) ],   (4.6)
+ where (B2, c1) → (B2 + dΛ1, c1 + Λ1). That this operator is gauge invariant and
+ topological on a closed manifold can be checked by extending it to 5d(10). This new
+ operator acts non-invertibly in sectors with either ∫ H3 = m or ∫ dA1 = m fluxes. The
+ argument is analogous to the previous case and implies the vanishing of the partition
+ function unless
+ αm ∈ 2Z.
+ We note that the action of this operator is hence not equivalent to 3.24, for which α
+ has to satisfy a less stringent condition in terms of m, eq. 3.25(11). A further
+ difference is that this operator admits a straightforward generalization to arbitrary
+ Page charges. Let us write for completeness the one associated with the current in
+ Eq. 3.22:
+ Ûα(Σ6) = ∫ Dc1 Dc2 |_{Σ6} exp[ 2πiα ∫_{Σ6} ( F̃6 − (1/2)(A3 − dc2) ∧ H3 − (1/2)(B2 − dc1) ∧ dA3 ) ].   (4.8)
+ Gauge invariance is a bit trickier, since dA3 transforms under A1 gauge transformations
+ as dA3 → dA3 − dλ0 ∧ H3, but it can be checked to hold on closed manifolds.
+ 5 Discussion
+ In this note we have started the exploration of non-invertible symmetries in
+ supergravity, the low energy limit of M-theory and String Theory. We have found that,
+ in each case studied, a U(1) higher form symmetry that seemed broken by Chern-Simons
+ terms (or modified Bianchi identities) can be recovered in the form of non-invertible
+ topological operators for each rational value of α ∈ [0, 1). We have described how
+ these operators can be constructed explicitly by higher gauging discrete subgroups of
+ the invertible symmetries of the theory. This construction automatically implies their
+ topological nature and allowed us to deduce their action on the different brane probes
+ of the theory. Finally, we have constructed alternative topological operators by using
+ Stueckelberg-like fields. This approach has the advantage of being defined for
+ arbitrary, also irrational, α. Another advantage is that it admits a straightforward
+ generalization to any Page charge, which is unclear how to do for the first method. We
+ conclude this section by making some
+ (10) For further arguments on its topological nature one can formulate similar
+ considerations as the ones presented in Appendix A of [38].
+ (11) In the interest of simplicity we have avoided discussing the 11d supergravity
+ topological operator in this approach. In that case one must compare
+ U′_α(Σ7) = ∫ Dc2 |_{Σ7} exp[ 2πiα ∫_{Σ7} ( ⋆11 F4 − (A3 − dc2) ∧ F4 ) ]          (4.7)
+ and 3.8. Again a factor of 2 appears, which implies that the two operators are not
+ completely equivalent.
+ comments on the applications of these symmetries, particularly in the context of the
+ Swampland program [69, 70].
+ An interesting application of non-invertible symmetries of this kind is that they
+ require the existence of auxiliary symmetries, which need to be gauged for the
+ non-invertible topological operator to exist. This gives rise to a hierarchy between
+ the symmetry-breaking scales of the different symmetries of the theory(12). Consider
+ the operator in 3.8, for instance. The existence of the 3-form non-invertible symmetry
+ requires the existence of an exact 6-form magnetic symmetry. This implies a hierarchy
+ between the energies at which these symmetries are broken in a UV-completion:
+ E(Γ^{(3)}_Q) ≤ E(U(1)^{(6)}_m).                                                  (5.1)
+ In particular, if one assumes that Γ^{(3)}_Q and U(1)^{(6)}_m are broken explicitly by
+ including dynamical objects electrically charged under them, M2- and M5-branes
+ respectively, one concludes the following relation between their tensions,
+ T_{M2}^{1/3} ≲ T_{M5}^{1/6},                                                     (5.2)
+ which is true up to an order 1 factor with the values T_{M2} = (2π)^{−2} l_p^{−3},
+ T_{M5} = (2π)^{−5} l_p^{−6} (13).
+ Similar considerations apply to the operator in 3.24. In that case, to build Γ^{(5)}_Q
+ defects we need to gauge a subgroup of U(1)^{(6)}_m × U(1)^{(7)}_m, which implies
+ E(Γ^{(5)}_Q) ≤ min{ E(U(1)^{(6)}_m), E(U(1)^{(7)}_m) }.                          (5.3)
+ If we again assume that the symmetries are only broken by the presence of the minimal
+ branes charged under them, we have
+ T_{D4}^{1/5} ≲ min{ T_{NS5}^{1/6}, T_{D6}^{1/7} },                               (5.4)
+ which again is fulfilled in Type IIA String Theory by the NS5-brane. A similar
+ relation, which is again satisfied, applies to the topological operator of Type IIB
+ String Theory. The assumption that the symmetries are only broken by their coupling to
+ their minimal objects is not true in either M-theory or Type II String Theory, so these
+ relations need not hold. It is still amusing to see that they are indeed satisfied.
+ Let us now elaborate a bit on the relation between these symmetries and the
+ completeness hypothesis. The two simplest mechanisms by which a putative UV completion
+ breaks higher-form symmetries of the low energy theory are the presence of
+ (12) This is clear for the operators constructed in section 3, while it is less clear
+ for the ones in section 4.
+ (13) Note that a similar relation is enforced by the 2-group structure.
+ Chern-Simons couplings and the introduction of dynamical objects [3, 51, 60]. In this
+ work we have argued that Chern-Simons terms are generically not enough to effectively
+ break the symmetries, making the case for the need to add dynamical objects. This makes
+ clearer than ever the connection between having a complete spectrum (the Completeness
+ Hypothesis) and the absence of generalized global symmetries.
+ We finish with an outlook on future work.
+ • We have seen two different approaches to finding non-invertible symmetries for
+ non-conservation equations. It would be very interesting to better understand the
+ connection between the two approaches. In particular, it would help if we understood
+ how to construct the operators in section 4 explicitly, maybe from higher gauging.
+ • Relatedly, we have left for future work the understanding of whether more "rational
+ valued" non-invertible operators can be built in Type II supergravity by further
+ gauging the non-invertible symmetries.
+ • It would be very interesting to elaborate on the actions of the topological operators
+ on the different probe branes. We expect very rich fusion rules, in a similar spirit to
+ [66].
+ • We expect these results to have many applications and generalizations in string
+ compactifications. In particular, it would be very interesting to see if our methods
+ can be generalized to compactifications with generalized θ-terms, as studied in [71].
+ Acknowledgments
+ I am pleased to thank Jeremías Aguilera, Riccardo Argurio, Shani Meynet, Miguel Montero
+ and Damian van de Heisteeg for related and insightful discussions. I would also like to
+ thank Shani Meynet, Antoine Pasternak, Valdo Tatitscheff and especially Riccardo
+ Argurio and Iñaki García-Etxebarria, for taking time during Christmas to read this
+ manuscript and share important comments. Finally, I want to thank Raquel for her
+ support. This research is supported by a Margarita Salas award CA1/RSUE/2021-00738 from
+ the Plan de Recuperación, Transformación y Resiliencia of the Ministerio de
+ Universidades, UAM and Harvard.
+ A The 7d ZN TQFT
+ In this appendix we define the 7d TQFT with 3-form ZN symmetry and anomaly, as needed
+ to render the 11d supergravity defects gauge invariant. We start by briefly reviewing
+ the 3d A^{N,p} theory. Consider a smooth deformation of the would-be topological defect
+ in 2.6 with α = p/N. The operator fails to be topological due to the equation of
+ motion, which gives it a phase,
+ exp[ (iπp/N) ∫_{M4} F2 ∧ F2 ].                                                   (A.1)
+ One looks for a TQFT that can cancel this phase. This is precisely what the A^{N,p}[B2]
+ theory does. Indeed, it is defined to have 1-form symmetry Z^{(1)}_N and anomaly given
+ by inflow as
+ S^{(N,p)}_3 = −iπpN ∫_{M4} B2 ∧ B2,                                              (A.2)
+ where B2 is a Z^{(1)}_N background gauge field with holonomies in Z/N. We may now
+ identify the background B2 = F2/N to cancel the phase A.1, so that
+ U_{p/N}(Σ3) × A^{(N,p)}[F2/N]                                                    (A.3)
+ is topological and gauge invariant, as reviewed in the main text. For further details
+ on how to define the A^{(N,p)}[B2] theory the reader may check [16, 64].
+ Consider now the case of 11d supergravity and the non-conservation equation in 3.4. The
+ naive operator is not topological, as it picks up a phase
+ exp[ (iπp/N) ∫_{M8} F4 ∧ F4 ].                                                   (A.4)
+ In analogy with the discussion above, we define a 7d TQFT A^{(N,p)}_7[B4] with 3-form
+ Z^{(3)}_N symmetry and anomaly characterized by inflow as
+ S^{(N,p)}_7 = −iπpN ∫_{M8} B4 ∧ B4,                                              (A.5)
+ where B4 is a Z^{(3)}_N background gauge field with holonomies in Z/N.
+ B Explicit construction of the non-invertible defects by half gauging
+ In this appendix we construct the rational valued non-invertible defects introduced in
+ the main text by using the technique of higher gauging [10]. Consider first the
+ non-invertible topological operator introduced for 11d supergravity, 3.8. We choose an
+ auxiliary manifold Σ8 such that Σ7 = ∂Σ8 and gauge a Z^{(6)}_N subgroup of the magnetic
+ symmetry U(1)^{(6)} with appropriate discrete torsion. This gauging is described by
+ adding the following terms to the path integral,
+ δS = 2πi ∫_{Σ8} ( N b4 ∧ dĉ3 + b4 ∧ F4 + (N p′/2) b4 ∧ b4 ),                     (B.1)
+ where pp′ = 1 mod N. Let us unpack the expression above a bit. The second term
+ describes the coupling of the U(1)^{(6)} current to a background b4 on Σ8. The first
+ term is a coupling to a U(1) Lagrange multiplier gauge field ĉ3, whose job is to
+ restrict the holonomy of b4 so that it is effectively a ZN gauge field. The third term
+ is a discrete torsion that one may always add. The equation of motion for b4 is
+ N dĉ3 + F4 + N p′ b4 = 0. Making b4 dynamical implements the gauging. Consider a closed
+ Σ8. If we use the equation of motion and remove terms that are multiples of 2πi, we see
+ that our gauging precisely cancels the phase in A.4, as needed. If we instead take
+ p′ = 1 and a Σ8 with boundary ∂Σ8 = Σ7, we explicitly find
+ δS = −iπN ∫_{Σ8} b4 ∧ b4 + ∫_{Σ7} A^{(N,1)}_7[B4].                               (B.2)
+ We thus conclude that the gauging above precisely generates A^{(N,p)}_7 on Σ7 = ∂Σ8.
+ This explicit construction of the defect is a further check of its topological nature.
+ An important point for this gauging to work is that the theory must be self-dual under
+ it, as emphasized in [66]. In particular, if one p-gauges a discrete q-form symmetry in
+ a d-dimensional theory, for the symmetries to be the same before and after the gauging
+ the following relation must hold,
+ q = (d + p − 2)/2.                                                               (B.3)
+ In the case at hand, q = 6, p = 3 and d = 11, so the relation is fulfilled.
+ Consider now the topological defect introduced for type IIA supergravity, 3.24. The
+ classical phase that one wishes to cancel is now
+ exp[ (2πip/N) ∫_{M5} dA1 ∧ H3 ].                                                 (B.4)
+ We have to gauge a Z^{(6)}_N × Z^{(7)}_N subgroup of the magnetic symmetry
+ U(1)^{(6)}_m × U(1)^{(7)}_m on a 5-dimensional manifold Σ5, so we introduce two
+ background fields, b2 and b3, respectively. We also introduce Lagrange multipliers ĉ1,
+ v̂2 enforcing ZN holonomies, and a discrete torsion term. The mixed gauging is hence
+ implemented by adding the following term to the path integral,
+ δS = 2πi ∫_{Σ5} ( N p′ b3 ∧ b2 + N b3 ∧ dĉ1 + N b2 ∧ dv̂2 − b3 ∧ dA1 − b2 ∧ H3 ).   (B.5)
+ Direct computation on a closed manifold, using the equations of motion of b2 and b3,
+ shows that one precisely cancels the phase in B.4. On a manifold with boundary, and
+ with p = 1, one recovers the TQFT in 3.24, as expected. This gauging allows us to
+ generalize the topological defect to arbitrary p. Note that in this gauging the quantum
+ symmetries are interchanged, in the sense that the quantum symmetry from gauging
+ Z^{(6)}_N becomes part of U(1)^{(7)}_m after the gauging, and vice versa. This ensures
+ that the symmetry before and after the gauging is the same. A similar construction
+ applies, with minor modifications, to the Type IIB defect in 3.38.
+ References
+ [1] D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, Generalized Global Symmetries, JHEP 02 (2015) 172, [arXiv:1412.5148].
+ [2] T. Rudelius and S.-H. Shao, Topological Operators and Completeness of Spectrum in Discrete Gauge Theories, JHEP 12 (2020) 172, [arXiv:2006.10052].
+ [3] B. Heidenreich, J. McNamara, M. Montero, M. Reece, T. Rudelius, and I. Valenzuela, Non-invertible global symmetries and completeness of the spectrum, JHEP 09 (2021) 203, [arXiv:2104.07036].
+ [4] M. Nguyen, Y. Tanizaki, and M. Ünsal, Semi-Abelian gauge theories, non-invertible symmetries, and string tensions beyond N-ality, JHEP 03 (2021) 238, [arXiv:2101.02227].
+ [5] M. Koide, Y. Nagoya, and S. Yamaguchi, Non-invertible topological defects in 4-dimensional Z2 pure lattice gauge theory, PTEP 2022 (2022), no. 1 013B03, [arXiv:2109.05992].
+ [6] Y. Choi, C. Cordova, P.-S. Hsin, H. T. Lam, and S.-H. Shao, Non-Invertible Duality Defects in 3+1 Dimensions, arXiv:2111.01139.
+ [7] J. Kaidi, K. Ohmori, and Y. Zheng, Kramers-Wannier-like Duality Defects in (3+1)D Gauge Theories, Phys. Rev. Lett. 128 (2022), no. 11 111601, [arXiv:2111.01141].
+ [8] C. Cordova, K. Ohmori, and T. Rudelius, Generalized Symmetry Breaking Scales and Weak Gravity Conjectures, arXiv:2202.05866.
+ [9] F. Benini, C. Copetti, and L. Di Pietro, Factorization and global symmetries in holography, arXiv:2203.09537.
+ [10] K. Roumpedakis, S. Seifnashri, and S.-H. Shao, Higher Gauging and Non-invertible Condensation Defects, arXiv:2204.02407.
+ [11] L. Bhardwaj, L. Bottini, S. Schafer-Nameki, and A. Tiwari, Non-Invertible Higher-Categorical Symmetries, arXiv:2204.06564.
+ [12] G. Arias-Tamargo and D. Rodriguez-Gomez, Non-Invertible Symmetries from Discrete Gauging and Completeness of the Spectrum, arXiv:2204.07523.
+ [13] Y. Hayashi and Y. Tanizaki, Non-invertible self-duality defects of Cardy-Rabinovici model and mixed gravitational anomaly, arXiv:2204.07440.
+ [14] Y. Choi, C. Cordova, P.-S. Hsin, H. T. Lam, and S.-H. Shao, Non-invertible Condensation, Duality, and Triality Defects in 3+1 Dimensions, arXiv:2204.09025.
+ [15] J. Kaidi, G. Zafrir, and Y. Zheng, Non-Invertible Symmetries of N = 4 SYM and Twisted Compactification, arXiv:2205.01104.
+ [16] Y. Choi, H. T. Lam, and S.-H. Shao, Non-invertible Global Symmetries in the Standard Model, arXiv:2205.05086.
+ [17] C. Cordova and K. Ohmori, Non-Invertible Chiral Symmetry and Exponential Hierarchies, arXiv:2205.06243.
+ [18] A. Antinucci, G. Galati, and G. Rizi, On Continuous 2-Category Symmetries and Yang-Mills Theory, arXiv:2206.05646.
+ [19] V. Bashmakov, M. Del Zotto, and A. Hasan, On the 6d Origin of Non-invertible Symmetries in 4d, arXiv:2206.07073.
+ [20] J. Aguilera Damia, R. Argurio, and L. Tizzano, Continuous Generalized Symmetries in Three Dimensions, arXiv:2206.14093.
+ [21] J. A. Damia, R. Argurio, and E. Garcia-Valdecasas, Non-Invertible Defects in 5d, Boundaries and Holography, arXiv:2207.02831.
+ [22] Y. Choi, H. T. Lam, and S.-H. Shao, Non-invertible Time-reversal Symmetry, arXiv:2208.04331.
+ [23] L. Bhardwaj, S. Schafer-Nameki, and J. Wu, Universal Non-Invertible Symmetries, Fortsch. Phys. 70 (2022), no. 11 2200143, [arXiv:2208.05973].
+ [24] T. Bartsch, M. Bullimore, A. E. V. Ferrari, and J. Pearson, Non-invertible Symmetries and Higher Representation Theory I, arXiv:2208.05993.
+ [25] L. Lin, D. G. Robbins, and E. Sharpe, Decomposition, Condensation Defects, and Fusion, Fortsch. Phys. 70 (2022), no. 11 2200130, [arXiv:2208.05982].
+ [26] I. García Etxebarria, Branes and Non-Invertible Symmetries, Fortsch. Phys. 70 (2022), no. 11 2200154, [arXiv:2208.07508].
+ [27] F. Apruzzi, I. Bah, F. Bonetti, and S. Schafer-Nameki, Non-Invertible Symmetries from Holography and Branes, arXiv:2208.07373.
+ [28] J. J. Heckman, M. Hübner, E. Torres, and H. Y. Zhang, The Branes Behind Generalized Symmetry Operators, arXiv:2209.03343.
+ [29] D. S. Freed, G. W. Moore, and C. Teleman, Topological symmetry in quantum field theory, arXiv:2209.07471.
+ [30] P. Niro, K. Roumpedakis, and O. Sela, Exploring Non-Invertible Symmetries in Free Theories, arXiv:2209.11166.
+ [31] J. Kaidi, K. Ohmori, and Y. Zheng, Symmetry TFTs for Non-Invertible Defects, arXiv:2209.11062.
+ [32] N. Mekareeya and M. Sacchi, Mixed Anomalies, Two-groups, Non-Invertible Symmetries, and 3d Superconformal Indices, arXiv:2210.02466.
+ [33] A. Antinucci, F. Benini, C. Copetti, G. Galati, and G. Rizi, The holography of non-invertible self-duality symmetries, arXiv:2210.09146.
+ [34] S. Chen and Y. Tanizaki, Solitonic symmetry beyond homotopy: invertibility from bordism and non-invertibility from TQFT, arXiv:2210.13780.
+ [35] V. Bashmakov, M. Del Zotto, A. Hasan, and J. Kaidi, Non-invertible Symmetries of Class S Theories, arXiv:2211.05138.
+ [36] A. Karasik, On anomalies and gauging of U(1) non-invertible symmetries in 4d QED, arXiv:2211.05802.
+ [37] C. Cordova, S. Hong, S. Koren, and K. Ohmori, Neutrino Masses from Generalized Symmetry Breaking, arXiv:2211.07639.
+ [38] I. García Etxebarria and N. Iqbal, A Goldstone theorem for continuous non-invertible symmetries, arXiv:2211.09570.
+ [39] E. P. Verlinde, Fusion Rules and Modular Transformations in 2D Conformal Field Theory, Nucl. Phys. B 300 (1988) 360–376.
+ [40] V. B. Petkova and J. B. Zuber, Generalized twisted partition functions, Phys. Lett. B 504 (2001) 157–164, [hep-th/0011021].
+ [41] J. Fuchs, I. Runkel, and C. Schweigert, TFT construction of RCFT correlators 1. Partition functions, Nucl. Phys. B 646 (2002) 353–497, [hep-th/0204148].
+ [42] J. Frohlich, J. Fuchs, I. Runkel, and C. Schweigert, Kramers-Wannier duality from conformal defects, Phys. Rev. Lett. 93 (2004) 070601, [cond-mat/0404051].
+ [43] L. Bhardwaj and Y. Tachikawa, On finite symmetries and their gauging in two dimensions, JHEP 03 (2018) 189, [arXiv:1704.02330].
+ [44] Y. Tachikawa, On gauging finite subgroups, SciPost Phys. 8 (2020), no. 1 015, [arXiv:1712.09542].
+ [45] C.-M. Chang, Y.-H. Lin, S.-H. Shao, Y. Wang, and X. Yin, Topological Defect Lines and Renormalization Group Flows in Two Dimensions, JHEP 01 (2019) 026, [arXiv:1802.04445].
+ [46] R. Thorngren and Y. Wang, Fusion Category Symmetry I: Anomaly In-Flow and Gapped Phases, arXiv:1912.02817.
+ [47] D. Gaiotto and J. Kulp, Orbifold groupoids, JHEP 02 (2021) 132, [arXiv:2008.05960].
+ [48] Z. Komargodski, K. Ohmori, K. Roumpedakis, and S. Seifnashri, Symmetries and strings of adjoint QCD2, JHEP 03 (2021) 103, [arXiv:2008.07567].
+ [49] M. Nguyen, Y. Tanizaki, and M. Ünsal, Noninvertible 1-form symmetry and Casimir scaling in 2D Yang-Mills theory, Phys. Rev. D 104 (2021), no. 6 065003, [arXiv:2104.01824].
+ [50] R. Thorngren and Y. Wang, Fusion Category Symmetry II: Categoriosities at c = 1 and Beyond, arXiv:2106.12577.
+ [51] M. Montero, A. M. Uranga, and I. Valenzuela, A Chern-Simons Pandemic, JHEP 07 (2017) 123, [arXiv:1702.06147].
+ [52] T. Banks and L. J. Dixon, Constraints on String Vacua with Space-Time Supersymmetry, Nucl. Phys. B 307 (1988) 93–108.
+ [53] T. Banks and N. Seiberg, Symmetries and Strings in Field Theory and Gravity, Phys. Rev. D 83 (2011) 084019, [arXiv:1011.5120].
+ [54] D. Harlow and H. Ooguri, Symmetries in quantum field theory and quantum gravity, Commun. Math. Phys. 383 (2021), no. 3 1669–1804, [arXiv:1810.05338].
+ [55] D. Harlow and E. Shaghoulian, Global symmetry, Euclidean gravity, and the black hole information problem, JHEP 04 (2021) 175, [arXiv:2010.10539].
+ [56] Y. Chen and H. W. Lin, Signatures of global symmetry violation in relative entropies and replica wormholes, JHEP 03 (2021) 040, [arXiv:2011.06005].
+ [57] P.-S. Hsin, L. V. Iliesiu, and Z. Yang, A violation of global symmetries from replica wormholes and the fate of black hole remnants, Class. Quant. Grav. 38 (2021), no. 19 194004, [arXiv:2011.09444].
+ [58] M. Sasieta, Wormholes from heavy operator statistics in AdS/CFT, arXiv:2211.11794.
+ [59] I. Bah, Y. Chen, and J. Maldacena, Estimating global charge violating amplitudes from wormholes, arXiv:2212.08668.
+ [60] B. Heidenreich, J. McNamara, M. Montero, M. Reece, T. Rudelius, and I. Valenzuela, Chern-Weil global symmetries and how quantum gravity avoids them, JHEP 11 (2021) 053, [arXiv:2012.00009].
+ [61] D. Marolf, Chern-Simons terms and the three notions of charge, in International Conference on Quantization, Gauge Theory, and Strings: Conference Dedicated to the Memory of Professor Efim Fradkin, pp. 312–320, 6, 2000. hep-th/0006117.
+ [62] C. Petersson, Superpotentials From Stringy Instantons Without Orientifolds, JHEP 05 (2008) 078, [arXiv:0711.1837].
+ [63] N. Seiberg and E. Witten, String theory and noncommutative geometry, JHEP 09 (1999) 032, [hep-th/9908142].
+ [64] P.-S. Hsin, H. T. Lam, and N. Seiberg, Comments on One-Form Global Symmetries and Their Gauging in 3d and 4d, SciPost Phys. 6 (2019), no. 3 039, [arXiv:1812.04716].
+ [65] J. J. Heckman and L. Tizzano, 6D Fractional Quantum Hall Effect, JHEP 05 (2018) 120, [arXiv:1708.02250].
+ [66] Y. Choi, H. T. Lam, and S.-H. Shao, Non-invertible Gauss Law and Axions, arXiv:2212.04499.
+ [67] K. Becker, M. Becker, and J. H. Schwarz, String theory and M-theory: A modern introduction. Cambridge University Press, 12, 2006.
+ [68] R. Yokokura, Non-invertible symmetries in axion electrodynamics, arXiv:2212.05001.
+ [69] C. Vafa, The String landscape and the swampland, hep-th/0509212.
+ [70] N. B. Agmon, A. Bedroya, M. J. Kang, and C. Vafa, Lectures on the string landscape and the Swampland, arXiv:2212.06187.
+ [71] T. W. Grimm, S. Lanza, and T. van Vuren, Global symmetry-breaking and generalized theta-terms in Type IIB EFTs, arXiv:2211.11769.
ANAyT4oBgHgl3EQf3_og/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
AtFRT4oBgHgl3EQfuDj6/content/tmp_files/2301.13630v1.pdf.txt ADDED
@@ -0,0 +1,1001 @@
+ Enhancing NOMA Networks via Reconfigurable Multi-Functional Surface
+ Ailing Zheng, Wanli Ni, Wen Wang, and Hui Tian
+ Abstract—By flexibly manipulating the radio propagation environment, reconfigurable
+ intelligent surface (RIS) is a promising technique for future wireless communications.
+ However, the single-side coverage and double-fading attenuation faced by conventional
+ RISs largely restrict their applications. To address this issue, we propose a novel
+ concept of multi-functional RIS (MF-RIS), which provides reflection, transmission, and
+ amplification simultaneously for the incident signal. With the aim of enhancing the
+ performance of a non-orthogonal multiple-access (NOMA) downlink multiuser network, we
+ deploy an MF-RIS to maximize the sum rate by jointly optimizing the active beamforming
+ and MF-RIS coefficients. Then, an alternating optimization algorithm is proposed to
+ solve the formulated non-convex problem by exploiting successive convex approximation
+ and a penalty-based method. Numerical results show that the proposed MF-RIS
+ outperforms conventional RISs under different settings.
+ Index Terms—Multi-functional reconfigurable intelligent surface, non-orthogonal
+ multiple access, rate maximization.
+ I. INTRODUCTION
+ Compared to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) is
+ capable of achieving high spectrum efficiency and massive connectivity [1]. Prior
+ investigations have shown that the differences between users' channel conditions can be
+ exploited to enhance NOMA performance [2]. However, users in large-scale networks may
+ have poor or similar channel conditions, which hinders the application of successive
+ interference cancellation (SIC) and the effective implementation of NOMA. Therefore,
+ adjusting channel conditions and enhancing channel diversity are able to release the
+ potential of NOMA in practical networks.
+ Recently, with the ability to reshape the wireless propagation environment, the
+ reconfigurable intelligent surface (RIS) has emerged as a key technique to improve the
+ performance of NOMA networks [3]. By properly designing the reflection coefficients, an
+ RIS is able to smartly change the combined channels so as to enhance the differences
+ among users, thus boosting the performance of NOMA in large-scale networks. Initial
+ investigations on RIS-aided NOMA networks in [3]–[7] have verified the superiority of
+ the integration of NOMA and RIS. Specifically, the authors of [3] and [4] provided
+ comprehensive discussions of the main challenges and futuristic use cases regarding
+ RIS-aided NOMA networks. Moreover, the works in [5]–[7] demonstrated the benefits
+ brought by RISs to achieve a performance trade-off among multiple NOMA users through
+ smartly adjusting the decoding order. However, the existing literature on RIS-aided
+ NOMA networks mostly uses the single-functional RIS (SF-RIS) that only supports signal
+ reflection or transmission/refraction. This implies that only users located on a single
+ side can be served by the SF-RIS if no additional operations are performed.
+ To overcome this limitation, the authors of [8] proposed the concept of the
+ dual-functional RIS (DF-RIS). Unlike the SF-RIS, the DF-RIS refers to a reconfigurable
+ dual-functional surface that can conduct signal reflection and transmission
+ simultaneously, such as the simultaneously transmitting and reflecting RIS (STAR-RIS)
+ [9] and the intelligent omni-surface (IOS) [10]. Specifically, the coverage
+ characterization of STAR-RIS-aided NOMA networks was investigated in [9] by studying a
+ coverage range maximization problem. The authors of [10] considered the average rate
+ maximization problem in an IOS-aided NOMA network with spatially correlated channels.
+ Furthermore, the effective capacity and the secrecy outage probability of STAR-RIS-aided
+ NOMA networks were derived in [11] and [12], respectively. However, although the
+ effective coverage can be enhanced by the existing DF-RISs, the signals relayed by the
+ DF-RIS still suffer from channel fading twice due to the features of cascaded channels.
+ This double-fading effect inevitably deteriorates the achievable performance of passive
+ RIS-assisted wireless networks. Therefore, it is necessary to design new RIS
+ architectures to mitigate the double-fading attenuation problem faced by the existing
+ RISs.
+ This letter was supported by the Natural Science Foundation of Shandong Province under
+ Grant No. ZR2021LZH010. The associate editor coordinating the review of this letter and
+ approving it for publication was Lina Bariah. (Corresponding author: Hui Tian.)
+ A. Zheng, W. Ni, W. Wang, and H. Tian are with the State Key Laboratory of Networking
+ and Switching Technology, Beijing University of Posts and Telecommunications, Beijing
+ 100876, China (e-mail: {ailing.zheng, charleswall, wen.wang, tianhui}@bupt.edu.cn).
+ arXiv:2301.13630v1 [cs.IT] 31 Jan 2023
+ [Figure 1 appears here: panels comparing the SF-RIS (reflection-only or
+ transmission-only coverage), the DF-RIS (simultaneous reflection and transmission), and
+ the proposed MF-RIS (reflection, transmission, and amplification via a power supply and
+ per-element amplifier/phase shifter), with users in the reflection and transmission
+ spaces.]
+ Fig. 1. Conventional RIS vs. the proposed MF-RIS-aided NOMA networks.
+ In this letter, a novel multi-functional RIS (MF-RIS) is proposed to address the
+ aforementioned issues. Specifically, the proposed MF-RIS can not only divide the
+ incident signal into a transmitted part and a reflected part based on the field
+ equivalence principle, but also amplify the outgoing signal with the help of active
+ loads. Thus, the MF-RIS is able to facilitate full-space coverage and overcome the
+ double-fading issue. Then, we investigate a sum rate maximization problem in an
+ MF-RIS-aided NOMA network. Compared to the existing problems formulated in [8] and [9],
+ the newly introduced MF-RIS constraints and highly coupled variables make the
+ performance optimization more complicated. The main contributions of this letter are
+ summarized as follows: 1) We propose a new concept of MF-RIS by integrating surface
+ electric and magnetic impedances and a power amplifier into each element, so that the
+ incident signal can be reflected, refracted, and amplified simultaneously. 2) We
+ formulate a non-convex optimization problem to maximize the throughput of an
+ MF-RIS-aided NOMA network, where the MF-RIS is deployed to constructively enhance the
+ channel conditions by flexibly adjusting the radio propagation environment. 3) To solve
+ the formulated non-convex problem, we propose an efficient iterative algorithm that
+ alternately optimizes the active beamforming and the MF-RIS coefficients based on the
+ penalty-based method and successive convex approximation (SCA). 4) Simulation results
+ show that the proposed MF-RIS-aided NOMA network can provide up to about 59% sum rate
+ gain over the SF-RIS, and that the MF-RIS prefers to be deployed at the user side for
+ better performance.
+ II. SYSTEM MODEL AND PROBLEM FORMULATION
+ A. System Model
+ We consider an MF-RIS-aided NOMA downlink network, where an N-antenna BS communicates
+ with K single-antenna users with the aid of an MF-RIS comprising M elements, as shown
+ on the right of Fig. 1. The sets of elements and users are denoted by M = {1, 2, ..., M}
+ and K = {1, 2, ..., K}, respectively. The channels of the BS-user, BS-RIS, and RIS-user
+ links are denoted by h_k ∈ C^{N×1}, H ∈ C^{M×N}, and g_k ∈ C^{M×1}, respectively.
+ Furthermore, we define
+ u_p = [ √(β^p_1) e^{jθ^p_1}, √(β^p_2) e^{jθ^p_2}, ..., √(β^p_M) e^{jθ^p_M} ]^T ∈ C^{M×1}
+ as the transmission (p = t) or reflection (p = r) beamforming vector, where p ∈ {t, r}
+ labels the transmission and reflection spaces, and β^p_m ∈ [0, β_max] and θ^p_m ∈ [0, 2π)
+ represent the amplitude and the phase shift response of the m-th element, respectively,
+ with the maximum amplification factor β_max ≥ 1. Due to the law of energy conservation,
+ we have β^r_m + β^t_m ≤ β_max. If user k is located in the reflection space, the
+ diagonal coefficient matrix of the MF-RIS for user k is given by Θ_k = diag(u_r);
+ otherwise, Θ_k = diag(u_t).
+ We assume that perfect channel state information (CSI) of all channels is available at
+ the BS. Then the signal received at user k is expressed as
+ y_k = ( h^H_k + g^H_k Θ_k H ) x + g^H_k Θ_k n_s + n_k,  ∀k,                        (1)
+ where x = Σ_k w_k s_k denotes the transmit signal, and w_k and s_k ∼ CN(0, 1) represent
+ the transmit precoder and the information symbol for user k, respectively. Here,
+ n_s ∼ CN(0, σ²_s I_M) denotes the dynamic noise introduced at the MF-RIS, with
+ per-element noise power σ²_s, and n_k ∼ CN(0, σ²_k) denotes the additive white Gaussian
+ noise at user k with power σ²_k.
+ By employing SIC, a strong user can mitigate the interference from weaker users to
+ improve its signal-to-interference-plus-noise ratio (SINR). Similar to [4]–[6], with
+ the assistance of the RIS to flexibly adjust the channel conditions of multiple users,
+ we assume that the users' indexes are ranked in increasing order of their channel
+ gains, i.e.,
+ ∥h̃_1∥² ≤ ∥h̃_2∥² ≤ ··· ≤ ∥h̃_K∥²,                                                 (2)
+ where h̃_k = h^H_k + g^H_k Θ_k H is the equivalent combined channel. For the fixed
+ decoding order, the achievable rate of user k is given by R_k = log2(1 + γ_k), where
+ γ_k is obtained as
+ γ_k = |h̃_k w_k|² / ( Σ^K_{i=k+1} |h̃_k w_i|² + |g^H_k Θ_k n_s|² + σ²_k ),  ∀k.    (3)
+ B. Problem Formulation
+ In this letter, we aim to maximize the achievable sum rate of all users by jointly
+ optimizing the active beamforming at the BS and the coefficients at the MF-RIS. Under
+ the transmit and amplification power constraints and the quality-of-service (QoS)
+ requirements of the users, the considered optimization problem can be formulated as
+ max_{w_k, Θ_k}  Σ^K_{k=1} R_k                                                     (4a)
+ s.t.  Σ^K_{k=1} ∥w_k∥² ≤ P_max,                                                   (4b)
+       Σ^K_{k=1} ( ∥Θ_k H w_k∥² + ∥Θ_k I_M∥²_F σ²_s ) ≤ P_o,                       (4c)
+       β^r_m + β^t_m ≤ β_max,  0 ≤ β^p_m ≤ β_max,  ∀m, ∀p,                         (4d)
+       R_k ≥ R^min_k,  θ^p_m ∈ [0, 2π),  (2),  ∀k, ∀m, ∀p,                         (4e)
+ where P_max and P_o denote the maximum transmit power at the BS and the maximum
+ amplification power at the MF-RIS, respectively, and R^min_k represents the minimum
+ rate requirement of user k. Specifically, the constraints on the transmit power, the
+ amplification power, the QoS requirements, and the decoding order are given in
+ (4b)-(4e), respectively. It can be observed that the formulated problem (4) is
+ intractable due to the non-convex objective function and constraints. Besides, the
+ active beamforming and the MF-RIS coefficients are highly coupled, which makes the
+ problem difficult to solve directly. Thus, we aim to transform problem (4) into
+ tractable convex subproblems and solve them separately and alternately over iterations.
+ In the next section, we adopt an alternating optimization method to obtain the active
+ beamforming and the MF-RIS coefficients efficiently.
+ III. PROPOSED SOLUTION
+ A. Active Beamforming Design
+ Given the MF-RIS coefficients, the active beamforming optimization problem is still
+ non-convex. To solve it, we first introduce an auxiliary variable set
+ {A_k, B_k | k ∈ K}, where A_k and B_k are defined as
+ A_k^{-1} = |h̃_k w_k|²,                                                           (5)
+ B_k = Σ^K_{i=k+1} |h̃_k w_i|² + |g^H_k Θ_k n_s|² + σ²_k.                          (6)
+ Thus, the achievable data rate can be rewritten as R_k = log2( 1 + (A_k B_k)^{-1} ).
+ Then, the active beamforming optimization problem in (4) can be equivalently expressed
+ as
+ max_{w_k, A_k, B_k, R_k}  Σ^K_{k=1} R_k                                           (7a)
+ s.t.  log2( 1 + (A_k B_k)^{-1} ) ≥ R_k,  ∀k,                                      (7b)
+       A_k^{-1} ≤ |h̃_k w_k|²,  ∀k,                                                (7c)
+       B_k ≥ Σ^K_{i=k+1} |h̃_k w_i|² + |g^H_k Θ_k n_s|² + σ²_k,  ∀k,               (7d)
+       R_k ≥ R^min_k,  (4b), (4c),  ∀k.                                            (7e)
+ We further define H̃_k = h̃^H_k h̃_k, D_k = (H^H Θ_k)(H^H Θ_k)^H, and W_k = w_k w^H_k,
+ where W_k ⪰ 0 and rank(W_k) = 1. Then, we have
+ |h̃_k w_k|² = Tr( H̃_k W_k ),   ∥Θ_k H w_k∥² = Tr( W_k D_k ).                      (8)
+ Therefore, problem (7) can be reformulated as
+ max_{W_k, A_k, B_k, R_k}  Σ^K_{k=1} R_k                                           (9a)
+ s.t.  A_k^{-1} ≤ Tr( H̃_k W_k ),  ∀k,                                             (9b)
+       B_k ≥ Σ^K_{i=k+1} Tr( H̃_k W_i ) + |g^H_k Θ_k n_s|² + σ²_k,  ∀k,            (9c)
+       Σ^K_{k=1} Tr( W_k ) ≤ P_max,                                                (9d)
+       Σ^K_{k=1} ( Tr( W_k D_k ) + ∥Θ_k I_M∥² σ²_s ) ≤ P_o,                        (9e)
+       rank(W_k) = 1,  ∀k,                                                         (9f)
+       W_k ⪰ 0,  R_k ≥ R^min_k,  (7b),  ∀k.                                        (9g)
+ In order to deal with the non-convex constraint (7b), we adopt the first-order Taylor
+ expansion, which yields the following lower bound:
+ log2( 1 + 1/(A_k B_k) ) ≥ log2( 1 + 1/(A^{(τ1)}_k B^{(τ1)}_k) )
+   − log2(e) ( A_k − A^{(τ1)}_k ) / [ A^{(τ1)}_k ( 1 + A^{(τ1)}_k B^{(τ1)}_k ) ]
+   − log2(e) ( B_k − B^{(τ1)}_k ) / [ B^{(τ1)}_k ( 1 + A^{(τ1)}_k B^{(τ1)}_k ) ]  ≜ R_k^lb,   (10)
+ where A^{(τ1)}_k and B^{(τ1)}_k are feasible points of A_k and B_k in the τ1-th
+ iteration, respectively.
+ For the non-convex rank-one constraint in (9f), we propose to transform it into a
+ penalty term in the objective function, which can then be handled by SCA. Thus, we
+ first introduce an equivalent equality:
+ ∥W_k∥_* − ∥W_k∥_2 = 0,  ∀k,                                                       (11)
+ where ∥W_k∥_* = Σ_i ε_i(W_k) and ∥W_k∥_2 = ε_1(W_k) denote the nuclear norm and the
+ spectral norm of W_k, respectively, and ε_i(W_k) is the i-th largest singular value of
+ the matrix W_k. Thus, when the matrix W_k is rank-one, equality (11) holds.
+ Next, we employ the penalty method to solve problem (9) by adding (11) to the objective
+ function (9a). Since the penalty term (11) makes the objective function non-convex, we
+ apply the first-order Taylor expansion to obtain a convex upper bound of (11) as
+ follows:
+ ∥W_k∥_* − ∥W_k∥_2 ≤ ∥W_k∥_* − ∥W_k∥_2^lin,                                        (12)
+ where the linearized spectral norm is
+ ∥W_k∥_2^lin = ∥W^{(τ1)}_k∥_2 + Tr[ e^{(τ1)}_k (e^{(τ1)}_k)^H ( W_k − W^{(τ1)}_k ) ],
+ and e^{(τ1)}_k is the eigenvector corresponding to the largest eigenvalue of W^{(τ1)}_k
+ in the τ1-th iteration.
+ By introducing (12) into the objective function (9a), we obtain the following problem:
+ max_{W_k, A_k, B_k, R_k}  Σ^K_{k=1} R_k − (1/η) Σ_k ( ∥W_k∥_* − ∥W_k∥_2^lin )     (13a)
+ s.t.  R_k^lb ≥ R_k,  W_k ⪰ 0,  R_k ≥ R^min_k,  ∀k,                                (13b)
+       (9b) − (9e),                                                                (13c)
+ where η > 0 is the penalty factor penalizing (13a) if W_k is not rank-one. It can be
+ verified that, when η → 0, the solution {W_k} of problem (13) always satisfies equality
+ (11).
+ The reformulated problem (13) is a standard convex semidefinite program (SDP), which
+ can be efficiently solved via CVX. To obtain a high-quality solution, we first
+ initialize a large η to find a feasible starting point, and then gradually decrease η
+ with η = µη, µ < 1, until it is sufficiently small to yield an overall suboptimal
+ solution. The process terminates when the penalty term satisfies the following
+ criterion:
+ max{ ∥W_k∥_* − ∥W_k∥_2, ∀k } ≤ ε_1,                                               (14)
+ where ε_1 denotes a predefined maximum violation of (11).
+ B. MF-RIS Coefficient Design
547
+ For the coefficient design at the MF-RIS, we define
548
+ vk
549
+ = [ur; 1] if user k is located at the space r; oth-
550
+ erwise vk
551
+ =
552
+ [ut; 1]. Then, we define Vk
553
+ =
554
+ vkvH
555
+ k ,
556
+ with
557
+ Vk
558
+
559
+ 0
560
+ and
561
+ rank(Vk)
562
+ =
563
+ 1.
564
+ Let
565
+ gk
566
+ =
567
+ [gk,1, gk,2, . . . , gk,M]H and Gk
568
+ =
569
+ Hwk, then we have
570
+ Qk
571
+ =
572
+ diag
573
+ ��
574
+ |gk,1|2, |gk,2|2, . . . , |gk,M|2��
575
+ and
576
+ �Gk
577
+ =
578
+ diag
579
+ ��
580
+ |Gk,1|2, |Gk,2|2, . . . , |Gk,M|2��
581
+ + σ2
582
+ sIM. Given
583
+ Qk =
584
+ � Qk
585
+ 0
586
+ 0
587
+ 0
588
+
589
+ , Gk =
590
+ � �Gk
591
+ 0
592
+ 0
593
+ 0
594
+
595
+ ,
596
+ (15)
597
+ we can obtain
598
+ ∥ΘkHwk∥2 + ∥ΘkIM∥2
599
+ F σ2
600
+ s = Tr(VkGk),
601
+ (16)
602
+ ∥gH
603
+ k Θk∥2 = Tr(VkQk).
604
+ (17)
605
+ Thus, constraint (4c) can be replaced by (16).
606
+ In order to handle the non-convex constraints (2) and (5), we define fk = diag(gk^H) Gk, Rk = diag(gk^H) H, h̃k = ∥hk^H∥², and dk = wk^H hk; then we have
+ $\mathbf{F}_k = \begin{bmatrix} \mathbf{f}_k \mathbf{f}_k^H & \mathbf{f}_k d_k^* \\ d_k \mathbf{f}_k^H & |d_k|^2 \end{bmatrix}, \quad \overline{\mathbf{R}}_k = \begin{bmatrix} \mathbf{R}_k \mathbf{R}_k^H & \mathbf{R}_k \mathbf{h}_k \\ \mathbf{h}_k^H \mathbf{R}_k^H & \tilde{h}_k \end{bmatrix}$.  (18)
+ According to the above transformation, we can obtain
+ $|\hat{\mathbf{h}}_k \mathbf{w}_k|^2 = |(\mathbf{h}_k^H + \mathbf{g}_k^H \mathbf{\Theta}_k \mathbf{H}) \mathbf{w}_k|^2 = \mathrm{Tr}(\mathbf{V}_k \mathbf{F}_k)$,  (19)
+ $\|\hat{\mathbf{h}}_k\|^2 = \mathrm{Tr}(\mathbf{V}_k \overline{\mathbf{R}}_k)$.  (20)
+ Based on (20), the decoding order in (2) is rewritten as
+ $\mathrm{Tr}(\mathbf{V}_1 \overline{\mathbf{R}}_1) \leq \mathrm{Tr}(\mathbf{V}_2 \overline{\mathbf{R}}_2) \leq \cdots \leq \mathrm{Tr}(\mathbf{V}_K \overline{\mathbf{R}}_K)$.  (21)
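+ As a numerical sanity check (our toy example with random data, not from the letter), the block matrices in (18) are rank-structured, so traces against Vk collapse to squared magnitudes of affine functions of up — the lifting behind (19) and (20), up to the letter's conjugation conventions:
+ import numpy as np
+ 
+ M, N = 3, 2
+ rng = np.random.default_rng(3)
+ u = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # stands in for u_p
+ v = np.append(u, 1.0); V = np.outer(v, v.conj())
+ g = rng.standard_normal(M) + 1j * rng.standard_normal(M)
+ h = rng.standard_normal(N) + 1j * rng.standard_normal(N)
+ H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
+ w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
+ 
+ f = np.diag(g.conj()) @ (H @ w)              # f_k = diag(g_k^H) G_k
+ d = w.conj() @ h                             # d_k = w_k^H h_k
+ Rm = np.diag(g.conj()) @ H                   # R_k = diag(g_k^H) H
+ F = np.block([[np.outer(f, f.conj()), (f * d.conj())[:, None]],
+               [(d * f.conj())[None, :], np.array([[np.abs(d) ** 2]])]])
+ Rb = np.block([[Rm @ Rm.conj().T, (Rm @ h)[:, None]],
+                [(h.conj() @ Rm.conj().T)[None, :], np.array([[np.linalg.norm(h) ** 2]])]])
+ 
+ # Traces against the rank-one V_k reduce to squared magnitudes, as in (19)-(20):
+ assert np.isclose(np.real(np.trace(V @ F)), np.abs(f.conj() @ u + np.conj(d)) ** 2)
+ assert np.isclose(np.real(np.trace(V @ Rb)), np.linalg.norm(Rm.conj().T @ u + h) ** 2)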
+ Then, given the active beamforming vector, the subproblem of MF-RIS coefficient design can be given by
+ $\max_{\mathbf{V}_k, A_k, B_k, R_k} \ \sum\nolimits_{k=1}^{K} R_k$  (22a)
+ s.t. $A_k^{-1} \leq \mathrm{Tr}(\mathbf{V}_k \mathbf{F}_k)$, $\forall k$,  (22b)
+ $B_k \geq \sum\nolimits_{i=k+1}^{K} \mathrm{Tr}(\mathbf{V}_k \mathbf{F}_i) + \sigma_s^2 \mathrm{Tr}(\mathbf{V}_k \overline{\mathbf{Q}}_k) + \sigma_k^2$, $\forall k$,  (22c)
+ $\sum\nolimits_{k=1}^{K} \mathrm{Tr}(\mathbf{V}_k \overline{\mathbf{G}}_k) \leq P_o$,  (22d)
+ $\mathbf{V}_k \succeq 0$, $R_k \geq R_k^{\min}$, $\forall k$,  (22e)
+ $[\mathbf{V}_k]_{m,m} = \beta_m^k$, $[\mathbf{V}_k]_{M+1,M+1} = 1$, $\forall k$,  (22f)
+ $\mathrm{rank}(\mathbf{V}_k) = 1$, $\forall k$,  (22g)
+ $\theta_m^p \in [0, 2\pi)$, (4d), (7b), (21), $\forall m$, $\forall p$,  (22h)
+ where Fi denotes Fk with wk replaced by wi.
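+ The element-wise constraints (22f), together with (4d), couple the diagonal of Vk to the amplification coefficients. A minimal CVXPY fragment (ours; one reflection-space and one transmission-space matrix, with a toy size M = 4) could encode them as follows; the rank-one constraint (22g) is not imposed here since it is handled by the penalty introduced next, and the phases θm^p never appear explicitly because they are absorbed into the off-diagonal entries of Vk:
+ import cvxpy as cp
+ 
+ M, beta_max = 4, 10.0
+ Vr = cp.Variable((M + 1, M + 1), hermitian=True)   # V_k of a reflection-space user
+ Vt = cp.Variable((M + 1, M + 1), hermitian=True)   # V_k of a transmission-space user
+ beta_r = cp.Variable(M, nonneg=True)               # reflection amplitudes beta_m^r
+ beta_t = cp.Variable(M, nonneg=True)               # transmission amplitudes beta_m^t
+ 
+ cons = [
+     Vr >> 0, Vt >> 0,                              # PSD, from (22e)
+     cp.real(cp.diag(Vr))[:M] == beta_r,            # (22f): [V_k]_{m,m} = beta_m
+     cp.real(cp.diag(Vt))[:M] == beta_t,
+     cp.real(Vr[M, M]) == 1, cp.real(Vt[M, M]) == 1,
+     beta_r + beta_t <= beta_max,                   # energy conservation (4d)
+ ]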
+ Similar to (12), we replace the rank-one constraint in (22g) with the following form:
+ $\|\mathbf{V}_k\|_* - \|\mathbf{V}_k\|_2 \leq \|\mathbf{V}_k\|_* - \overline{\|\mathbf{V}_k\|}_2$,  (23)
+ where $\|\mathbf{V}_k\|_*$ and $\|\mathbf{V}_k\|_2$ denote the nuclear norm and the spectral norm of matrix $\mathbf{V}_k$, respectively. Besides, $\overline{\|\mathbf{V}_k\|}_2 = \|\mathbf{V}_k^{(\tau_2)}\|_2 + \mathrm{Tr}\big[\mathbf{z}_k^{(\tau_2)} (\mathbf{z}_k^{(\tau_2)})^H (\mathbf{V}_k - \mathbf{V}_k^{(\tau_2)})\big]$, and $\mathbf{z}_k^{(\tau_2)}$ is the eigenvector corresponding to the largest eigenvalue of $\mathbf{V}_k^{(\tau_2)}$ in the $\tau_2$-th iteration.
+ By introducing (10) into (7b), problem (22) can be reformulated as
+ $\max_{\mathbf{V}_k, A_k, B_k, R_k} \ \sum\nolimits_{k=1}^{K} R_k - \frac{1}{\xi} \sum\nolimits_{k} \big( \|\mathbf{V}_k\|_* - \overline{\|\mathbf{V}_k\|}_2 \big)$  (24a)
+ s.t. $\underline{R}_k \geq R_k$, $\theta_m^p \in [0, 2\pi)$, $\forall k$, $\forall m$, $\forall p$,  (24b)
+ (4d), (21), (22b)−(22f),  (24c)
+ where ξ > 0 is the penalty factor ensuring that Vk is rank-one.
+ Problem (24) is a standard SDP and can also be solved by CVX. The termination criterion is given by
+ max{∥Vk∥∗ − ∥Vk∥2, ∀k} ≤ ϵ2,  (25)
+ where ϵ2 denotes the predefined maximum violation of the rank-one equality for Vk.
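+ Once the violations in (14) and (25) fall below ϵ1 and ϵ2, the converged Wk and Vk are numerically rank-one, so the underlying beamforming and coefficient vectors can be recovered from the dominant eigenpair. A small numpy sketch of this standard recovery step (our illustration; the example matrix is arbitrary):
+ import numpy as np
+ 
+ def recover_rank_one(X):
+     # Extract x with X ~ x x^H from a numerically rank-one Hermitian PSD matrix.
+     vals, vecs = np.linalg.eigh(X)
+     return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
+ 
+ X = np.outer([1 + 1j, 2.0], np.conj([1 + 1j, 2.0]))
+ x = recover_rank_one(X)
+ assert np.allclose(np.outer(x, x.conj()), X)
+ # For V_k, additionally divide by the last entry so it equals 1, matching v_k = [u_p; 1].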
+ Based on the above derivation, we propose a penalty-based iterative algorithm to solve problem (4) efficiently. The details are given in Algorithm 1. Specifically, the initial points {Wk^(0)} and {Vk^(0)} are obtained by selecting feasible ones from a set of randomly generated points. Since the objectives of problems (13) and (24) are both non-decreasing over iterations and the system throughput is upper-bounded by a finite value, the proposed Algorithm 1 is guaranteed to converge. Moreover, if the interior-point method is employed, the complexity of Algorithm 1 is O(Iout Iin (KN^3.5 + 2M^3.5)), where K, N, and M are the numbers of users, BS antennas, and MF-RIS elements, respectively. The terms Iin and Iout denote the numbers of inner and outer iterations required for convergence, respectively.
+ Algorithm 1 Penalty-Based Iterative Algorithm
+ 1: Initialize {Wk^(0)}, {Vk^(0)}, the error tolerance ∆, the maximum number of iterations T0,max, the penalty factors η and ξ, and the predefined threshold ϵ.
+ 2: repeat
+ 3:   Set the iteration index τ0 = 0;
+ 4:   repeat
+ 5:     Given {Vk^(τ0)}, update {Wk^(τ0+1)} by solving (13);
+ 6:     Given {Wk^(τ0+1)}, update {Vk^(τ0+1)} by solving (24);
+ 7:     Update τ0 = τ0 + 1;
+ 8:   until |Rsum^(τ0) − Rsum^(τ0−1)|/Rsum^(τ0−1) < ∆ or τ0 > T0,max;
+ 9:   Update {Wk^(0), Vk^(0)} with {Wk^(τ0), Vk^(τ0)};
+ 10:  Update η = µη, ξ = µξ;
+ 11: until criteria (14) and (25) are satisfied with threshold ϵ;
+ 12: Output the converged solutions {Wk*} and {Vk*}.
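+ For readers who prefer code, the double-loop structure of Algorithm 1 can be paraphrased as the following Python skeleton (ours, not the authors' implementation); solve_13, solve_24, and violation are injected placeholders standing in for the CVX solves of (13) and (24) and for the left-hand sides of (14)/(25):
+ def algorithm_1(W, V, solve_13, solve_24, violation,
+                 delta=1e-6, T0_max=50, eta=10.0, xi=10.0, mu=0.5, eps=1e-7):
+     # Outer loop: tighten the penalties; inner loop: alternating optimization.
+     while True:
+         R_prev, tau0 = 0.0, 0
+         while True:
+             W = solve_13(V, W, eta)            # step 5: update {W_k} via (13)
+             V, R_sum = solve_24(W, V, xi)      # step 6: update {V_k} via (24)
+             tau0 += 1
+             if abs(R_sum - R_prev) / max(abs(R_prev), 1e-12) < delta or tau0 > T0_max:
+                 break                          # step 8: inner convergence
+             R_prev = R_sum
+         eta, xi = mu * eta, mu * xi            # step 10: shrink both penalty factors
+         if violation(W, V) <= eps:             # step 11: criteria (14) and (25)
+             return W, V                        # step 12: converged solutions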
+ TABLE I: Simulation Parameters
+ Parameter | Value
+ Path-loss exponents of BS-MF-RIS, BS-user, MF-RIS-user links | 2.5, 3.5, 2.8
+ Rician factors of all links | 3 dB
+ Noise power at MF-RIS and users | −80 dBm
+ Minimum required QoS for users | 0.1 bit/s/Hz
+ Maximum amplification power [13] | Po = 10 dBm
+ Maximum amplification factor | βmax = 22 dB
+ Convergence tolerance | ∆ = 10^−6
+ IV. SIMULATION RESULTS
+ In this section, numerical results are provided to validate the performance of the MF-RIS-aided NOMA network. The BS and the MF-RIS are located at (0, 0, 0) and (0, 50, 20), respectively. The users are divided into two groups, distributed on circles centered at (0, 45, 0) and (0, 55, 0), each with radius r = 3. We adopt Rician fading for all channels, and set K = 6, N = 16, M = 100, and Pmax = 20 dBm. Other parameters are listed in Table I. We compare the proposed MF-RIS with three existing RISs:
+ • SF-RIS [7]: The SF-RIS supports only signal reflection or transmission, i.e., βmax = 1 and Θt = 0M×M or Θr = 0M×M.
+ • Active RIS [13]: The active RIS simultaneously supports signal reflection and amplification, i.e., Θt = 0M×M.
+ • STAR-RIS [9]: The STAR-RIS provides full-space coverage by splitting the incident signal toward both sides, i.e., βmax = 1.
+ Fig. 2(a) depicts the sum rate versus the maximum transmit power Pmax. It can be observed that the sum rates of all schemes increase with Pmax, and the proposed MF-RIS always yields better performance than the other benchmarks. Specifically, when Pmax = 10 dBm, the MF-RIS enjoys a 59% higher sum rate than the SF-RIS. This is because the MF-RIS serves all users in the full space through its signal reflection, transmission, and amplification functions. Moreover, by providing additional energy to amplify the incident signal, the MF-RIS is able to efficiently mitigate the double-fading attenuation, which helps to improve the channel gain of the cascaded links.
+ [Fig. 2. Simulation results for the sum rate (bps/Hz) versus different transmit power, number of elements, and RIS locations; schemes compared: MF-RIS, active RIS, STAR-RIS, SF-RIS, and without RIS. (a) Sum rate vs. the power budget. (b) Sum rate vs. the number of elements. (c) Sum rate vs. the Y-coordinate of the RIS.]
+ Furthermore, due to the limitations faced by the active RIS and STAR-RIS counterparts (i.e., half-space coverage and double-fading attenuation, respectively), the MF-RIS improves the rate performance over them by 16% and 44% when Pmax = 10 dBm, respectively. Additionally, it is evident that all RIS-aided schemes achieve significant gains over the scheme without RIS, which demonstrates the superiority of using RISs to improve the performance of wireless networks.
+ Fig. 2(b) shows that the sum rates of all RIS-aided schemes increase with M. This is because a larger M enables a higher beamforming gain, thus improving the system performance. In addition, with more degrees of freedom to manipulate signal propagation, the STAR-RIS enjoys a 6% higher sum rate than the SF-RIS. Moreover, although the active RIS serves only the users located in the reflection space, it outperforms the STAR-RIS with a 12% higher sum rate. This is because the performance gain obtained from the signal amplification of the active RIS is greater than that from the full-space coverage of the STAR-RIS, which also implies that the signal amplification function plays an important role in improving the performance of RIS-aided networks.
+ Fig. 2(c) illustrates the sum rate versus the Y-coordinate of the RIS (from 0 to 50), where the RIS moves from the BS side to the user side. We can observe that the sum rates of the STAR-RIS and the SF-RIS first decrease and then increase. The reason behind this is that the channel gain decreases with the link distance. Specifically, when the STAR-RIS and the SF-RIS are located close to the middle point, the received signals at the users are attenuated the most, resulting in the lowest sum rate. In contrast, owing to the signal amplification function, the MF-RIS and the active RIS are less affected by the double-fading attenuation, achieving 39% and 28% gains at the middle point compared to the SF-RIS, respectively. Moreover, their sum rates maintain a continuous upward trend even when the MF-RIS and the active RIS are far away from the BS. This is because the power of the incident signal at the RIS becomes weaker as the RIS moves closer to the users; thus, under a fixed amplification power budget, the MF-RIS can provide a larger amplification gain when deployed closer to the users, which compensates for the attenuation caused by the double-fading issue. This observation also reveals that the MF-RIS should be deployed close to the users for better performance.
+ V. CONCLUSION
+ In this letter, we proposed a novel MF-RIS architecture to alleviate the double-fading attenuation by transmitting and reflecting the incident signal with power amplification. Then, we investigated the resource allocation problem in a downlink multiuser MF-RIS-aided NOMA network. Specifically, the active beamforming and the MF-RIS coefficients were jointly optimized to maximize the achievable sum rate by leveraging SCA and a penalty-based method. Numerical results validated the effectiveness of the proposed MF-RIS and its superiority over traditional RISs. In the future, we are interested in studying the coupled-phase and hardware impairment problems of the MF-RIS. In addition, robust beamforming under imperfect CSI deserves further exploration.
+ REFERENCES
+ [1] Y. Liu, Z. Qin, M. Elkashlan et al., “Nonorthogonal multiple access for 5G and beyond,” Proc. IEEE, vol. 105, no. 12, pp. 2347–2381, Dec. 2017.
+ [2] M. Elhattab, M. A. Arfaoui, C. Assi et al., “RIS-assisted joint transmission in a two-cell downlink NOMA cellular system,” IEEE J. Sel. Areas Commun., vol. 40, no. 4, pp. 1270–1286, Apr. 2022.
+ [3] Y. Liu, X. Liu, X. Mu et al., “Reconfigurable intelligent surfaces: Principles and opportunities,” IEEE Commun. Surveys Tuts., vol. 23, no. 3, pp. 1546–1577, 3rd Quart. 2021.
+ [4] A. S. de Sena, D. Carrillo, F. Fang et al., “What role do intelligent reflecting surfaces play in multi-antenna non-orthogonal multiple access?” IEEE Wireless Commun., vol. 27, no. 5, pp. 24–31, Oct. 2020.
+ [5] Z. Ding and H. V. Poor, “A simple design of IRS-NOMA transmission,” IEEE Commun. Lett., vol. 24, no. 5, pp. 1119–1123, Feb. 2020.
+ [6] B. Zheng, Q. Wu, and R. Zhang, “Intelligent reflecting surface-assisted multiple access with user pairing: NOMA or OMA?” IEEE Commun. Lett., vol. 24, no. 4, pp. 753–757, Jan. 2020.
+ [7] W. Ni, X. Liu, Y. Liu et al., “Resource allocation for multi-cell IRS-aided NOMA networks,” IEEE Trans. Wireless Commun., vol. 20, no. 7, pp. 4253–4268, Jul. 2021.
+ [8] W. Wang, W. Ni, H. Tian et al., “Safeguarding NOMA networks via reconfigurable dual-functional surface under imperfect CSI,” IEEE J. Sel. Topics Signal Process., vol. 16, no. 5, pp. 950–966, Aug. 2022.
+ [9] C. Wu, Y. Liu, X. Mu et al., “Coverage characterization of STAR-RIS networks: NOMA and OMA,” IEEE Commun. Lett., vol. 25, no. 9, pp. 3036–3040, Sept. 2021.
+ [10] T. Wang, M.-A. Badiu, G. Chen et al., “Performance analysis of IOS-assisted NOMA system with channel correlation and phase errors,” IEEE Trans. Veh. Technol., vol. 71, no. 11, pp. 11861–11875, Nov. 2022.
+ [11] H. Liu, G. Li, X. Li et al., “Effective capacity analysis of STAR-RIS-assisted NOMA networks,” IEEE Wireless Commun. Lett., vol. 11, no. 9, pp. 1930–1934, Sept. 2022.
+ [12] X. Li, Y. Zheng, M. Zeng et al., “Enhancing secrecy performance for STAR-RIS NOMA networks,” IEEE Trans. Veh. Technol., Oct. 2022.
+ [13] R. Long, Y.-C. Liang, Y. Pei et al., “Active reconfigurable intelligent surface-aided wireless communications,” IEEE Trans. Wireless Commun., vol. 20, no. 8, pp. 4962–4975, Aug. 2021.
+
AtFRT4oBgHgl3EQfuDj6/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,428 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf,len=427
2
+ page_content='1 Enhancing NOMA Networks via Reconfigurable Multi-Functional Surface Ailing Zheng, Wanli Ni, Wen Wang, and Hui Tian Abstract—By flexibly manipulating the radio propagation en- vironment, reconfigurable intelligent surface (RIS) is a promising technique for future wireless communications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
3
+ page_content=' However, the single-side coverage and double-fading attenuation faced by con- ventional RISs largely restrict their applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
4
+ page_content=' To address this issue, we propose a novel concept of multi-functional RIS (MF- RIS), which provides reflection, transmission, and amplification simultaneously for the incident signal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
5
+ page_content=' With the aim of enhancing the performance of a non-orthogonal multiple-access (NOMA) downlink multiuser network, we deploy an MF-RIS to maximize the sum rate by jointly optimizing the active beamforming and MF-RIS coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
6
+ page_content=' Then, an alternating optimization algorithm is proposed to solve the formulated non-convex problem by exploiting successive convex approximation and penalty-based method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
7
+ page_content=' Numerical results show that the proposed MF-RIS outperforms conventional RISs under different settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
8
+ page_content=' Index Terms—Multi-functional reconfigurable intelligent sur- face, non-orthogonal multiple access, rate maximization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
9
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
10
+ page_content=' INTRODUCTION Compared to orthogonal multiple access (OMA), non- orthogonal multiple access (NOMA) is capable of achieving high spectrum efficiency and massive connectivity [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
11
+ page_content=' Prior investigations have shown that the differences between users’ channel conditions can be exploited to enhance NOMA per- formance [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
12
+ page_content=' However, users in large-scale networks may have poor or similar channel conditions, which hinders the application of successive interference cancellation (SIC) and the effective implementation of NOMA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
13
+ page_content=' Therefore, adjusting channel conditions and enhancing channel diversity are able to release the potential of NOMA in practical networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
14
+ page_content=' Recently, with the ability to reshape the wireless propaga- tion environment, reconfigurable intelligent surface (RIS) has emerged as a key technique to improve the performance of NOMA networks [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
15
+ page_content=' By properly designing the reflection coefficients, RIS is able to smartly change the combined channels to enhance the differences among users, thus boosting the performance of NOMA in large-scale networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
16
+ page_content=' Initial investigations on RIS-aided NOMA networks in [3]–[7] had verified the superiority of the integration of NOMA and RIS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
17
+ page_content=' Specifically, the authors of [3] and [4] performed comprehen- sive discussions of the main challenges and futuristic use cases regarding RIS-aided NOMA networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
18
+ page_content=' Moreover, the works in [5]–[7] demonstrated the benefits brought by RISs to achieve performance trade-off among multiple NOMA users through This letter was supported by the Natural Science Foundation of Shandong Province under Grant No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
19
+ page_content=' ZR2021LZH010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
20
+ page_content=' The associate editor coordinating the review of this letter and approving it for publication was Lina Bariah.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
21
+ page_content=' (Corresponding author: Hui Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
22
+ page_content=') A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
23
+ page_content=' Zheng, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
24
+ page_content=' Ni, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
25
+ page_content=' Wang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
26
+ page_content=' Tian are with the State Key Lab- oratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {ailing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
27
+ page_content='zheng, charleswall, wen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
28
+ page_content='wang, tianhui}@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
29
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
30
+ page_content='cn).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
31
+ page_content=' smartly adjusting the decoding order.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
32
+ page_content=' However, the existing literature on RIS-aided NOMA networks mostly uses single functional RIS (SF-RIS) that only supports signal reflection or transmission/refraction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
33
+ page_content=' This implies that only users located in a single side can be served by the SF-RIS if no additional operations are performed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
34
+ page_content=' To overcome this limitation, the authors of [8] proposed the concept of dual-functional RIS (DF-RIS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
35
+ page_content=' Unlike SF-RIS, DF- RIS refers to the reconfigurable dual-functional surface that can conduct signal reflection and transmission simultaneously, such as simultaneous transmitting and reflecting RIS (STAR- RIS) [9] and intelligent omni-surface (IOS) [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
36
+ page_content=' Specifically, the coverage characterization of STAR-RIS-aided NOMA net- works was investigated in [9] by studying a coverage range maximization problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
37
+ page_content=' The authors of [10] considered the average rate maximization problem in an IOS-aided NOMA networks with spatially correlated channels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
38
+ page_content=' Furthermore, the effective capacity and secrecy outage probability of STAR- RIS-aided NOMA networks were derived in [11] and [12], respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
39
+ page_content=' However, although the effective coverage can be enhanced by the existing DF-RIS, the signals relayed by the DF-RIS still suffer from channel fading twice due to the features of cascaded channels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
40
+ page_content=' This double-fading effect inevitably deteriorates the achievable performance of passive RIS-assisted wireless networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
41
+ page_content=' Therefore, it is necessary to design new RIS architectures to mitigate the double-fading attenuation problem faced by the existing RISs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
42
+ page_content=' In this letter, a novel multi-functional RIS (MF-RIS) is proposed to address the issues aforementioned.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
43
+ page_content=' Specifically, the proposed MF-RIS can not only divide the incident signal into transmission and reflection two parts based on the field equivalence principle, but also amplify the outgoing signal with the help of active loads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
44
+ page_content=' Thus, the MF-RIS is able to facilitate a full-space coverage and overcome the double- fading issue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
45
+ page_content=' Then, we investigate a sum rate maximization problem in an MF-RIS-aided NOMA network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
46
+ page_content=' Compared to the existing problems formulated in [8] and [9], the newly introduced MF-RIS constraints and highly coupled variables make the performance optimization more complicated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
47
+ page_content=' The main contributions of this letter are summarized as follows: 1) We propose a new concept of MF-RIS by integrating the surface electric and magnetic impedances, and power amplifier into each element so that the incident signal can be reflected, refracted, and amplified simultaneously.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
48
+ page_content=' 2) We formulate a non-convex optimization problem to maximize the throughout of an MF-RIS-aided NOMA network, where the MF-RIS is deployed to constructively enhance the channel condition by flexibly adjusting the radio propagation environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
49
+ page_content=' 3) To solve the formulated non-convex problem, we propose an efficient iterative algorithm by alternatively optimizing the arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
50
+ page_content='13630v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
51
+ page_content='IT] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
52
+ page_content='31 Jan 2023 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
53
+ page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
54
+ page_content='BS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
55
+ page_content='BS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
56
+ page_content='Transmission-only ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
57
+ page_content='Reflection-only ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
58
+ page_content='BS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
59
+ page_content='BS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
60
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
61
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
62
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
63
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
64
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
65
+ page_content='��� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
66
+ page_content='Reflected signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
67
+ page_content='Transmitted signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
68
+ page_content='Power supply ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
69
+ page_content='Incident signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
70
+ page_content='SF-RIS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
71
+ page_content='DF-RIS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
72
+ page_content='MF-RIS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
73
+ page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
74
+ page_content='Users in the reflection space ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
75
+ page_content='Users in the transmission space ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
76
+ page_content='Incident signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
77
+ page_content='coverage ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
78
+ page_content='Signal amplification ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
79
+ page_content='Passive relay ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
80
+ page_content='Amplifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
81
+ page_content='Phase shifter ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
82
+ page_content='Power supply ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
83
+ page_content='Reflected phase ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
84
+ page_content='Transmitted phase ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
85
+ page_content='Patch ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
86
+ page_content='Patch ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
87
+ page_content='max ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
88
+ page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
89
+ page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
90
+ page_content='Incident signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
91
+ page_content='Reflected signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
92
+ page_content='Transmitted signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
93
+ page_content='Power splitting ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
94
+ page_content='Reflected signal Transmitted signal ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
95
+ page_content='/ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
96
+ page_content='max ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
97
+ page_content='max ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
98
+ page_content='[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
99
+ page_content=' ] r t m r t m m � � � � � � � � 360� 1 r t m m � � � � / / / r t m j r t r t m m m y e s � � � / / / r t m j r t r t m m m y e s � � � Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
100
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
101
+ page_content=' Conventional RIS vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
102
+ page_content=' the proposed MF-RIS-aided NOMA networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
103
+ page_content=' active beamforming and MF-RIS coefficients based on the penalty-based method and successive convex approximation (SCA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
104
+ page_content=' 4) Simulation results show that the proposed MF-RIS- aided NOMA network can provide up to about 59% sum rate gain than the SF-RIS, and the MF-RIS prefers to be deployed at the user side for better performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
105
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
106
+ page_content=' SYSTEM MODEL AND PROBLEM FORMULATION A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
107
+ page_content=' System Model We consider an MF-RIS-aided NOMA downlink network, where an N-antenna BS communicates with K single-antenna users with the aid of an MF-RIS comprising M elements, as shown in the right of Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
108
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
109
+ page_content=' The sets of elements and users are denoted by M = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
110
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
111
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
112
+ page_content=' , M} and K = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
113
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
114
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
115
+ page_content=' , K}, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
116
+ page_content=' The channels of BS-user, BS-RIS, and RIS- user are denoted by hk ∈ CN×1, H ∈ CM×N, and gk ∈ CM×1, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
117
+ page_content=' Furthermore, we define up = [ � βp 1ejθp 1 , � βp 2ejθp 2 , .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
118
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
119
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
120
+ page_content=' , � βp Mejθp M ]T ∈ CM×1 as the transmission (p = t) or reflection (p = r) beamforming vector, where p ∈ {t, r} denotes the transmission and reflection spaces, βp m ∈ [0, βmax] and θp m ∈ [0, 2π) represent the amplitude and the phase shift response of the m-th element, respectively, with the maximum amplification factor βmax ≥ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
121
+ page_content=' Due to the law of energy conservation, we have βr m + βt m ≤ βmax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
122
+ page_content=' If user k is located at the reflection space, the diagonal matrix of the MF-RIS for user k is given by Θk = diag(ur);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
123
+ page_content=' otherwise Θk = diag(ut).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
124
+ page_content=' We assume that the perfect channel state information (CSI) of all channels is available at the BS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
125
+ page_content=' Then the signal received at user k is expressed as yk = (hH k + gH k ΘkH)x + gH k Θkns + nk, ∀k, (1) where x = � k wksk denotes the transmit signal, wk and sk ∈ CN(0, 1) represent the transmit precoder and the information symbol for user k, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
126
+ page_content=' ns ∈ CN(0, σ2 sIM) denotes the dynamic noise at the MF-RIS with each element’s noise power σ2 s, and nk ∈ CN(0, σ2 k) denotes the additive white Gaussian noise at user k with power σ2 k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
127
+ page_content=' By employing SIC, the strong user can mitigate the inter- ference from weak users to improve the signal-to-interference- plus-noise ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
128
+ page_content=' Similar to [4]–[6], with the assistance of RIS to flexibly adjust channel conditions of multiple users, we assume that users’ indexes are ranked in an increasing order with respect to their channel gains, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
129
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
130
+ page_content=', ∥�h1∥2 ≤ ∥�h2∥2 ≤ · · · ≤ ∥�hK∥2, (2) where �hk = hH k +gH k ΘkH is the equivalent combined channel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
131
+ page_content=' For the fixed decoding order, the corresponding achievable sum rate of user k is given by Rk = log2(1 + γk), where γk can be obtained by γk = |�hkwk|2 �K i=k+1(|�hkwi|2) + |gH k Θkns|2 + σ2 k , ∀k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
132
+ page_content=' (3) B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
133
+ page_content=' Problem Formulation In this letter, we aim to maximize the achievable sum rate of all users by jointly optimizing the active beamforming at the BS and the coefficients at the MF-RIS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
134
+ page_content=' Under the transmit and amplification power constraints, and the quality-of-service (QoS) requirement of users, the considered optimization prob- lem can be formulated as max wk,Θk �K k=1 Rk (4a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
135
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
136
+ page_content=' �K k=1 ∥wk∥2 ≤ Pmax, (4b) �K k=1(∥ΘkHwk∥2+∥ΘkIM∥2 F σ2 s)≤Po, (4c) βr m + βt m ≤ βmax, 0 ≤ βp m ≤ βmax, ∀m, ∀p, (4d) Rk ≥ Rmin k , θp m ∈ [0, 2π), (2), ∀k, ∀m, ∀p, (4e) where Pmax and Po denote the maximum transmit and am- plification power at the BS and MF-RIS, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
137
+ page_content=' Rmin k represents the minimum rate requirement of user k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
138
+ page_content=' Specifi- cally, the constraints for transmit power, amplification power, the QoS requirements and the decoding order are given in (4b)-(4e), respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
139
+ page_content=' It can be observed that the formulated problem (4) is intractable due to the non-convex objective function and constraints.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
140
+ page_content=' Besides, the active beamforming and MF-RIS coefficients are highly coupled, making it difficult to be solved directly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
141
+ page_content=' Thus, we aim to transform problem (4) into some tractable convex subproblems and solve them 3 separately and alternatively over iterations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
142
+ page_content=' In the next section, we adopt alternating optimization method to obtain the active beamforming and the MF-RIS coefficients efficiently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
143
+ page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
144
+ page_content=' PROPOSED SOLUTION A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
145
+ page_content=' Active Beamforming Design Given the MF-RIS coefficients, the active beamforming optimization problem is still non-convex.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
146
+ page_content=' To solve it, we first introduce an auxiliary variable set {Ak, Bk|k ∈ K}, where Ak and Bk are defined as Ak −1 = |�hkwk|2, (5) Bk = �K i=k+1(|�hkwi|2) + |gH k Θkns|2 + σ2 k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
147
+ page_content=' (6) Thus, the achievable data rate can be rewritten as Rk = log2 � 1 + (AkBk)−1� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
148
+ page_content=' Then, the active beamforming optimization problem in (4) can be equivalently expressed as max wk,Ak,Bk,Rk �K k=1 Rk (7a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
149
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
150
+ page_content=' log2 � 1 + (AkBk)−1� ≥ Rk, ∀k, (7b) Ak −1 ≤ |�hkwk|2, ∀k, (7c) Bk ≥ K � i=k+1 (|�hkwi|2)+|gH k Θkns|2+σ2 k, ∀k,(7d) Rk ≥ Rmin k , (4b), (4c), ∀k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
151
+ page_content=' (7e) We further define �Hk = �hH k �hk, Dk = (HHΘk)(HHΘk)H and Wk = wkwH k , where Wk ⪰ 0, and rank(Wk) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
152
+ page_content=' Then, we have |�hkwk|2 = Tr( �HkWk), ∥ΘkHwk∥2 = Tr(WkDk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
153
+ page_content=' (8) Therefore, problem (7) can be reformulated as max Wk,Ak,Bk,Rk �K k=1 Rk (9a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
154
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
155
+ page_content=' Ak −1 ≤ Tr( �HkWk), ∀k, (9b) Bk≥ K � i=k+1 Tr( �HkWi)+|gH k Θkns|2+σ2 k, ∀k,(9c) �K k=1Tr(Wk) ≤ Pmax, (9d) �K k=1 � Tr(WkDk)+∥ΘkIM∥2σ2 s � ≤Po, (9e) rank(Wk) = 1, ∀k, (9f) Wk ⪰ 0, Rk ≥ Rmin k , (7b), ∀k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
156
+ page_content=' (9g) In order to deal with the non-convex constraint (7b), we adopt the first-order Taylor expansion, and then we obtain the lower bound as follows: log2(1+ 1 AkBk )≥log2(1+ 1 A(τ1) k B(τ1) k )− log2 e(Ak−A(τ1) k ) A(τ1) k (1+A(τ1) k B(τ1) k ) − log2 e(Bk−B(τ1) k ) B(τ1) k (1+A(τ1) k B(τ1) k ) ∆= Rk, (10) where A(τ1) k and B(τ1) k are feasible points of Ak and Bk in the τ1-th iteration, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
157
+ page_content=' For the non-convex rank-one constraint in (9f), we assume to transform it to a penalty term in the objective function, which can be solved by SCA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
158
+ page_content=' Thus, we firstly introduce an equivalent equality: ∥Wk∥∗ − ∥Wk∥2 = 0, ∀k, (11) where ∥Wk∥∗ = � i εi(Wk) and ∥Wk∥2 = ε1(Wk) denote the nuclear norm and the spectral norm of Wk, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
159
+ page_content=' εi(Wk) is the i-th largest singular value of matrix Wk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
160
+ page_content=' Thus, when the matrix Wk is rank-one, equality (11) holds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
161
+ page_content=' Next, we employ the penalty method to solve problem (9) by adding (11) to the objective function (9a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
162
+ page_content=' Since the penalty term (11) makes the objective function not convex, we apply the first-order Taylor expansion to obtain a convex upper bound of (11) as follows: ∥Wk∥∗ − ∥Wk∥2 ≤ ∥Wk∥∗ − ∥Wk∥2, (12) where ∥Wk∥2 = ∥W(τ1) k ∥2 + Tr � e(τ1) k (e(τ1) k )H(Wk − W(τ1) k ) � , and e(τ1) k is the eigenvector corresponding to the largest eigenvalue of W(τ1) k in the τ1-th iteration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
163
+ page_content=' By introducing (12) to the objective function (9a), we obtain the following problem: max Wk,Ak,Bk,Rk �K k=1Rk− 1 η � k(∥Wk∥∗−∥Wk∥2) (13a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
164
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
165
+ page_content=' Rk ≥ Rk, Wk ⪰ 0, Rk ≥ Rmin k , ∀k, (13b) (9b) − (9e), (13c) where η > 0 is the penalty factor penalizing (13a) if Wk is not rank-one.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
166
+ page_content=' It can be verified that, when η → 0, the solution {Wk} of problem (13) always satisfies equality (11).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
167
+ page_content=' The reformulated problem (13) is a standard convex semi- definite programming (SDP), which can be efficiently solved via CVX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
168
+ page_content=' To obtain a high quality solution, we first initialize a large η to find a feasible starting point, and then gradually decrease η with η = µη, µ < 1 to a sufficiently small value to obtain an overall suboptimal solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
169
+ page_content=' The process terminates when the penalty term satisfies the following criterion: max{∥Wk∥∗ − ∥Wk∥2, ∀k} ≤ ϵ1, (14) where ϵ1 denotes a predefined maximum violation of (11).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
170
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
171
+ page_content=' MF-RIS Coefficient Design For the coefficient design at the MF-RIS, we define vk = [ur;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
172
+ page_content=' 1] if user k is located at the space r;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
173
+ page_content=' oth- erwise vk = [ut;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
174
+ page_content=' 1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
175
+ page_content=' Then, we define Vk = vkvH k , with Vk ⪰ 0 and rank(Vk) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
176
+ page_content=' Let gk = [gk,1, gk,2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
177
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
178
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
179
+ page_content=' , gk,M]H and Gk = Hwk, then we have Qk = diag �� |gk,1|2, |gk,2|2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
180
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
181
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
182
+ page_content=' , |gk,M|2�� and �Gk = diag �� |Gk,1|2, |Gk,2|2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
183
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
184
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
185
+ page_content=' , |Gk,M|2�� + σ2 sIM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
186
+ page_content=' Given Qk = � Qk 0 0 0 � , Gk = � �Gk 0 0 0 � , (15) we can obtain ∥ΘkHwk∥2 + ∥ΘkIM∥2 F σ2 s = Tr(VkGk), (16) ∥gH k Θk∥2 = Tr(VkQk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
187
+ page_content=' (17) Thus, constraint (4c) can be replaced by (16).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
188
+ page_content=' In order to handle the non-convex constraints (2) and (5), we define fk = diag(gH k )Gk, Rk = diag(gH k )H, ˜hk = ∥hH k ∥2, and dk = wH k hk, then we have Fk = � fkf H k fkd∗ k dkf H k |dk|2 � , Rk = � RkRH k Rkhk hH k RH k ˜hk � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
189
+ page_content=' (18) 4 According to the above transformation, we can obtain |�hkwk|2 = |(hH k + gH k ΘkH)wk|2 = Tr(VkFk),(19) ∥�hk∥2 = Tr(VkRk).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
190
+ page_content=' (20) Based on (20), the decoding order in (2) is rewritten as Tr(V1R1) ≤ Tr(V2R2) ≤ · · · ≤ Tr(VKRK).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
191
+ page_content=' (21) Then, given the active beamforming vector, the subproblem of MF-RIS coefficient design can be given by max Vk,Ak,Bk,Rk �K k=1 Rk (22a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
192
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
193
+ page_content=' Ak −1 ≤Tr(VkFk), ∀k, (22b) Bk≤ K � i=k+1 Tr(VkFi)+σ2 sTr(VkQk)+σ2 k, ∀k,(22c) �K k=1 Tr(VkGk) ≤ Po, (22d) Vk ⪰ 0, Rk ≥ Rmin k , ∀k, (22e) [Vk]m,m = βk m, [Vk]M+1,M+1 = 1, ∀k, (22f) rank(Vk) = 1, ∀k, (22g) θp m ∈ [0, 2π), (4d), (7b), (21), ∀m, ∀p, (22h) where Fi denotes Fk when wk is replaced by wi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFRT4oBgHgl3EQfuDj6/content/2301.13630v1.pdf'}
194
Similar to (12), we replace the rank-one constraint in (22g) with the following form:

    \|V_k\|_* - \|V_k\|_2 \le \|V_k\|_* - \overline{\|V_k\|}_2,    (23)

where \|V_k\|_* and \|V_k\|_2 denote the nuclear norm and the spectral norm of matrix V_k, respectively. Besides, \overline{\|V_k\|}_2 = \|V_k^{(\tau_2)}\|_2 + \mathrm{Tr}\big(z_k^{(\tau_2)} (z_k^{(\tau_2)})^H (V_k - V_k^{(\tau_2)})\big) is the first-order approximation of the spectral norm, and z_k^{(\tau_2)} is the eigenvector corresponding to the largest eigenvalue of V_k^{(\tau_2)} in the \tau_2-th iteration. By introducing (10) into (7b), problem (22) can be reformulated as

    \max_{\{V_k, A_k, B_k, R_k\}} \ \sum_{k=1}^{K} R_k - \frac{1}{\xi} \sum_{k} \big(\|V_k\|_* - \overline{\|V_k\|}_2\big)    (24a)
    \text{s.t.} \ \hat{R}_k \ge R_k, \ \theta_m^p \in [0, 2\pi), \ \forall k, \forall m, \forall p,    (24b)
    (4d), (21), (22b)-(22f),    (24c)

where \xi > 0 is the penalty factor that drives V_k toward a rank-one solution. Problem (24) is a standard SDP and can be solved by CVX.
The termination criterion is given by

    \max\{\|V_k\|_* - \|V_k\|_2, \ \forall k\} \le \epsilon_2,    (25)

where \epsilon_2 denotes a predefined maximum violation.
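Although we use CVX here, problem (24) is straightforward to prototype in open-source tools as well. The CVXPY sketch below solves a toy instance with the same ingredients: a PSD variable V, a lifted objective term Tr(FV), the penalized nuclear/spectral-norm gap from (24a) with the spectral norm linearized at V_prev as in (23), and a diagonal constraint standing in for (22f). All dimensions and data are random placeholders rather than quantities from our system model.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    M = 4                                    # toy number of elements
    A = rng.standard_normal((M + 1, M + 1))
    F = A @ A.T                              # random PSD stand-in for F_k

    V = cp.Variable((M + 1, M + 1), PSD=True)

    V_prev = np.eye(M + 1)                   # linearization point V_k^{(tau_2)}
    _, U = np.linalg.eigh(V_prev)
    z = U[:, -1:]                            # eigenvector of the largest eigenvalue

    xi = 10.0                                # penalty factor
    spec_lin = np.linalg.norm(V_prev, 2) + cp.trace(z @ z.T @ (V - V_prev))
    objective = cp.Maximize(cp.trace(F @ V) - (1 / xi) * (cp.normNuc(V) - spec_lin))
    constraints = [cp.diag(V) <= 1]          # stand-in for the amplitude limits (22f)

    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    print(np.linalg.matrix_rank(V.value, tol=1e-3))  # the penalty favors rank one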
Based on the above derivation, we propose a penalty-based iterative algorithm to solve problem (4) efficiently. The details are given in Algorithm 1. Specifically, the initial points {W_k^{(0)}} and {V_k^{(0)}} are obtained by selecting the feasible ones from a set of random points. Since the objectives of problems (13) and (24) are both non-decreasing over the iterations and the system throughput is upper-bounded by a finite value, the proposed Algorithm 1 is guaranteed to converge. Moreover, if the interior-point method is employed, the complexity of Algorithm 1 is O(I_out I_in (K N^{3.5} + 2 M^{3.5})), where K, N, and M are the numbers of users, BS antennas, and MF-RIS elements, respectively. The terms I_in and I_out denote the numbers of inner and outer iterations required for convergence, respectively.
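To put this estimate in perspective, the per-iteration term can be evaluated directly for the simulation sizes used in Section IV; the one-line Python check below (illustrative only) shows that the cost is dominated by the MF-RIS term 2M^{3.5}:

    K, N, M = 6, 16, 100                 # values used in Section IV
    per_iter = K * N**3.5 + 2 * M**3.5
    print(f"{per_iter:.2e}")             # ~2.01e+07; the 2*M**3.5 term dominates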
Algorithm 1: Penalty-Based Iterative Algorithm
1:  Initialize {W_k^{(0)}}, {V_k^{(0)}}, the error tolerance \Delta, the maximum number of iterations T_{0,max}, the penalty factors \eta and \xi, and the predefined threshold \epsilon.
2:  repeat
3:      Set the iteration index \tau_0 = 0;
4:      repeat
5:          Given V_k^{(\tau_0)}, update W_k^{(\tau_0+1)} by solving (13);
6:          Given W_k^{(\tau_0+1)}, update V_k^{(\tau_0+1)} by solving (24);
7:          Update \tau_0 = \tau_0 + 1;
8:      until |(R_sum^{(\tau_0)} - R_sum^{(\tau_0-1)}) / R_sum^{(\tau_0-1)}| < \Delta or \tau_0 > T_{0,max}.
9:      Update {W_k^{(0)}, V_k^{(0)}} with {W_k^{(\tau_0)}, V_k^{(\tau_0)}};
10:     Update \eta = \mu\eta, \xi = \mu\xi;
11: until the constraints (14) and (25) are satisfied within \epsilon;
12: Output the converged solutions {W_k^*} and {V_k^*}.
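For clarity, the control flow of Algorithm 1 is sketched below in Python. The two subproblem solvers are deliberately hypothetical stubs (the true updates solve the SDPs (13) and (24), e.g., as prototyped above); only the loop structure and the stopping logic mirror the algorithm.

    import numpy as np

    def update_W(W, V):                      # hypothetical stub for solving (13)
        return 0.5 * (W + V)

    def update_V(W, V):                      # hypothetical stub for solving (24)
        return 0.5 * (V + W)

    def sum_rate(W, V):                      # toy surrogate for R_sum
        return -float(np.sum((W - V) ** 2))

    def algorithm_1(W, V, eta=1.0, xi=1.0, mu=5.0, delta=1e-6,
                    t0_max=100, n_outer=3):
        for _ in range(n_outer):             # outer loop (lines 2-11)
            r_prev = None
            for _ in range(t0_max):          # inner loop (lines 4-8)
                W = update_W(W, V)           # line 5
                V = update_V(W, V)           # line 6
                r = sum_rate(W, V)
                if r_prev is not None and \
                        abs(r - r_prev) <= delta * max(abs(r_prev), 1e-12):
                    break                    # line 8: relative-change criterion
                r_prev = r
            eta, xi = mu * eta, mu * xi      # line 10: scale the penalty factors
        return W, V

    print(algorithm_1(np.ones(4), np.zeros(4)))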
TABLE I: Simulation Parameters

    Parameter                                                        Value
    ---------------------------------------------------------------  ----------------
    Path loss exponents of BS-MF-RIS, BS-users, MF-RIS-users links   2.5, 3.5, 2.8
    Rician factors of all links                                      3 dB
    Noise power at MF-RIS and users                                  -80 dBm
    Minimum required QoS for users                                   0.1 bit/s/Hz
    Maximum amplification power [13]                                 P_o = 10 dBm
    Maximum amplification factor                                     \beta_max = 22 dB
    Convergence tolerance                                            \Delta = 10^{-6}

IV. SIMULATION RESULTS

In this section, numerical results are provided to validate the performance of an MF-RIS-aided NOMA network. The BS and the MF-RIS are located at (0, 0, 0) and (0, 50, 20), respectively. Besides, the users are divided into two groups, distributed on circles centered at (0, 45, 0) and (0, 55, 0) with radius r = 3, respectively. We adopt Rician fading for all channels, and set K = 6, N = 16, M = 100, and P_max = 20 dBm. Other parameters are listed in Table I.
We compare the proposed MF-RIS with three existing RISs:
• SF-RIS [7]: The SF-RIS supports only signal reflection or transmission, i.e., \beta_max = 1 and \Theta_t = 0_{M×M} or \Theta_r = 0_{M×M}.
• Active RIS [13]: The active RIS simultaneously supports signal reflection and amplification, i.e., \Theta_t = 0_{M×M}.
• STAR-RIS [9]: The STAR-RIS provides full-space coverage by splitting signals to its two sides, i.e., \beta_max = 1.
[Fig. 2. Simulation results for the sum rate versus different transmit powers, numbers of elements, and RIS locations: (a) sum rate vs. the power budget; (b) sum rate vs. the number of elements; (c) sum rate vs. the Y-coordinate of the RIS. Curves compared: MF-RIS, active RIS, STAR-RIS, SF-RIS, and without RIS.]

Fig. 2(a) depicts the sum rate versus the maximum transmit power P_max. It can be observed that the sum rates of all schemes increase with P_max. Besides, the proposed MF-RIS always yields a better performance than the other benchmarks. Specifically, when P_max = 10 dBm, the MF-RIS enjoys a 59% higher sum rate than the SF-RIS. This is because the MF-RIS serves all users in full space through its signal reflection, transmission, and amplification functions. Besides, by providing additional energy to amplify the incident signal, the MF-RIS is able to efficiently mitigate the double-fading attenuation, which helps to improve the channel gain of the cascaded links.
Furthermore, due to the limitations faced by the active RIS and STAR-RIS counterparts (i.e., half-space coverage and double-fading attenuation, respectively), the MF-RIS improves the rate performance over them by 16% and 44% when P_max = 10 dBm, respectively. Additionally, it is evident that all RIS-aided schemes achieve significant gains over the scheme without RIS. This demonstrates the superiority of using RIS to improve the performance of wireless networks.

Fig. 2(b) shows that the sum rates of all RIS-aided schemes increase with M. This is because a larger M enables a higher beamforming gain, thus improving the system performance. In addition, with more degrees of freedom to manipulate signal propagation, the STAR-RIS is capable of enjoying a 6% higher sum rate than the SF-RIS. Moreover, although only the users located in the reflection space are served by the active RIS, it outperforms the STAR-RIS with a 12% higher sum rate. This is because the performance gain obtained from the signal amplification of the active RIS is greater than that from the full-space coverage of the STAR-RIS. This also implies that the signal amplification function plays an important role in improving the performance of RIS-aided networks.

Fig. 2(c) illustrates the sum rate versus the Y-coordinate of the RIS (from 0 to 50), where the RIS moves from the BS side to the user side. We can observe that the sum rates of the STAR-RIS and the SF-RIS first decrease and then increase. The reason behind this is that the channel gain decreases with the link distance. Specifically, when the STAR-RIS and the SF-RIS are located close to the middle point, the received signals at the users are attenuated the most, resulting in the lowest sum rate. In contrast, owing to the signal amplification function, the MF-RIS and the active RIS are less affected by the double-fading attenuation, and they achieve 39% and 28% gains at the middle point compared to the SF-RIS, respectively. Moreover, the corresponding sum rate maintains a continuous upward trend even when the MF-RIS and the active RIS are far away from the BS. This is because, as the RIS comes closer to the users, the power of the incident signal at the RIS is weaker. Thus, under a fixed amplification power budget, the MF-RIS can provide more amplification gain when deployed closer to the users, which compensates for the attenuation caused by the double-fading issue. This observation also reveals that the MF-RIS should be deployed close to the users for better performance.
V. CONCLUSION

In this letter, we proposed a novel MF-RIS architecture to alleviate the double-fading attenuation by transmitting and reflecting the incident signal with power amplification. Then, we investigated the resource allocation problem in a downlink multiuser MF-RIS-aided NOMA network. Specifically, the active beamforming and the MF-RIS coefficients were jointly optimized to maximize the achievable sum rate by leveraging SCA and a penalty-based method. Numerical results validated the effectiveness of the proposed MF-RIS and its superiority over traditional RISs. In the future, we are interested in studying the coupled-phase and hardware-impairment problems of the MF-RIS. In addition, robust beamforming under imperfect CSI deserves exploration as well.
REFERENCES

[1] Y. Liu, Z. Qin, M. Elkashlan et al., "Nonorthogonal multiple access for 5G and beyond," Proc. IEEE, vol. 105, no. 12, pp. 2347-2381, Dec. 2017.
[2] M. Elhattab, M. A. Arfaoui, C. Assi et al., "RIS-assisted joint transmission in a two-cell downlink NOMA cellular system," IEEE J. Sel. Areas Commun., vol. 40, no. 4, pp. 1270-1286, Apr. 2022.
[3] Y. Liu, X. Liu, X. Mu et al., "Reconfigurable intelligent surfaces: Principles and opportunities," IEEE Commun. Surveys Tuts., vol. 23, no. 3, pp. 1546-1577, 3rd Quart. 2021.
[4] A. S. de Sena, D. Carrillo, F. Fang et al., "What role do intelligent reflecting surfaces play in multi-antenna non-orthogonal multiple access?" IEEE Wireless Commun., vol. 27, no. 5, pp. 24-31, Oct. 2020.
[5] Z. Ding and H. V. Poor, "A simple design of IRS-NOMA transmission," IEEE Commun. Lett., vol. 24, no. 5, pp. 1119-1123, Feb. 2020.
[6] B. Zheng, Q. Wu, and R. Zhang, "Intelligent reflecting surface-assisted multiple access with user pairing: NOMA or OMA?" IEEE Commun. Lett., vol. 24, no. 4, pp. 753-757, Jan. 2020.
[7] W. Ni, X. Liu, Y. Liu et al., "Resource allocation for multi-cell IRS-aided NOMA networks," IEEE Trans. Wireless Commun., vol. 20, no. 7, pp. 4253-4268, Jul. 2021.
[8] W. Wang, W. Ni, H. Tian et al., "Safeguarding NOMA networks via reconfigurable dual-functional surface under imperfect CSI," IEEE J. Sel. Topics Signal Process., vol. 16, no. 5, pp. 950-966, Aug. 2022.
[9] C. Wu, Y. Liu, X. Mu et al., "Coverage characterization of STAR-RIS networks: NOMA and OMA," IEEE Commun. Lett., vol. 25, no. 9, pp. 3036-3040, Sept. 2021.
[10] T. Wang, M.-A. Badiu, G. Chen et al., "Performance analysis of IOS-assisted NOMA system with channel correlation and phase errors," IEEE Trans. Veh. Technol., vol. 71, no. 11, pp. 11861-11875, Nov. 2022.
[11] H. Liu, G. Li, X. Li et al., "Effective capacity analysis of STAR-RIS-assisted NOMA networks," IEEE Wireless Commun. Lett., vol. 11, no. 9, pp. 1930-1934, Sept. 2022.
[12] X. Li, Y. Zheng, M. Zeng et al., "Enhancing secrecy performance for STAR-RIS NOMA networks," IEEE Trans. Veh. Technol., Oct. 2022.
[13] R. Long, Y.-C. Liang, Y. Pei et al., "Active reconfigurable intelligent surface-aided wireless communications," IEEE Trans. Wireless Commun., vol. 20, no. 8, pp. 4962-4975, Aug. 2021.
BdFQT4oBgHgl3EQfNTaI/content/tmp_files/2301.13271v1.pdf.txt ADDED
@@ -0,0 +1,1770 @@
Probabilistic Neural Data Fusion for Learning from an Arbitrary Number of Multi-fidelity Data Sets

Carlos Mora†1, Jonathan Tammer Eweis-Labolle†1, Tyler Johnson1, Likith Gadde2, and Ramin Bostanabad∗1

1 Department of Mechanical and Aerospace Engineering, University of California, Irvine
2 Northwood High School, Irvine

† Equal Contribution.
∗ Corresponding Author: [email protected]
GitLab repository: https://gitlab.com/TammerUCI/pro-ndf

arXiv:2301.13271v1 [cs.LG] 30 Jan 2023

Abstract

In many applications in engineering and sciences, analysts have simultaneous access to multiple data sources. In such cases, the overall cost of acquiring information can be reduced via data fusion or multi-fidelity (MF) modeling, where one leverages inexpensive low-fidelity (LF) sources to reduce the reliance on expensive high-fidelity (HF) data. In this paper, we employ neural networks (NNs) for data fusion in scenarios where data is very scarce and obtained from an arbitrary number of sources with varying levels of fidelity and cost. We introduce a unique NN architecture that converts MF modeling into a nonlinear manifold learning problem. Our NN architecture inversely learns non-trivial (e.g., non-additive and non-hierarchical) biases of the LF sources in an interpretable and visualizable manifold where each data source is encoded via a low-dimensional distribution. This probabilistic manifold quantifies model-form uncertainties such that LF sources with small bias are encoded close to the HF source. Additionally, we endow the output of our NN with a parametric distribution, not only to quantify aleatoric uncertainties, but also to reformulate the network's loss function based on strictly proper scoring rules, which improve robustness and accuracy on unseen HF data. Through a set of analytic and engineering examples, we demonstrate that our approach provides a high predictive power while quantifying various sources of uncertainty. Our codes and examples can be accessed via GitLab.

Keywords: Multi-fidelity Modeling; Uncertainty Quantification; Bayesian Neural Networks; Inverse Problems; Manifold Learning; Data Fusion.

1 Introduction

In an increasing number of applications in engineering and sciences, analysts have simultaneous access to multiple sources of information. For instance, materials' properties can be estimated via multiple techniques such as (in decreasing order of cost and accuracy/fidelity) experiments, direct numerical simulations (DNS), a host of physics-based reduced-order models (ROMs), or analytical methods [1-3]. In such applications, the overall cost of gathering information about the system of interest can be reduced via multi-fidelity (MF) modeling or data fusion, where one leverages inexpensive low-fidelity (LF) sources to reduce the reliance on expensive high-fidelity (HF) data sources. In this paper, we employ neural networks (NNs) for MF modeling in scenarios where data is scarce and obtained from multiple sources with varying levels of fidelity and cost (i.e., data is unbalanced since more samples are available from cheaper sources). In particular, our contributions are as follows: (1) we introduce a unique NN architecture that not only facilitates data fusion, but also quantifies and visualizes the discrepancies/similarities between all data sources, and (2) we illustrate that a Bayesian treatment, besides alleviating overfitting and providing a probabilistic surrogate (i.e., an emulator), provides the means to develop a novel loss function (based on proper scoring rules) that improves the performance and robustness of the resulting MF NN emulator.
Over the past few decades, many techniques have been developed for building MF surrogates, which are used in outer-loop applications such as design optimization [4, 5], calibration of computer models [6], or Bayesian optimization [7]. The main motivation behind these techniques is to leverage the correlations between LF and HF data sources (and the fact that sampling from the former is typically cheaper) to improve the predictive performance of the surrogate while reducing the overall data-acquisition costs. Early works in this field focused primarily on hierarchically linking bi-fidelity data. For instance, in space mapping [8-10] or multi-level [11-13] techniques, the inputs of the LF data are mapped following formulations such as x_l = F(x_h), where x_l and x_h are the inputs of the LF and HF sources, respectively. In this equation, F(·) is a transformation function whose predefined functional form is calibrated such that y_l(F(x_h)) approximates y_h(x_h) as closely as possible. These techniques are useful in applications where higher-fidelity data are obtained by successively refining the discretization in simulations [11, 12], e.g., by refining the mesh when modeling the flow around an airfoil or estimating the fracture toughness of a microstructure. The main disadvantages of space-mapping techniques are that (1) they rely on iterative and time-consuming analysis for choosing a near-optimal functional form for F(·), (2) they cannot jointly fuse more than two data sources at a time, (3) they quantify similarity/discrepancy between the sources based on pre-defined functions whose space may not include the true discrepancy, and (4) they do not quantify some uncertainty sources (such as lack of data) and are rarely formulated within a Bayesian setting that leverages prior information.
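As a concrete toy illustration of this idea, the Python sketch below calibrates a linear map F(x) = a*x + b so that y_l(F(x)) matches HF samples in the least-squares sense; the two models and all numbers are illustrative assumptions, not taken from [8-13].

    import numpy as np
    from scipy.optimize import minimize

    y_l = lambda x: np.sin(x)                # hypothetical low-fidelity model
    y_h = lambda x: np.sin(1.1 * x + 0.2)    # hypothetical high-fidelity "truth"

    x_h = np.linspace(0.0, 3.0, 10)          # HF sampling locations
    loss = lambda p: np.sum((y_l(p[0] * x_h + p[1]) - y_h(x_h)) ** 2)

    res = minimize(loss, x0=[1.0, 0.0])      # calibrate F(x) = a*x + b
    print(res.x)                             # recovers approximately [1.1, 0.2]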
A well-known hierarchical bi-fidelity modeling framework is that of Kennedy and O'Hagan (KOH) [14], who assume that the discrepancy between the LF and HF sources is additive (multiplicative terms have also been explored [15]) and that both sources, as well as the discrepancy between them, can be modeled via Gaussian processes (GPs). Upon this modeling assumption, KOH find the joint posterior of the GPs' hyperparameters via either fully [16, 17] or modular Bayesian inference [18-21]. While KOH's approach considers multiple uncertainties and has been successfully applied to a broad range of applications [22-24], it has three main limitations: (1) it only accommodates two data sources at a time, (2) it places an a priori independence assumption between the GPs, and (3) it does not provide a low-dimensional, visualizable, and interpretable metric that quantifies the correlations between the data sources.
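As a minimal illustration of this additive structure, y_h(x) = y_l(x) + delta(x), one can fit a GP to plentiful LF data and a second GP to the HF residuals. The sketch below does exactly that with off-the-shelf GP regression; note that it is a modular two-step fit on made-up functions, not KOH's joint Bayesian inference.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    f_l = lambda x: np.sin(8.0 * x)              # toy LF source
    f_h = lambda x: np.sin(8.0 * x) + 0.3 * x    # toy HF source = LF + bias

    X_l = np.linspace(0.0, 1.0, 40)[:, None]     # plentiful cheap LF samples
    X_h = np.linspace(0.0, 1.0, 8)[:, None]      # scarce expensive HF samples

    gp_l = GaussianProcessRegressor(RBF(0.1), alpha=1e-8).fit(X_l, f_l(X_l).ravel())
    resid = f_h(X_h).ravel() - gp_l.predict(X_h)             # discrepancy data
    gp_d = GaussianProcessRegressor(RBF(0.5), alpha=1e-8).fit(X_h, resid)

    x_test = np.array([[0.45]])
    print(gp_l.predict(x_test) + gp_d.predict(x_test), f_h(x_test).ravel())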
Recent works have acknowledged the limitations of hierarchical methods and devised new methodologies to address them. For instance, MF modeling can be achieved via a recursive scheme [25] where a bi-fidelity method is repeatedly applied from the lowest to the highest fidelities. However, such recursive schemes inherit the limitations of bi-fidelity methods, cannot jointly fuse multi-source data sets, and are sensitive to the ordering (i.e., the relative accuracy of all sources must be known a priori).

As another example, [26] presents MF networks (MFNets): an approach based on directed acyclic graphs that builds an MF surrogate using an arbitrary number of data sources. MFNets accommodate noisy data and are trained via gradient-based minimization of a nonlinear least-squares objective. While MFNets can learn non-hierarchical relations between data sources, they: (1) rely on having prior knowledge of a set of latent variables that explain the relations between the sources, (2) assume each source can be surrogated via a linear subspace model, (3) are not probabilistic and also require regularization, (4) impose an independence assumption among the data sources to derive the likelihood (i.e., the objective) function, and (5) rely on iterative approaches for finding the optimal graph structure.
Other notable works that have studied the limitations of hierarchical techniques include [27-29], which are focused on identifying (and correcting) non-additive discrepancies between LF and HF sources. However, the proposed solution in these works is intrusive and relies on some rather strong modeling assumptions that largely limit the applications. These limitations arise because the formulation of the discrepancy is learned via an embedded operator whose functional form and interaction with the LF source are constructed a priori.

We have recently developed a GP-based approach [30] that addresses the above issues by converting MF modeling into a manifold learning problem where the relations between the sources are automatically quantified via an appropriately learnt distance measure. The conversion is achieved via latent map Gaussian processes [30] (LMGPs, see Section 2.1), which enable GPs to handle categorical variables and, correspondingly, data fusion: by augmenting the inputs via a categorical variable (which indicates the source of a data point) and then concatenating all the data sets, LMGPs can simultaneously learn from an arbitrary number of information sources. We have shown [30] that LMGP-based MF modeling consistently outperforms KOH's approach and can also handle calibration problems.
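The concatenation step that enables this is simple to illustrate; the sketch below only shows the input augmentation (LMGP itself additionally learns a latent embedding of the appended categorical column), and all sizes are toy placeholders.

    import numpy as np

    # Append a categorical source label to each input and stack all data sets so
    # that a single model can be trained on every fidelity at once.
    rng = np.random.default_rng(0)
    X_hf = rng.random((5, 2))                # scarce high-fidelity inputs (toy)
    X_lf1 = rng.random((30, 2))              # cheaper low-fidelity sources (toy)
    X_lf2 = rng.random((50, 2))

    def tag(X, source_id):
        return np.hstack([X, np.full((X.shape[0], 1), source_id)])

    X_all = np.vstack([tag(X_hf, 0), tag(X_lf1, 1), tag(X_lf2, 2)])
    print(X_all.shape)                       # (85, 3): last column labels the source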
Following the success of LMGPs in data fusion, in this work we examine the potential of NNs in matching (and, hopefully, improving) LMGPs' efficiency in MF modeling. Our current studies are motivated by the facts that (1) when viewed as (probabilistic or deterministic) graphical models [31], NNs provide unique opportunities to use MF data sets to uncover complex hidden relations between the corresponding sources, (2) the recent hardware and software advancements have dramatically accelerated architecture design and training of NNs, and (3) NNs scale to higher dimensions and big data significantly better than GPs.

Over the past few years, some NN-based approaches have been developed for MF modeling [26, 32-34]. However, most of these works design the network architecture primarily based on hierarchical methods and consequently inherit their limitations. For instance, [32] builds two sequentially connected deterministic networks based on KOH's method, where the first and second NNs are tasked to emulate the LF and HF sources, respectively. In addition to sharing the limitations of KOH's method, such a sequential bi-fidelity NN requires the LF and HF training data to be available at the same inputs (unless the two parts of the network are trained separately) and also relies on manual tuning of the architecture and loss function. It has been argued [33] that such sequentially trained NNs bridge MF modeling with transfer learning, where the knowledge gained from the LF data is used in building the NN module that surrogates the HF source.

Non-sequential NNs are rarely used for MF modeling (esp. with > 2 sources) due to the fact that searching for the optimum architecture (and effectively training it with small data) is a difficult task. We address this challenge by drawing inspiration from LMGPs: we design the architecture such that any number of MF data sets can be simultaneously fused and the overall discrepancies between sources are quantified with visualizable metrics. We also illustrate that making specific parts of the network probabilistic, in addition to being superior to both deterministic and all-probabilistic NNs, enables us to infuse a proper scoring rule [35] into the loss function and, in turn, improve the performance of the MF emulator. The particular rule that we adopt is the interval score, which is frequently used in testing the quality of probabilistic predictions but, to the best of our knowledge, has never been used in the training stage of a probabilistic NN (a short transcription of this score is sketched after the contribution list below). In summary, our major contributions are as follows:
+ • We introduce a unique NN architecture for MF modeling that can fuse an arbitrary number of data
127
+ sets and quantify both epistemic and aleatoric uncertainties.
128
+ • We inversely learn the accuracy of the LF sources (with respect to the HF source) and visualize the
129
+ learned relations in an interpretable manifold.
130
+ • We show that a probabilistic setting allows us to develop a novel loss function (based on proper scoring
131
+ rules) that improves the performance of the emulator.
132
+ • We validate the performance of our approach on analytical and real-world examples and show that it
133
+ 3
134
+
135
+ performs on par with the state of the art while providing improved scalability to high dimensions and
136
+ big data.
137
+ The rest of this paper is organized as follows. We review the relevant technical background in Section 2
138
+ and then introduce our approach in Section 3. We test the performance of our approach on a host of analytical
139
+ problems and real-world data sets in Section 4 and conclude the paper in Section 5.
140
+ 2
141
+ Technical Preliminaries
142
+ In this section we first review LMGPs which are extensions of GPs that handle categorical inputs and,
143
+ thus, can readily fuse any number of data sets. Then, we provide some background on Bayesian neural
144
+ networks (BNNs) which form the foundation of our neural data fusion framework.
145
+ 2.1
146
+ Latent Map Gaussian Processes (LMGPs)
147
+ Let us denote the output and inputs in the training data by y ∈ Y ≡ R and x = [x1, x2, . . . , xdx]T ∈ Rdx,
148
+ respectively, with an individual training point i = 1, . . . , n denoted by the pair (y(i), x(i)). Assume the
149
+ training data is a realization from a constant-mean1 GP and that the following relation holds:
150
+ y(x) = m + ξ(x)
151
+ (1)
152
+ where m is the unknown constant mean and ξ(x) is a zero-mean GP whose covariance function or kernel
153
+ is:
154
+ cov(ξ(x), ξ(x
155
+ ′)) = c(x, x
156
+ ′) = s2r(x, x
157
+ ′)
158
+ (2)
159
+ where s2 is the variance of the process and r(·, ·) is a parametric correlation function such as the Gaussian:
160
+ r(x, x
161
+ ′) = exp
162
+
163
+
164
+ dx
165
+
166
+ i=1
167
+ 10ωi(xi − x
168
+
169
+ i)2
170
+
171
+ = exp
172
+
173
+
174
+
175
+ x − x
176
+ ′�T
177
+ 10Ω �
178
+ x − x
179
+ ′��
180
+ (3)
181
+ where ω = [ω1, . . . , ωdx]T are the roughness or scale parameters and Ω = diag(ω).
182
+ The training process and prediction formulas for a GP depend on the choice of the correlation function,
183
+ which relies on a weighted Cartesian distance metric between any two inputs, see Equation (3). As we
184
+ recently motivated in [36], to directly use GPs for mixed-variable modeling we reformulate r(·, ·) as detailed
185
+ below such that it can handle categorical (qualitative) inputs.
186
+ Let us denote the categorical inputs by t = [t1, . . . , tdt]T where the total number of distinct levels for
187
+ qualitative variable ti is τi. To handle mixed inputs, LMGP learns a parametric function that maps cate-
188
+ gorical variables to some points in a quantitative manifold or latent space2. These points (and hence the
189
+ mapping function) can be incorporated into any standard correlation function, such as the Gaussian, which
190
+ is reformulated as follows for mixed inputs:
191
+ r
192
+
193
+ (x, t), (x
194
+ ′, t
195
+ ′)
196
+
197
+ = exp
198
+
199
+
200
+ ���z(t) − z(t
201
+ ′)
202
+ ���
203
+ 2
204
+ 2 −
205
+
206
+ x − x
207
+ ′�T
208
+ 10Ω �
209
+ x − x
210
+ ′��
211
+ (4)
212
+ or, equivalently,
213
+ r
214
+
215
+ (x, t), (x
216
+ ′, t
217
+ ′)
218
+
219
+ = exp
220
+
221
+
222
+ dx
223
+
224
+ i=1
225
+ 10ωi(xi − x
226
+
227
+ i)2
228
+
229
+ × exp
230
+
231
+
232
+ dz
233
+
234
+ i=1
235
+ (zi(t) − zi(t
236
+ ′))2
237
+
238
+ (5)
239
+ 1GPs (and LMGPs) can also be formulated by using a linear combination of basis functions in place of the constant mean. This
240
+ formulation relies on prior knowledge of the functional form of the output and can improve performance in extrapolation, see [30].
241
+ 2Multiple mapping functions can also be used to build multiple manifolds. We leverage this in Section 4 where we build two
242
+ manifolds for data fusion problems with categorical or mixed inputs.
243
+ 4
244
+
245
+ where ∥·∥2 denotes the Euclidean 2-norm and z(t) = [z1(t), . . . , zdz(t)]1×dz is the to-be-learned latent
246
+ space point corresponding to the particular combination of categorical variables denoted by t. To find these
247
+ points in the latent space, LMGP assigns a unique vector (i.e., a prior representation) to each combination of
248
+ categorical variables. Then, it uses matrix multiplication3 to map each of these vectors to a point in a latent
249
+ space of dimension dz:
250
+ z(t) = ζ(t)A
251
+ (6)
252
+ where ζ(t) is the 1 × �dt
253
+ i=1 τi unique prior vector representation of t and A is a �dt
254
+ i=1 τi × dz matrix that
255
+ maps ζ(t) to z(t). In this paper, we use dz = 2 since it simplifies visualization and has been shown to
256
+ provide sufficient flexibility for learning the latent relations [36]. We construct ζ via a form of one-hot
257
+ encoding where we first construct the 1 × τi vector vi =
258
+
259
+ vi
260
+ 1, vi
261
+ 2, . . . , vi
262
+ τi
263
+
264
+ for each categorical variable ti
265
+ such that vi
266
+ j = 1 when ti is at level k = j and vi
267
+ j = 0 when ti is at level k ̸= j for k ∈ 1, 2, . . . , τi. Then,
268
+ we set ζ(t) = [v1, v2, . . . , vdt]. For example, for the two categorical variables t1 and t2 with 2 and 3 levels,
269
+ ζ(t) = [0, 1, 0, 1, 0] encodes the combination where both variables are at level 2.
270
+ To train an LMGP, we use maximum likelihood estimation (MLE) to jointly estimate all of its parameters:
271
+
272
+ ˆm, ˆs2, ˆω, ˆA
273
+
274
+ = argmax
275
+ m,s2,ω,A
276
+ ��2πs2R
277
+ ��− 1
278
+ 2 × exp
279
+
280
+ −1
281
+ 2(y − 1m)T (s2R)−1(y − 1m)
282
+
283
+ (7)
284
+ where |·| denotes the determinant operator, y = [y1, . . . , yn]T is the n × 1 vector of outputs in the training
285
+ data, R is the n × n correlation matrix with the (i, j)th element Rij = r
286
+
287
+ (x(i), t(i)), (x(j), t(j))
288
+
289
+ for i, j =
290
+ 1, . . . , n, and 1 is a n × 1 vector of ones.
291
+ After estimating the hyperparameters, we use the conditional distribution formulas to predict the response
292
+ distribution at the arbitrary point p∗ = (x∗, t∗). The mean and variance of this normal distribution are:
293
+ E [y (p∗)] = ˆm + rT (p∗) R−1 (y − 1 ˆm)
294
+ (8)
295
+ cov
296
+
297
+ y(p∗), y(p
298
+ ′)
299
+
300
+ = ˆs2r(p∗, p
301
+ ′) = ˆs2 �
302
+ 1 − rT (p∗)R−1r(p
303
+ ′) + g(p∗)(1T R−11)−1g(p
304
+ ′)
305
+
306
+ (9)
307
+ where E denotes expectation, r (p∗) is an (n × 1) vector with the ith element r
308
+
309
+ p(i), p∗�
310
+ , and g (p∗) =
311
+ 1 − 1T R−1r (p∗).
312
+ To perform data fusion via LMGP, we re-frame multi-fidelity modeling as a manifold learning problem.
313
+ Assume that we have ds data sources whose inputs and outputs are denoted by xsi, ysi, respectively, with
314
+ i = 1, . . . , ds. We first pre-process the data by appending the inputs with a single categorical variable ts with
315
+ ds levels (hereafter referred to as the source index variable) that distinguishes the data sources. Specifically,
316
+ we add ts at level i for source si, i.e., xsi → [xsi, insi×1], where insi×1 is an nsi × 1 vector of i’s and nsi
317
+ is the number of data points for source si. We then combine the data for all sources into one unified data set
318
+ and fit an LMGP directly it, i.e., we fit LMGP to all of the data from all sources at once.
319
+ The fitted LMGP can provide predictions for any desired data source based on the level used for ts and
320
+ as such is an emulator for all of the data sources. Additionally, since the data sources are distinguished via
321
+ a categorical variable, LMGP learns the correlations between them via a visualizable latent representation
322
+ and uses these correlations to improve its predictions [30]. In the case that the raw inputs contain categorical
323
+ variables tc, we use separate mappings for ts and tc, i.e., we assign unique priors ζ(ts) and ζ(tc) which
324
+ LMGP uses to find mapping matrices As and Ac. The latent points corresponding to each mapping are then
325
+ zs and zc, respectively.
326
+ 3More complex transformations based on, e.g., NNs, may also be used, although we do not do so in this paper.
327
+ 5
328
+
329
+ Note that the correlation function in Equation (5) depends directly on the euclidean distance between a
330
+ pair of latent points. This means that relative distances in the latent space directly correspond to correlations,
331
+ e.g., if a pair of data sources ys1 and ys2 have corresponding latent points with a distance ∆ in the latent
332
+ space then this directly implies by Equation (5) that LMGP has found those two sources to have a correlation
333
+ of exp (−∆2).
334
+ 2.2
335
+ Bayesian Neural Networks
336
+ Feedforward neural networks (FFNNs) are one of the most common models used in deep learning and
337
+ their main goal is to learn the underlying function f(x) that maps the inputs x to the target y [37]. To this
338
+ end, an FFNN defines the mapping ˆf(x; θ) whose parameters θ are estimated such that ˆy = ˆf(x; θ) best
339
+ approximates f(x). NN-based approaches for MF emulation can provide attractive advantages since they
340
+ are universal function approximators [38] and can handle high-dimensional inputs and large data sets. In
341
+ this subsection, we first describe the working principle of FFNNs and motivate the use of BNNs and Bayes
342
+ by backprop [39].
343
+ FFNNs propagate information from the inputs x to the output y through intermediate computations that
344
+ define ˆf. They are traditionally built via a succession of L layers where L − 2 hidden layers are placed
345
+ between the input and output layers. The output of layer k is denoted by zk and is obtained as follows:
346
+ z1 = x,
347
+ (10)
348
+ zk = φk (W kzk−1 + bk)
349
+ ∀ k ∈ [2, L − 1],
350
+ (11)
351
+ ˆy = φL (W LzL−1 + bL)
352
+ (12)
353
+ where φ is the (typically non-linear) activation function. The parameters θk = (W k, bk), where W k and bk
354
+ are the weight matrices and bias vectors, respectively, correspond to the connections between the (k − 1)th
355
+ and kth layer. For brevity, we denote the parameters of the entire network by θ.
356
+ From a statistical perspective, an FFNN aims to learn the conditional distribution P(y|x; θ) given the
357
+ noisy data set D with independent and identically distributed samples:
358
+ y(i) = f(x(i)) + ϵ ≈ ˆf(x(i); θ) + ϵ
359
+ (13)
360
+ where ϵ ∼ N(0, σ2) represents noise. Equation (13) indicates that P(y(i)|x(i)) ∼ N(f(x(i)), σ2) and
361
+ hence the conditional probability P (D|θ) can be written as:
362
+ P(D|θ) =
363
+ n
364
+
365
+ i=1
366
+ P(y(i)|x(i), θ)P(x(i)) =
367
+ n
368
+
369
+ i=1
370
+ N(y(i); ˆy(i), σ2)P(x(i))
371
+ =
372
+ n
373
+
374
+ i=1
375
+ 1
376
+ σ
377
+
378
+ 2π exp
379
+
380
+ − 1
381
+ 2σ2
382
+
383
+ y(i) − ˆy(i)�2�
384
+ P(x(i))
385
+ (14)
386
+ Since the likelihood function L(θ) ≡ P(D|θ), the parameters θ can be estimated by maximizing L(θ) (the
387
+ dependence on x is dropped for brevity):
388
+ θMLE = arg max
389
+ θ
390
+ L(θ) ≡ arg min
391
+ θ
392
+ − log P(D|θ) = arg min
393
+ θ
394
+ 1
395
+ n
396
+ n
397
+
398
+ i=1
399
+
400
+ y(i) − ˆf(x(i); θ)
401
+ �2
402
+ (15)
403
+ which is equivalent to minimizing the mean squared error (MSE) of the predictions ˆy = ˆf(x; θ) with
404
+ respect to the targets y. Equation (15) can be updated via Bayes rule to consider prior knowledge on θ in
405
+ the optimization. These maximum a posteriori (MAP) estimates are obtained via:
406
+ θMAP = arg max
407
+ θ
408
+ P (θ|D) = arg max
409
+ θ
410
+ log P (θ|D) = arg max
411
+ θ
412
+ log P (D|θ) + log P(θ)
413
+ (16)
414
+ 6
415
+
416
+ where the first term recovers MSE as in Equation (15) and the second term depends on the prior distribution
417
+ assigned to the parameters. Equation (16) illustrates that Gaussian and Laplacian priors are equivalent to L2
418
+ and L1 regularization, respectively [37, 39].
419
+ FFNNs are likely to overfit in scenarios where data is scarce. Additionally, they cannot directly quantify
420
+ prediction uncertainty and are often overconfident in extrapolation [40]. BNNs are developed to address
421
+ these issues [41, 42]. In BNNs, the weights are endowed with probability distributions (rather than single
422
+ point estimates) which naturally results in probabilistic predictions and can dramatically reduce overfitting
423
+ via parameter regularization and model averaging.
424
+ Predictions via a BNN requires sampling from the posterior distribution of the parameters, i.e., P (θ|D),
425
+ which does not have a closed form and is highly complex. Over the past few years, various techniques
426
+ have been developed to obtain samples from P (θ|D) (or an approximation thereof). The most popular
427
+ techniques are based on either Markov Chain Monte Carlo (MCMC) [43] or variational inference (VI) [44]
428
+ which, unlike MCMC, learns an approximation of the posterior distribution.
429
+ Although MCMC methods are arguably the best techniques for sampling from the exact posterior, their
430
+ lack of scalability makes them inefficient for BNNs of any practical size [45]. Hence, we employ Bayes by
431
+ backprop [39] which is a variational method that approximates P (θ|D) with the parameterized distribution
432
+ q(θ|ϕ) . The parameters ϕ are learned by minimizing the Kullback–Leibler (KL) divergence between the
433
+ true and approximated posteriors:
434
+ KL[q(θ|ϕ)||P(θ|D)] =
435
+
436
+ q(θ|ϕ) log
437
+ � q(θ|ϕ)
438
+ P(θ|D)
439
+
440
+ dθ =
441
+
442
+ q(θ|ϕ) log
443
+ � q(θ|ϕ)P(D)
444
+ P(D|θ)P(θ)
445
+
446
+
447
+ =
448
+
449
+ q(θ|ϕ) log P(D)dθ +
450
+
451
+ q(θ|ϕ) log
452
+ �q(θ|ϕ)
453
+ P(θ)
454
+
455
+ dθ −
456
+
457
+ q(θ|ϕ) log P(D|θ)dθ
458
+ = log P(D) + KL[q(θ||ϕ)|P(θ)] − Eq(θ|ϕ)[log P(D|θ)]
459
+ (17)
460
+ where Bayes rule is applied to P(θ|D) in the first line. Then, the parameters ϕ are estimated by minimizing
461
+ Equation (17):
462
+ ϕ∗ = argmin
463
+ ϕ
464
+ KL[q(θ|ϕ)||P(θ|D)] = argmin
465
+ ϕ
466
+ KL[q(θ|ϕ)||P(θ)] − Eq(θ|ϕ)[log P(D|θ)]
467
+ (18)
468
+ where the term log P(D) is excluded as it is constant. Equation (18) aims to minimize the sum of two terms.
469
+ The second term corresponds to the expectation of the negative log-likelihood while the first term acts as a
470
+ regularizer and corresponds to the KL divergence between the approximated posterior and the prior.
471
+ 3
472
+ Probabilistic Neural Data Fusion
473
+ Designing a multi-fidelity NN that leverages an ensemble of LF data sets to better learn an HF source is
474
+ a very challenging task because of the following major reasons:
475
+ 1. The relations among the data sources can be unknown. For instance, in the Rational example (see
476
+ Table 4 in Appendix A) there are three LF sources whose biases are not additive. Additionally, these
477
+ LF sources are not hierarchically ordered in the sense that the second LF source is more accurate than
478
+ the first one.
479
+ 2. There are typically (but not always) more LF data available since LF sources are generally cheaper
480
+ compared to the HF source. Learning from such an unbalanced MF data is quite difficult especially in
481
+ the presence of scarce HF data (as an example, see the sample sizes for the engineering applications
482
+ described in Appendix B).
483
+ 7
484
+
485
+ 3. NNs can be built in many ways and, as shown in Section 4, their performance heavily depends on their
486
+ architecture and training mechanism. Building an optimum4 NN with small, unbalanced, and MF data
487
+ is even more difficult since the sensitivity to the architecture and training mechanism considerably
488
+ increases.
489
+ We propose to address the above challenges by converting MF modeling to a manifold5 learning problem
490
+ which is then solved via an NN. We design the architecture, loss function, and training mechanism of this
491
+ NN with a particular focus on uncertainty sources that include data scarcity (especially HF samples), noise
492
+ with unknown variance (which can affect any of the data sources), non-trivial biases of LF sources, and data
493
+ imbalances.
494
+ As schematically demonstrated in Figure 1, we convert MF modeling to manifold learning by augmenting
495
+ the input space with the categorical variable ts whose levels (e.g., {′1′,′ 2′, · · · } or {a, b, · · · }) indicate the
496
+ source that generates a sample. We then map this source indicator variable to a low-dimensional manifold
497
+ via a BNN (see Block 1 in Figure 1). If the original input space has the categorical variables tc, we similarly
498
+ map them to a manifold (but this time we use a deterministic NN, see Block 2 in Figure 1). Afterwards,
499
+ we combine the latent variables of these two manifolds with the quantitative inputs x via a deterministic
500
+ NN, see Block 3 in Figure 1. As opposed to the other two blocks, we require Block 3 to produce a normal
501
+ probability distribution in order to capture aleatoric uncertainties. Finally, we train the entire network on the
502
+ entire6 data using our custom loss function that noticeably improves the prediction intervals.
503
+ In the following subsections, we elaborate on our rationale for designing a multi-block architecture and
504
+ a custom loss function in Section 3.1 and Section 3.2, respectively. Then, we provide some details on the
505
+ training and inference stages in Section 3.3.
506
+ 3.1
507
+ Multi-Block Architecture
508
+ Each block of our network is designed to address particular challenges associated with MF modeling.
509
+ Specifically, the BNN of Block 1 maps a quantitative prior representation ζ(ts) of the source indicator
510
+ variable ts to a continuous manifold zs. We design ζ(ts) by one-hot encoding ts to merely inform the
511
+ network about the source that generates a sample7. We build zs based on a categorical variable because
512
+ it forces the manifold to uncover the relations between sources (i.e., the levels of ts). These relations are
513
+ represented as distances in zs where sources that produce similar data are encoded with close-by points (see
514
+ Section 4 for multiple examples). This distance learning is in sharp contrast to existing approaches since (1)
515
+ it does not assume there is any hierarchy between the data sources, (2) it is scalable to an arbitrary number
516
+ of data sets, (3) it enables training the entire network via all available samples, (4) it is visualizable and
517
+ interpretable which helps in identifying anomalous data sources, and (5) it does not assume any specific
518
+ form (e.g., additive, multiplicative, etc.) for the biases of LF sources.
519
+ Block 1 is the only part of our network where the weights and biases are endowed with probability
520
+ distributions. We make this choice to better learn model form errors and more accurately quantify the
521
+ epistemic uncertainties due to lack of data and source-wise discrepancies. We note that, while the outputs
522
+ of Block 1 do not parameterize a probability distribution, they are probabilistic by nature since they are
523
+ obtained by propagating the deterministic vector ζ(ts) through some probabilistic hidden layers.
524
+ Block 2 is an FFNN that maps the quantitative prior representation ζ(tc) of the categorical inputs tc to the
525
+ 4We measure optimality in terms of NN’s error in predicting unseen data from the HF source.
526
+ 5A manifold or a latent-space is a compact representation of a high-dimensional object such as an image.
527
+ 6By entire, we mean the combined data sets from all sources.
528
+ 7If there is some prior knowledge about the relation among the sources, ζ(ts) can be designed to reflect it. We do not pursue
529
+ designing such informative priors in this work.
530
+ 8
531
+
532
+ Targets
533
+ Bayesian Neural Network
534
+ Feedforward Neural Network
535
+ Probabilistic Latent Mapping
536
+ Probabilistic Output
537
+ Source Indicator
538
+ Feedforward Neural Network
539
+ Deterministic Latent Mapping
540
+ Numerical Inputs
541
+ C
542
+ Source
543
+ Source
544
+ Categorical Inputs
545
+ Block 1
546
+ Block 2
547
+ Block 3
548
+ Inputs
549
+ Multi-fidelity data
550
+ : Concatenation
551
+ C
552
+ Figure 1 Probabilistic neural data fusion (Pro-NDF): The proposed architecture allows to combine an arbitrary number of
553
+ sources by appending a source indicator variable to the data sets and then concatenating them. Pro-NDF consists of three blocks
554
+ that perform separate tasks related to MF modeling: (1) Block 1 is a BNN that maps a quantitative prior representation of the source
555
+ indicator ζ(ts) to a continuous manifold, (2) Block 2 is an FFNN that maps a quantitative prior representation of the categorical
556
+ inputs ζ(tc) to a continuous manifold, and (3) Block 3 is an FFNN with a probabilistic output that maps the numerical inputs and
557
+ the latent variables to a parametric distribution.
558
+ manifold zc (Block 2 is omitted if the original inputs are purely quantitative). Similar to Block 1, we design
559
+ ζ(tc) via one-hot encoding and use deterministic outputs. However, unlike Block 1 we use a deterministic
560
+ FFNN in Block 2 to map ζ(tc) into zc. We make this decision to reduce the number of parameters and also
561
+ because the meaning (and hence effects) of categorical inputs across different sources is typically the same8.
562
+ We set the manifold dimension to 2 for both Block 1 and Block 2, i.e., dzs = dzc = 2. While higher
563
+ dimensions provide more learning capacities, our results in Section 4 and those reported elsewhere [46–
564
+ 51] indicate that low-dimensional manifolds are quite powerful in learning highly complex relations. For
565
+ instance, [52] shows that a single latent variable can encode smiling in images of human faces which is a
566
+ 8Due to severe discrepancies such as large model form errors, the effects of a categorical variable on the response may be quite
567
+ different across the sources.
568
+ 9
569
+
570
+ high-dimensional and complex feature in the original data space. Additionally, our choice simplifies the
571
+ visualization of the manifolds and reduces the chances of overfitting since we are primarily interested in
572
+ scarce data applications.
573
+ Block 3 is also an FFNN that maps the numerical inputs and the latent variables in both manifolds to
574
+ a parametric distribution which represents the output. Block 3 has deterministic weights and biases since
575
+ source-wise uncertainties are propagated to it via Block 1. However, we equip Block 3 with a probabilistic
576
+ output because it: (1) quantifies aleatoric uncertainties that are inherent to the data sets9, and (2) enables
577
+ designing a multi-task loss that considers the quality of the prediction intervals (detailed in Section 3.2).
578
+ Additionally, Block 3 is responsible for learning the behavior for all data sources simultaneously, which
579
+ allows it to leverage correlations between sources to augment predictions through a process akin to weight
580
+ sharing.
581
+ 3.2
582
+ Uncertainty-Focused Loss
583
+ NNs typically provide overconfident predictions especially when they are trained on small and unbalanced
584
+ data. As explained in Section 3.1, we aim to address this issue by making Block 1 and the network’s final
585
+ output probabilistic. However, for these measures to work, we must develop an effective optimization10
586
+ scheme where the loss function appropriately rewards prediction intervals (PIs) that are sufficiently wide
587
+ (but not too wide) to cover unseen data (especially HF data). To design such a loss function, we draw
588
+ inspiration from strictly proper scoring rules [35] and augment Equation (18) with the negatively oriented
589
+ interval score. Our loss is defined as:
590
+ L = LNLL + α1LKL + α2LIS + α3L2
591
+ (19)
592
+ where LNLL refers to the negative log-likelihood, LKL is the KL divergence between the prior and the vari-
593
+ ational posterior distributions on the parameters (only applicable for the BNN from Block 1), LIS denotes
594
+ the interval score term, and L2 is L2 regularization (only applicable for deterministic NNs, i.e., Block 2 and
595
+ 3). α1, α2 and α3 are hyperparameters that, respectively, determine the relative strengths of LKL, LIS and
596
+ L2 compared to LNLL. The four terms in Equation (19) are calculated as:
597
+ LNLL = − 1
598
+ N
599
+ N
600
+
601
+ i=1
602
+ log N(y(i); ˆµ(i),
603
+
604
+ ˆσ(i)�2
605
+ )
606
+ (20)
607
+ LKL = KL[q(θ|ϕ)||P(θ)]
608
+ (21)
609
+ LIS = 1
610
+ N
611
+ N
612
+
613
+ i=1
614
+ [(ˆu(i) − ˆl(i)) + 2
615
+ γ (ˆl(i) − y(i))1{y(i) < ˆl(i)} + 2
616
+ γ (y(i) − ˆu(i))1{y(i) > ˆu(i)}]
617
+ (22)
618
+ L2 = |θ|2
619
+ (23)
620
+ where LKL is computed via a Monte Carlo approximation, N is the batch size, and 1{·} denotes the indica-
621
+ tor function that returns 1 if the event in brackets is true and 0 otherwise. The three terms of Equation (19)
622
+ compose a multi-task loss where: (1) the likelihood term LNLL penalizes the model if the predicted distri-
623
+ bution does not match the target distribution, (2) the KL divergence term LKL favors variational posteriors
624
+ that are similar to the assumed prior as per Equation (18), and (3) the interval score term LIS rewards nar-
625
+ row PIs while penalizing the model for each observation y(i) that lies outside the (1 − γ) × 100% prediction
626
+ interval that spans the range [ˆl(i), ˆu(i)] where ˆl(i) = ˆµ(i) − 1.96ˆσ(i) and ˆu(i) = ˆµ(i) + 1.96ˆσ(i). In this paper,
627
+ we use γ = 5%, thus implying that LIS is minimized by a distribution whose 95% PI is as tight as possible
628
+ while containing all the training data.
629
+ 9The predicted variance also includes epistemic uncertainties that are propagated from Block 1, see Section 3.3
630
+ 10Recall that we use Bayes by backprop which takes a variational approach towards finding the posteriors, see Section 2.2.
631
+ 10
632
+
633
+ 3.3
634
+ Training and Prediction
635
+ In BNNs, the variational posterior of θ is typically defined layer-wise as a multivariate Gaussian with
636
+ mean µ ∈ Rck and covariance matrix Σ ∈ Rck×ck, i.e., N(µ, Σ), where ck is the total number of con-
637
+ nections between two consecutive layers. Estimating the full covariance matrix requires learning O(c2
638
+ k)
639
+ parameters and is thus computationally prohibitive in most applications [45]. To reduce the costs, some
640
+ simplifications have been adopted in the literature, such as learning diagonal or block diagonal [53] covari-
641
+ ance matrices. However, our approach does not suffer from this computational issue since the only Bayesian
642
+ part of our network is Block 1 (see Figure 1) whose size is typically very small (we use one hidden layer
643
+ with 5 neurons for all the studies in Section 4). Hence, we estimate a dense covariance matrix between any
644
+ two layers of Block 1 to improve its uncertainty quantification capacity. As for the prior, we use a zero
645
+ mean Gaussian distribution with diagonal covariance matrix which makes the KL term equivalent to L2
646
+ regularization with a rate defined by the standard deviation of the prior distribution [54]. Thus, the standard
647
+ deviation is a hyperparameter that needs to be tuned specifically to each problem.
648
+ BNNs represent their weights and biases by parameterized distributions which in our case are multivariate
649
+ normal with dense covariance matrices. In a forward pass during either training or prediction, we take
650
+ individual samples from these distributions and assign them to the weights and biases. In this way, instead
651
+ of explicitly obtaining the true posterior distribution of the output of Block 1 (i.e., zs, see Figure 1), we
652
+ obtain an empirical distribution in the zs manifold by taking a number of forward passes, see Figure 2. We
653
+ refer to these forward passes as realizations and as explained below we use different number of passes in
654
+ training versus prediction.
655
+ To obtain the response (in training or testing) at the input u using Pro-NDF, which contains both a BNN
656
+ component and a probabilistic output, we use ensemble prediction formulas [34]:
657
+ ˆµ(u) = 1
658
+ M
659
+ M
660
+
661
+ j=1
662
+ ˆµθj(u)
663
+ (24)
664
+ ˆσ(u) = 1
665
+ M
666
+ M
667
+
668
+ j=1
669
+
670
+ ˆσ2
671
+ θj(u) + ˆµ2
672
+ θj(u)
673
+
674
+ − ˆµ2(u)
675
+ (25)
676
+ where ˆµθj(u) and ˆσθj(u) are, respectively, the mean and standard deviation of the output distribution in
677
+ the jth realization and θj are the associated network parameters. For predictions with a fitted NN, we use
678
+ M = 1000 since it provides a higher accuracy in quantifying the uncertainty associated with learning the
679
+ fidelity manifold (i.e., zs). While training the network, we use M = 200 to reduce the computational costs.
680
+ The performance of an NN is highly sensitive to its architecture and hyperparameters if the training data
681
+ is small, unbalanced, and multi-fidelity. To reduce this sensitivity and leverage the low costs of training a
682
+ single NN on small data, we perform automated hyperparameter tuning 11. To this end, we use RayTune
683
+ [55] and Hyperopt to find the optimum hyperparameters and architecture by minimizing the five-fold cross-
684
+ validation errors on predicting the high-fidelity data.
685
+ For our approach specifically, we apply the above tuning strategy to the architecture of Block 3, the
686
+ learning rate of the Adam optimizer, α1, α2 and α3 in Equation (19), the prior standard deviation of weight
687
+ matrices in Block 1, and the batch size. We fix the architectures of Block 1 and Block 2 to one hidden layer
688
+ with 5 neurons and the dimension of both manifolds to 2. The activation function for all the neurons of
689
+ Block 1 and 3 is hyperbolic tangent, whereas for Block 2 it is the sigmoid function. For more information
690
+ and full details on implementation, please see our GitLab repository.
691
+ 11We use this approach for all NN-based data fusion approaches (including ours) in Section 4.
692
+ 11
693
+
694
+ Block 1
695
+ Block 2
696
+ Block 3
697
+ Probabilistic Fidelity Manifold
698
+ Predictions with
699
+ Uncertainty Quantification
700
+ Source Indicator
701
+ Numerical Inputs
702
+ Categorical Inputs
703
+ realizations
704
+ Get
705
+ Categorical Inputs Manifold
706
+ Figure 2 Outputs of Pro-NDF : We visualize the outputs of Pro-NDF after it is trained on the MF data of the HOIP data set, which
707
+ does not have any numerical inputs (see Appendix B for more details). To provide probabilistic predictions that quantify both
708
+ epistemic as well as aleatoric uncertainties, Pro-NDF learns a probabilistic fidelity manifold (where sources with similar behavior
709
+ are encoded with close-by distributions) and a deterministic manifold for the categorical inputs.
710
+ 4
711
+ Results and Discussions
712
+ In this section, we validate our approach on three analytic and two real-world MF problems (detailed
713
+ in Appendices A and B) and compare its performance against LMGP and two other existing NN-based
714
+ approaches which are based on simple feedforward networks or sequential multi-fidelity (SMF) networks
715
+ which are described in Appendix C. The hyperparameters of all the NN-based approaches are tuned as
716
+ described in Section 3.3. We refer the reader to our GitLab repository for specific details on implementation,
717
+ estimated hyperparameters, and training/test data. For LMGP, none of its architectural parameters (such as
718
+ the kernel type, mean function, latent map, etc.) are tuned.
719
+ We first conduct an ablation study in Section 4.1 to quantify the impacts of our designed architecture,
720
+ loss function, and probabilistic elements. Then, we test the performance of the four MF approaches on the
721
+ analytic and real-world problems in Section 4.2 and Section 4.3, respectively. In each problem, the goal
722
+ is to model the HF source as accurately as possible, i.e., to obtain the lowest mean prediction error while
723
+ maximizing the number of training/test samples that fall in the 95% PI. To this end, we use mean squared-
724
+ error (MSE) and mean negatively oriented interval score (IS). Note that the FFNN and SMF approaches
725
+ are not probabilistic, i.e., they provide point estimates rather than PIs and therefore they are only evaluated
726
+ based on MSE.
727
+ 12
728
+
729
+ 0.8
730
+ 0.6
731
+ 0.4
732
+ 0.2
733
+
734
+ yli
735
+ yl2
736
+ 0.2
737
+
738
+ yh
739
+ 1.5
740
+ -0.5
741
+ 0.5
742
+ 21Level 1
743
+ Level 2
744
+ Level 3
745
+ Level4
746
+ Level 5
747
+ Level 6
748
+ Level 7
749
+ Level 8
750
+ Level 9
751
+ Level 10
752
+ Level 11
753
+ Level 12
754
+ Level 13
755
+ Level 14
756
+ Level15
757
+ Level 160]
758
+ .0
759
+ 8
760
+ -10
761
+ Predicted
762
+ 0
763
+ 20
764
+ 8
765
+ 00
766
+ 88
767
+ 30
768
+ 0
769
+ OT
770
+ Pro-NDF mean
771
+ 50
772
+ Pro-NDF 95% PI
773
+ 35
774
+ -30
775
+ -25
776
+ -20
777
+ -15
778
+ -10
779
+ -5
780
+ True4.1
781
+ Ablation Study
782
+ To evaluate the impact of the key components of Pro-NDF, we perform an ablation study on the Rational
783
+ and DNS-ROM problems which are detailed in Appendices A and B, respectively. Namely, we analyze the
784
+ impact of:
785
+ 1. Using a BNN rather than a deterministic FFNN in Block 1 for probabilistically learning the relations
786
+ between the data sources.
787
+ 2. Considering LIS in the loss function of Equation (19).
788
+ 3. Fitting the model to the parameters of a distribution instead of a scalar, i.e., using a probabilistic
789
+ output.
790
+ 4. Leveraging the fidelity map to detect the least accurate LF source and, in turn, assessing whether this
791
+ source helps emulating the HF source.
792
+ Regarding the third item above we note that we no longer use IS in the loss once the probabilistic output is
793
+ removed. However, we still calculate the IS after training based on the empirical distribution of the fidelity
794
+ manifold which is produced by the multiple realizations of the BNN component.
795
+ We summarize the results of the ablation study on the two examples in Table 1. For both problems, we
796
+ observe that using all components minimizes the test MSE and IS. Notably, both of our model’s probabilistic
797
+ components significantly increase the performance: the probabilistic output enables Pro-NDF to not only
798
+ capture aleatoric uncertainty, but also leverage IS in its loss function. Additionally, using a BNN improves
799
+ Pro-NDF’s HF emulation capabilities by preventing overfitting in scarce data regions (since Block 1 is
800
+ regularized) and by partially disentangling epistemic and aleatoric uncertainties which yields better PIs.
801
+ We observe that without a probabilistic output, the IS (and hence the uncertainty quantification accuracy)
802
+ drops quite significantly (compare V3 to V1 and the base in either of the problems) since the model can no
803
+ longer account for aleatoric uncertainties. By comparing V1 to the base model in either of the problems in
804
+ Table 1 Results of the ablation study: We evaluate the effect of removing individual components of Pro-NDF from it by reporting
805
+ the MSE and IS on unseen HF data. All models are trained as discussed in Section 3 (e.g., all models benefit from automatic
806
+ hyperparameter tuning). For both MSE and IS, lower numbers indicate better performance. The ticks indicate whether a component
807
+ is used. The acronyms and symbols are defined as: HF: high-fidelity, LF1: low-fidelity 1, LF2: low-fidelity 2, LF3: low-fidelity
808
+ 3, LIS: negatively oriented interval score term in the loss function of Equation (19), PB1: probabilistic Block 1, PO: probabilistic
809
+ output.
810
+ Problem
811
+ Model
812
+ Version
813
+ Input data
814
+ Components
815
+ MSE
816
+ IS
817
+ HF
818
+ LF1
819
+ LF2
820
+ LF3
821
+ LIS
822
+ PB1
823
+ PO
824
+ Rational
825
+ Base
826
+
827
+
828
+
829
+
830
+
831
+
832
+
833
+ 1.65 × 10−3
834
+ 0.22
835
+ V1
836
+
837
+
838
+
839
+
840
+ 
841
+
842
+
843
+ 3.31 × 10−3
844
+ 0.26
845
+ V2
846
+
847
+
848
+
849
+
850
+
851
+ 
852
+
853
+ 2.83 × 10−3
854
+ 0.24
855
+ V3
856
+
857
+
858
+
859
+
860
+ 
861
+
862
+ 
863
+ 2.01 × 10−3
864
+ 0.40
865
+ V4
866
+
867
+
868
+
869
+ 
870
+
871
+
872
+
873
+ 5.68 × 10−3
874
+ 0.98
875
+ DNS-ROM
876
+ Base
877
+
878
+
879
+
880
+
881
+
882
+
883
+
884
+ 8.99 × 109
885
+ 4.84 × 105
886
+ V1
887
+
888
+
889
+
890
+
891
+ 
892
+
893
+
894
+ 1.23 × 1010
895
+ 5.89 × 105
896
+ V2
897
+
898
+
899
+
900
+
901
+
902
+ 
903
+
904
+ 2.09 × 1010
905
+ 1.11 × 106
906
+ V3
907
+
908
+
909
+
910
+
911
+ 
912
+
913
+ 
914
+ 1.70 × 1010
915
+ 4.07 × 106
916
+ V4
917
+
918
+
919
+
920
+ 
921
+
922
+
923
+
924
+ 8.17 × 109
925
+ 5.06 × 105
926
+ 13
927
+
928
+ Table 1 we see that for a model with a probabilistic output the optimal performance is obtained when LIS
929
+ is used in the loss. That is, leveraging the IS in training improves both mean prediction and uncertainty
930
+ quantification (measured via MSE and IS, respectively).
931
+ In both problems, evaluating V1 through V3 against one another indicates that there is a trade-off between
932
+ MSE and IS. That is, versions that perform well in terms of MSE, do not generally provide the smallest IS.
933
+ However, when all of these components are included in Pro-NDF (see the base model in Table 1 for either
934
+ of the problems), both MSE and IS are reduced. This improvement is due to the fact that the priors and
935
+ LIS effectively regularize the model whose learning capacity is substantially increased by the probabilistic
936
+ natures of Block 1 and the output.
937
+ The probabilistic fidelity manifold (i.e., output of Block 1) provides an intuitive and visualizable tools
938
+ to learn the similarity/discrepancy among the sources. Hence, once we fit the base model in each problem,
939
+ we analyze the learned fidelity manifold to determine the LF source that has the least similarity to the HF
940
+ source, see Figure 4(a) and Figure 5(a). Based on the distances in the fidelity manifold of each problem, we
941
+ conclude that the third LF source is the least correlated one with the HF source in both cases. We exclude
942
+ this source and its data from MF modeling and refit the base model to the rest of the data, see version V4
943
+ for both problems.
944
+ One of the major outputs of Pro-NDF is the learned fidelity manifold which indicates which LF source
945
+ has the highest discrepancy compared to the HF source. Hence, after training a Pro-NDF and inversely
946
+ identifying the least accurate LF source, we can build another Pro-NDF while excluding the data from this
947
+ source. In the Rational problem, omitting the lowest-fidelity source results in much worse MSE and IS.
948
+ We explain this observation by noting that this problem has an extremely small number of HF samples and
949
+ therefore it is important to judiciously use all available data in training. However, in the DNS-ROM problem
950
+ version V4 achieves the best MSE while Pro-NDF with all components achieves the best IS and second best
951
+ MSE (compare base to V4 in Table 1). We explain this trend by noting that the size of the training data in the
952
+ DNS-ROM problem is significantly higher than that in the Rational problem. Therefore, omitting a highly
953
+ inaccurate data source improves mean prediction accuracy for the HF source in the DNS-ROM problem
954
+ since the input-output relationships learned by Block 3 for the different sources are more similar. Omitting
955
+ data from this source also increases the ratio of HF data available in the unified data set which helps in
956
+ learning the HF behavior. However, using all data sources provides Pro-NDF with more information which
957
+ improves the uncertainty quantification capability and hence a smaller error on IS.
958
+ 4.2
959
+ Analytic Problems
960
+ In this section, we validate our approach against LMGP and existing NN-based technologies for the
961
+ Rational, Wing-weight, and Borehole examples detailed in Appendix A. These examples cover a wider range
962
+ of input dimensionality, number of sources, and model form errors (e.g., additive and nonlinear biases).
963
+ Similar to the previous section, we use MSE and IS on HF test data as the performance metrics. The input
964
+ space of these three examples does not have categorical features and hence both Pro-NDF and LMGP learn
965
+ a single manifold. We visualize the fidelity manifolds learned by Pro-NDF and LMGP to examine these
966
+ models’ ability in inversely learning the relationships among the data sources (note that the LF sources are
967
+ not ordered based on their accuracy). We highlight that, unlike Pro-NDF, the fidelity manifold of LMGP is
968
+ not probabilistic and hence each data source is encoded with a single point in the manifold.
969
+ The results for each approach on each problem are summarized in Table 2 and demonstrate that the
970
+ probabilistic approaches, i.e., LMGP and Pro-NDF, significantly outperform the deterministic approaches
971
+ in all problems. The FFNN approach performs significantly worse than LMGP and sometimes approaches
972
+ the performance of Pro-NDF in MSE, while the SMF approach shows poor performance for all problems.
973
+ 14
974
+
975
+ We explain SMF’s poor performance by noting that, as explained in Appendix C, hierarchical MF techniques
976
+ such as SMF heavily rely on the knowledge of fidelity levels to process the data sources sequentially in the
977
+ order of increasing accuracy. Since we assume in the problem setup that we only know which source has
978
+ the highest fidelity and do not know the relative fidelity levels of the LF sources, the LF sources are ordered
979
+ sub-optimally in the SMF approach which leads to a very poor prediction accuracy. The FFNN approach, by
980
+ contrast, does not rely on the knowledge of fidelity levels and as such performs better than SMF. However,
981
+ its performance lags behind that of LMGP and Pro-NDF because the architecture is not designed with MF
982
+ problems in mind.
983
+ LMGP, which is considered as our gold standard for MF problems with small data, outperforms Pro-NDF
984
+ in both MSE and IS for the Wing-weight and Borehole problems, and in IS for the Rational problem. The
985
+ Rational problem is simultaneously the most data deficient and least complex of the problems examined in
986
+ this paper: as shown in Table 4, there are 4 data sources with only one being especially inaccurate, the input
987
+ and output are both 1D, and there are only 5 training samples provided for the HF source. Pro-NDF and
988
+ LMGP are well suited to tackle this problem as they both perform well for low-dimensional problems with
989
+ simple underlying functional forms and well-correlated sources, and as such they have similar performance.
990
+ Figure 3(a,b) reveals that LMGP captures all of the training points in a narrower 95% PI compared to Pro-
991
+ NDFwhich explains LMGP’s lower IS in Table 2. However, Pro-NDF shows a better performance for this
992
+ problem in terms of mean prediction accuracy and it also has a higher degree of agreement with the true
993
+ function in extrapolation while LMGP reverts to its mean. We therefore conclude that both methods perform
994
+ on par on the Rational problem.
995
+ The learned fidelity manifold of Pro-NDF for the Rational problem is shown in Figure 4(a) which indicates
996
+ that the network has inversely learned the true relationship between the data sources as yl1 and yl2 are
997
+ encoded close to yh while yl3 is quite far from yh. These relative distances are proportional to the accuracy
998
+ of the LF sources with respect to the HF source which are reported in Table 4. The fidelity manifold also
999
+ shows a high spread in the distributions of the realizations for individual sources which indicates either a
1000
+ poor fit to the data or a lack of training samples. In this case, we attribute this spread to lack of data since
1001
+ the performance in IS and MSE is quite good.
1002
+ The Wing-weight and Borehole problems are both high-dimensional problems with relatively complex
1003
+ underlying functional forms and small amounts of data. LMGP is very well suited to tackle this type of
1004
+ problem[30] because the number of its hyperparameters scales much better than NN-based approaches such
1005
+ as Pro-NDF . Accordingly, we observe that LMGP achieves lower MSE and IS for both examples.
1006
+ Comparing the performance of Pro-NDF across the two high-dimensional problems, we observe that
1007
+ it performs much better on the Borehole problem. Examining the fidelity manifold learned by Pro-NDF
1008
+ and LMGP for the Wing-weight problem, see Figure 4(c) and Figure 4(d), respectively, we see that both ap-
1009
+ Table 2 Results on the analytic examples for different models: We test the performance of Pro-NDF against LMGP and existing
1010
+ NN-based technologies for the Rational, Wing-weight and Borehole examples detailed in Table 4. The training procedure for Pro-
1011
+ NDF, LMGP, FFNN and SMF is discussed in Section 3, Section 2.1, Appendix C.1 and, Appendix C.2 respectively. We report the
1012
+ MSE and IS on unseen HF data.
1013
+ Rational
1014
+ Wing-weight
1015
+ Borehole
1016
+ Model
1017
+ MSE
1018
+ IS
1019
+ MSE
1020
+ IS
1021
+ MSE
1022
+ IS
1023
+ Pro-NDF
1024
+ 1.65 × 10−3
1025
+ 0.22
1026
+ 59.14
1027
+ 37.09
1028
+ 14.94
1029
+ 19.24
1030
+ LMGP
1031
+ 1.70 × 10−3
1032
+ 0.20
1033
+ 37.97
1034
+ 29.70
1035
+ 12.69
1036
+ 17.57
1037
+ FFNN
1038
+ 2.95 × 10−3
1039
+ -
1040
+ 64.23
1041
+ -
1042
+ 21.16
1043
+ -
1044
+ SMF
1045
+ 8.08 × 10−3
1046
+ -
1047
+ 542.74
1048
+ -
1049
+ 172.87
1050
+ -
1051
+ 15
1052
+
1053
+ Figure 3 High-fidelity emulation on the Rational problem: Pro-NDF and LMGP approaches produce similar results in terms of
1054
+ mean prediction in interpolation. However, LMGP has a narrower 95% PI and reverts to its mean in extrapolation.
1055
+ proaches accurately determine the relationship between the sources as they agree with the RRMSEs reported
1056
+ in Table 4. Specifically, yl1 is closer to yh than yl2, which in turn is closer than yl3. Notably, both LMGP
1057
+ and Pro-NDF have the same relative ordering and positioning of the sources, i.e., (1) the mean position of all
1058
+ sources lies on an axis, and (2) yl1 is in the opposite direction relative to yh from yl2 and yl3. This reinforces
1059
+ our earlier assertion in Section 3: the positions of the sources in the fidelity manifold learned by Pro-NDF
1060
+ reflect correlations between the data sources. However, the relative distances between the LF sources in the
1061
+ latent space found by LMGP more accurately represents the true relationships between the sources because
1062
+ the position for yl3 is much more distant from yh than encoded positions of the other sources.
1063
+ In Figure 4(a) we observe a large spread in the realizations (i.e., the posterior distributions in the fidelity
1064
+ manifold are quite wide) which partially explains the poor12 performance in this problem. We attribute this
1065
+ performance level to the relative accuracy of the data sources since only one source, yl1, is at all accurate
1066
+ with respect to yh while the other LF sources are quite inaccurate. LMGP’s performance is not inherently
1067
+ hampered by including poorly correlated sources in the data fusion problem [30] since its performance,
1068
+ upon successful optimization, is at worse on par with fitting separate GPs to each source. By contrast, since
1069
+ Pro-NDF’s Block 3 is responsible for learning the relations between all sources and uses weight sharing,
1070
+ including especially inaccurate sources leads to relatively poor performance as shown in 4.1.
1071
+ The Borehole problem has five total sources where two LF sources (yl3 and yl4) are accurate while two
1072
+ LF sources (yl1 and yl2) are quite inaccurate with respect to yh. Since there are more total data available
1073
+ compared to the Wing-weight problem due to the additional data source, and since there are more high-
1074
+ accuracy LF sources, we observe that the spread of the realizations in the fidelity manifold of Pro-NDF is
1075
+ much smaller than in the Wing-weight, see Figure 4(e). This narrow spread indicates that a good fit has been
1076
+ achieved. We again observe that the relative directions and distances of the LF sources from yh are nearly
1077
+ identical in the manifolds of Pro-NDF and LMGP, see Figure 4(f), and that both methods have correctly
1078
+ identified the relationships between the sources, see Table 4. Based on these observations, it is no surprise
1079
+ that Pro-NDF achieves very good performance and nearly matches LMGP in terms of MSE and IS.
1080
+ 12Poor with respect to LMGP. The performance of Pro-NDF is still much better than the other two NN-based approaches.
1081
+ 16
1082
+
1083
+ Figure 4 Pro-NDF and LMGP fidelity manifolds for the analytic problems: (a) Pro-NDF for Rational problem, (b) LMGP
1084
+ for Rational problem, (c) Pro-NDF for Wing-weight problem, (d) LMGP for Wing-weight problem, (e) Pro-NDF for Borehole
1085
+ problem, and (f) LMGP for Borehole problem.
1086
+ 4.3
1087
+ Real-World Problems
1088
+ In this section, we validate our approach against LMGP and existing NN-based technologies on two
1089
+ engineering applications which are detailed in Appendix B. We again use MSE and IS on HF test data
1090
+ as our performance metrics and examine the manifolds learned by Pro-NDF and LMGP. In both of these
1091
+ applications, the input space has categorical features (so Pro-NDF and LMGP each build two manifolds)
1092
+ 17
1093
+
1094
+ Figure 5 Pro-NDF and LMGP fidelity manifolds for the real-world problems: (a) Pro-NDF for DNS-ROM data set, (b) LMGP
1095
+ for DNS-ROM data set, (c) Pro-NDF for HOIP data set, (d) LMGP for HOIP data set.
1096
+ and we do not know the underlying relationships between the data sources.
1097
+ The results for each approach on each problem are summarized in Table 3 and demonstrate that the prob-
1098
+ abilistic approaches again significantly outperform the deterministic ones. The FFNN approach performs
1099
+ nearly as well as LMGP and Pro-NDF in the DNS-ROM problem, but lags behing Pro-NDF and LMGP in
1100
+ the HOIP problem in terms of MSE. The SMF approach shows poor performance for both problems for the
1101
+ Table 3 Results on the real-world examples for different models: We test the performance of Pro-NDF against LMGP and
1102
+ existing NN-based technologies for the DNS-ROM and HOIP data sets detailed in Appendix B. We report the MSE and IS on
1103
+ unseen HF data.
1104
+ DNS-ROM
1105
+ HOIP
1106
+ Model
1107
+ MSE
1108
+ IS
1109
+ MSE
1110
+ IS
1111
+ Pro-NDF
1112
+ 8.99 × 109
1113
+ 4.84 × 105
1114
+ 14.16
1115
+ 14.84
1116
+ LMGP
1117
+ 9.66 × 109
1118
+ 6.60 × 105
1119
+ 14.33
1120
+ 20.08
1121
+ FFNN
1122
+ 1.12 × 1010
1123
+ -
1124
+ 21.55
1125
+ -
1126
+ SMF
1127
+ 1.81 × 1010
1128
+ -
1129
+ 28.09
1130
+ -
1131
+ 18
1132
+
1133
+ Figure 6 Pro-NDF categorical manifold for the HOIP problem: The combination of the categorical variables’ levels are color-
1134
+ coded based on: (a) the levels of tc
1135
+ 1, (b) the levels of tc
1136
+ 2, (c) the levels of tc
1137
+ 3, (d) the average output value.
1138
+ same reasons provided in Section 4.2. Notably, Pro-NDF outperforms LMGP for both problems in terms
1139
+ of both metrics which we partially explain by noting that there are much more data are available in these
1140
+ real-world problems compared to the analytical examples of Appendix A. Being an NN-based approach,
1141
+ Pro-NDF scales very well with additional data while the performance of LMGP has diminishing returns and
1142
+ eventually plateaus (recall that the latent map and kernel of LMGP are not tuned which contribute to this
1143
+ plateauing performance).
1144
+ As shown in Figure 5(a-b), the fidelity manifolds learned by Pro-NDF and LMGP for the DNS-ROM
1145
+ problem are nearly analogous as the relative distances are quite similar. However, LMGP finds all sources to
1146
+ be on the diagonal axis while Pro-NDF learns a more nuanced relationship between the sources, which may
1147
+ contribute to its superior performance. We also observe that the spreads in the individual realizations for
1148
+ each point are fairly tight, which indicates that Pro-NDF is able to learn the relations between the sources
1149
+ reasonably well and, accordingly, provide good performance in terms of MSE and IS.
1150
+ The HOIP problem has three categorical inputs with 10, 3, and 16 levels and as such Pro-NDF uses two
1151
+ 19
1152
+
1153
+ Figure 7 LMGP categorical manifold for the HOIP problem: The combination of the categorical variables’ levels are color-
1154
+ coded based on: (a) the levels of tc
1155
+ 1, (b) the levels of tc
1156
+ 2, (c) the levels of tc
1157
+ 3, (d) the average output value.
1158
+ separate latent transformations (one for the data source and the other for the three categorical variables) that
1159
+ correspond to Blocks 1 and 2 in Figure 1. The learned categorical manifolds for Pro-NDF and LMGP are
1160
+ shown in Figures 6 and 7 where the zc is visualized four times as the combinations of the categorical vari-
1161
+ ables are color-coded based on the levels of each of the three categorical variables and based on the average
1162
+ value of the output 13. Since there are no numerical features, the combined inputs are u = [ζ(ts), ζ(tc)] and
1163
+ ν = [zs, zc] are the inputs to Block 3 of Pro-NDF . Recall that we only use a BNN in Block 1 and as such
1164
+ we show only one realization for the manifold for Pro-NDF that encodes the categorical variables.
1165
+ Pro-NDF outperforms LMGP in terms of MSE by a small margin and IS by a significant margin for this
1166
+ problem which we attribute to the size of the data sets. Pro-NDF is able to leverage these additional data
1167
+ much more readily than LMGP which only uses simple mapping functions to handle categorical variables
1168
+ ts and tc. Pro-NDF also finds fairly tight spreads in the probabilistic fidelity manifold, see Figure 5(c);
1169
+ indicating that it has high certainty in its outputs and that we should expect good performance. We note
1170
+ 13This average is obtained using the entire data set including both the training and test data
1171
1172
+
1173
+ that all sources are found to lie roughly on one axis, spaced approximately evenly from each other, which
1174
+ may indicate that Pro-NDF has failed to learn the more nuanced relationships between the sources. Equally
1175
+ likely, however, is that the relationships between the sources are simple enough to be represented in this
1176
+ way; since we do not know the underlying functional forms for this problem, we cannot give a definitive
1177
+ answer.
1178
+ We can also glean some information about the relationships between the categorical variable levels and
1179
+ their impact on the output by examining the corresponding manifolds in Figures 6 and 7. Figure 6(c) shows
1180
+ that Pro-NDF finds distinct clusters for all 16 levels of t^c_3, which indicates that distinguishing between the
+ levels of t^c_3 is important to learning the output. Similarly, the levels of t^c_2 are distinguishable in Figure 6(b)
+ as t^c_2 affects the response value. By contrast, Figure 6(a) shows no apparent trend between the 10 levels of t^c_1,
+ which implies that t^c_1 has little effect on the output as Pro-NDF does not learn to distinguish the levels from
1190
+ each other. In contrast, the manifold found by LMGP, shown in Figure 7, exhibits much less distinct clustering
1191
+ for each of the three categorical variables, which may help explain why it achieves a lower IS than
+ Pro-NDF. Finally, we examine whether the latent positions for the categorical combinations are influenced by the
1193
+ average output value in Figure 6(d) and Figure 7(d). The manifold for Pro-NDF shows a clear trend of the
1194
+ average output value increasing as the latent points move from the bottom-left of the space to the top-right,
1195
+ while for LMGP there is no obvious trend. Based on these manifolds, Pro-NDF shows superior ability to
1196
+ discern relationships between the categorical combinations and between levels of categorical variables.
1197
+ 5 Conclusion
1199
+ In this paper, we introduce Pro-NDF for data fusion under uncertainty. Pro-NDF is based on a multi-block
1200
+ NN where each block is designed to take on specific tasks for MF modeling that arise in typical engineering
1201
+ applications. One of these blocks is probabilistic, and its visualizable output can be used to detect LF sources
+ with large model form errors. The final output of Pro-NDF is also probabilistic, which enables it not only
+ to quantify aleatoric uncertainties but also to leverage strictly proper scoring rules during training.
1204
+ We validate each of the key components of Pro-NDF by performing an ablation study on an analytic and a
1205
+ real-world example. We also demonstrate that Pro-NDF outperforms other NN-based data fusion approaches
1206
+ by a large margin. Moreover, Pro-NDF performs on par with LMGP in low-dimensional cases with small data
1207
+ sets and slightly lags behind LMGP (a competing GP-based approach) in high-dimensional examples with
1208
+ very small data sets. However, as the size of the training data increases, Pro-NDF scales better than LMGP
1209
+ and provides smaller errors. In these studies, we test the performance on unseen HF data but note that
1210
+ Pro-NDF builds an MF emulator that probabilistically surrogates all the data sources simultaneously.
1211
+ A particularly useful output of Pro-NDF is its learnt fidelity manifold, which encodes source-wise simi-
+ larities/discrepancies. While the learnt distances in this manifold are not directly linked to the correlation between
+ the sources, we observe that the fidelity manifolds of Pro-NDF and LMGP look quite similar in our stud-
+ ies. Since the fidelity manifold of LMGP is embedded in its kernel and hence indicates the correlations,
+ we believe the fidelity manifold of Pro-NDF also estimates a scaled version of correlation. An added benefit of Pro-
+ NDF’s fidelity manifold is that it is probabilistic: wide distributions can indicate that Pro-NDF is unable
+ to learn the relation between the data sources. Reducing this uncertainty via domain knowledge (especially
+ qualitative information in engineering applications) is a future direction that we plan to investigate.
1219
+ The performance of any data fusion approach (including ours) can drop if there are one or more very
1220
+ inaccurate LF sources. With Pro-NDF, the learned fidelity manifold can be used to identify and discard such
1221
+ sources and then retrain Pro-NDF anew. This process can be repeated until all LF sources are encoded close
1222
+ to the HF source in the fidelity manifold. This iterative approach is, however, quite inefficient so we plan
1223
+ to develop an automated mechanism that perhaps leverages the fidelity manifold to adjust the loss function
1224
1225
+
1226
+ and, in turn, prevent Pro-NDF from learning the highly inaccurate LF sources.
1227
+ 6 Acknowledgement
1229
+ We appreciate the support from the National Science Foundation (award numbers OAC-2211908 and OAC-
1230
+ 2103708) and the Early Career Faculty grant from NASA’s Space Technology Research Grants Program
1231
+ (award number 80NSSC21K1809).
1232
+ Appendices
1233
+ We provide the formulations of the analytic problems in Appendix A, the background and details of the
1234
+ real-world problems in Appendix B, and the methodology and details of the FFNN and SMF methods in
1235
+ Appendix C.
1236
+ A Table of Analytic Examples
1238
+ Table 4 details the analytic functions used for the examples covered in Section 4. For each multi-fidelity
1239
+ problem, we calculate the accuracy of each LF source with respect to the HF source via relative root mean
1240
+ squared error (RRMSE):
1241
+ RRMSE = \sqrt{ \frac{(y_l - y_h)^T (y_l - y_h)}{10000 \times \mathrm{var}(y_h)} }
+ (A-1)
1246
+ where y_l and y_h are 10000×1 arrays of outputs sampled via a Sobol sequence from the LF and HF
1247
+ sources, respectively. We use the same sample locations and outputs as our test data when evaluating MSE
1248
+ and IS in Sections 4.1 and 4.2.
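+ For illustration, Equation (A-1) can be computed with the following minimal numpy sketch (the function
+ and variable names are ours for illustration and are not taken from the original implementation):
+ import numpy as np
+
+ def rrmse(y_l: np.ndarray, y_h: np.ndarray) -> float:
+     # Equation (A-1): relative root mean squared error of an LF source with
+     # respect to the HF source, evaluated on 10000 shared Sobol samples.
+     diff = y_l - y_h
+     return float(np.sqrt(diff @ diff / (10000.0 * np.var(y_h))))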
1249
+ B Background on Real-World Examples
1251
+ In the DNS-ROM problem, the goal is to predict the toughness of a multiscale metallic component with
1252
+ spatially varying porosity by combining four sources of data: (1) high-fidelity: direct numerical simula-
1253
+ tions (DNS) and (2, 3, 4) low-fidelity: a reduced-order model (ROM) with three different numbers of clusters
1254
+ (800, 1600, 3200) which balance accuracy against computational costs. The data sets have six numerical
1255
+ inputs that include pore volume fraction, number of pores, pore aspect ratio, average nearest neighbor dis-
1256
+ tance among the pores, evolutionary rate parameter, and critical effective plastic strain (the last two inputs
1257
+ govern the damage response of the material under load). The more clusters are used in the ROM, the
+ closer its results are to those of DNS, at the expense of a higher computational burden. The data
1259
+ set contains nh = 70, nl1 = 110, nl2 = 170, nl3 = 250 samples. We use 80% of the available samples for
1260
+ each source for training and 20% for testing. For further details on this data set, we refer the reader to [2].
1261
+ In the HOIP problem, the goal is to predict the inter-molecular binding energy in hybrid organic-inorganic
1262
+ perovskite (HOIP) crystals. The data set has three categorical inputs with l1 = 10, l2 = 3, and l3 = 16
1263
+ levels which correspond to the elements present in each crystal. There are one HF and three LF data sets
1264
+ with unknown levels of fidelity and nh = 480, nl1 = 480, nl2 = 179, nl3 = 240. We use 90% of the
1265
+ available samples for each source for training and 10% for testing.
1266
+ C Other Multi-Fidelity NN-Based Approaches
1268
1269
+
1270
+ Table 4 Table of analytic functions: The analytic examples have different input dimensionality, number of sources, and forms of
+ model error. n denotes the number of samples, σ² is the variance of the noise, and RRMSE is the relative root mean squared error
+ of an LF source with respect to an HF source, see Equation (A-1).
+ Name | Source ID | Formulation | n | σ² | RRMSE
+ Rational | y_h(x) | 1 / (0.1x³ + x² + x + 1) | 5 | 0.001 | -
+ Rational | y_l1(x) | 1 / (0.2x³ + x² + x + 1) | 30 | 0.001 | 0.23
+ Rational | y_l2(x) | 1 / (0·x³ + x² + x + 1) | 30 | 0.001 | 0.15
+ Rational | y_l3(x) | 1 / (0·x³ + x² + 0·x + 1) | 30 | 0.001 | 0.73
+ Wing Weight | y_h(x) | 0.036 S_ω^0.758 W_fw^0.0035 (A / cos²(Λ))^0.6 q^0.006 λ^0.04 (100 t_c / cos(Λ))^(−0.3) (N_z W_dg)^0.49 + S_ω W_p | 15 | 25 | -
+ Wing Weight | y_l1(x) | 0.036 S_ω^0.758 W_fw^0.0035 (A / cos²(Λ))^0.6 q^0.006 λ^0.04 (100 t_c / cos(Λ))^(−0.3) (N_z W_dg)^0.49 + 1 × W_p | 50 | 25 | 0.20
+ Wing Weight | y_l2(x) | 0.036 S_ω^0.8 W_fw^0.0035 (A / cos²(Λ))^0.6 q^0.006 λ^0.04 (100 t_c / cos(Λ))^(−0.3) (N_z W_dg)^0.49 + 1 × W_p | 50 | 25 | 1.14
+ Wing Weight | y_l3(x) | 0.036 S_ω^0.9 W_fw^0.0035 (A / cos²(Λ))^0.6 q^0.006 λ^0.04 (100 t_c / cos(Λ))^(−0.3) (N_z W_dg)^0.49 + 0 × W_p | 50 | 25 | 5.75
+ Borehole | y_h(x) | 2π T_u (H_u − H_l) / { ln(r/r_w) [1 + 2 L T_u / (ln(r/r_w) r_w² k_w) + T_u/T_l] } | 15 | 6.25 | -
+ Borehole | y_l1(x) | 2π T_u (H_u − 0.8 H_l) / { ln(r/r_w) [1 + 1 L T_u / (ln(r/r_w) r_w² k_w) + T_u/T_l] } | 50 | 6.25 | 3.67
+ Borehole | y_l2(x) | 2π T_u (H_u − 3 H_l) / { ln(r/r_w) [1 + 8 L T_u / (ln(r/r_w) r_w² k_w) + 0.75 T_u/T_l] } | 50 | 6.25 | 3.73
+ Borehole | y_l3(x) | 2π T_u (1.1 H_u − H_l) / { ln(4r/r_w) [1 + 3 L T_u / (ln(r/r_w) r_w² k_w) + T_u/T_l] } | 50 | 6.25 | 0.38
+ Borehole | y_l4(x) | 2π T_u (1.05 H_u − H_l) / { ln(2r/r_w) [1 + 2 L T_u / (ln(r/r_w) r_w² k_w) + T_u/T_l] } | 50 | 6.25 | 0.19
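+ As a concrete example, the four sources of the Rational problem in Table 4 can be coded as follows
+ (a minimal sketch with names of our choosing; the noise variance of 0.001 comes from the table):
+ import numpy as np
+
+ def rational_sources(x, noise_var=0.001, rng=np.random.default_rng(0)):
+     # HF and LF sources of the Rational example; each output is corrupted
+     # with zero-mean Gaussian noise of variance noise_var.
+     noise = lambda: rng.normal(0.0, np.sqrt(noise_var), np.shape(x))
+     y_h = 1.0 / (0.1 * x**3 + x**2 + x + 1) + noise()
+     y_l1 = 1.0 / (0.2 * x**3 + x**2 + x + 1) + noise()
+     y_l2 = 1.0 / (x**2 + x + 1) + noise()  # cubic coefficient set to 0
+     y_l3 = 1.0 / (x**2 + 1) + noise()      # cubic and linear terms set to 0
+     return y_h, y_l1, y_l2, y_l3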
1461
+ C.1 Feedforward Neural Networks
1463
+ As depicted in Figure 8, for MF modeling via an FFNN we simply feed the numerical inputs x, the prior
1464
+ representation of the source indicator ζ(t^s) and the prior representation of the categorical inputs ζ(t^c) into
1465
+ the FFNN to produce the output. This approach has two clear disadvantages with respect to Pro-NDF: (1)
1466
+ it does not provide a tool such as the fidelity manifold of Pro-NDF that provides a direct visualization of
1467
+ the correlation between the data sources, and (2) it has a fully deterministic setting, which enables neither
+ uncertainty quantification nor the use of a loss function based on proper scoring rules. In particular, we use
1469
+ [Figure 8 diagram: the numerical inputs, source indicator, and categorical inputs of the multi-fidelity data are concatenated (C) and fed to a feedforward neural network that maps them to the targets.]
1483
+ Figure 8 FFNN for multi-fidelity modeling: Like Pro-NDF, this approach allows an arbitrary number of sources by ap-
+ pending a source indicator variable to each data set and concatenating them. The FFNN maps the numerical inputs x, the prior
+ representation of the source indicator ζ(t^s), and the categorical inputs ζ(t^c) to the output.
1486
+ the following loss function for training the FFNN:
1487
+ L = L_MSE + β L_2
+ (C-2)
+ where L_MSE is the mean squared error of the predictions and L_2 is the L2 regularization term:
+ L_MSE = \frac{1}{N} \sum_{i=1}^{N} (y^{(i)} - \hat{y}^{(i)})^2
+ (C-3)
+ L_2 = |θ|^2
+ (C-4)
1499
+ We employ Adam as the optimizer and use RayTune [55] and Hyperopt with five-fold cross-validation to
1500
+ find the optimum architecture and hyperparameters which include the learning rate, regularization parameter
1501
+ β, and batch size N. For further details on implementation, please see our GitLab repository.
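+ A minimal PyTorch sketch of the loss in Equations (C-2) to (C-4) is given below (model and beta are
+ placeholders; this sketch is ours and not the released implementation):
+ import torch
+
+ def ffnn_loss(y, y_hat, model, beta):
+     # L = L_MSE + beta * L2, Equations (C-2)-(C-4).
+     l_mse = torch.mean((y - y_hat) ** 2)
+     l2 = sum(torch.sum(p ** 2) for p in model.parameters())  # |theta|^2
+     return l_mse + beta * l2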
1502
+ C.2 Sequential Multi-Fidelity Networks
1504
+ Unlike the other methods presented in this paper, multi-fidelity modeling via SMF requires training a
1505
+ separate surrogate for each data source. As depicted in Figure 9, individual FFNNs are trained for each
1506
+ source in the sequence that ends with the HF source. After a surrogate is trained for a data source, its
1507
+ outputs are used to augment the inputs of the next model in the sequence and hence the resulting input-
1508
+ output relationships are:
1509
+ [Figure 9 diagram: three stacked feedforward neural networks, one per data source (two LF sources followed by the HF source); each network maps the numerical and categorical inputs of its source to that source's targets, with the outputs of the previous network appended to the inputs of the next.]
1530
+ Figure 9 Sequential Multi-Fidelity (SMF) Networks: SMF is a hierarchical approach that relies on sequentially training a model
+ (e.g., an FFNN) for each data source in ascending order of fidelity. The inputs of a model are augmented with the
+ outputs of the previous one until reaching the model of the HF source.
1533
+ ŷ_s1 = f̂_s1(u_s1)
+ ŷ_s2 = f̂_s2(u_s2, ŷ_s1(u_s2))
+ ···
+ ŷ_s_{ds} = f̂_s_{ds}(u_s_{ds}, ŷ_s_{ds−1}(u_s_{ds}))
+ (C-5)
1538
+ where ŷ_si is the output of the FFNN, f̂_si is the mapping defined by the FFNN, u_si is the combined numeric
+ and categorical input u = [x, ζ(t^c)], and i denotes the data source, with i = ds being the HF source.
1540
+ Each individual FFNN employs the same loss function and optimizer as in the FFNN method presented in
1541
+ Appendix C.1.
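+ The sequential scheme of Equation (C-5) can be sketched as follows; train_ffnn is a hypothetical helper
+ (not from the original code) that fits a single FFNN and returns it as a callable:
+ import torch
+
+ def train_smf(sources, train_ffnn):
+     # sources: list of (u, y) pairs ordered by fidelity with the HF source
+     # last. Returns the fitted surrogates and a chained HF predictor.
+     models, chain = [], None
+     for u, y in sources:
+         inputs = u if chain is None else torch.cat([u, chain(u)], dim=-1)
+         f = train_ffnn(inputs, y)
+         models.append(f)
+         prev = chain
+         # Equation (C-5): augment new inputs with the previous chain's output.
+         def chain(u_new, f=f, prev=prev):
+             if prev is None:
+                 return f(u_new)
+             return f(torch.cat([u_new, prev(u_new)], dim=-1))
+     return models, chain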
1542
+ Unlike the other three MF methods we study in this paper, the SMF approach is highly sensitive to the
1543
+ ordering of the data sources in the sequence. In the case that the fidelity levels are known, they are assigned
1544
+ in the order of increasing fidelity, i.e., source 1 is the least accurate LF source while source ds−1 is the most
1545
1547
+ accurate. With this ordering, the SMF approach leverages the entire data set to achieve good HF prediction
1548
+ accuracy by minimizing the complexity of the mapping learned by each successive FFNN. However, in
1549
+ the case that the fidelities are not known, the order of the LF sources is assigned randomly. In this case,
1550
+ the mappings of the successive FFNNs no longer monotonically approach that of the HF function, and
1551
+ the SMF approach is unable to properly leverage the additional LF data. In this paper, we assume that the
1552
+ fidelity levels are unknown and therefore assign the data source ordering randomly when using SMF.
1553
+ Similar to the FFNN approach, the SMF approach does not provide a latent mapping and is entirely deter-
1554
+ ministic. Like all hierarchical approaches, it also requires knowledge of fidelity levels for good performance.
1555
+ These factors lead to a marked disadvantage in the context of the problems examined in this paper, and we
1556
+ therefore expect the SMF method to perform poorly.
1557
+ We use RayTune and Hyperopt with five-fold cross-validation to find the optimum architecture and hyper-
1558
+ parameters for each FFNN in the SMF method. Namely, we tune the learning rate, regularization parameter
1559
+ β, and batch size N. We also tune an additional parameter that determines whether to use the numeric and
1560
+ categorical inputs u in the final FFNN, since the mapping may be simple enough to learn from just the
1561
+ previous FFNN outputs in the case that the last LF source is highly accurate. For further details on imple-
1562
+ mentation, please see our GitLab repository.
1563
1564
+
1565
+ References
1566
+ [1]
1567
+ Ghanshyam Pilania, James E Gubernatis, and Turab Lookman. “Multi-fidelity machine learning mod-
1568
+ els for accurate bandgap predictions of solids”. In: Computational Materials Science 129 (2017),
1569
+ pp. 156–163.
1570
+ [2]
1571
+ Shiguang Deng, Carlos Mora, Diran Apelian, and Ramin Bostanabad. “Data-Driven Calibration of
1572
+ Multi-Fidelity Multiscale Fracture Models”. In: arXiv preprint arXiv:2205.12157 (2022).
1573
+ [3]
1574
+ Xiaotong Liu, Pierre-Paul De Breuck, Linghui Wang, and Gian-Marco Rignanese. “A simple de-
1575
+ noising approach to exploit multi-fidelity data for machine learning materials properties”. In: arXiv
1576
+ preprint arXiv:2204.10430 (2022).
1577
+ [4]
1578
+ Souvik Chakraborty, Tanmoy Chatterjee, Rajib Chowdhury, and Sondipon Adhikari. “A surrogate
1579
+ based multi-fidelity approach for robust design optimization”. In: Applied Mathematical Modelling
1580
+ 47 (2017), pp. 726–744.
1581
+ [5]
1582
+ Péter Zénó Korondi, Mariapia Marchi, Lucia Parussini, and Carlo Poloni. “Multi-fidelity design opti-
1583
+ misation strategy under uncertainty with limited computational budget”. In: Optimization and Engi-
1584
+ neering 22.2 (2021), pp. 1039–1064.
1585
+ [6]
1586
+ Ghina N Absi and Sankaran Mahadevan. “Multi-fidelity approach to dynamics model calibration”.
1587
+ In: Mechanical Systems and Signal Processing 68 (2016), pp. 189–206.
1588
+ [7]
1589
+ Sanaz Zanjani Foumani, Mehdi Shishehbor, Amin Yousefpour, and Ramin Bostanabad. “Multi-
1590
+ Fidelity Cost-Aware Bayesian Optimization”. In: Available at SSRN 4268166 (2022).
1591
+ [8]
1592
+ Siyu Tao, Daniel W Apley, Wei Chen, Andrea Garbo, David J Pate, and Brian J German. “Input
1593
+ mapping for model calibration with application to wing aerodynamics”. In: AIAA journal 57.7 (2019),
1594
+ pp. 2734–2745.
1595
+ [9]
1596
+ Slawomir Koziel, Qingsha S Cheng, and John W Bandler. “Space mapping”. In: IEEE Microwave
1597
+ Magazine 9.6 (2008), pp. 105–122.
1598
+ [10]
1599
+ John W Bandler, Radoslaw M Biernacki, Shao Hua Chen, Piotr A Grobelny, and Ronald H Hemmers.
1600
+ “Space mapping technique for electromagnetic optimization”. In: IEEE Transactions on microwave
1601
+ theory and techniques 42.12 (1994), pp. 2536–2544.
1602
+ [11]
1603
+ Anand Amrit, Leifur Leifsson, and Slawomir Koziel. “Fast multi-objective aerodynamic optimiza-
1604
+ tion using sequential domain patching and multifidelity models”. In: Journal of Aircraft 57.3 (2020),
1605
+ pp. 388–398.
1606
+ [12]
1607
+ Slawomir Koziel and Leifur Leifsson. “Multi-level CFD-based airfoil shape optimization with auto-
1608
+ mated low-fidelity model selection”. In: Procedia Computer Science 18 (2013), pp. 889–898.
1609
+ [13]
1610
+ Leifur Leifsson and Slawomir Koziel. “Aerodynamic shape optimization by variable-fidelity compu-
1611
+ tational fluid dynamics models: a review of recent progress”. In: Journal of Computational Science
1612
+ 10 (2015), pp. 45–54.
1613
+ [14]
1614
+ Marc C Kennedy and Anthony O’Hagan. “Bayesian calibration of computer models”. In: Journal of
1615
+ the Royal Statistical Society: Series B (Statistical Methodology) 63.3 (2001), pp. 425–464.
1616
+ [15]
1617
+ John McFarland and Sankaran Mahadevan. “Multivariate significance testing and model calibration
1618
+ under uncertainty”. In: Computer methods in applied mechanics and engineering 197.29-32 (2008),
1619
+ pp. 2467–2479.
1620
+ [16]
1621
+ Matthew Plumlee. “Bayesian calibration of inexact computer models”. In: Journal of the American
1622
+ Statistical Association 112.519 (2017), pp. 1274–1285.
1623
+ 27
1624
+
1625
+ [17]
1626
+ Dave Higdon, Marc Kennedy, James C Cavendish, John A Cafeo, and Robert D Ryne. “Combining
1627
+ field data and computer simulations for calibration and prediction”. In: SIAM Journal on Scientific
1628
+ Computing 26.2 (2004), pp. 448–466.
1629
+ [18]
1630
+ Daniel W Apley, Jun Liu, and Wei Chen. “Understanding the effects of model uncertainty in robust
1631
+ design with computer experiments”. In: (2006).
1632
+ [19]
1633
+ Maria J Bayarri, James O Berger, Rui Paulo, Jerry Sacks, John A Cafeo, James Cavendish, Chin-Hsu
1634
+ Lin, and Jian Tu. “A framework for validation of computer models”. In: Technometrics 49.2 (2007),
1635
+ pp. 138–154.
1636
+ [20]
1637
+ Paul D Arendt, Daniel W Apley, Wei Chen, David Lamb, and David Gorsich. “Improving identifia-
1638
+ bility in model calibration using multiple responses”. In: (2012).
1639
+ [21]
1640
+ Paul D Arendt, Daniel W Apley, and Wei Chen. “Quantification of model uncertainty: Calibration,
1641
+ model discrepancy, and identifiability”. In: (2012).
1642
+ [22]
1643
+ David A Stainforth, Tolu Aina, Carl Christensen, Mat Collins, Nick Faull, Dave J Frame, Jamie A
1644
+ Kettleborough, S Knight, A Martin, JM Murphy, et al. “Uncertainty in predictions of the climate
1645
+ response to rising levels of greenhouse gases”. In: Nature 433.7024 (2005), pp. 403–406.
1646
+ [23]
1647
+ Weizhao Zhang, Ramin Bostanabad, Biao Liang, Xuming Su, Danielle Zeng, Miguel A Bessa, Yan-
1648
+ chao Wang, Wei Chen, and Jian Cao. “A numerical Bayesian-calibrated characterization method for
1649
+ multiscale prepreg preforming simulations with tension-shear coupling”. In: Composites Science and
1650
+ Technology 170 (2019), pp. 15–24.
1651
+ [24]
1652
+ Robert B Gramacy, Derek Bingham, James Paul Holloway, Michael J Grosskopf, Carolyn C Kuranz,
1653
+ Erica Rutter, Matt Trantham, and R Paul Drake. “Calibrating a large computer experiment simulating
1654
+ radiative shock hydrodynamics”. In: The Annals of Applied Statistics 9.3 (2015), pp. 1141–1168.
1655
+ [25]
1656
+ Lluis Jofre, Gianluca Geraci, Hillary Fairbanks, Alireza Doostan, and Gianluca Iaccarino. “Multi-
1657
+ fidelity uncertainty quantification of irradiated particle-laden turbulence”. In: arXiv preprint
1658
+ arXiv:1801.06062 (2018).
1659
+ [26]
1660
+ Alex A Gorodetsky, John D Jakeman, Gianluca Geraci, and Michael S Eldred. “MFNets: multi-
1661
+ fidelity data-driven networks for Bayesian learning and prediction”. In: International Journal for
1662
+ Uncertainty Quantification 10.6 (2020).
1663
+ [27]
1664
+ Rebecca E Morrison, Todd A Oliver, and Robert D Moser. “Representing model inadequacy: A
1665
+ stochastic operator approach”. In: SIAM/ASA Journal on Uncertainty Quantification 6.2 (2018),
1666
+ pp. 457–496.
1667
+ [28]
1668
+ Rebecca E Morrison. “Embedded discrepancy operators in reduced models of interacting species”.
1669
+ In: arXiv preprint arXiv:1910.08191 (2019).
1670
+ [29]
1671
+ Teresa Portone, Damon McDougall, and Robert D Moser. “A stochastic operator approach to model
1672
+ inadequacy with applications to contaminant transport”. In: arXiv preprint arXiv:1702.07779 (2017).
1673
+ [30]
1674
+ Jonathan Tammer Eweis-Labolle, Nicholas Oune, and Ramin Bostanabad. “Data Fusion With Latent
1675
+ Map Gaussian Processes”. In: Journal of Mechanical Design 144.9 (2022), p. 091703.
1676
+ [31]
1677
+ Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016. ISBN:
1678
+ 0262337371.
1679
+ [32]
1680
+ Xuhui Meng and George Em Karniadakis. “A composite neural network that learns from multi-
1681
+ fidelity data: Application to function approximation and inverse PDE problems”. In: Journal of Com-
1682
+ putational Physics 401 (2020), p. 109020.
1683
+ 28
1684
+
1685
+ [33]
1686
+ Subhayan De, Jolene Britton, Matthew Reynolds, Ryan Skinner, Kenneth Jansen, and Alireza
1687
+ Doostan. “On transfer learning of neural networks using bi-fidelity data for uncertainty propagation”.
1688
+ In: International Journal for Uncertainty Quantification 10.6 (2020).
1689
+ [34]
1690
+ Suraj Pawar, Omer San, Prakash Vedula, Adil Rasheed, and Trond Kvamsdal. “Multi-fidelity infor-
1691
+ mation fusion with concatenated neural networks”. In: Scientific Reports 12.1 (2022), p. 5900. ISSN:
1692
+ 2045-2322. DOI: 10.1038/s41598-022-09938-8. URL: https://doi.org/10.1038/
1693
+ s41598-022-09938-8.
1694
+ [35]
1695
+ Tilmann Gneiting and Adrian E Raftery. “Strictly proper scoring rules, prediction, and estimation”.
1696
+ In: Journal of the American statistical Association 102.477 (2007), pp. 359–378.
1697
+ [36]
1698
+ Nicholas Oune and Ramin Bostanabad. “Latent map Gaussian processes for mixed variable meta-
1699
+ modeling”. In: Computer Methods in Applied Mechanics and Engineering 387 (2021), p. 114128.
1700
+ [37]
1701
+ Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. “Deep learning”. In: nature 521.7553 (2015),
1702
+ pp. 436–444.
1703
+ [38]
1704
+ Kurt Hornik, Maxwell Stinchcombe, and Halbert White. “Multilayer feedforward networks are uni-
1705
+ versal approximators”. In: Neural networks 2.5 (1989), pp. 359–366.
1706
+ [39]
1707
+ Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. “Weight uncertainty in
1708
+ neural network”. In: International conference on machine learning. PMLR. 2015, pp. 1613–1622.
1709
+ [40]
1710
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. “On calibration of modern neural net-
1711
+ works”. In: International conference on machine learning. PMLR. 2017, pp. 1321–1330.
1712
+ [41]
1713
+ John Mitros and Brian Mac Namee. “On the validity of Bayesian neural networks for uncertainty
1714
+ estimation”. In: arXiv preprint arXiv:1912.01530 (2019).
1715
+ [42]
1716
+ Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. “Being bayesian, even just a bit, fixes
1717
+ overconfidence in relu networks”. In: International conference on machine learning. PMLR. 2020,
1718
+ pp. 5436–5446.
1719
+ [43]
1720
+ W Keith Hastings. “Monte Carlo sampling methods using Markov chains and their applications”. In:
1721
+ (1970).
1722
+ [44]
1723
+ David M Blei, Alp Kucukelbir, and Jon D McAuliffe. “Variational inference: A review for statisti-
1724
+ cians”. In: Journal of the American statistical Association 112.518 (2017), pp. 859–877.
1725
+ [45]
1726
+ Laurent Valentin Jospin, Hamid Laga, Farid Boussaid, Wray Buntine, and Mohammed Bennamoun.
1727
+ “Hands-on Bayesian neural networks—A tutorial for deep learning users”. In: IEEE Computational
1728
+ Intelligence Magazine 17.2 (2022), pp. 29–48.
1729
+ [46]
1730
+ S. T. Roweis and L. K. Saul. “Nonlinear dimensionality reduction by locally linear embedding”.
1731
+ In: Science 290.5500 (2000), pp. 2323–6. ISSN: 0036-8075 (Print) 0036-8075 (Linking). DOI: 10.
1732
+ 1126/science.290.5500.2323. URL: https://www.ncbi.nlm.nih.gov/pubmed/
1733
+ 11125150.
1734
+ [47]
1735
+ D. L. Donoho and C. Grimes. “Hessian eigenmaps: locally linear embedding techniques for high-
1736
+ dimensional data”. In: Proc Natl Acad Sci U S A 100.10 (2003), pp. 5591–6. ISSN: 0027-8424 (Print)
1737
+ 0027-8424 (Linking). DOI: 10.1073/pnas.1031596100. URL: https://www.ncbi.nlm.
1738
+ nih.gov/pubmed/16576753.
1739
+ [48]
1740
+ J. B. Tenenbaum, V. de Silva, and J. C. Langford. “A global geometric framework for nonlinear
1741
+ dimensionality reduction”. In: Science 290.5500 (2000), pp. 2319–23. ISSN: 0036-8075 (Print) 0036-
1742
+ 8075 (Linking). DOI: 10.1126/science.290.5500.2319. URL: https://www.ncbi.
1743
+ nlm.nih.gov/pubmed/11125149.
1744
+ 29
1745
+
1746
+ [49]
1747
+ Ashutosh Saxena, Abhinav Gupta, and Amitabha Mukerjee. “Non-linear dimensionality reduction by
1748
+ locally linear isomaps”. In: Neural Information Processing. Springer, pp. 1038–1043.
1749
+ [50]
1750
+ Ronald R. Coifman and Stéphane Lafon. “Diffusion maps”. In: Applied and Computational Harmonic
1751
+ Analysis 21.1 (2006), pp. 5–30. ISSN: 10635203. DOI: 10.1016/j.acha.2006.04.006. URL:
1752
+ http://www.sciencedirect.com/science/article/pii/S1063520306000546.
1753
+ [51]
1754
+ N. Lawrence. “Probabilistic non-linear principal component analysis with Gaussian process latent
1755
+ variable models”. In: Journal of Machine Learning Research 6.Nov (2005), pp. 1783–1816. ISSN:
1756
+ 1532-4435.
1757
+ [52]
1758
+ Francois Chollet. Deep learning with python. Manning Publications Co., 2017. ISBN: 1617294438.
1759
+ [53]
1760
+ Hippolyt Ritter, Aleksandar Botev, and David Barber. “A scalable laplace approximation for neural
1761
+ networks”. In: 6th International Conference on Learning Representations, ICLR 2018-Conference
1762
+ Track Proceedings. Vol. 6. International Conference on Representation Learning. 2018.
1763
+ [54]
1764
+ Meire Fortunato, Charles Blundell, and Oriol Vinyals. “Revisiting Bayes by Backprop”. In: (2018).
1765
+ [55]
1766
+ Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica.
1767
+ “Tune: A Research Platform for Distributed Model Selection and Training”. In: arXiv preprint
1768
+ arXiv:1807.05118 (2018).
1769
1770
+
BdFQT4oBgHgl3EQfNTaI/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
CNAyT4oBgHgl3EQf4foN/content/tmp_files/2301.00785v1.pdf.txt ADDED
@@ -0,0 +1,3322 @@
1
+ CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection
2
+ Jie Liu1, Yixiao Zhang2, Jie-Neng Chen2, Junfei Xiao2, Yongyi Lu2, Bennett A. Landman3,
3
+ Yixuan Yuan4,5, Alan Yuille2, Yucheng Tang3,6,∗, and Zongwei Zhou2,*
4
+ 1City University of Hong Kong
5
+ 2Johns Hopkins University
6
+ 3Vanderbilt University
7
+ 4Chinese University of Hong Kong
8
+ 5CUHK Shenzhen Research Institute
9
+ 6NVIDIA
10
+ Project: https://github.com/ljwztc/CLIP-Driven-Universal-Model
11
+ Abstract
12
+ An increasing number of public datasets have shown
13
+ a marked clinical impact on assessing anatomical struc-
14
+ tures. However, each of the datasets is small, partially la-
15
+ beled, and rarely investigates severe tumor subjects. More-
16
+ over, current models are limited to segmenting specific or-
17
+ gans/tumors and cannot be extended to novel domains
18
+ and classes. To tackle these limitations, we introduce em-
19
+ bedding learned from Contrastive Language–Image Pre-
20
+ training (CLIP) to segmentation models, dubbed the CLIP-
21
+ Driven Universal Model. The Universal Model can better
22
+ segment 25 organs and 6 types of tumors by exploiting the
23
+ semantic relationship between abdominal structures. The
24
+ model is developed from an assembly of 14 datasets with
25
+ 3,410 CT scans and evaluated on 6,162 external CT scans
26
+ from 3 datasets. We rank first on the public leaderboard of
27
+ the Medical Segmentation Decathlon (MSD) and achieve
28
+ the state-of-the-art results on Beyond The Cranial Vault
29
+ (BTCV). Compared with dataset-specific models, the Uni-
30
+ versal Model is computationally more efficient (6× faster),
31
+ generalizes better to CT scans from varying sites, and shows
32
+ stronger transfer learning performance on novel tasks. The
33
+ design of CLIP embedding enables the Universal Model to
34
+ be easily extended to new classes without catastrophically
35
+ forgetting the previously learned classes.
36
+ 1. Introduction
37
+ Enormous advances in medical imaging benefit from the
38
+ ever-growing number of annotated datasets [1,30,38,39,68].
39
+ Although a total of around 5,000 annotated abdominal CT
40
+ scans are publicly available, an ingrained impression re-
41
+ mains: medical imaging datasets are too small to develop
42
+ robust AI models [13,48,58,64,82,83]. One of the reasons
43
+ for this impression is that these public datasets are associated with imaging competitions.
44
+ *Corresponding authors: Yucheng Tang ([email protected]) and
45
+ Zongwei Zhou ([email protected])
46
+ Figure 1. Cosine similarity between CLIP embeddings. The
+ CLIP embedding reveals the intrinsic semantics of the anatomical
+ structures by mapping similar concepts close to each other in the
+ embedding space. For example, “Liver” has a large similarity with
+ “Liver Tumor” and “Hepatic Vessel” (the hepatic vessel returns
+ low-oxygen blood from the liver back to the heart and is therefore
+ anatomically closely related to the liver); “Left Kidney” has a
+ large similarity with “Right Kidney”.
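+ A Figure 1-style similarity matrix can be reproduced along the following lines; this sketch assumes the
+ HuggingFace transformers implementation of CLIP, and the class list is an illustrative subset:
+ import torch
+ from transformers import CLIPModel, CLIPTokenizer
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+ tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
+ classes = ["liver", "liver tumor", "hepatic vessel", "left kidney", "right kidney"]
+ prompts = [f"A computerized tomography of a {c}." for c in classes]
+ with torch.no_grad():
+     tokens = tokenizer(prompts, padding=True, return_tensors="pt")
+     w = model.get_text_features(**tokens)  # one CLIP embedding per class
+ w = torch.nn.functional.normalize(w, dim=-1)
+ similarity = w @ w.T  # cosine similarity between class embeddings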
54
+ As per the fairness con-
55
+ cern, these datasets must be used in isolation (no external
56
+ data are allowed). Since each institute has limited time,
57
+ budget, and particular clinical purposes, the number of CT
58
+ scans in each dataset is limited, and the types of annotated
59
+ organs vary significantly from institute to institute. What is
60
+ more, only a small proportion (hundreds) of public CT scans
61
+ contain tumor annotations performed by experts [1,3,25].
62
+ The potential of AI models, trained on a combination of
63
+ existing public datasets, for multi-organ segmentation and
64
+ tumor detection is unknown. This has motivated us to re-
65
+ lax the requirement that no external data is allowed, exploit
66
+ the public datasets with partial labels, and demonstrate the
67
+ clinical impact of AI performance, including model gen-
68
+ eralizability (i.e., robust to CT scans from various hospi-
69
+ tals) [39], transferability (i.e., generic image representation
70
76
+ that is transferable to multiple downstream tasks) [86], and
77
+ extensibility (i.e., adaptable to novel classes without for-
78
+ getting previously learned classes) [36]. Specifically, we
79
+ have assembled 14 publicly available datasets, including
80
+ 3,672 CT scans with 25 partially annotated organs and 6
81
+ tumors. Furthermore, we anticipate that most of the medi-
82
+ cal datasets, released in the near future, will also focus on a
83
+ small set of organs/tumors and that some current unlabeled
84
+ organs/tumors, such as the vermiform appendix, would be
85
+ annotated. This requires us to develop new strategies that
86
+ can continually deal with more partially labeled datasets
87
+ with novel classes from a variety of institutes.
88
+ Formidable challenges exist in assembling partially an-
89
+ notated datasets. First, label inconsistency in three aspects.
90
+ (i) Index inconsistency. The same organ can be labeled as
91
+ different indexes. For example, the stomach is labeled ‘7’
92
+ in BTCV, but ‘5’ in WORD. (ii) Name inconsistency. Nam-
93
+ ing can be confusing if multiple labels refer to the same
94
+ anatomical structure. For example, “postcava” in AMOS22
95
+ and “inferior vena cava�� in BTCV. (iii) Background incon-
96
+ sistency. For example, when combining Pancreas-CT and
97
+ MSD-Spleen, the pancreas is marked as the background
98
+ in MSD-Spleen whereas it should have been marked as
99
+ the foreground. Second, label orthogonality. Most seg-
100
+ mentation methods, trained with one-hot labels [77], dis-
101
+ miss the semantic relationship between classes.
102
+ Given
103
+ liver [1,0,0], liver tumor [0,1,0], and pancreas [0,0,1], there
104
+ is no semantic difference between liver↔liver tumor and
105
+ liver↔pancreas. A possible solution is few-hot labels [54],
106
+ with which, the liver, liver tumor, and pancreas can be en-
107
+ coded as [1,0,0], [1,1,0], and [0,0,1]. Although few-hot la-
108
+ bels could indicate that liver tumors are part of the liver, the
109
+ relationship between organs remains orthogonal and, more
110
+ importantly, neither one-hot nor few-hot labels are easy to
111
+ extend to more classes [6, 23]. Adding novel classes re-
112
+ quires increasing the dimensionality of one- or few-hot la-
113
+ bels and retraining the previously trained model.
114
+ To address the label inconsistency, we maintain a revised
115
+ label taxonomy from a collection of public datasets, gener-
116
+ ate a binary segmentation mask for each class, and compute
117
+ loss only for the classes with available labels. To address
118
+ the label orthogonality, inspired by Guo et al. [19], one- or
119
+ few-hot labels are replaced by the text embedding gener-
120
+ ated by the pre-trained text encoder from CLIP¹. Figure 1
121
+ illustrates that CLIP embedding presents the relationship
122
+ between organs and tumors. More importantly, the fixed-
123
+ length CLIP embedding allows us to adapt the pre-trained
124
+ model to open-vocabulary segmentation and extend to novel
125
+ classes without forgetting previously learned classes.
126
+ In this work, we propose a CLIP-driven Universal Model
127
+ ¹CLIP (Contrastive Language–Image Pre-training) was pre-trained on
128
+ 400 million image-text pairs (some are medical images and text [5]), ex-
129
+ ploiting the semantic relationship between images and language.
130
+ that can segment 25 organs and detect 6 tumors with state-
131
+ of-the-art performance, generalize to CT scans from dif-
132
+ ferent institutes, and can be extended to more classes, in
133
+ contrast with existing dataset-specific models [59]. Specif-
134
+ ically, experimental results have demonstrated six advan-
135
+ tages of the CLIP-driven Universal Model.
136
+ 1. High abdominal organ segmentation performance. We
137
+ rank first in the MSD and BTCV challenges, leading
138
+ to substantial performance improvement over others.
139
+ 2. A higher specificity of tumor detection than existing
140
+ models while maintaining compelling sensitivity.
141
+ 3. Computationally more efficient than dataset-specific
142
+ models, accelerating the testing speed by a factor of six.
143
+ 4. The performance of organ segmentation and tumor de-
144
+ tection generalizes to CT scans from a variety of
145
+ hospitals without additional tuning and adaptation.
146
+ 5. An effective Foundation Model for numerous down-
147
+ stream tasks, showing strong transferability on tasks
148
+ across multiple diseases, organs, and datasets.
149
+ 6. The extensibility to novel classes shows the capability
150
+ of quickly adapting to novel classes without forgetting
151
+ previously learned classes.
152
+ With the Universal Model, we have also created a large
153
+ dataset of 3,672 CT scans with 6 organs annotated by either
154
+ experts or the model. Refinement of model prediction for
155
+ some cases is performed. This dataset, comprising multi-
156
+ center, multi-vendor, multi-phase, and multi-disease cases,
157
+ provides a diverse test bed to develop high-performance AI
158
+ models for organ segmentation and tumor detection.
159
+ 2. Related Work
160
+ Partial label problem. Publicly available datasets for ab-
161
+ dominal imaging focus on different organs and tumors [30,
162
+ 33, 38, 39], e.g., AbdomenCT-1K dataset for 4 organ seg-
163
+ mentation [39], WORD dataset for 16 organ segmenta-
164
+ tion [38] and TotalSegmentator dataset for 104 anatomical
165
+ structure segmentation [68]. The partial label problem oc-
166
+ curs when training AI models on a combination of these
167
+ datasets due to their inconsistent label taxonomy. To ex-
168
+ ploit the partial labels, several approaches have been in-
169
+ vestigated [18, 77, 78, 81], aiming for a single model that
170
+ can perform organ segmentation [12, 35] and tumor detec-
171
+ tion [2,37,41,43,71,73,89]. These studies have the follow-
172
+ ing limitations. (1) Due to the small scale of the dataset as-
173
+ sembly², the potential of assembling datasets was not con-
174
+ vincing. Their performance was similar to dataset-specific
175
+ ²Zhou et al. [81] assembled 150 CT scans from 4 datasets; Fang et
176
+ al. [18] assembled 548 CT scans from 4 datasets; Zhang et al. [77] assem-
177
+ bled 1,155 CT scans from 7 datasets.
178
+ 2
179
+
180
+ Figure 2. Overview. We have developed a Universal Model from an assembly of 14 public datasets of 3,410 CT scans. In total, 25 organs
181
+ and 6 types of tumors are partially labeled (detailed in Appendix Table 7). To deal with partial labels, Universal Model consists of a text
182
+ branch (purple) and a vision branch (blue) (§3.2). The official test set of MSD and BTCV are used to benchmark the performance of
183
+ organ segmentation (§4.1) and tumor detection (§4.2). 3D-IRCADb and TotalSegmentator are used for independent, external validation of
184
+ model generalizability (§5.2), and transferability (§5.3). In addition to public datasets, the Universal Model has also been evaluated on a
185
+ large-scale private dataset, consisting of 5,038 CT scans with 21 annotated organs, to investigate the extensibility to new classes (§5.4).
186
+ models and was not evaluated on the official benchmark.
187
+ (2) Due to the one-hot labels, the semantic relationship be-
188
+ tween organs and tumors was discarded. Table 1 reveals
189
+ that the introduction of CLIP embedding is a salient factor
190
+ in our proposed framework.
191
+ CLIP in medical imaging. With the widespread success of
192
+ large models in the field of language processing and under-
193
+ standing [4,15,56], large-scale pre-trained vision-language
194
+ models (VLM), e.g., CLIP [14], have recently been applied
195
+ to multiple vision tasks [5,47,50,66], but rarely to the med-
196
+ ical domain [16, 67]. Qin et al. [49] suggested that VLM
197
+ could be used for zero-shot learning in the medical domain
198
+ and recognize novel classes with well-designed prompts.
199
+ Grounded in these two findings, we are among the first to
200
+ introduce CLIP embedding to medical segmentation tasks
201
+ using partial labels, in which we underline the importance
202
+ of the semantic relationship between anatomical structures
203
+ in segmentation and incremental learning.
204
+ 3. Methodology
205
+ 3.1. Background
206
+ Problem definition.
207
+ Let M and N be the total number
208
+ of datasets to combine and data points in the combina-
209
+ tion of the datasets, respectively.
210
+ Given a dataset D =
211
+ {(X1, Y1), (X2, Y2), ..., (XN, YN)}, there are a total of
212
+ K unique classes.
213
+ For ∀n ∈ [1, N], if the presence of
214
+ ∀k ∈ [1, K] classes in Xn is annotated in Yn, D is a fully
215
+ labeled dataset; otherwise, D is a partially labeled dataset.
216
+ Previous solutions. Two groups of solutions were proposed
217
+ to address the partial label problem. Given a data point
218
+ Xn, n ∈ [1, N], the objective is to train a model F(·) us-
219
+ ing the assembly dataset DA = {D1, D2, ..., DM}, and the
220
+ model can predict all K classes, if presented in Xn.
221
+ • Solution #1 [11,18,27,54,61,72,81] aims to solve
+ Fθ(Xn) = P_n^k, n ∈ [1, N], k ∈ [1, K], where the
+ prediction P_n is a one-hot encoding of length K.
225
+ • Solution #2 [31, 77, 88] aims to solve Fθ(Xn, wk) =
226
+ Pn, n ∈ [1, N], k ∈ [1, K], where wk is a one-hot
+ vector indicating which class is to be predicted.
228
+ According to Zhang et al. [77], both solutions have sim-
229
+ ilar segmentation performance, whereas #2 is computation-
230
+ ally more efficient. However, both solutions rely on one-hot
231
+ labels, sharing two limitations. First, they dismiss the se-
232
+ mantic and anatomical relationship between organs and tu-
233
+ mors. Second, they are inappropriate in segmenting various
234
+ subtypes of tumors (and novel classes). To address these
235
+ limitations, we modify wk in Solution #2 to CLIP embed-
236
+ ding and introduce in-depth in the following sections.
237
+ 3.2. CLIP-Driven Universal Model
238
+ The overall framework of CLIP-Driven Universal Model
239
+ (see Figure 2) has a text branch and a vision branch. The
240
+ text branch first generates the CLIP embedding for each or-
241
+ gan and tumor using an appropriate medical prompt (Ta-
242
+ ble 1), and then the vision branch takes both CT scans and
243
+ CLIP embedding to predict the segmentation mask.
244
+ Text branch. Let wk be the CLIP embedding of the k-th
245
+ class, produced by the pre-trained text encoder in CLIP and
246
+ [Figure 2 diagram. Text branch: a prompt template ("A CT of a [CLS].") for each class is passed through the
+ CLIP text encoder to obtain the CLIP embeddings, which, together with a globally pooled image feature, drive
+ the text-based controller that outputs the parameters of the text-driven segmentor (three Conv layers). Vision
+ branch: standardized CT scans are processed by a vision encoder (e.g., Swin UNETR). Training data (Total =
+ 3,410): 82 Pancreas-CT (1;0), 201 LiTS (1;1), 300 KiTS (1;1), 1000 AbdomenCT-1K (4;0), 140 CT-ORG (4;0),
+ 40 CHAOS (4;0), 947 MSD (7;4), 50 BTCV (13;0), 500 AMOS (15;0), 150 WORD (16;0) — 14 public datasets
+ with partial labels. Testing data (Total = 6,162): 100 3D-IRCADb (13;0), 1024 TotalSegmentator (104;0),
+ 5038 JHH (21;0).]
+ Embedding | prompt | DSC
+ One-hot [77] | - | 67.18
+ CLIP V1 | A photo of a [CLS]. | 69.70
+ CLIP V2 | There is [CLS] in this computerized tomography. | 73.49
+ CLIP V3 | A computerized tomography of a [CLS]. | 73.86
304
+ Table 1. Medical prompt templates. The DSC is the average
305
+ of 25 organs and 6 tumors in the 5-fold cross-validation of MSD.
306
+ All three prompts can elicit knowledge from CLIP, achieving sig-
307
+ nificant improvement over the conventional one-hot labels (DoD-
308
+ Net [77]) on the MSD dataset.
309
+ a medical prompt (e.g., “a computerized tomography of a
310
+ [CLS]”, where [CLS] is a concrete class name). We first
311
+ concatenate the CLIP embedding (wk) and the global im-
312
+ age feature (f) and then input it to a multi-layer perceptron
313
+ (MLP), namely text-based controller [62], to generate pa-
314
+ rameters (θk), i.e.,
315
+ θk = MLP(wk ⊕ f),
316
+ (1)
317
+ where ⊕ is the concatenation. Although CLIP embedding
318
+ significantly outperforms one-hot labels [77], we note that
319
+ the choice of medical prompt template is critical. Table 1
320
+ presents the effectiveness of three prompt templates. More-
321
+ over, the introduction of CLIP embedding addresses the
322
+ label orthogonality problem by encoding hierarchical rela-
323
+ tionship among organs and tumors (illustrated in Figure 1).
324
+ The CLIP embedding also allows us to extend the Univer-
325
+ sal Model to novel classes (e.g., organs, tumors, bones, and
326
+ other anatomical structures) because the length of CLIP em-
327
+ bedding is fixed and the incremental learning of new classes
328
+ will not affect other classes (elaborated in §5.4).
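+ A minimal sketch of the text-based controller in Equation 1 follows; the layer widths and parameter
+ count are illustrative assumptions rather than the exact configuration:
+ import torch
+ import torch.nn as nn
+
+ class TextBasedController(nn.Module):
+     # Equation 1: theta_k = MLP(w_k concatenated with f). clip_dim is the size
+     # of the CLIP embedding w_k, feat_dim the size of the pooled image feature
+     # f, and num_params the number of segmentor weights/biases to generate.
+     def __init__(self, clip_dim=512, feat_dim=768, num_params=153):
+         super().__init__()
+         self.mlp = nn.Sequential(
+             nn.Linear(clip_dim + feat_dim, 256), nn.ReLU(),
+             nn.Linear(256, num_params))
+
+     def forward(self, w_k, f):
+         return self.mlp(torch.cat([w_k, f], dim=-1))  # theta_k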
329
+ Vision branch. We pre-process CT scans using isotropic
330
+ spacing and a uniform intensity scale to reduce the domain
331
+ gap among various datasets³. The standardized and normal-
332
+ ized CT scans are then processed by the vision encoder. Let
333
+ F be the image features extracted by the vision encoder.
334
+ To process F , we use three sequential convolutional layers
335
+ with 1 × 1 × 1 kernels, namely text-driven segmentor. The
336
+ first two layers have 8 channels, and the last one has 1 chan-
337
+ nel, corresponding to the class of [CLS]k. The prediction
338
+ for the class [CLS]k is computed as
339
+ Pk = Sigmoid (((F ∗ θk1) ∗ θk2) ∗ θk3) ,
340
+ (2)
341
+ where θk1, θk2, θk3 are computed by Equation 1, and * rep-
342
+ resents the convolution. For each class [CLS]k, we gener-
343
+ ate the prediction in a one-vs.-all manner (i.e., Sigmoid
344
+ instead of Softmax).
345
+ ³A standardized and normalized CT pre-processing is important when
346
+ combining multiple datasets. Substantial differences in CT scans can occur
347
+ in image quality and technical display, originating from different acquisi-
348
+ tion parameters, reconstruction kernels, contrast enhancements, intensity
349
+ variation, and so on [20,46,74].
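+ A sketch of the text-driven segmentor in Equation 2 is shown below. The channel sizes (8, 8, and 1) follow
+ the text; sharing theta_k across the batch and the slicing layout of the parameter vector are simplifying
+ assumptions:
+ import torch
+ import torch.nn.functional as F
+
+ def text_driven_segmentor(feat, theta_k, in_ch=8):
+     # Equation 2: P_k = Sigmoid(((F * theta_k1) * theta_k2) * theta_k3), where *
+     # denotes 1x1x1 convolution. feat: (B, in_ch, D, H, W); theta_k: flat vector.
+     out, idx = feat, 0
+     for c_out, c_in in [(8, in_ch), (8, 8), (1, 8)]:
+         w = theta_k[idx: idx + c_out * c_in].view(c_out, c_in, 1, 1, 1)
+         idx += c_out * c_in
+         b = theta_k[idx: idx + c_out]
+         idx += c_out
+         out = F.conv3d(out, w, b)
+     return torch.sigmoid(out)  # one-vs-all prediction P_k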
350
+ Masked back-propagation. To address the label inconsis-
351
+ tency problem, we propose the masked back-propagation
+ technique. The BCE loss function is utilized for supervi-
+ sion. We mask the loss terms of the classes that are not
+ annotated in Y and only back-propagate the accurate super-
+ vision to update the whole framework. This addresses the
+ label inconsistency in the partial label problem: partially
+ labeled datasets annotate some organs as background, which
+ breaks existing training schemes (Solution #1).
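+ A minimal sketch of the masked back-propagation is given below; the tensor shapes are assumptions for
+ illustration:
+ import torch
+ import torch.nn.functional as F
+
+ def masked_bce_loss(pred, target, label_mask):
+     # pred, target: (B, K, D, H, W); label_mask: (B, K) with 1 where class k
+     # is annotated in the sample's source data set. Unlabeled classes are
+     # masked out so that they contribute no gradient.
+     loss = F.binary_cross_entropy(pred, target, reduction="none")
+     mask = label_mask[:, :, None, None, None].to(loss.dtype).expand_as(loss)
+     return (loss * mask).sum() / mask.sum().clamp(min=1)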
360
+ 3.3. Assembling Public Datasets
361
+ To fully explore the potential of the proposed Universal
362
+ Model in multi-organ segmentation and tumor detection, we
363
+ have thus far assembled a total of 3,410 CT scans from 14
364
+ public datasets (many more can be included when datasets
365
+ are available). However, it is non-trivial to assemble par-
366
+ tially labeled datasets due to several challenges. Apart from
367
+ the label inconsistency and label orthogonality that we have
368
+ addressed, two challenges remain.
369
+ Inconsistent label protocols. In AMOS, “Aorta” refers to
+ the entire region of the aorta, but in AbdomenCT-1K, only
+ part of the aorta is considered and annotated; part of the
+ upper region annotation is missing (see Appendix Figure 10).
+ This is because of the inconsistent definitions in different
+ datasets, and it requires considerable manual corrections
+ when assembling these datasets together.
376
+ Long-tail problem. The assembly of public datasets leads
377
+ to severe class imbalance problems, especially for small tu-
378
+ mors. Appendix Figure 11 details the proportion of each
379
+ class in the datasets. In this paper, we utilize data augmen-
380
+ tation to alleviate the long-tail problem, but more research
381
+ is encouraged to explore the solution to these two problems.
382
+ 4. Experiments & Results
383
+ Datasets and evaluation metrics.
384
+ A total of 14 public
385
+ datasets consisting of 3,410 CT scans are assembled for
386
+ training.
387
+ Other 2 public and 1 private datasets are used
388
+ for testing.
389
+ Due to page limits, dataset details and pre-
390
+ processing are described in Appendix §B. Dice Similarity
391
+ Coefficient (DSC) and Normalized Surface Distance (NSD)
392
+ are evaluated for organ/tumor segmentation; Sensitivity and
393
+ Specificity are evaluated for tumor detection.
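As a reference point for these metrics, a minimal DSC computation on binary masks is sketched below; NSD additionally requires surface-distance computation (available, for example, as compute_surface_dice in recent MONAI releases) and is omitted for brevity.

```python
import torch

def dice_score(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-6) -> float:
    """DSC between two binary masks of the same shape."""
    pred, gt = pred.bool(), gt.bool()
    inter = (pred & gt).sum().item()           # |A ∩ B|
    return 2.0 * inter / (pred.sum().item() + gt.sum().item() + eps)
```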
394
Implementation details. The Universal Model is trained using the AdamW optimizer with a warm-up cosine scheduler of 50 epochs. The segmentation experiments use a batch size of 6 per GPU with a patch size of 96 × 96 × 96, a default initial learning rate of 4e−4, a momentum of 0.9, and a decay of 1e−5 on four GPUs with DDP. The framework is implemented in MONAI 0.9.0.⁴ The five-fold cross-validation strategy is performed. We select the best model in each fold by evaluating metrics on the validation set. Models are trained on NVIDIA RTX A5000 cards.

⁴ https://monai.io/
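The sketch below mirrors the stated recipe (AdamW, lr 4e−4, weight decay 1e−5, 50-epoch linear warm-up followed by cosine decay). The total epoch count and the hand-rolled LambdaLR scheduler are our assumptions; the authors may use MONAI's built-in scheduler instead. AdamW's default β1 = 0.9 corresponds to the quoted momentum.

```python
import math
import torch

def build_optimizer(model: torch.nn.Module, max_epochs: int = 1000,
                    warmup_epochs: int = 50):
    opt = torch.optim.AdamW(model.parameters(), lr=4e-4, weight_decay=1e-5)

    def lr_lambda(epoch: int) -> float:
        if epoch < warmup_epochs:                       # linear warm-up
            return (epoch + 1) / warmup_epochs
        t = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * t))      # cosine decay

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```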
403
Task03 Liver:

Method | Dice1 | Dice2 | Avg. | NSD1 | NSD2 | Avg.
Kim et al. [32] | 94.25 | 72.96 | 83.61 | 96.76 | 88.58 | 92.67
TransVW [21] | 95.18 | 76.90 | 86.04 | 97.86 | 92.03 | 94.95
C2FNAS [75] | 94.98 | 72.89 | 83.94 | 98.38 | 89.15 | 93.77
Models Gen. [86] | 95.72 | 77.50 | 86.61 | 98.48 | 91.92 | 95.20
nnUNet [28] | 95.75 | 75.97 | 85.86 | 98.55 | 90.65 | 94.60
DiNTS [24] | 95.35 | 74.62 | 84.99 | 98.69 | 91.02 | 94.86
Swin UNETR [61] | 95.35 | 75.68 | 85.52 | 98.34 | 91.59 | 94.97
Universal Model | 95.44 | 77.27 | 86.36 | 98.44 | 92.60 | 95.52

Task07 Pancreas:

Method | Dice1 | Dice2 | Avg. | NSD1 | NSD2 | Avg.
Kim et al. [32] | 80.61 | 51.75 | 66.18 | 95.83 | 73.09 | 84.46
TransVW [21] | 81.42 | 51.08 | 66.25 | 96.07 | 70.13 | 83.10
C2FNAS [75] | 80.76 | 54.41 | 67.59 | 96.16 | 75.58 | 85.87
Models Gen. [86] | 81.36 | 50.36 | 65.86 | 96.16 | 70.02 | 83.09
nnUNet [28] | 81.64 | 52.78 | 67.21 | 96.14 | 71.47 | 83.81
DiNTS [24] | 81.02 | 55.35 | 68.19 | 96.26 | 75.90 | 86.08
Swin UNETR [61] | 81.85 | 58.21 | 70.71 | 96.57 | 79.10 | 87.84
Universal Model | 82.20 | 62.21 | 72.21 | 96.69 | 84.91 | 90.80

Task08 Hepatic Vessel:

Method | Dice1 | Dice2 | Avg. | NSD1 | NSD2 | Avg.
Kim et al. [32] | 62.34 | 68.63 | 65.49 | 83.22 | 78.43 | 80.83
TransVW [21] | 65.80 | 71.44 | 68.62 | 84.01 | 80.15 | 82.08
C2FNAS [75] | 64.30 | 71.00 | 67.65 | 83.78 | 80.66 | 82.22
Models Gen. [86] | 65.80 | 71.44 | 68.62 | 84.01 | 80.15 | 82.08
nnUNet [28] | 66.46 | 71.78 | 69.12 | 84.43 | 80.72 | 82.58
DiNTS [24] | 64.50 | 71.76 | 68.13 | 83.98 | 81.03 | 82.51
Swin UNETR [61] | 65.69 | 72.20 | 68.95 | 84.83 | 81.62 | 83.23
Universal Model | 66.34 | 75.19 | 70.77 | 85.27 | 85.16 | 85.22

Task06 Lung, Task09 Spleen, and Task10 Colon:

Method | Lung Dice1 | Lung NSD1 | Spleen Dice1 | Spleen NSD1 | Colon Dice1 | Colon NSD1
Kim et al. [32] | 63.10 | 62.51 | 91.92 | 94.83 | 49.32 | 62.21
TransVW [21] | 74.54 | 76.22 | 97.35 | 99.87 | 51.47 | 60.53
C2FNAS [75] | 70.44 | 72.22 | 96.28 | 97.66 | 58.90 | 72.56
Models Gen. [86] | 74.54 | 76.22 | 97.35 | 99.87 | 51.47 | 60.53
nnUNet [28] | 73.97 | 76.02 | 97.43 | 99.89 | 58.33 | 68.43
DiNTS [24] | 74.75 | 77.02 | 96.98 | 99.83 | 59.21 | 70.34
Swin UNETR [61] | 76.60 | 77.40 | 96.99 | 99.84 | 59.45 | 70.89
Universal Model | 78.47 | 78.67 | 96.92 | 99.69 | 61.17 | 72.86

Table 2. Leaderboard performance on MSD. The results are evaluated on the server with the MSD competition test dataset. All Dice and NSD metrics are obtained from the MSD public leaderboard.
646
Figure 3. Benchmark on MSD validation dataset. We compare Universal Model with Swin UNETR [61] (previously ranked first on the MSD leaderboard) on 5-fold cross-validation of the MSD dataset. Universal Model achieves overall better segmentation performance and offers substantial improvement in the tasks of segmenting liver tumors (+14%), pancreatic tumors (+8%), and colon tumors (+11%).
649
Figure 4. Intra-observer variability. We obtain similar performance between pseudo labels generated by the Universal Model (AI) and annotations performed by two human experts (Dr1, Dr2) on 6 organs. Spleen (Spl), liver (Liv), kidneys (Kid), stomach (Sto), gallbladder (Gall), and pancreas (Pan) can be annotated by AI with a similar intra-observer variability to humans. Examples of pseudo labels and human annotations are provided in Appendix Figure 7.
656
659
4.1. Organ Segmentation on MSD and BTCV

We offer the top #1 solution in both the Medical Segmentation Decathlon (MSD)⁵ and Beyond The Cranial Vault (BTCV), surpassing the runners-up by a considerable margin. Table 2 and Figure 3 present detailed comparisons on the official test set and 5-fold cross-validation of MSD, respectively. Table 3 compares Universal Model with other methods on the validation set of BTCV, offering at least 3.5% improvement over the second best.

Manual annotations have inter-rater and intra-rater variance [29], particularly in segmentation tasks, because some of the organs' boundaries are blurry and ambiguous. We assess the quality of pseudo labels predicted by Universal Model and manual annotations performed by human experts. 17 CT scans in BTCV have been annotated by two independent groups of radiologists from different institutes (not test server labels). As a result, each CT scan is associated with an AI prediction and two human annotations (Dr1 and Dr2). Figure 4 presents their mutual DSC scores, i.e., AI↔Dr1, AI↔Dr2, and Dr1↔Dr2. We find the DSC between AI and humans is slightly larger than the DSC between humans in segmenting 6 types of organs (i.e., spleen, liver, kidney, stomach, gallbladder, and pancreas). With this high-quality AI prediction, we assemble a large dataset of 3,410 CT scans from a diverse set of hospitals (Figure 2) and generate pseudo labels for 25 organs and 6 tumors.⁶ Pseudo-label refinement has been performed for a few CT scans where the AI's prediction is uncertain. This fully annotated dataset will be released. Now that these 6 organs can be segmented by AI with a similar variance to human experts, we encourage the research community to concentrate on creating annotations for harder organs and tumors.

⁵ decathlon-10.grand-challenge.org/evaluation/challenge/leaderboard/
⁶ The quality of the 19 other organs and 6 tumors has not been compared with human annotations because there are no publicly available CT scans that have been annotated by multiple independent groups for these objects.
685
731
+ Gall
732
+ PanMethods
733
+ Spl
734
+ RKid
735
+ LKid
736
+ Gall
737
+ Eso
738
+ Liv
739
+ Sto
740
+ Aor
741
+ IVC
742
+ Veins
743
+ Pan
744
+ AG
745
+ Avg.
746
+ ASPP [8]
747
+ 94.19
748
+ 91.24
749
+ 88.02
750
+ 63.58
751
+ 72.64
752
+ 93.61
753
+ 80.05
754
+ 86.20
755
+ 80.11
756
+ 71.49
757
+ 74.29
758
+ 64.58
759
+ 78.81
760
+ PaNN [81]
761
+ 94.04
762
+ 90.21
763
+ 88.58
764
+ 64.96
765
+ 73.20
766
+ 93.50
767
+ 80.87
768
+ 87.26
769
+ 80.32
770
+ 70.59
771
+ 75.28
772
+ 64.92
773
+ 79.13
774
+ TransUNet [7]
775
+ 94.10
776
+ 90.22
777
+ 88.84
778
+ 65.49
779
+ 73.19
780
+ 93.24
781
+ 80.85
782
+ 87.47
783
+ 80.48
784
+ 71.47
785
+ 74.26
786
+ 64.76
787
+ 79.16
788
+ CoTr* [69]
789
+ 95.60
790
+ 89.22
791
+ 88.58
792
+ 67.54
793
+ 74.95
794
+ 95.97
795
+ 81.95
796
+ 88.58
797
+ 81.20
798
+ 72.83
799
+ 76.69
800
+ 65.81
801
+ 80.36
802
+ CoTr [69]
803
+ 95.51
804
+ 88.03
805
+ 89.19
806
+ 68.49
807
+ 75.83
808
+ 95.93
809
+ 81.84
810
+ 89.01
811
+ 82.32
812
+ 73.39
813
+ 75.12
814
+ 65.78
815
+ 80.48
816
+ RandPatch [60]
817
+ 95.82
818
+ 88.52
819
+ 90.14
820
+ 68.31
821
+ 75.01
822
+ 96.48
823
+ 82.93
824
+ 88.96
825
+ 82.49
826
+ 73.54
827
+ 75.48
828
+ 66.09
829
+ 80.76
830
+ TransBTS [28]
831
+ 94.59
832
+ 89.23
833
+ 90.47
834
+ 68.50
835
+ 75.59
836
+ 96.14
837
+ 83.72
838
+ 88.85
839
+ 82.28
840
+ 74.25
841
+ 75.12
842
+ 66.74
843
+ 80.94
844
+ nnFormer [28]
845
+ 94.51
846
+ 88.49
847
+ 93.39
848
+ 65.51
849
+ 74.49
850
+ 96.10
851
+ 83.83
852
+ 88.91
853
+ 80.58
854
+ 75.94
855
+ 77.71
856
+ 68.19
857
+ 81.22
858
+ UNETR [22]
859
+ 94.91
860
+ 92.10
861
+ 93.12
862
+ 76.98
863
+ 74.01
864
+ 96.17
865
+ 79.98
866
+ 89.74
867
+ 81.20
868
+ 75.05
869
+ 80.12
870
+ 62.60
871
+ 81.43
872
+ nnU-Net [28]
873
+ 95.92
874
+ 88.28
875
+ 92.62
876
+ 66.58
877
+ 75.71
878
+ 96.49
879
+ 86.05
880
+ 88.33
881
+ 82.72
882
+ 78.31
883
+ 79.17
884
+ 67.99
885
+ 82.01
886
+ Swin UNETR [61]
887
+ 95.44
888
+ 93.38
889
+ 93.40
890
+ 77.12
891
+ 74.14
892
+ 96.39
893
+ 80.12
894
+ 90.02
895
+ 82.93
896
+ 75.08
897
+ 81.02
898
+ 64.98
899
+ 82.06
900
+ Universal Model
901
+ 95.82
902
+ 94.28
903
+ 94.11
904
+ 79.52
905
+ 76.55
906
+ 97.05
907
+ 92.59
908
+ 91.63
909
+ 86.00
910
+ 77.54
911
+ 83.17
912
+ 70.52
913
+ 86.13
914
+ Table 3. Benchmark on BTCV validation dataset. For a fair comparison, we did not use model ensemble during the evaluation. All
915
+ experiments are under the same data splits, computing resources, and testing conditions. Evaluated on the BTCV validation set, Universal
916
+ Model achieves the overall best performance, yielding at least +3.5% DSC improvement over the state-of-the-art method.
917
Methods | Liver Tumor Sen. (LiTS) | Spec. (CHAOS) | Kidney Tumor Sen. (KiTS) | Spec. (CHAOS) | Pancreatic Tumor Sen. (MSD-Pancreas) | Spec. (Pancreas-CT)
nnU-Net [28] | 94.44 | 75.00 | 96.88 | 85.00 | 95.18 | 88.75
UNet++ [85] | 94.44 | 80.00 | N/A | N/A | N/A | N/A
UNETR [22] | 86.11 | 95.00 | 93.75 | 95.00 | 90.36 | 81.25
Swin UNETR [61] | 91.67 | 85.00 | 97.91 | 70.00 | 97.59 | 87.50
Universal Model | 88.89 | 95.00 | 91.67 | 95.00 | 93.98 | 91.25

Table 4. Tumor detection performance. The CT scans in LiTS [3], KiTS [26], and MSD Pancreas [1] contain tumors in the liver, kidney, and pancreas, respectively. These scans are used to compute the sensitivity of tumor detection. To perform an alternative check of specificity, we use CHAOS [63] and Pancreas-CT [52]. It has been confirmed that CHAOS has no liver or kidney tumor, and Pancreas-CT has no pancreatic tumor in the CT scans. Universal Model achieves a higher specificity than most baseline models, while maintaining a compelling sensitivity. High specificity is clinically important because it reveals that Universal Model does not predict many false positives.
967
974
4.2. Tumor Detection on Five Datasets

Figure 3 demonstrates that Universal Model surpasses Swin UNETR by a large margin in segmenting liver, pancreatic, and colon tumors, leading to 14%, 8%, and 12% improvements in DSC scores, respectively. However, DSC scores cannot faithfully reveal the tumor detection performance because, by default, they are only calculated on abnormal CT scans (with tumors) [28]. The AI might generate numerous false positives when encountering normal CT scans (which have no tumor) [53]. Therefore, we also evaluate patient-level Sensitivity and Specificity for detecting the three types of tumors. To obtain normal CT scans, we adopt the CHAOS and Pancreas-CT datasets because these two datasets provide pathological verification that no tumors are present [52, 63]. Table 4 indicates that Universal Model achieves the highest Specificity of 95%, 95%, and 91% on normal CT scans, while maintaining a compelling Sensitivity of 89%, 92%, and 94% on abnormal CT scans. Moreover, Rows 1–3 in Figure 5 depict the prediction of small/medium/large pancreatic tumors; Row 4 shows that Universal Model can precisely segment the pancreas and reduce the number of false positives on normal CT scans. Compared with dataset-specific models, the smaller number of false positives predicted by our Universal Model underlines the necessity of assembling diverse datasets, benefiting from not only sufficient positive examples for training but also a larger number of negative examples as a control.
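A minimal sketch of these patient-level metrics, assuming a scan is called positive when its predicted tumor mask exceeds a small volume threshold; the threshold is our assumption, not a value stated in the paper.

```python
import numpy as np

def detect(tumor_mask: np.ndarray, min_voxels: int = 50) -> int:
    """1 if the predicted tumor mask is large enough to count as a detection."""
    return int(tumor_mask.sum() >= min_voxels)

def sensitivity_specificity(preds, labels):
    """preds, labels: per-patient 0/1 arrays (1 = tumor present)."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    tp = ((preds == 1) & (labels == 1)).sum()
    tn = ((preds == 0) & (labels == 0)).sum()
    fn = ((preds == 0) & (labels == 1)).sum()
    fp = ((preds == 1) & (labels == 0)).sum()
    return tp / max(tp + fn, 1), tn / max(tn + fp, 1)
```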
1004
5. Intriguing Properties

5.1. Efficiency: FLOPs vs. DSC Scores

It is clinically important to make AI models faster [9, 17]. The number of floating-point operations (FLOPs) is used to indicate the inference cost. Figure 6 presents a speed-performance plot, showing that Universal Model is computationally more efficient than dataset-specific models (>6× faster), while maintaining the highest DSC score of 74% on average.
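One way to reproduce such a measurement is to count FLOPs on a single 96 × 96 × 96 patch with fvcore, as sketched below; fvcore is our choice of counter, not necessarily the authors' tool, and the single-channel input is an assumption.

```python
import torch
from fvcore.nn import FlopCountAnalysis

def count_gflops(model: torch.nn.Module, in_ch: int = 1, roi: int = 96) -> float:
    """GFLOPs of one forward pass on a roi^3 patch."""
    model.eval()
    dummy = torch.randn(1, in_ch, roi, roi, roi)
    with torch.no_grad():
        flops = FlopCountAnalysis(model, dummy)
        return flops.total() / 1e9
```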
1013
[Figure 5 panels: CT scan, ground truth, and zoomed-in predictions of Universal Model, Swin UNETR, UNETR, nnU-Net, UNesT, and nnFormer, for tumors of length 60.4 mm, 16.2 mm, and 9.6 mm.]
1030
Figure 5. Pancreatic tumor detection. Qualitative visualizations of the proposed Universal Model and five competitive baseline methods. We review the detection results of tumors from smaller to larger sizes (Rows 1–3). When it comes to a CT scan without tumors from other hospitals, the Universal Model generalizes well in organ segmentation and does not generate many false positives of tumors (Row 4; §4.2; §5.2). The visualization of tumor detection in other organs (e.g., liver tumors and kidney tumors) can be found in Appendix Figures 8–9.
1034
3D-IRCADb | spleen | kidneyR | kidneyL | gallbladder | liver | stomach | pancreas | lungR | lungL | mDSC* | mDSC
SegResNet [55] | 94.08 | 80.01 | 91.60 | 69.59 | 95.62 | 89.53 | 79.19 | N/A | N/A | 85.66 | N/A
nnFormer [79] | 93.75 | 88.20 | 90.11 | 62.22 | 94.93 | 87.93 | 78.90 | N/A | N/A | 85.14 | N/A
UNesT [76] | 94.02 | 84.90 | 94.95 | 68.58 | 95.10 | 89.28 | 79.94 | N/A | N/A | 86.68 | N/A
TransBTS [65] | 91.33 | 76.22 | 88.87 | 62.50 | 94.42 | 85.87 | 63.90 | N/A | N/A | 80.44 | N/A
TransUNet [7] | 94.09 | 82.07 | 89.92 | 63.07 | 95.55 | 89.12 | 79.53 | N/A | N/A | 84.76 | N/A
UNETR [22] | 92.23 | 91.28 | 94.19 | 56.20 | 94.25 | 86.73 | 72.56 | 91.56 | 93.31 | 83.92 | 85.81
Swin UNETR [61] | 93.51 | 66.34 | 90.63 | 61.05 | 94.73 | 87.37 | 73.77 | 93.72 | 92.17 | 81.05 | 83.69
Universal Model | 95.76 | 94.99 | 94.42 | 88.79 | 97.03 | 89.36 | 80.99 | 97.71 | 96.72 | 91.62 | 92.86

JHH | spleen | kidneyR | kidneyL | gallbladder | liver | stomach | pancreas | aorta | postcava | vein | mDSC
SegResNet [55] | 93.11 | 89.92 | 87.84 | 74.62 | 95.37 | 87.90 | 76.33 | 84.05 | 79.36 | 57.13 | 82.56
nnFormer [79] | 86.71 | 87.03 | 84.28 | 63.37 | 91.64 | 73.18 | 71.88 | 84.73 | 78.61 | 55.31 | 77.67
UNesT [76] | 93.82 | 90.42 | 89.04 | 76.40 | 95.30 | 89.65 | 78.97 | 84.36 | 79.61 | 59.70 | 83.73
TransBTS [65] | 85.47 | 81.58 | 82.00 | 60.58 | 92.50 | 72.29 | 63.25 | 83.47 | 75.07 | 55.38 | 75.16
TransUNet [7] | 94.63 | 89.86 | 89.61 | 77.28 | 95.85 | 88.95 | 79.98 | 85.06 | 81.02 | 59.76 | 84.20
UNETR [22] | 91.89 | 89.07 | 87.60 | 66.97 | 91.48 | 83.18 | 70.56 | 82.92 | 75.20 | 57.53 | 79.64
Swin UNETR [61] | 92.23 | 84.34 | 82.95 | 74.06 | 94.91 | 82.28 | 71.17 | 85.50 | 79.18 | 55.11 | 80.17
Universal Model | 93.94 | 91.53 | 90.21 | 84.15 | 96.25 | 92.51 | 82.72 | 77.35 | 79.64 | 57.10 | 84.54

Table 5. Generalizability: Results on external datasets. We evaluate Universal Model and eight models on data from two external sources without additional fine-tuning or domain adaptation. mDSC* is the average dice score of the first seven organs. Compared with dataset-specific models, our Universal Model performs more robustly on CT scans taken from a variety of scanners, protocols, and institutes.
1251
5.2. Generalizability: External Datasets Results

A key expectation of reliable medical AI models is their generalizability, i.e., performance on new data across many hospitals, rather than performance tailored to a single dataset [42, 45]. Compared with dataset-specific models, Universal Model was trained on an order of magnitude more diverse CT scans, therefore demonstrating significantly better generalizability (i.e., directly testing the model on external data without adaptation or fine-tuning). We conduct the evaluation on a public dataset, 3D-IRCADb, and a private dataset, JHH, which are never seen in training and can be regarded as external validation. As shown in Table 5, Universal Model substantially outperforms the previous methods on 3D-IRCADb and JHH, with DSC improvements of 5% and 4%, respectively.
1265
1304
Method | TotalSeg vertebrae | TotalSeg cardiac | TotalSeg muscles | TotalSeg organs | JHH cardiac | JHH organs
Scratch | 81.06 | 84.47 | 88.83 | 86.42 | 71.63 | 89.08
MedicalNet [10] | 82.28 | 87.40 | 91.36 | 86.90 | 58.07 | 77.68
Models Gen. [87] | 85.12 | 86.51 | 89.96 | 85.78 | 74.25 | 88.64
Swin UNETR [61] | 86.23 | 87.91 | 92.39 | 88.56 | 67.85 | 87.21
UniMiSS [70] | 85.12 | 88.96 | 92.86 | 88.51 | 69.33 | 82.53
Universal Model | 86.49 | 89.57 | 94.43 | 88.95 | 72.06 | 89.37

Table 6. Transferability: Fine-tuning performance. Fine-tuning Universal Model significantly outperforms learning from scratch on two downstream datasets (i.e., TotalSegmentator and JHH). Moreover, Universal Model, trained with image segmentation as the proxy task, can extract better visual representations—more related to segmentation tasks—than other pre-trained models developed in the medical domain. Due to space limits, the per-class evaluations of TotalSegmentator and JHH can be found in Appendix Tables 9–12 and Table 13, respectively.
1357
Figure 6. Efficiency: FLOPs vs. DSC. We plot the average DSC score on the 6 MSD tasks against FLOPs (floating-point operations). The FLOPs are computed based on an input with spatial size 96 × 96 × 96. The size of each circle indicates the number of parameters ('#P'). In inference, Universal Model is faster than nnU-Net (2nd best in performance) and Swin UNETR (3rd best) by 19× and 6×, measured by FLOPs, respectively.
1364
1365
5.3. Transferability: Fine-tuning Results

Another property of Universal Model is that it serves as a powerful pre-trained model for segmentation. Pre-trained directly on the assembled dataset and then fine-tuned on other datasets, Universal Model achieves the highest DSC among pre-training methods, with 86.49%, 89.57%, 94.43%, and 88.95% on the four tasks of the TotalSegmentator dataset, as shown in Table 6. In contrast, unrelated pre-training tasks (reconstruction, colorization, jigsaw) cannot capture the fine-grained information needed for segmentation.
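A minimal sketch of reusing Universal Model as a pre-trained backbone for a downstream segmentation task; the checkpoint layout and the "encoder." key prefix are illustrative assumptions, not the released checkpoint format.

```python
import torch

def load_pretrained_backbone(model: torch.nn.Module,
                             ckpt_path: str = "universal_model.pth") -> torch.nn.Module:
    """Initialize a downstream model with the Universal Model's encoder weights."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)          # hypothetical checkpoint layout
    encoder_state = {k: v for k, v in state.items() if k.startswith("encoder.")}
    # strict=False leaves the randomly initialized decoder/head weights in place.
    model.load_state_dict(encoder_state, strict=False)
    return model
```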
1378
5.4. Extensibility: Incremental Learning Results

Unlike existing models (e.g., [28, 84]), introducing additional classes (CLIP embeddings) does not affect the segmentation model architecture, allowing Universal Model to perform incremental learning. As a result, Universal Model is easily extended to upcoming datasets with novel anatomical structures annotated. As a demonstration, our model can be easily adapted to segment the renal veins in the JHH dataset (which are absent in the current public datasets). To segment the left and right renal veins, two new CLIP embeddings for these classes are concatenated to the original CLIP embeddings. Another text-driven segmentor for the new classes is also added to the model, and the segmentation output for the new classes is generated by this new segmentor. The other parameters of the model remain the same. In this way, the performance of the previously learned organs and tumors will not be affected. We perform continual fine-tuning of the Universal Model (pre-trained on 14 datasets) using the AdamW optimizer with a learning rate of 1e−4 for 100 epochs. Our model achieves DSC of 28.6% and 21.5% on the left and right renal veins, respectively.
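The extension step can be sketched as follows: append the new classes' CLIP embeddings, freeze all previously learned parameters, and train only the newly added text-driven segmentor. Variable names are illustrative.

```python
import torch

def extend_to_new_classes(class_emb: torch.Tensor, new_emb: torch.Tensor,
                          model: torch.nn.Module,
                          new_segmentor: torch.nn.Module) -> torch.Tensor:
    """class_emb: (K, 512) existing CLIP embeddings; new_emb: (K_new, 512)."""
    all_emb = torch.cat([class_emb, new_emb], dim=0)  # architecture is unchanged
    for p in model.parameters():          # previously learned organs/tumors
        p.requires_grad_(False)           # are left untouched
    for p in new_segmentor.parameters():  # only the new text-driven segmentor
        p.requires_grad_(True)            # is trained
    return all_emb
```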
1400
6. Conclusion

In this work, we present a CLIP-Driven Universal Model for abdominal organ segmentation and tumor detection. This represents the first attempt to explore the potential of medical AI models on an assembled dataset, which consists of 3,672 CT scans with 25 partially annotated organs and 6 tumors. To solve the label inconsistency and orthogonality problems and to build an extensible framework, this work proposes a Universal Model with CLIP embedding to enable a flexible and powerful segmentor, allowing the model to learn from the assembly of partially labeled datasets for high-performing organ segmentation and tumor detection (ranking first in both MSD and BTCV). More importantly, we have demonstrated that CLIP embedding can establish the anatomical relationship and enables the model to be extended to novel organs and tumors. Finally, the experimental results validate several clinically important merits of the CLIP-Driven Universal Model, such as compelling efficiency, generalizability, transferability, and extensibility.

Acknowledgments. This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the National Natural Science Foundation of China (62001410). We thank Tong Li and Wenxuan Li for their feedback and constructive suggestions at several stages of the project.
1424
[Figure 6 plot: DSC (%) vs. FLOPs (G), with circle sizes denoting parameter counts (#P) for Universal Model, Swin UNETR, nnU-Net, UNETR, nnFormer, and DoDNet.]

References
1452
+ FLOPs (G)References
1453
+ [1] Michela Antonelli, Annika Reinke, Spyridon Bakas, Keyvan
1454
+ Farahani, Bennett A Landman, Geert Litjens, Bjoern Menze,
1455
+ Olaf Ronneberger, Ronald M Summers, Bram van Ginneken,
1456
+ et al. The medical segmentation decathlon. arXiv preprint
1457
+ arXiv:2106.05735, 2021. 1, 6, 13, 14
1458
+ [2] Xiaoyu Bai and Yong Xia. An end-to-end framework for
1459
+ universal lesion detection with missing annotations. In 2022
1460
+ 16th IEEE International Conference on Signal Processing
1461
+ (ICSP), volume 1, pages 411–415. IEEE, 2022. 2
1462
+ [3] Patrick Bilic, Patrick Ferdinand Christ, Eugene Vorontsov,
1463
+ Grzegorz Chlebus, Hao Chen, Qi Dou, Chi-Wing Fu,
1464
+ Xiao Han, Pheng-Ann Heng, J¨urgen Hesser, et al.
1465
+ The
1466
+ liver tumor segmentation benchmark (lits). arXiv preprint
1467
+ arXiv:1901.04056, 2019. 1, 6, 13, 14
1468
+ [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Sub-
1469
+ biah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakan-
1470
+ tan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-
1471
+ guage models are few-shot learners. Advances in neural in-
1472
+ formation processing systems, 33:1877–1901, 2020. 3
1473
+ [5] Pierre Chambon, Christian Bluethgen, Curtis P Langlotz,
1474
+ and Akshay Chaudhari. Adapting pretrained vision-language
1475
+ foundational models to medical imaging domains.
1476
+ arXiv
1477
+ preprint arXiv:2210.04133, 2022. 2, 3
1478
+ [6] Junyu Chen, Eric Frey, and Yong Du.
1479
+ Class-incremental
1480
+ learning for multi-organ segmentation, 2022. 2
1481
+ [7] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan
1482
+ Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou.
1483
+ Transunet: Transformers make strong encoders for medi-
1484
+ cal image segmentation. arXiv preprint arXiv:2102.04306,
1485
+ 2021. 6, 7
1486
+ [8] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Flo-
1487
+ rian Schroff, and Hartwig Adam.
1488
+ Encoder-decoder with
1489
+ atrous separable convolution for semantic image segmenta-
1490
+ tion. arXiv:1802.02611, 2018. 6
1491
+ [9] Po-Hsuan Cameron Chen, Krishna Gadepalli, Robert Mac-
1492
+ Donald, Yun Liu, Shiro Kadowaki, Kunal Nagpal, Timo
1493
+ Kohlberger, Jeffrey Dean, Greg S Corrado, Jason D Hipp,
1494
+ et al. An augmented reality microscope with real-time ar-
1495
+ tificial intelligence integration for cancer diagnosis. Nature
1496
+ medicine, 25(9):1453–1457, 2019. 6
1497
+ [10] Sihong Chen, Kai Ma, and Yefeng Zheng. Med3d: Trans-
1498
+ fer learning for 3d medical image analysis. arXiv preprint
1499
+ arXiv:1904.00625, 2019. 8, 18, 19
1500
+ [11] S Chen, K Ma, and Y Zheng. Transfer learning for 3d medi-
1501
+ cal image analysis. arXiv preprint arXiv, 2019. 3
1502
+ [12] Xuming Chen, Shanlin Sun, Narisu Bai, Kun Han, Qianqian
1503
+ Liu, Shengyu Yao, Hao Tang, Chupeng Zhang, Zhipeng Lu,
1504
+ Qian Huang, et al. A deep learning-based auto-segmentation
1505
+ system for organs-at-risk on whole-body computed tomogra-
1506
+ phy images for radiation therapy. Radiotherapy and Oncol-
1507
+ ogy, 160:175–184, 2021. 2
1508
+ [13] Xuxin Chen, Ximin Wang, Ke Zhang, Kar-Ming Fung,
1509
+ Theresa C Thai, Kathleen Moore, Robert S Mannel, Hong
1510
+ Liu, Bin Zheng, and Yuchen Qiu. Recent advances and clin-
1511
+ ical applications of deep learning in medical image analysis.
1512
+ Medical Image Analysis, page 102444, 2022. 1
1513
+ [14] Alexis Conneau and Guillaume Lample. Cross-lingual lan-
1514
+ guage model pretraining. Advances in neural information
1515
+ processing systems, 32, 2019. 3
1516
+ [15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina
1517
+ Toutanova.
1518
+ Bert:
1519
+ Pre-training of deep bidirectional
1520
+ transformers for language understanding.
1521
+ arXiv preprint
1522
+ arXiv:1810.04805, 2018. 3
1523
+ [16] Sedigheh Eslami, Gerard de Melo, and Christoph Meinel.
1524
+ Does clip benefit visual question answering in the medical
1525
+ domain as much as it does in the general domain?
1526
+ arXiv
1527
+ preprint arXiv:2112.13906, 2021. 3
1528
+ [17] Andre Esteva, Katherine Chou, Serena Yeung, Nikhil Naik,
1529
+ Ali Madani, Ali Mottaghi, Yun Liu, Eric Topol, Jeff Dean,
1530
+ and Richard Socher. Deep learning-enabled medical com-
1531
+ puter vision. NPJ digital medicine, 4(1):1–9, 2021. 6
1532
+ [18] Xi Fang and Pingkun Yan. Multi-organ segmentation over
1533
+ partially labeled datasets with multi-scale feature abstrac-
1534
+ tion. IEEE Transactions on Medical Imaging, 39(11):3619–
1535
+ 3629, 2020. 2, 3
1536
+ [19] Cheng Guo and Felix Berkhahn. Entity embeddings of cat-
1537
+ egorical variables. arXiv preprint arXiv:1604.06737, 2016.
1538
+ 2
1539
+ [20] Pengfei Guo, Puyang Wang, Jinyuan Zhou, Shanshan Jiang,
1540
+ and Vishal M Patel.
1541
+ Multi-institutional collaborations for
1542
+ improving deep learning-based magnetic resonance image
1543
+ reconstruction using federated learning. In Proceedings of
1544
+ the IEEE/CVF Conference on Computer Vision and Pattern
1545
+ Recognition, pages 2423–2432, 2021. 4
1546
+ [21] Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher,
1547
+ Zongwei Zhou, Michael B Gotway, and Jianming Liang.
1548
+ Transferable visual words:
1549
+ Exploiting the semantics of
1550
+ anatomical patterns for self-supervised learning.
1551
+ IEEE
1552
+ Transactions on Medical Imaging, 2021. 5
1553
+ [22] Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong
1554
+ Yang, Andriy Myronenko, Bennett Landman, Holger R
1555
+ Roth, and Daguang Xu. Unetr: Transformers for 3d medical
1556
+ image segmentation. In Proceedings of the IEEE/CVF Win-
1557
+ ter Conference on Applications of Computer Vision, pages
1558
+ 574–584, 2022. 6, 7
1559
+ [23] Wanji He, Xin Wang, Lin Wang, Yelin Huang, Zhiwen Yang,
1560
+ Xuan Yao, Xin Zhao, Lie Ju, Liao Wu, Lin Wu, et al. Incre-
1561
+ mental learning for exudate and hemorrhage segmentation
1562
+ on fundus images. Information Fusion, 73:157–164, 2021. 2
1563
+ [24] Yufan He, Dong Yang, Holger Roth, Can Zhao, and Daguang
1564
+ Xu. Dints: Differentiable neural network topology search
1565
+ for 3d medical image segmentation.
1566
+ In Proceedings of
1567
+ the IEEE/CVF Conference on Computer Vision and Pattern
1568
+ Recognition, pages 5841–5850, 2021. 5
1569
+ [25] Nicholas Heller, Sean McSweeney, Matthew Thomas Peter-
1570
+ son, Sarah Peterson, Jack Rickman, Bethany Stai, Resha Tej-
1571
+ paul, Makinna Oestreich, Paul Blake, Joel Rosenberg, et al.
1572
+ An international challenge to use artificial intelligence to de-
1573
+ fine the state-of-the-art in kidney and kidney tumor segmen-
1574
+ tation in ct imaging., 2020. 1, 13, 14
1575
+ [26] Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara,
1576
+ Edward Walczak, Keenan Moore, Heather Kaluzniak, Joel
1577
+ Rosenberg, Paul Blake, Zachary Rengel, Makinna Oestre-
1578
+ ich, et al. The kits19 challenge data: 300 kidney tumor cases
1579
+ 9
1580
+
1581
+ with clinical context, ct semantic segmentations, and surgical
1582
+ outcomes. arXiv preprint arXiv:1904.00445, 2019. 6
1583
[27] Rui Huang, Yuanjie Zheng, Zhiqiang Hu, Shaoting Zhang, and Hongsheng Li. Multi-organ segmentation via co-training weight-averaged models from few-organ datasets. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 146–155. Springer, 2020. 3
[28] Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021. 5, 6, 8
[29] Wei Ji, Shuang Yu, Junde Wu, Kai Ma, Cheng Bian, Qi Bi, Jingjing Li, Hanruo Liu, Li Cheng, and Yefeng Zheng. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12341–12351, 2021. 5
[30] Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. Neural Information Processing Systems (NeurIPS), 2022. 1, 2, 13, 14
[31] Mintong Kang, Yongyi Lu, Alan L Yuille, and Zongwei Zhou. Label-assemble: Leveraging multiple datasets with partial labels. arXiv preprint arXiv:2109.12265, 2021. 3
[32] Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, and Taesup Kim. Scalable neural architecture search for 3d medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 220–228. Springer, 2019. 5
[33] Bennett Landman, Zhoubing Xu, Juan Eugenio Igelsias, Martin Styner, Thomas Robin Langerak, and Arno Klein. Multi-atlas labeling beyond the cranial vault-workshop and challenge. 2017. 2
[34] Bennett Landman, Zhoubing Xu, J Igelsias, Martin Styner, T Langerak, and Arno Klein. Miccai multi-atlas labeling beyond the cranial vault–workshop and challenge. In Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, volume 5, page 12, 2015. 13, 14
[35] Pengbo Liu, Yang Deng, Ce Wang, Yuan Hui, Qian Li, Jun Li, Shiwei Luo, Mengke Sun, Quan Quan, Shuxin Yang, et al. Universal segmentation of 33 anatomies. arXiv preprint arXiv:2203.02098, 2022. 2
[36] Pengbo Liu, Li Xiao, and S Kevin Zhou. Incremental learning for multi-organ segmentation with partially labeled datasets. arXiv preprint arXiv:2103.04526, 2021. 2
[37] Zhe Liu, Kai Han, Kaifeng Xue, Yuqing Song, Lu Liu, Yangyang Tang, and Yan Zhu. Improving ct-image universal lesion detection with comprehensive data and feature enhancements. Multimedia Systems, pages 1–12, 2022. 2
[38] Xiangde Luo, Wenjun Liao, Jianghong Xiao, Tao Song, Xiaofan Zhang, Kang Li, Guotai Wang, and Shaoting Zhang. Word: Revisiting organs segmentation in the whole abdominal region. arXiv preprint arXiv:2111.02403, 2021. 1, 2, 13, 14
[39] Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. Abdomenct-1k: Is abdominal organ segmentation a solved problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 1, 2, 13, 14
[40] Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Qi Zhang, and Xuanjing Huang. Template-free prompt tuning for few-shot ner. arXiv preprint arXiv:2109.13532, 2021. 13
[41] Tarun Mattikalli, Tejas Sudharshan Mathai, and Ronald M Summers. Universal lesion detection in ct scans using neural network ensembles. In Medical Imaging 2022: Computer-Aided Diagnosis, volume 12033, pages 864–868. SPIE, 2022. 2
[42] John Mongan, Linda Moy, and Charles E. Kahn. Checklist for artificial intelligence in medical imaging (claim): A guide for authors and reviewers. Radiology: Artificial Intelligence, 2(2):e200029, 2020. PMID: 33937821. 7
[43] Varun Naga, Tejas Sudharshan Mathai, Angshuman Paul, and Ronald M Summers. Universal lesion detection and classification using limited data and weakly-supervised self-training. In Workshop on Medical Image Learning with Limited and Noisy Data, pages 55–64. Springer, 2022. 2
[44] Stanislav Nikolov, Sam Blackwell, Alexei Zverovitch, Ruheena Mendes, Michelle Livne, Jeffrey De Fauw, Yojan Patel, Clemens Meyer, Harry Askham, Bernardino Romera-Paredes, et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv preprint arXiv:1809.04430, 2018. 15
[45] Beau Norgeot, Giorgio Quer, Brett K Beaulieu-Jones, Ali Torkamani, Raquel Dias, Milena Gianfrancesco, Rima Arnaout, Isaac S Kohane, Suchi Saria, Eric Topol, et al. Minimum information about clinical artificial intelligence modeling: the mi-claim checklist. Nature medicine, 26(9):1320–1324, 2020. 7
[46] Mauricio Orbes-Arteaga, Thomas Varsavsky, Carole H Sudre, Zach Eaton-Rosen, Lewis J Haddow, Lauge Sørensen, Mads Nielsen, Akshay Pai, Sébastien Ourselin, Marc Modat, et al. Multi-domain adaptation in brain mri through paired consistency and adversarial learning. In Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data, pages 54–62. Springer, 2019. 4
[47] Kwanyong Park, Sanghyun Woo, Seoung Wug Oh, In So Kweon, and Joon-Young Lee. Per-clip video object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1352–1361, 2022. 3
[48] Francesco Piccialli, Vittorio Di Somma, Fabio Giampaolo, Salvatore Cuomo, and Giancarlo Fortino. A survey on deep learning in medicine: Why, how and when? Information Fusion, 66:111–137, 2021. 1
[49] Ziyuan Qin, Huahui Yi, Qicheng Lao, and Kang Li. Medical image understanding with pretrained vision language models: A comprehensive study. arXiv preprint arXiv:2209.15517, 2022. 3
1709
[50] Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, and Jiwen Lu. Denseclip: Language-guided dense prediction with context-aware prompting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18082–18091, 2022. 3
[51] Blaine Rister, Darvin Yi, Kaushik Shivakumar, Tomomi Nobashi, and Daniel L Rubin. Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data, 7(1):1–9, 2020. 13, 14
[52] Holger R Roth, Le Lu, Amal Farag, Hoo-Chang Shin, Jiamin Liu, Evrim B Turkbey, and Ronald M Summers. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In International conference on medical image computing and computer-assisted intervention, pages 556–564. Springer, 2015. 6, 13, 14
[53] Yiqiu Shen, Farah E Shamout, Jamie R Oliver, Jan Witowski, Kawshik Kannan, Jungkyu Park, Nan Wu, Connor Huddleston, Stacey Wolfson, Alexandra Millet, et al. Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams. Nature communications, 12(1):1–13, 2021. 6
[54] Gonglei Shi, Li Xiao, Yang Chen, and S Kevin Zhou. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Medical Image Analysis, 70:101979, 2021. 2, 3
[55] Md Mahfuzur Rahman Siddiquee and Andriy Myronenko. Redundancy reduction in semantic segmentation of 3d brain tumor mris. arXiv preprint arXiv:2111.00742, 2021. 7
[56] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. arXiv preprint arXiv:2212.13138, 2022. 3
[57] L Soler, A Hostettler, V Agnus, A Charnoz, J Fasquel, J Moreau, A Osswald, M Bouhadjar, and J Marescaux. 3d image reconstruction for comparison of algorithm database: A patient specific anatomical and medical image database. IRCAD, Strasbourg, France, Tech. Rep, 2010. 14
[58] Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey N Chiang, Zhihao Wu, and Xiaowei Ding. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, page 101693, 2020. 1
[59] Hao Tang, Xingwei Liu, Kun Han, Xiaohui Xie, Xuming Chen, Huang Qian, Yong Liu, Shanlin Sun, and Narisu Bai. Spatial context-aware self-attention model for multi-organ segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 939–949, 2021. 2
[60] Yucheng Tang, Riqiang Gao, Ho Hin Lee, Shizhong Han, Yunqiang Chen, Dashan Gao, Vishwesh Nath, Camilo Bermudez, Michael R Savona, Richard G Abramson, et al. High-resolution 3d abdominal segmentation with random patch network fusion. Medical image analysis, 69:101894, 2021. 6
[61] Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20730–20740, 2022. 3, 5, 6, 7, 8, 14, 15, 18, 19
[62] Zhi Tian, Chunhua Shen, and Hao Chen. Conditional convolutions for instance segmentation. In European conference on computer vision, pages 282–298. Springer, 2020. 4
[63] Vanya V Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O Aboagye, Andrea G Rockall, Daniel Rueckert, and Ben Glocker. Multi-modal learning from unpaired images: Application to multi-organ segmentation in ct and mri. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 547–556. IEEE, 2018. 6, 13, 14
[64] Shanshan Wang, Cheng Li, Rongpin Wang, Zaiyi Liu, Meiyun Wang, Hongna Tan, Yaping Wu, Xinfeng Liu, Hui Sun, Rui Yang, et al. Annotation-efficient deep learning for automatic medical image segmentation. Nature communications, 12(1):1–13, 2021. 1
[65] Wenxuan Wang, Chen Chen, Meng Ding, Hong Yu, Sen Zha, and Jiangyun Li. Transbts: Multimodal brain tumor segmentation using transformer. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 109–119. Springer, 2021. 7
[66] Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, and Tongliang Liu. Cris: Clip-driven referring image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11686–11695, 2022. 3
[67] Zifeng Wang, Zhenbang Wu, Dinesh Agarwal, and Jimeng Sun. Medclip: Contrastive learning from unpaired medical images and text. arXiv preprint arXiv:2210.10163, 2022. 3
[68] Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, and Martin Segeroth. Totalsegmentator: robust segmentation of 104 anatomical structures in ct images. arXiv preprint arXiv:2208.05868, 2022. 1, 2, 14
[69] Yutong Xie, Jianpeng Zhang, Chunhua Shen, and Yong Xia. Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation. International conference on medical image computing and computer-assisted intervention, 2021. 6
[70] Yutong Xie, Jianpeng Zhang, Yong Xia, and Qi Wu. Unimiss: Universal medical self-supervised learning via breaking dimensionality barrier. In European Conference on Computer Vision, pages 558–575. Springer, 2022. 8, 18, 19
[71] Ke Yan, Jinzheng Cai, Adam P Harrison, Dakai Jin, Jing Xiao, and Le Lu. Universal lesion detection by learning from multiple heterogeneously labeled datasets. arXiv preprint arXiv:2005.13753, 2020. 2
[72] Ke Yan, Jinzheng Cai, Youjing Zheng, Adam P Harrison, Dakai Jin, You-bao Tang, Yu-Xing Tang, Lingyun Huang, Jing Xiao, and Le Lu. Learning from multiple datasets with heterogeneous and partial labels for universal lesion detection in ct. IEEE Transactions on Medical Imaging, 2020. 3
[73] Ke Yan, Youbao Tang, Yifan Peng, Veit Sandfort, Mohammadhadi Bagheri, Zhiyong Lu, and Ronald M Summers. Mulan: multitask universal lesion analysis network for joint lesion detection, tagging, and segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 194–202. Springer, 2019. 2
1841
[74] Wenjun Yan, Lu Huang, Liming Xia, Shengjia Gu, Fuhua Yan, Yuanyuan Wang, and Qian Tao. Mri manufacturer shift and adaptation: increasing the generalizability of deep learning segmentation for mr images acquired with different scanners. Radiology: Artificial Intelligence, 2(4):e190195, 2020. 4
[75] Qihang Yu, Dong Yang, Holger Roth, Yutong Bai, Yixiao Zhang, Alan L Yuille, and Daguang Xu. C2fnas: Coarse-to-fine neural architecture search for 3d medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4126–4135, 2020. 5
[76] Xin Yu, Qi Yang, Yinchi Zhou, Leon Y Cai, Riqiang Gao, Ho Hin Lee, Thomas Li, Shunxing Bao, Zhoubing Xu, Thomas A Lasko, et al. Unest: Local spatial representation learning with hierarchical transformer for efficient medical segmentation. arXiv preprint arXiv:2209.14378, 2022. 7
[77] Jianpeng Zhang, Yutong Xie, Yong Xia, and Chunhua Shen. Dodnet: Learning to segment multi-organ and tumors from multiple partially labeled datasets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1195–1204, 2021. 2, 3, 4
[78] Wenhua Zhang, Jun Zhang, Xiyue Wang, Sen Yang, Junzhou Huang, Wei Yang, Wenping Wang, and Xiao Han. Merging nucleus datasets by correlation-based cross-training. Medical Image Analysis, page 102705, 2022. 2
[79] Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, and Yizhou Yu. nnformer: Interleaved transformer for volumetric segmentation. arXiv preprint arXiv:2109.03201, 2021. 7
[80] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337–2348, 2022. 13
[81] Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, and Alan L Yuille. Prior-aware neural network for partially-supervised multi-organ segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10672–10681, 2019. 2, 3, 6
[82] Zongwei Zhou. Towards Annotation-Efficient Deep Learning for Computer-Aided Diagnosis. PhD thesis, Arizona State University, 2021. 1
[83] Zongwei Zhou, Michael B Gotway, and Jianming Liang. Interpreting medical images. In Intelligent Systems in Medicine and Health, pages 343–371. Springer, 2022. 1
[84] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11. Springer, 2018. 8
[85] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE transactions on medical imaging, 39(6):1856–1867, 2019. 6
[86] Zongwei Zhou, Vatsal Sodha, Jiaxuan Pang, Michael B Gotway, and Jianming Liang. Models genesis. Medical image analysis, 67:101840, 2021. 2, 5
[87] Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Michael B Gotway, and Jianming Liang. Models genesis: Generic autodidactic models for 3d medical image analysis. In International conference on medical image computing and computer-assisted intervention, pages 384–393. Springer, 2019. 8, 18, 19
[88] Zengle Zhu, Mintong Kang, Alan Yuille, and Zongwei Zhou. Assembling and exploiting large-scale existing labels of common thorax diseases for improved covid-19 classification using chest radiographs. In Radiological Society of North America (RSNA), 2022. 3
[89] Martin Zlocha, Ben Glocker, and Jonathan Passerat-Palmbach. Universal lesion detector: Deep learning for analysing medical scans. 2019. 2
1925
Appendix: CLIP-Driven Universal Model

Abstract. In this supplementary material, we provide additional information about the CLIP-Driven Universal Model and the assembly of 14 public datasets, as well as more detailed experimental results than those in the main paper. Appendix A discusses the influence of the medical prompt template. Appendix B provides the specifications for the assembly of datasets. Appendix C elaborates on the implementation details, including the data augmentations, model network structures, and evaluation metrics used in the main paper. Appendix D supplements the qualitative and quantitative analysis in the main paper, including the visualization of kidney tumors and liver tumors, public results on the MSD leaderboard, and complete evaluation results of the transfer learning experiment. Finally, Appendix E visualizes several open challenges discussed in §3.3 of the main paper.
1942
A. Medical Prompt Template

To fully explore the effect of templates on CLIP embedding, an experiment is performed on the whole assembly of datasets, as shown in Table 1. Four text templates are employed to provide the context: "V1: A computerized tomography of a [CLS]."; "V2: There is [CLS] in this computerized tomography."; "V3: This computerized tomography has a [CLS]."; "V4: A photo of a [CLS].". The effectiveness of the prompt template differs slightly from the toy experiment. With increasing organ numbers, templates V1 and V2 still show better performance in encoding the relationship, whereas template V3 deteriorates the results. In addition, the widely used template V4 can also promote segmentation performance.

As is well known, the prompt template is a crucial factor for text models [40, 80]. How to select an appropriate template is still an open problem for medical image text-vision models. We encourage more future work to explore this area.
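For illustration, the class embeddings for any of these templates can be produced with OpenAI's CLIP package as sketched below; the ViT-B/32 text encoder is our choice for the example, not necessarily the authors'.

```python
import clip
import torch

TEMPLATES = [
    "A computerized tomography of a {}.",             # V1
    "There is {} in this computerized tomography.",   # V2
    "This computerized tomography has a {}.",         # V3
    "A photo of a {}.",                               # V4
]

model, _ = clip.load("ViT-B/32", device="cpu")
classes = ["liver", "pancreas", "liver tumor"]
with torch.no_grad():
    tokens = clip.tokenize([TEMPLATES[0].format(c) for c in classes])
    class_emb = model.encode_text(tokens)  # (3, 512) embeddings, one per class
```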
1960
B. Assembly of Datasets

The assembly of datasets consists of 14 publicly available datasets for training, plus 2 public datasets and 1 large-scale private dataset for testing (summarized in Table 7). It is non-trivial to assemble datasets annotated by various institutions since the annotation protocols are inconsistent. As mentioned in the main paper, we unify the label index for all datasets. The corresponding relationship is as follows: (Spleen, 1); (Right Kidney, 2); (Left Kidney, 3); (Gall Bladder, 4); (Esophagus, 5); (Liver, 6); (Stomach, 7); (Aorta, 8); (Postcava, 9); (Portal Vein and Splenic Vein, 10); (Pancreas, 11); (Right Adrenal Gland, 12); (Left Adrenal Gland, 13); (Duodenum, 14); (Hepatic Vessel, 15); (Right Lung, 16); (Left Lung, 17); (Colon, 18); (Intestine, 19); (Rectum, 20); (Bladder, 21); (Prostate/Uterus, 22); (Head of Femur Left, 23); (Head of Femur Right, 24); (Celiac Trunk, 25); (Kidney Tumor, 26); (Liver Tumor, 27); (Pancreas Tumor, 28); (Hepatic Vessel Tumor, 29); (Lung Tumor, 30); (Colon Tumor, 31); (Kidney Cyst, 32). First, we map all the datasets onto this standard index template. Then, for the datasets that do not distinguish between left and right organs (KiTS, WORD, AbdomenCT-1K, and CT-ORG), we split each such organ (Kidney, Adrenal Gland, and Lung) into left and right parts with a script. In addition, we take inclusion relations into consideration, e.g., an organ tumor is part of the organ, and the hepatic vessel is inside the liver. We formulate each organ segmentation result as a binary mask; thus, we can organize the segmentation ground truth for these overlapping organs independently, in a binary-mask manner (see the sketch below).
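The sketch below illustrates this unification for one dataset: local indices are mapped onto the 32-class template and every class is stored as an independent binary mask, so overlapping structures (e.g., a tumor inside its host organ) can coexist. The LiTS local indices shown here are illustrative.

```python
import numpy as np

UNIVERSAL = {"liver": 6, "liver_tumor": 27}   # subset of the 32-class template
LITS_LOCAL = {1: "liver", 2: "liver_tumor"}   # illustrative local indices

def to_universal_masks(label_volume: np.ndarray, local_map: dict,
                       num_classes: int = 32) -> np.ndarray:
    """label_volume: (D, H, W) integer labels -> (num_classes, D, H, W) binary masks."""
    masks = np.zeros((num_classes,) + label_volume.shape, dtype=bool)
    for local_idx, name in local_map.items():
        masks[UNIVERSAL[name] - 1] |= label_volume == local_idx  # 1-based template
    # Tumor voxels lie inside the liver, so the liver mask should include them.
    masks[UNIVERSAL["liver"] - 1] |= masks[UNIVERSAL["liver_tumor"] - 1]
    return masks
```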
1990
Pancreas-CT [52] consists of 82 contrast-enhanced abdominal CT volumes. This dataset only provides the pancreas label, annotated by an experienced radiologist, and all CT scans have no pancreatic tumor.

LiTS [3] contains 131 and 70 contrast-enhanced 3D abdominal CT scans for training and testing, respectively. The dataset was acquired by different scanners and protocols at six different clinical sites, with a largely varying in-plane resolution from 0.55 to 1.0 mm and slice spacing from 0.45 to 6.0 mm.

KiTS [25] includes 210 training cases and 90 testing cases with annotations provided by the University of Minnesota Medical Center. Each CT scan has one or more kidney tumors.

AbdomenCT-1K [39] consists of 1112 CT scans from five datasets with liver, kidney, spleen, and pancreas annotations.

CT-ORG [51] is composed of 140 CT images containing 6 organ classes, from 8 different medical centers. Most of the images exhibit liver lesions, both benign and malignant.

CHAOS [63] provides 20 patients for multi-organ segmentation. All CT scans have no liver tumor.

MSD CT Tasks [1] include the liver, lung, pancreas, colon, hepatic vessel, and spleen tasks, for a total of 947 CT scans with 4 organs and 5 tumors.

BTCV [34] consists of 50 abdominal CT scans from metastatic liver cancer patients or post-operative ventral hernia patients. They are collected from the Vanderbilt University Medical Center.

AMOS22 [30] is the abbreviation of the multi-modality abdominal multi-organ segmentation challenge of 2022. The AMOS dataset contains 500 CT scans with voxel-level annotations of 15 abdominal organs.

WORD [38] collects 150 CT scans from 150 patients before radiation therapy in a single center. All of them are scanned by a SIEMENS CT scanner without appearance enhancement. Each CT volume consists of 159 to 330 slices of 512 × 512 pixels.
2026
+ 13
2027
+
2028
Datasets | # Targets | # Scans | Annotated Organs or Tumors
1. Pancreas-CT [52] | 1 | 82 | Pancreas
2. LiTS [3] | 2 | 201 | Liver, Liver Tumor*
3. KiTS [25] | 2 | 300 | Kidney, Kidney Tumor*
4. AbdomenCT-1K [39] | 4 | 1000 | Spleen, Kidney, Liver, Pancreas
5. CT-ORG [51] | 4 | 140 | Lung, Liver, Kidneys and Bladder
6. CHAOS [63] | 4 | 40 | Liver, Left Kidney, Right Kidney, Spl
7-11. MSD CT Tasks [1] | 9 | 947 | Spl, Liver and Tumor*, Lung Tumor*, Colon Tumor*, Pan and Tumor*, Hepatic Vessel and Tumor*
12. BTCV [34] | 13 | 50 | Spl, RKid, LKid, Gall, Eso, Liv, Sto, Aor, IVC, R&SVeins, Pan, RAG, LAG
13. AMOS22 [30] | 15 | 500 | Spl, RKid, LKid, Gall, Eso, Liv, Sto, Aor, IVC, Pan, RAG, LAG, Duo, Bla, Pro/UTE
14. WORD [38] | 16 | 150 | Spl, RKid, LKid, Gall, Eso, Liv, Sto, Pan, RAG, Duo, Col, Int, Rec, Bla, LFH, RFH
15. 3D-IRCADb [57] | 13 | 20 | Liv, Liv Cyst, RLung, LLung, Venous, PVein, Aor, Spl, RKid, LKid, Gall, IVC
16. TotalSegmentator [68] | 104 | 1,024 | Clavicula, Humerus, Scapula, Rib 1-12, Vertebrae C1-7, Vertebrae T1-9, Vertebrae L1-5, Hip, Sacrum, Femur, Aorta, Pulmonary Artery, Right Ventricle, Right Atrium, Left Atrium, Left Ventricle, Myocardium, PVein, SVein, IVC, Iliac Artery, Iliac Vena, Brain, Trachea, Lung Upper Lobe, Lung Middle Lobe, Lung Lower Lobe, AG, Spl, Liv, Gall, Pan, Kid, Eso, Sto, Duo, Small Bowel, Colon, Bla, Autochthon, Iliopsoas, Gluteus Minimus, Gluteus Medius, Gluteus Maximus
17. JHH (private) | 21 | 5,038 | Aor, AG, CBD, Celiac AA, Colon, Duo, Gall, IVC, LKid, RKid, Liv, Pan, Pan Duct, SMA, Small Bowel, Spl, Sto, Veins, Kid LtRV, Kid RtRV, CBD Stent, PDAC*, PanNET*, Pancreatic Cyst*

Table 7. The information for an assembly of datasets. We have developed a Universal Model from an assembly of 1-14 public datasets. The official test and validation sets of Medical Segmentation Decathlon (MSD) and Beyond the Cranial Vault (BTCV) are used to benchmark the performance of organ segmentation (§4.1) and tumor detection (§4.2). 3D-IRCADb (15) and TotalSegmentator (16) are used for independent evaluation of model generalizability (§5.2) and transferability (§5.3). In addition to public datasets, the Universal Model has also been evaluated on a large-scale private dataset (17), consisting of 5,038 CT scans with 21 annotated organs, to investigate the extensibility to new classes (§5.4). This list will continue to grow as more annotated datasets become available.
3D-IRCADb [57] contains 20 venous-phase enhanced CT scans. Each CT scan has various annotations, and only the annotated organs are tested to validate the model's generalizability.

TotalSegmentator [68] collects 1024 CT scans randomly sampled from PACS over the timespan of the last 10 years. The dataset contains CT images with different sequences (native, arterial, portal venous, late phase, dual-energy), with and without contrast agent, with different bulb voltages, with different slice thicknesses and resolutions, and with different kernels (soft-tissue kernel, bone kernel).

JHH (private) contains 5038 CT scans with 21 annotated organs; each case was scanned by contrast-enhanced CT in both venous and arterial phases, acquired on Siemens MDCT scanners. The JHH dataset is used to investigate the extensibility to new classes.
C. Implementation Details

C.1. Data Augmentation

Our data augmentation is implemented in Python with MONAI (https://monai.io/). The orientation of CT scans is changed into specified axcodes. Isotropic spacing is adopted to re-slice each scan to the same voxel size of 1.5 × 1.5 × 1.5 mm³. We truncate the intensity in each scan to the range [−175, 250] and linearly normalize it to [0, 1]. Since the valid part is only a portion of the whole medical image, we crop the foreground based on the images. During training, we crop random fixed-size 96 × 96 × 96 regions whose centers are foreground or background voxels according to a pre-defined ratio. We also randomly rotate the input patch by 90 degrees with probability 0.1 and shift the intensity by a 0.1 offset with probability 0.2. To avoid confusion between organs on the right and left sides, we do not use mirroring augmentation.
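A sketch of this pipeline expressed with standard MONAI transforms is given below; the pos/neg sampling ratio and num_samples are assumptions of ours, not values taken from the released code.

    from monai import transforms

    train_transforms = transforms.Compose([
        transforms.LoadImaged(keys=["image", "label"]),
        transforms.Orientationd(keys=["image", "label"], axcodes="RAS"),
        transforms.Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5),
                            mode=("bilinear", "nearest")),
        transforms.ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250,
                                        b_min=0.0, b_max=1.0, clip=True),
        transforms.CropForegroundd(keys=["image", "label"], source_key="image"),
        transforms.RandCropByPosNegLabeld(keys=["image", "label"],
                                          label_key="label",
                                          spatial_size=(96, 96, 96),
                                          pos=1, neg=1, num_samples=2),
        transforms.RandRotate90d(keys=["image", "label"], prob=0.10, max_k=3),
        transforms.RandShiftIntensityd(keys=["image"], offsets=0.10, prob=0.20),
        # No mirroring/flip augmentation: left and right organs must stay distinct.
    ])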
Task | Organ/Tumor | SwinUNETR [61] | Ours
Task 03 | Liver | 94.12±2.34 | 96.49±0.23
Task 03 | Liver Tumor | 57.86±4.72 | 71.94±3.74
Task 06 | Lung Tumor | 68.90±5.44 | 67.15±5.81
Task 07 | Pancreas | 80.06±0.83 | 82.70±1.96
Task 07 | Panc. Tumor | 52.53±3.76 | 60.82±10.2
Task 08 | Hepat. Ves. | 62.33±2.44 | 62.55±3.64
Task 08 | Ves. Tumor | 68.56±3.82 | 69.39±2.29
Task 09 | Spleen | 95.80±0.56 | 96.71±0.21
Task 10 | Col. Tumor | 50.45±10.1 | 62.14±17.8

Table 8. The 5-fold cross-validation performance on MSD. Tabular comparison between the Universal Model and Swin UNETR [61] (previously ranked first on the MSD leaderboard). The performance is evaluated by DSC scores.
C.2. Network Structures

Text branch. We apply the pre-trained text encoder "ViT-B/32" of CLIP (https://github.com/openai/CLIP) as the text branch. Since the CLIP embedding depends only on the dictionary, which is fixed, we can extract and store the text features in advance, removing the overhead of the text encoder during training and inference.

Vision branch. We adopt Swin UNETR as the vision encoder. Swin UNETR consists of 4 attention stages, each comprising 2 transformer blocks, and 5 convolution stages built from CNN-based blocks. In each attention stage, a patch merging layer reduces the resolution by a factor of 2. Stage 1 consists of a linear embedding layer and transformer blocks that maintain the number of tokens at H/2 × W/2 × D/2. A patch merging layer groups patches with resolution 2 × 2 × 2 and concatenates them, resulting in a 4C-dimensional feature embedding. A linear layer then down-samples the resolution by reducing the dimension to 2C. The same procedure continues in stages 2, 3, and 4 [61]. The text-based controller is a single convolutional layer which takes as input the CLIP embedding and the global pooling feature from the last convolution stage of the vision encoder.
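A schematic of the text-based controller is sketched below. The dimensions (512-d CLIP embedding, 768-d vision feature, 153 output parameters) are illustrative assumptions rather than the released configuration, and the class name is ours.

    import torch
    import torch.nn as nn

    class TextDrivenController(nn.Module):
        """Single 1x1x1 convolution mapping the concatenated CLIP class
        embedding and globally pooled vision feature to the parameters of a
        small class-specific prediction head (a sketch, not the exact release).
        """
        def __init__(self, clip_dim=512, vision_dim=768, num_params=153):
            super().__init__()
            self.controller = nn.Conv3d(clip_dim + vision_dim, num_params,
                                        kernel_size=1)

        def forward(self, clip_embedding, vision_feature):
            # clip_embedding: (B, clip_dim); vision_feature: (B, vision_dim, D, H, W)
            pooled = vision_feature.mean(dim=(2, 3, 4))          # global average pooling
            cond = torch.cat([clip_embedding, pooled], dim=1)    # (B, clip_dim + vision_dim)
            return self.controller(cond[..., None, None, None])  # (B, num_params, 1, 1, 1)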
C.3. Evaluation Metrics

The Dice similarity coefficient (DSC) and Normalized Surface Distance (NSD) are used as measurements for 3D segmentation results. The DSC metric is defined as

\[ \mathrm{DSC} = \frac{2\sum_{i=1}^{I} Y_i \hat{Y}_i}{\sum_{i=1}^{I} Y_i + \sum_{i=1}^{I} \hat{Y}_i}, \tag{3} \]

where Y and Ŷ denote the ground-truth and predicted voxel values. The details of Normalized Surface Distance (NSD) can be found in Sec. 4.6 of [44].
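Equation (3) translates directly into code; a small sketch on binary numpy masks (function name and the epsilon guard are ours, the paper does not specify the empty-mask case):

    import numpy as np

    def dice_score(y_true, y_pred, eps=1e-8):
        """Dice similarity coefficient of Eq. (3) for binary masks."""
        y_true = y_true.astype(bool).ravel()
        y_pred = y_pred.astype(bool).ravel()
        intersection = np.logical_and(y_true, y_pred).sum()
        return 2.0 * intersection / (y_true.sum() + y_pred.sum() + eps)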
D. Additional Evaluations

Table 8 shows the detailed numerical comparison between the Universal Model and Swin UNETR. Tables 9-12 and Table 13 show the per-class evaluation on TotalSegmentator and JHH, respectively, which validates the transferability of the proposed Universal Model. Figure 7 exhibits the contour-line comparison between the Universal Model and two human experts. The model predictions are roughly similar to the human annotations, which validates the effectiveness of the pseudo labels generated by our Universal Model. Figure 9 and Figure 8 show several kidney and liver tumor cases compared among the proposed Universal Model and four competitive baseline methods. Our method not only detects small and large tumors in various organs but also avoids generating false-positive tumor predictions.
E. Discussion of Open Challenges

The first open challenge is the inconsistent annotation protocol. The annotation standard differs from institution to institution. For example, the aorta annotation in AbdomenCT-12organ differs from that in other datasets, as shown in Figure 10: a part of the upper aorta region is missing in AbdomenCT-12organ, while the annotation is complete in BTCV and AMOS. Addressing this challenge requires several experienced radiology experts to re-annotate the CT scans.

The second challenge is the long-tail problem. We count the proportion of each organ and tumor in Figure 11. The assembly of datasets has a severely long-tailed distribution, which leads to unsatisfactory performance on the tumor classes. Mitigating the long-tail distribution would contribute to more robust tumor detection.
Figure 7. Contour-line comparison between pseudo labels and two human experts. The red line represents the annotation from Doctor 1; the green line indicates the annotation from Doctor 2; the blue line shows the results generated by the Universal Model. The predictions for these organs are comparable with the human experts' annotations.
[Figure 7/8 panels: five cases with spleen, liver, stomach, gall bladder, and pancreas contours; rows show the CT scan, ground truth, and predictions of ours, Swin UNETR, UNETR, nnU-Net, nnFormer, and UNesT, with zoom-ins of tumors of length 8.5-22.1 mm.]
Figure 8. Liver tumor detection. Qualitative visualizations of the proposed Universal Model and four competitive baseline methods. We review the detection results for tumors from smaller to larger sizes (Rows 1-4). The Universal Model succeeds in detecting small tumors ignored by other methods and in detecting multiple tumors in one CT scan. In addition, it avoids false-positive predictions, which validates the practicability of the Universal Model.
[Figure 9 panels: CT scan, ground truth, and predictions of ours, Swin UNETR, UNETR, nnU-Net, nnFormer, and UNesT, with zoom-ins of kidney tumors of length 17.8-137.2 mm.]
Figure 9. Kidney tumor detection. Qualitative visualizations of the proposed Universal Model and four competitive baseline methods. We review the detection results for tumors from smaller to larger sizes (Rows 1-4). The Universal Model performs well not only on the kidneys (red region) but also on kidney tumors (green region) and cysts (blue region).
Method | L5 | L4 | L3 | L2 | L1 | T12 | T11 | T10 | T9 | T8 | T7 | T6
Scratch | 86.68 | 88.37 | 89.83 | 84.28 | 91.98 | 87.45 | 88.29 | 86.78 | 83.50 | 75.70 | 77.73 | 75.84
MedicalNet [10] | 91.72 | 91.01 | 86.03 | 84.73 | 91.52 | 89.98 | 89.06 | 89.35 | 85.71 | 82.99 | 81.54 | 79.74
Models Gen. [87] | 89.64 | 89.24 | 89.38 | 82.85 | 90.79 | 88.62 | 90.11 | 90.43 | 89.22 | 85.21 | 80.83 | 77.40
Swin UNETR [61] | 89.56 | 90.80 | 93.08 | 86.38 | 94.35 | 89.65 | 92.02 | 91.99 | 89.65 | 82.20 | 85.01 | 81.06
UniMiSS [70] | 89.20 | 91.21 | 94.16 | 86.61 | 91.57 | 87.29 | 90.18 | 90.56 | 88.09 | 83.47 | 80.73 | 76.40
Universal Model | 88.95 | 91.38 | 93.82 | 87.04 | 93.53 | 88.96 | 90.50 | 91.40 | 89.18 | 84.25 | 83.63 | 79.95

Method | T5 | T4 | T3 | T2 | T1 | C7 | C6 | C5 | C4 | C3 | C2 | C1 | Average
Scratch | 73.14 | 72.26 | 77.12 | 80.36 | 85.76 | 83.39 | 69.80 | 70.23 | 69.82 | 85.74 | 83.35 | 78.18 | 81.06
MedicalNet [10] | 77.28 | 76.60 | 76.57 | 80.94 | 85.54 | 83.05 | 76.05 | 73.04 | 80.55 | 74.35 | 74.67 | 72.91 | 82.28
Models Gen. [87] | 79.59 | 78.73 | 82.01 | 84.63 | 90.02 | 88.20 | 81.09 | 78.90 | 78.21 | 89.69 | 88.06 | 80.23 | 85.12
Swin UNETR [61] | 82.33 | 77.74 | 81.78 | 83.53 | 88.22 | 87.81 | 78.38 | 80.36 | 83.00 | 92.68 | 87.97 | 80.16 | 86.23
UniMiSS [70] | 78.97 | 76.60 | 82.33 | 85.14 | 90.04 | 88.68 | 79.18 | 79.17 | 79.00 | 88.19 | 86.38 | 79.80 | 85.12
Universal Model | 83.07 | 78.67 | 82.97 | 86.06 | 90.67 | 88.75 | 77.03 | 80.87 | 83.05 | 92.94 | 88.20 | 80.87 | 86.49

Table 9. The complete evaluation on TotalSegmentator vertebrae. The results are evaluated by DSC. Our Universal Model shows the best transferability.
Method | esophagus | trachea | HM | HA left | HV left | HA right | HV right | PA | brain
Scratch | 84.73 | 90.72 | 85.53 | 91.78 | 91.15 | 90.10 | 88.25 | 87.20 | 93.79
MedicalNet [10] | 89.43 | 94.08 | 88.71 | 93.50 | 92.17 | 90.90 | 90.83 | 89.51 | 95.11
Models Gen. [87] | 87.96 | 93.47 | 87.40 | 93.61 | 92.23 | 92.02 | 89.74 | 89.34 | 94.99
Swin UNETR [61] | 89.77 | 94.37 | 88.85 | 94.42 | 92.99 | 92.61 | 90.40 | 88.91 | 95.14
UniMiSS [70] | 90.45 | 94.51 | 90.29 | 94.34 | 93.70 | 93.10 | 91.46 | 89.67 | 94.99
Universal Model | 90.97 | 94.71 | 90.88 | 94.64 | 93.72 | 93.30 | 91.66 | 90.80 | 95.34

Method | IA left | IA right | IV left | IV right | small bow. | duodenum | colon | UB | face | Average
Scratch | 80.32 | 79.78 | 79.80 | 81.69 | 81.97 | 72.21 | 82.51 | 89.59 | 69.40 | 84.47
MedicalNet [10] | 87.06 | 84.90 | 86.93 | 86.46 | 83.14 | 72.01 | 84.22 | 90.43 | 73.85 | 87.40
Models Gen. [87] | 85.71 | 83.09 | 85.77 | 85.79 | 81.75 | 69.37 | 85.25 | 90.31 | 69.42 | 86.51
Swin UNETR [61] | 88.26 | 86.44 | 87.13 | 87.59 | 83.29 | 70.71 | 87.50 | 89.93 | 74.08 | 87.91
UniMiSS [70] | 89.18 | 87.81 | 89.04 | 88.55 | 84.83 | 74.74 | 88.16 | 91.83 | 74.76 | 88.96
Universal Model | 89.89 | 88.54 | 89.58 | 89.27 | 84.85 | 76.23 | 89.06 | 92.07 | 76.81 | 89.57

Table 10. The complete evaluation on TotalSegmentator cardiac structures. The results are evaluated by DSC. Our Universal Model shows the best transferability. Abbreviations: HM (heart myocardium), HA (heart atrium), HV (heart ventricle), PA (pulmonary artery), IA (iliac artery), IV (iliac vena), UB (urinary bladder).
Method | Humerus L | Humerus R | Scapula L | Scapula R | Clav. L | Clav. R | Femur L | Femur R | Hip L | Hip R | Sacrum
Scratch | 84.27 | 84.44 | 91.71 | 89.78 | 80.38 | 75.81 | 93.41 | 93.02 | 92.90 | 88.66 | 83.63
MedicalNet [10] | 87.25 | 85.67 | 88.68 | 92.62 | 94.35 | 93.96 | 84.85 | 96.59 | 96.98 | 96.31 | 95.19
Models Gen. [87] | 90.61 | 79.73 | 88.56 | 92.06 | 91.19 | 92.57 | 86.08 | 93.57 | 85.35 | 82.40 | 87.91
Swin UNETR [61] | 88.32 | 86.35 | 90.82 | 93.88 | 94.90 | 94.52 | 85.92 | 97.71 | 97.42 | 97.49 | 95.73
UniMiSS [70] | 89.73 | 92.30 | 91.72 | 94.77 | 94.57 | 93.66 | 84.92 | 97.67 | 97.35 | 97.11 | 96.18
Universal Model | 91.32 | 93.87 | 93.11 | 95.59 | 95.00 | 95.88 | 86.79 | 98.48 | 98.04 | 98.32 | 96.94

Method | GMa L | GMa R | GMe L | GMe R | GMi L | GMi R | Aotu. L | Aotu. R | Iliopsoas L | Iliopsoas R | Average
Scratch | 95.53 | 91.78 | 85.27 | 94.80 | 86.54 | 93.01 | 95.17 | 93.44 | 87.99 | 83.95 | 88.83
MedicalNet [10] | 94.69 | 95.72 | 92.17 | 89.15 | 89.76 | 90.77 | 94.45 | 94.24 | 80.29 | 84.94 | 91.36
Models Gen. [87] | 96.19 | 92.06 | 90.07 | 94.99 | 92.12 | 92.60 | 95.86 | 95.93 | 85.64 | 83.82 | 89.96
Swin UNETR [61] | 95.32 | 96.34 | 93.57 | 89.87 | 90.75 | 91.74 | 95.16 | 94.86 | 83.53 | 86.00 | 92.39
UniMiSS [70] | 95.53 | 96.37 | 93.80 | 90.28 | 90.87 | 93.02 | 95.17 | 95.48 | 85.71 | 84.02 | 92.86
Universal Model | 96.68 | 96.99 | 95.55 | 91.36 | 93.19 | 94.52 | 96.31 | 96.34 | 86.92 | 88.89 | 94.29

Table 11. The complete evaluation on TotalSegmentator muscles. The results are evaluated by DSC. Our Universal Model shows the best transferability. Abbreviations: Clav. (clavicula), GMa (gluteus maximus), GMe (gluteus medius), GMi (gluteus minimus), Aotu. (autochthon).
Method | spleen | Kidney R | Kidney L | gallbladder | liver | stomach | aorta | IVC | PSV
Scratch | 93.58 | 94.09 | 87.73 | 73.86 | 96.79 | 89.17 | 90.68 | 82.10 | 71.35
MedicalNet [10] | 95.54 | 92.43 | 90.86 | 79.36 | 97.10 | 91.53 | 90.12 | 86.18 | 73.34
Models Gen. [87] | 95.60 | 94.37 | 88.51 | 78.39 | 97.39 | 91.68 | 93.18 | 85.94 | 74.58
Swin UNETR [61] | 89.77 | 94.37 | 88.85 | 74.42 | 92.99 | 92.61 | 90.40 | 88.91 | 75.14
UniMiSS [70] | 95.78 | 94.75 | 89.35 | 79.14 | 97.39 | 91.87 | 93.50 | 86.19 | 75.26
Universal Model | 96.24 | 94.67 | 91.43 | 81.48 | 97.63 | 92.76 | 92.22 | 87.87 | 76.10

Method | pancreas | AG R | AG L | LUL L | LLL L | LUL R | LML R | LLL R | Average
Scratch | 80.80 | 78.94 | 72.83 | 95.88 | 91.66 | 87.17 | 88.91 | 93.71 | 86.42
MedicalNet [10] | 83.11 | 79.15 | 69.22 | 93.64 | 89.88 | 86.38 | 87.08 | 92.40 | 86.90
Models Gen. [87] | 82.97 | 83.05 | 75.49 | 95.79 | 92.90 | 90.10 | 91.06 | 94.65 | 85.78
Swin UNETR [61] | 85.24 | 81.86 | 74.33 | 95.06 | 92.16 | 88.37 | 89.45 | 94.04 | 88.56
UniMiSS [70] | 82.11 | 79.37 | 73.12 | 96.08 | 93.18 | 90.31 | 91.99 | 95.43 | 88.51
Universal Model | 85.21 | 82.25 | 75.01 | 95.04 | 92.28 | 88.21 | 89.69 | 94.06 | 88.95

Table 12. The complete evaluation on TotalSegmentator organs. The results are evaluated by DSC. Our Universal Model shows the best transferability. Abbreviations: IVC (inferior vena cava), PSV (portal vein and splenic vein), AG (adrenal gland), LUL (lung upper lobe), LLL (lung lower lobe), LML (lung middle lobe).
Method | spleen | Kidney R | Kidney L | gallbladder | liver | stomach
Scratch | 95.66 | 94.43 | 93.69 | 86.14 | 96.74 | 94.30
MedicalNet [10] | 91.08 | 88.63 | 86.60 | 61.23 | 93.29 | 88.22
Models Gen. [87] | 95.02 | 93.44 | 93.07 | 84.73 | 94.12 | 94.05
Swin UNETR [61] | 94.71 | 93.95 | 92.27 | 81.75 | 96.00 | 92.79
UniMiSS [70] | 88.35 | 91.49 | 90.41 | 82.91 | 93.80 | 89.57
Universal Model | 95.98 | 94.71 | 94.00 | 87.18 | 96.87 | 94.50

Method | aorta | IVC | pancreas | PSV | AG | CAA | Average
Scratch | 87.68 | 79.73 | 85.03 | 68.48 | 66.61 | 50.61 | 81.98
MedicalNet [10] | 83.27 | 75.32 | 70.67 | 46.82 | 41.69 | 26.87 | 68.88
Models Gen. [87] | 89.46 | 81.50 | 84.23 | 71.79 | 70.46 | 54.23 | 82.81
Swin UNETR [61] | 87.43 | 80.89 | 81.19 | 66.71 | 65.04 | 36.38 | 79.55
UniMiSS [70] | 88.50 | 77.98 | 71.86 | 61.68 | 51.82 | 49.16 | 76.10
Universal Model | 88.36 | 79.98 | 85.82 | 69.38 | 65.88 | 50.53 | 82.24

Table 13. The complete evaluation on JHH. The results are evaluated by DSC. Abbreviations: IVC (inferior vena cava), PSV (portal vein and splenic vein), AG (adrenal gland), CAA (celiac abdominal aorta).
[Figure 10 panels: CT scans with ground truth and zoom-in views from AbdomenCT-12organ, BTCV, and AMOS.]

Figure 10. Inconsistent label protocol. The aorta annotation standard is inconsistent between AbdomenCT-12organ and other datasets. A part of the upper aorta region is missing in AbdomenCT-12organ, while the aorta annotation is complete in BTCV and AMOS.
Figure 11. The proportion of the 32 classes. We observe that the assembly of datasets presents a severe long-tail distribution.

[Figure 11 axis: percentage of voxels per class, for Liver, Right Lung, Left Lung, Intestine, Colon, Kidney Tumor, Hepatic Vessel Tumor, Spleen, Pancreas, Right Head of Femur, Left Head of Femur, Stomach, Bladder, Right Kidney, Liver Tumor, Left Kidney, Prostate, Hepatic Vessel, Duodenum, Rectum, Postcava, Aorta, Pancreas Tumor, Gall Bladder, Lung Tumor, Kidney Cyst, Colon Tumor, Esophagus, Portal Vein and Splenic Vein, Left Adrenal Gland, Right Adrenal Gland, Celiac Trunk.]
CNAyT4oBgHgl3EQf4foN/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
CtE2T4oBgHgl3EQfSAdG/content/2301.03787v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85cf0a7c228e505786dc4d6233710dc206a665ab51e325a8a09c88aac42d2190
3
+ size 551682
DNE3T4oBgHgl3EQfUwpD/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6c7140ef880a775bd42aaa9158c1e59aeb67a08a2319f64d3a4a1beb33a3a9c
3
+ size 1507373
DNE3T4oBgHgl3EQfUwpD/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb27f12c1ae8317995f915904f8db89539d91198d666513ca940f74b616058ce
3
+ size 60008
DdA0T4oBgHgl3EQfAv9Y/content/tmp_files/2301.01966v1.pdf.txt ADDED
@@ -0,0 +1,886 @@
arXiv:2301.01966v1 [math.PR] 5 Jan 2023

Noname manuscript No. (will be inserted by the editor)

Ruin Probabilities for a Sparre Andersen Model with Investments: the Case of Annuity Payments

Yuri Kabanov · Platon Promyslov

Yuri Kabanov: Lomonosov Moscow State University, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, Moscow, Russia, and Université de Franche-Comté, Laboratoire de Mathématiques, UMR CNRS 6623, 16 Route de Gray, 25030 Besançon, France. E-mail: [email protected]

Platon Promyslov: Lomonosov Moscow State University, Moscow, Russia. E-mail: [email protected]

January 6, 2023

Dedicated to the memory of Tomas Björk.

Abstract This note is a complement to the paper by Eberlein, Kabanov, and Schmidt on the asymptotics of the ruin probability in a Sparre Andersen non-life insurance model with investments in a risky asset whose price follows a geometric Lévy process. Using techniques from the theory of semi-Markov processes, we extend the result of the mentioned paper to the case of annuities and to models with two-sided jumps.

Keywords Ruin probabilities · Sparre Andersen model · Actuarial models with investments · Renewal processes · Annuities · Distributional equations

Mathematics Subject Classification (2010) 60G44

JEL Classification G22 · G23
1 Introduction

In the classical Sparre Andersen model of an insurance company the counts of claims form a renewal process. In recent studies, see [1], [2] and the references therein, this model was enriched by the assumption that the capital reserve of the insurance company is fully invested in a risky asset whose price evolves as a geometric Lévy process. The paper [2] by Eberlein, Kabanov, and Schmidt considered the non-life insurance version of such a model. It was shown that under rather mild hypotheses on the business process the asymptotic behavior of the (ultimate) ruin probability is essentially the same as in the Cramér–Lundberg model with risky investments. Namely, the ruin probability decays, up to a multiplicative constant, as the function u^{−β}, where u, the initial capital, tends to infinity. The decay rate β depends only on the characteristics of the price process. The method of analysis in [2] relies heavily on the assumption that the risk process has only downward jumps and, therefore, crosses the zero level only by a jump. This specific feature allows a straightforward reduction to a discrete-time Markovian framework.

The approach of [2] left open the question whether the results also hold in the case of upward jumps. This is a feature of the annuity model, where the risk process crosses the zero level in a continuous way. In the less popular mixed model with two-sided jumps the crossing may happen in both ways. Of course, a positive answer is expected here: this was already established for the Cramér–Lundberg models with investments analyzed by Kabanov, Pergamenshchikov, and Pukhlyakov, [5], [7], as well as for the very general Lévy Ornstein–Uhlenbeck models introduced and studied by Paulsen, see [8], [9], [10], [11], and the more recent paper [6] by Kabanov and Pergamenshchikov.

Our note, based on the study [2], gives a positive answer for a Sparre Andersen model with investments in its annuity version, with upward jumps. We also discuss briefly the changes needed to obtain a result for a model with both upward and downward jumps, which serves to describe the evolution of the capital reserve of a company with two types of business activity. Our technique is based on the embedding of a semi-Markov process into a Markov one by increasing the dimensionality.

Throughout the paper we use standard notations of stochastic calculus and concepts discussed in detail in [6], [2].
2 The model

The Sparre Andersen model with risky investments considered here contains two ingredients.

1. The price process of a risky financial asset S = (S_t)_{t≥0}. It is of the form S = E(R), where E is the stochastic exponential and R is a Lévy process with the Lévy triplet (a, σ², Π) such that Π((−∞, −1]) = 0. The latter condition ensures that the jumps ΔR > −1, hence the price S > 0. In such a case, S = e^V, where V = ln S is again a Lévy process, given by the formula

V_t = at − (1/2)σ²t + σW_t + h ∗ (µ − ν)_t + (ln(1 + x) − h) ∗ µ_t,   (2.1)

where h(x) := x I_{|x|≤1}. The Lévy triplet of V is (a_V, σ², Π_V) with

a_V = a − σ²/2 + Π(h(ln(1 + x)) − h)

and Π_V = Π ∘ ϕ^{−1}, ϕ: x ↦ ln(1 + x).

It is assumed that R is non-deterministic, that is, at least one of the parameters σ² or Π is not zero.
2. The "business process". It is a compound renewal process P = (P_t) independent of S. Classically, it can be written in the form

P_t = ct + Σ_{i=1}^{N_t} ξ_i,   (2.2)

where N = (N_t) is a counting renewal process whose interarrival times (lengths of the inter-jump intervals) U_i := T_i − T_{i−1}, i ≥ 2, form an i.i.d. sequence independent of the i.i.d. sequence of random variables ξ_i = ΔP_{T_i}, i ≥ 1, with common law F_ξ, F_ξ({0}) = 0. In the sequel, a "generic" r.v. with such a law is denoted by ξ. As usual, T_0 := 0. The common law of the U_i is denoted by F, and we use the same character for its distribution function.
The risk process X = X^u, u > 0, is defined as the solution of the non-homogeneous linear stochastic equation

X_t = u + ∫_0^t X_{s−} dR_s + P_t.   (2.3)

The ruin probability is the function of the initial capital Ψ(u) := P[τ^u < ∞], where τ^u := inf{t : X^u_t ≤ 0}.

The cases of major interest are: c > 0 and ξ_i < 0 (a non-life insurance model, considered in [2]), and c < 0 and ξ_i > 0 (an annuity payments model). The latter case, studied here, is often interpreted as a model of a venture company paying salaries and selling innovations. The case where F_ξ charges both half-axes can be viewed as a model of a company combining two types of activity, see, e.g., [1]; we study this case as well.

If c ≥ 0 and ξ > 0, the ruin never happens, and this case is excluded from consideration.
Standing assumption. The cumulant generating function H: q ↦ ln E e^{−qV_{T_1}} of the random variable V_{T_1} has a root β > 0 not lying on the boundary of the effective domain of H. That is, if int dom H = (q, q̄), there is a unique root β ∈ (0, q̄).

We are looking for conditions under which

0 < liminf_{u→∞} u^β Ψ(u) ≤ limsup_{u→∞} u^β Ψ(u) < ∞.   (2.4)
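As an illustration (this example is ours, not part of the original text), consider the classical Black–Scholes case R_t = at + σW_t with Π = 0 and σ ≠ 0, so that V_t = (a − σ²/2)t + σW_t and E e^{−qV_t} = e^{tκ(q)} with
\[
\kappa(q) = -q\Bigl(a-\frac{\sigma^2}{2}\Bigr)+\frac{q^2\sigma^2}{2}.
\]
Since T_1 is independent of V, H(q) = ln E[e^{T_1 κ(q)}] vanishes exactly when κ(q) = 0, whose positive root
\[
\beta = \frac{2a}{\sigma^2}-1
\]
exists when 2a/σ² > 1 and recovers the decay rate known for the Cramér–Lundberg model with investments in a geometric Brownian motion.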
The paper [2] treats the case of non-life insurance. We formulate its main result in a more transparent form.

Theorem 2.1 ([2]) Suppose that the drift c ≥ 0, the law F_ξ is concentrated on (−∞, 0), E[|ξ|^β] < ∞, and E[e^{εT_1}] < ∞ for some ε > 0. Then (2.4) holds if at least one of the following conditions is fulfilled:
1. σ ≠ 0 or ξ is unbounded from below.
2a. Π((−1, 0)) > 0 and Π((0, ∞)) > 0.
2b. Π((−1, 0)) = 0 and Π(h) = ∞.
2c. Π((0, ∞)) = 0 and Π(|h|) = ∞.
2d. Π((−∞, 0)) = 0, 0 < Π(h) < ∞, F((0, t)) > 0 for every t > 0.
2e. Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((0, t)) > 0 for every t > 0.
The proof in [2] heavily used the assumption that the business process has a positive drift and negative claims, corresponding to the non-life insurance setting. In such a case the ruin may happen only at an instant of a jump and, therefore, one needs to monitor the risk process only at T_1, T_2, and so on. Such a reduction to a discrete-time ruin model does not work if ξ_i > 0.

In this paper we consider the annuity model of the Sparre Andersen type, where the ruin occurs because of exhausted resources and the risk process reaches zero in a continuous way. The main result can be formulated as follows.

Theorem 2.2 Suppose that the drift c < 0, the law F_ξ is concentrated on (0, ∞), E[ξ^β] < ∞, and E[e^{εT_1}] < ∞ for some ε > 0. Then (2.4) holds if at least one of the following conditions is fulfilled:
1. σ ≠ 0.
2a. Π((−1, 0)) > 0 and Π((0, ∞)) > 0.
2b. Π((−1, 0)) = 0 and Π(h) = ∞.
2c. Π((0, ∞)) = 0 and Π(|h|) = ∞.
2d. Π((−1, 0)) = 0, 0 < Π(h) < ∞, F((t, ∞)) > 0 for every t > 0.
2e. Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((t, ∞)) > 0 for every t > 0.

For the mixed case we have the following result.

Theorem 2.3 Suppose that the drift c ∈ R, the law F_ξ charges both half-lines (−∞, 0) and (0, ∞), E[|ξ|^β] < ∞, and E[e^{εT_1}] < ∞ for some ε > 0. Then (2.4) holds if at least one of the following conditions is fulfilled:
1. σ ≠ 0 or |ξ| is unbounded.
2a. Π((−1, 0)) > 0 and Π((0, ∞)) > 0.
2b. Π((−1, 0)) = 0 and Π(h) = ∞.
2c. Π((0, ∞)) = 0 and Π(|h|) = ∞.
2d. Π((−1, 0)) = 0, 0 < Π(h) < ∞, F((t, ∞)) > 0 for every t > 0 in the case c < 0, and F_ξ((0, ε)) > 0 for every ε > 0 in the case c ≥ 0.
2e. Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((t, ∞)) > 0 for every t > 0 in the case c < 0, and F_ξ((0, ε)) > 0 for every ε > 0 in the case c ≥ 0.
The annuity and the mixed settings require a different approach, inspired by the theory of semi-Markov processes. Namely, we consider the business process P as a component of the two-dimensional Markov process (P, D), where the second component D = D^r is a "clock", i.e. a process measuring the time elapsed since the instant of the last claim. We assume that the law of U = T_1 may be different from the common law of the further interarrival times: at the instant zero a portion r of the interarrival time has already elapsed. This feature admits obvious justifications: e.g., the venture company may change its governance while a project is still in progress.

Here and throughout the paper we use the superscript r to emphasize that the law of a random variable or a process depends on r, usually skipping r = 0.

Formally, the "clock" D^r = (D^r_t) is a process with initial value D^r_0 = r, with D^r_t = r + t on the interval [0, T_1), and D^r_t := t − T^r_n on all other interarrival intervals [T^r_n, T^r_{n+1}), n ≥ 1. That is, the "clock" restarts from zero at each instant T^r_n. We denote by F^r the law of the first interarrival time T^r_1 = T^r_1 − T_0. In accordance with our convention, F^0 = F.

Alternatively, D^r can be represented as the solution of the linear equation

D^r_t = r + t − ∫_{[0,t]} D^r_{s−} dN_s.

Typically, P[T^r_1 > t] = P[T_i > t + r]/P[T_i > r], i > 1. In the case of the exponential distribution, F^r = F for all r ≥ 0 ("absence of memory").

We assume that F^r ≥ F.
Recall that the assumed independence of P^r and R implies that the joint quadratic characteristic [P^r, R] is zero, and the ruin process X^{u,r} can be written in a form resembling the Cauchy formula for solutions of linear differential equations:

X^{u,r}_t = e^{V_t}(u − Y^r_t),   (2.5)

where

Y^r_t := −∫_{(0,t]} E^{−1}_{s−}(R) dP^r_s = −∫_{(0,t]} e^{−V_{s−}} dP^r_s.   (2.6)

The strict positivity of the process E(R) = e^V implies that the ruin time

τ^{u,r} := inf{t ≥ 0 : X^{u,r}_t ≤ 0} = inf{t ≥ 0 : Y^r_t ≥ u}.
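The following short verification of (2.5) is not spelled out in the text and is added here for the reader's convenience. Put Z := u − Y^r, so that dZ = E^{−1}_{−}(R) dP^r. By the product formula,
\[
d(\mathcal{E}Z) = \mathcal{E}_{-}\,dZ + Z_{-}\,d\mathcal{E} + d[\mathcal{E},Z]
= dP^{r} + X^{u,r}_{-}\,dR ,
\]
because dE = E_{−} dR, E_{−}E^{−1}_{−} dP^r = dP^r, and [E, Z] = ∫ E_{−}E^{−1}_{−} d[R, P^r] = 0 by the assumed independence. Hence X^{u,r} = E(R)(u − Y^r) indeed solves (2.3).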
The crucial element of our study is the following

Lemma 2.4 Suppose that Y^r_t → Y^r_∞ almost surely as t → ∞, where Y^r_∞ is a finite random variable such that Ḡ(u, r) := P[Y^r_∞ > u] > 0 for every u > 0 and r ≥ 0. If Ḡ_* := inf_q Ḡ(0, q) > 0, then

Ḡ(u, r) ≤ Ψ(u, r) = Ḡ(u, r) / E[Ḡ(X^{u,r}_{τ^{u,r}}, D^r_{τ^{u,r}}) | τ^{u,r} < ∞] ≤ (1/Ḡ_*) Ḡ(u, r).   (2.7)
Proof. Let τ be an arbitrary stopping time with respect to the filtration (F^{P,D,R}_t). As we assume that the finite limit Y^r_∞ exists, the random variable

Y^r_{τ,∞} := −lim_{N→∞} ∫_{(τ,τ+N]} e^{−(V_{s−}−V_τ)} dP^r_s on {τ < ∞},   Y^r_{τ,∞} := 0 on {τ = ∞},

is well defined. On the set {τ < ∞},

Y^r_{τ,∞} = e^{V_τ}(Y^r_∞ − Y^r_τ) = X^{u,r}_τ + e^{V_τ}(Y^r_∞ − u).   (2.8)

Let ζ be an F^{P,D,R}_τ-measurable random variable. Using the strong Markov property, we get that

P[Y^r_{τ,∞} > ζ, τ < ∞] = E[Ḡ(ζ, D^r_τ) I_{τ<∞}].   (2.9)

Noting that Ψ(u, r) := P[τ^{u,r} < ∞] ≥ P[Y^r_∞ > u] > 0, we deduce from here, using (2.8), that

Ḡ(u, r) = P[Y^r_∞ > u, τ^{u,r} < ∞] = P[Y^r_{τ^{u,r},∞} > X^{u,r}_{τ^{u,r}}, τ^{u,r} < ∞]
= P[τ^{u,r} < ∞] E[Ḡ(X^{u,r}_{τ^{u,r}}, D^r_{τ^{u,r}}) | τ^{u,r} < ∞]
≥ P[τ^{u,r} < ∞] E[Ḡ(0, D^r_{τ^{u,r}}) | τ^{u,r} < ∞]
≥ P[τ^{u,r} < ∞] inf_q Ḡ(0, q),

and we get the result. ✷
In view of the above lemma, the proof of the main theorem is reduced to establishing the existence of the finite limits Y^r_∞ and finding the asymptotics of the tails of their distributions.

Let us introduce the notations

Q^r_k := −∫_{(T^r_{k−1},T^r_k]} e^{−(V_{s−}−V_{T^r_{k−1}})} dP^r_s,   M^r_k := e^{−(V_{T^r_k}−V_{T^r_{k−1}})}.   (2.10)
Lemma 2.5 The random variables Y^r_∞ admit the representations

Y^r_∞ = Q^r_1 + M^r_1 Ỹ^r_∞,

where

Q^r_1 := −∫_{[0,T^r_1]} e^{−V_{s−}} dP^r_s,   M^r_1 := e^{−V_{T^r_1}},   (2.11)

(Q^r_1, M^r_1) and Ỹ^r_∞ are independent, and the laws of Ỹ^r_∞ and Y^0_∞ coincide.

Proof. Note that

Y^r_{T^r_n} = −∫_{[0,T^r_1]} e^{−V_{s−}} dP^r_s − Σ_{k=2}^n e^{−V_{T^r_{k−1}}} ∫_{(T^r_{k−1},T^r_k]} e^{−(V_{s−}−V_{T^r_{k−1}})} dP^r_s
= Q^r_1 + M^r_1 (Q^r_2 + Σ_{k=3}^n M^r_2 ⋯ M^r_{k−1} Q^r_k),

where the random variable in the parentheses is independent of (Q^r_1, M^r_1) and has the same distribution as Y^0_{T_{n−1}}. ✷
Lemma 2.6 Suppose that Y_∞ is unbounded from above. If c < 0, then

inf_q Ḡ(0, q) ≥ E[Ḡ(ξ, 0)] > 0.

If c ∈ R and the distribution functions satisfy F^r ≥ F, then

inf_q Ḡ(0, q) > 0.

Proof. Using Lemma 2.5 we have:

Ḡ(0, r) = P[Y^r_∞ > 0] = P[Q^r_1/M^r_1 + Ỹ^r_∞ > 0]
= ∫ P[|c| e^{V_t} ∫_{[0,t]} e^{−V_s} ds − ξ_1 + Ỹ^r_∞ > 0] F_{T^r_1}(dt)
≥ P[Ỹ^r_∞ > ξ_1] = ∫ P[Ỹ^r_∞ > x] F_ξ(dx) = ∫ Ḡ(x, 0) F_ξ(dx) > 0,

since Y_∞ is unbounded.

An inspection of the proof reveals that the majority of the arguments work, with minor changes, also in the case where c has an arbitrary sign and the law F_ξ charges both (−∞, 0) and (0, ∞). In particular, the proof that the finite limit Y_∞ exists remains the same.

Put f_t := |ξ_1| + |c| t e^{2V*_t}, where V*_t := sup_{s≤t} |V_s|. Then

|Q^r_1|/M^r_1 ≤ |ξ_1| + |c| e^{V_{T^r_1}} ∫_{[0,T^r_1]} e^{−V_s} ds ≤ f_{T^r_1}.

It follows that

Ḡ(0, r) = P[Ỹ^r_∞ > −Q^r_1/M^r_1] = E Ḡ(−Q^r_1/M^r_1, 0) ≥ E Ḡ(|Q^r_1|/M^r_1, 0)
≥ E ∫ Ḡ(f_t, 0) F^r(dt) = −E ∫ F^r(t) dḠ(f_t, 0)
≥ −E ∫ F(t) dḠ(f_t, 0) ≥ E ∫ Ḡ(f_t, 0) F(dt) > 0,

where we used the property F^r ≥ F. Thus, inf_r Ḡ(0, r) > 0. ✷
3 Tails of solutions of distributional equations

As in a number of results on ruin with investments, the proof is based on implicit renewal theory. As in [2], we shall use the following formulation, combining several useful facts:

Theorem 3.1 Suppose that for some β > 0,

E[M^β] = 1,   E[M^β (ln M)^+] < ∞,   E[|Q|^β] < ∞.   (3.1)

Let Y_∞ be the solution of the distributional equation Y_∞ =(d) Q + M Y_∞ and let Ḡ(u) := P[Y_∞ > u]. Then limsup u^β Ḡ(u) < ∞. If the random variable Y_∞ is unbounded from above, then liminf u^β Ḡ(u) > 0.

In the previous section we introduced a process Y = (Y_t). Assuming that it has a limit Y_∞ at infinity, which is a finite random variable unbounded from above, we proved that it solves the required distributional equation and that its tail function gives lower and upper bounds for the ruin probability. It remains to check that the hypotheses of Theorem 2.2 ensure the assumed properties and to get the result by applying the above theorem. We do this in the next sections.
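For orientation (our remark, not in the original text), note how the standing assumption feeds into (3.1) for the pair (Q, M) = (Q_1, M_1) of Lemma 2.5:
\[
\mathbf{E}\,M_1^{\beta} = \mathbf{E}\,e^{-\beta V_{T_1}} = e^{H(\beta)} = 1 ,
\]
so the root β of H is exactly the Cramér-type exponent required by the implicit renewal theorem, while the moment condition E[|Q_1|^β] < ∞ is supplied by Corollary 4.2 below.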
4 The existence of the limit Y^r_∞

First, we recall several results from [2].

Lemma 4.1 ([2], Lemma 2.1) Let T > 0 be a random variable independent of R. Suppose that E[e^{εT}] < ∞ for some ε > 0. Let β ∈ (0, q̄) be the root of the equation H(q) = 0. If q ∈ [β, q̄) is such that H(q) ≤ ε/2, then

E[sup_{s≤T} e^{−qV_s}] < ∞.   (4.1)

Corollary 4.2 Suppose that E[e^{εT_1}] < ∞ for some ε > 0. Let

Q̃_1 := sup_{t≤T_1} |e^{−V_−} · P_t|.

If E[|ξ_1|^β] < ∞, then E[Q̃_1^β] < ∞.

Though the above assertion is a bit more general than Corollary 2.2 in [2], the proof is exactly the same. Note also that it does not depend on the sign of c or ξ_1 and needs only the integrability of |ξ_1|^β. It implies, in particular, that E[|Q_1|^β] < ∞.

Lemma 4.3 Suppose that E[e^{εT_1}] < ∞ and E[|ξ_1|^{β∧ε∧1}] < ∞ for some ε > 0. Then Y_t → Y_∞ almost surely as t → ∞, where Y_∞ is a finite random variable.

Proof. The convergence a.s. of the sequence Y_{T_n}, n ≥ 1, to a finite r.v. Y_∞ has been proven in Lemma 4.1 of [2], as well as the fact that ρ := E[M^p_1] < 1 for any p ∈ (0, β ∧ ε ∧ 1).

Put I_n := (T_{n−1}, T_n] and

Δ_n := sup_{v∈I_n} |∫_{(T_{n−1},v]} e^{−V_{s−}} dP_s| = Π_{i=1}^{n−1} M_i · sup_{v∈I_n} |∫_{(T_{n−1},v]} e^{−(V_{s−}−V_{T_{n−1}})} dP_s|.

By virtue of the Borel–Cantelli lemma, to get the announced result it is sufficient to show that for every δ > 0

Σ_{n=1}^∞ P[Δ_n ≥ δ] < ∞.

But this is true because the Chebyshev inequality and Corollary 4.2 imply that P[Δ_n ≥ δ] ≤ δ^{−p} ρ^{n−1} E[Q̃_1^p]. ✷
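Spelling out the last summability step (our addition): the supremum factor in Δ_n is independent of M_1, …, M_{n−1} and, for n ≥ 2, has the same law as a copy of Q̃_1 built from the common interarrival distribution, so
\[
\sum_{n\ge 1}\mathbf{P}[\Delta_n\ge\delta]
\le \delta^{-p}\sum_{n\ge 1}\mathbf{E}\,\Delta_n^{p}
= \delta^{-p}\,\mathbf{E}\,\widetilde{Q}_1^{\,p}\sum_{n\ge 1}\rho^{\,n-1}
= \frac{\delta^{-p}\,\mathbf{E}\,\widetilde{Q}_1^{\,p}}{1-\rho} < \infty ,
\]
since ρ = E[M_1^p] < 1.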
By Lemma 2.5 the sequence Y^r_{T^r_n} converges a.s. to Y^r_∞, and the same arguments as above allow us to conclude that Y^r_t also converges.
5 When is the distribution of Y_∞ unbounded from above?

The question in the title of this section was studied in [2] for the non-life insurance case, i.e. when c > 0 and F_ξ((−∞, 0)) = 1. In the present paper we provide sufficient conditions for unboundedness from above in all the new cases, using the techniques developed in the mentioned paper. It is based on the following elementary observation: if f: X × Y → R is a measurable function and the r.v. η and ζ are independent with laws F_η and F_ζ, then the r.v. f(η, ζ) is unbounded from above provided that there exists a measurable set X_0 ⊆ X with F_η(X_0) > 0 such that the r.v. f(x, ζ) is unbounded from above for every x ∈ X_0.

Let A_n := M_1 ⋯ M_n, n ≥ 1, A_0 := 1.

A tractable sufficient condition is given by the following

Lemma 5.1 ([2], Lemma 5.1) If there exists n ≥ 1 such that the random variables Q_1 and (Q_1 + ⋯ + A_{n−1}Q_n)/A_n are unbounded from above, then Y_∞ is unbounded from above.

It usually works already with n = 1, but sometimes we need it with n = 2. A short look at the expressions

Q_1 = −c ∫_0^{T_1} e^{−V_r} dr − e^{−V_{T_1}} ξ_1,   (5.1)
Q_1/A_1 = −c e^{V_{T_1}} ∫_0^{T_1} e^{−V_r} dr − ξ_1,   (5.2)
Q_1/A_2 + Q_2/M_2 = −c e^{V_{T_2}} ∫_0^{T_2} e^{−V_r} dr − ξ_1 e^{V_{T_2}−V_{T_1}} − ξ_2   (5.3)

shows that Y_∞ is unbounded from above when ξ is unbounded from below (of course, the latter property is not fulfilled for the annuity model).

Using the above sufficient condition for unboundedness, we examine the various cases.
1. Let σ ≠ 0. In this case the following lemma is helpful:

Lemma 5.2 Let K > 0, σ ≠ 0 and 0 ≤ s < t. Then the random variables

ζ := K e^{σW_t} − ∫_0^t e^{σW_r} dr,   ζ̃ := K e^{σ(W_t−W_s)} − e^{σW_t} ∫_0^t e^{σW_r} dr   (5.4)

are unbounded from below and from above.

The property that ζ and ζ̃ are unbounded from above has been proven in [2], Lemma 5.2. The unboundedness from below can be established by similar arguments. It is also clear that if K = 0, then ζ and ζ̃ are unbounded from below.

The process V̄ := V − σW is independent of the Wiener process W. If c < 0, then

Q_1 ≥ |c| inf_{r≤T_1} e^{−V̄_r} ∫_0^{T_1} e^{−σW_r} dr − ξ_1 e^{−V̄_{T_1}} e^{−σW_{T_1}}.
Using conditioning with respect to V̄, ξ_1, T_1 and the previous lemma, we get that Q_1 is unbounded from above. Since

Q_1/A_1 ≥ |c| e^{V̄_{T_1}} inf_{r≤T_1} e^{−V̄_r} · e^{σW_{T_1}} ∫_0^{T_1} e^{−σW_r} dr − ξ_1,

we conclude in the same way that Q_1/A_1 is unbounded from above. If c ≥ 0, then necessarily F_ξ((−∞, 0)) > 0 (recall that we exclude the case c ≥ 0, ξ > 0, when the ruin is impossible). Lemma 5.2 implies that the random variables Q_1 and Q_1/A_2 + Q_2/M_2 are unbounded from above.

2. Let σ = 0 and let ξ be bounded from below. We treat several subcases separately.
2a. Π((−1, 0)) > 0 and Π((0, ∞)) > 0. Fix ε > 0 such that Π((−1, −ε)) > 0 and Π((ε, ∞)) > 0 and put

V^{(1)} := I_{−1<x<−ε} ln(1 + x) ∗ µ + I_{x>ε} ln(1 + x) ∗ µ.

Then the processes V^{(1)} and V^{(2)} := V − V^{(1)} are independent.

Note that V^{(1)} is the sum of two independent compound Poisson processes with negative and positive jumps, respectively, whose absolute values are larger than some constant c_ε > 0.

Lemma 5.3 Let K > 0, t > 0. Then the random variable

ζ := K e^{−V^{(1)}_t} − ∫_0^t e^{−V^{(1)}_r} dr   (5.5)

is unbounded from above and from below, and the random variable

ζ̃ := e^{V^{(1)}_t} ∫_0^t e^{−V^{(1)}_r} dr

is unbounded from above.

Proof. The arguments are simple and we explain only the idea. One can consider trajectories where V^{(1)} has many negative jumps in a neighborhood of zero while all positive jumps are concentrated in a neighborhood of t. Choosing suitable parameters and using the independence of the processes with positive and negative jumps, we obtain that, with strictly positive probability, the first term in the definition of ζ is arbitrarily close to zero while the integral is arbitrarily large. Thus, ζ is unbounded from below. Symmetric arguments lead to the conclusion that ζ is unbounded from above. ✷
Let c < 0. Then the following bounds are obvious:

Q_1 ≥ |c| inf_{r≤T_1} e^{−V^{(2)}_r} ∫_0^{T_1} e^{−V^{(1)}_r} dr − ξ_1 e^{−V^{(2)}_{T_1}} e^{−V^{(1)}_{T_1}},
Q_1/A_1 ≥ |c| e^{V^{(2)}_{T_1}} inf_{r≤T_1} e^{−V^{(2)}_r} · e^{V^{(1)}_{T_1}} ∫_0^{T_1} e^{−V^{(1)}_r} dr − ξ_1.

By conditioning with respect to the random variables V^{(2)}, T_1, ξ_1, which are independent of V^{(1)}, and using Lemma 5.3, we easily obtain that the random variables Q_1 and Q_1/A_1 are unbounded from above and, by Lemma 5.1, so is Y_∞.

Let c ≥ 0. The same arguments as in [2] show that Q_1 and Q_1/A_2 + Q_2/M_2 are unbounded from above on the non-null set {ξ_1 < 0, ξ_2 < 0}.
+ 2b. Π((−1, 0)) = 0, Π(h) = ∞.
697
+ We use the decomposition of V depending on the choice of ε ∈ (0, 1). Namely,
698
+ put
699
+ V ε := I{x≤ε}h ∗ (µ − ν) + I{x≤ε}(ln(1 + x) − h) ∗ µ,
700
+ (5.6)
701
+ ˜V ε := I{x>ε}h ∗ (µ − ν) + I{x>ε}(ln(1 + x) − h) ∗ µ.
702
+ (5.7)
703
+ Note that Vt = at + V ε
704
+ t + ˜V ε
705
+ t and
706
+ ˜V ε = I{x>ε} ln(1 + x) ∗ µ − I{x>ε}h ∗ ν.
707
+ Lemma 5.4 Let K > 0, t > 0. Then the random variables
708
+ η :=
709
+ � t
710
+ 0
711
+ e−Vrdr − Ke−Vt,
712
+ η′ := eVt
713
+ � t
714
+ 0
715
+ e−Vrdr
716
+ (5.8)
717
+ are unbounded from above.
718
+ Proof. Without loss of generality we assume that a = 0. Fix N > 0 and choose ε > 0
719
+ small enough to ensure that Π(I{x>ε}h) ≥ N. Let Γε := {sups≤t |V ε
720
+ s | ≤ 1}. Let
721
+ denoting by Jε and ¯Jε the processes in the lhs of (5.6). Using the Doob inequality
722
+ and the elementary bound x − ln(1 + x) ≤ x2/2 for x > 0, we get that
723
+ P
724
+
725
+ sup
726
+ s≤t
727
+ |V ε
728
+ s | > 1
729
+
730
+ ≤ P
731
+
732
+ sup
733
+ s≤t
734
+ |Jε
735
+ s | > 1/2
736
+
737
+ + P
738
+
739
+ | ¯Jε
740
+ t | > 1/2
741
+
742
+ ≤ 2E
743
+
744
+ sup
745
+ s≤t
746
+ |Jε
747
+ s |
748
+
749
+ + 2E
750
+
751
+ | ¯Jε
752
+ t |
753
+
754
+ ≤ 2(I{x≤ε}h2 ∗ νt)1/2 + I{x≤ε}h2 ∗ νt → 0,
755
+ ε → 0.
756
+ Thus, the set Γ^ε is non-null, at least for sufficiently small ε. Since a = 0, on this set
+ e^{-V_r} ≥ e^{-1} e^{-Ṽ^ε_r} and e^{-V_t} ≤ e e^{-Ṽ^ε_t}, whence
+ η ≥ e^{-1} \int_0^t e^{-Ṽ^ε_r} dr - K e^{-Ṽ^ε_t + 1}.    (5.9)
+ On the intersection Γ^ε ∩ {I_{x>ε} h ∗ µ_{t/2} = 0, ln(1 + ε) µ((t/2, t] × (ε, 1]) ≥ Nt + 1}
+ we have
+ η ≥ e^{-1} \int_0^{t/2} e^{Nr} dr - Ke = (eN)^{-1} (e^{Nt/2} - 1) - Ke.
+ Due to the independence of V^ε and Ṽ^ε this intersection is a non-null set. Since N is
+ arbitrarily large, the required property of η holds.
+ The analysis of η′ follows the same lines. At the first stage we replace V by Ṽ^ε
+ and compensate the linear decrease of V by a large number of positive jumps on the
+ second half of the interval [0, t]. ✷
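+ The mechanism of this proof can be seen in a toy simulation: with no large jumps early,
+ the integral grows along the compensator's downward drift, while enough late jumps keep
+ e^{-V_t} small. Below, a constant drift stands in for the compensator -Π(I_{x>ε} h) t,
+ and all numbers are illustrative assumptions of ours:
+ import numpy as np
+ rng = np.random.default_rng(1)
+ def sample_eta(K=5.0, t=1.0, drift=8.0, lam=8.0, n_grid=4000):
+     # toy stand-in for the process tilde-V^eps: positive ln-jumps minus a linear drift
+     grid = np.linspace(0.0, t, n_grid + 1)
+     dV = rng.poisson(lam * t / n_grid, n_grid) * np.log(2.0) - drift * t / n_grid
+     V = np.concatenate(([0.0], np.cumsum(dV)))
+     return np.trapz(np.exp(-V), grid) - K * np.exp(-V[-1])
+ samples = [sample_eta() for _ in range(10_000)]
+ print(min(samples), max(samples))  # the right tail stretches as drift and lam grow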
+ Let c < 0. Then the random variables
+ Q_1 = |c| \int_0^{T_1} e^{-V_r} dr - e^{-V_{T_1}} ξ_1,    Q_1/A_1 = |c| e^{V_{T_1}} \int_0^{T_1} e^{-V_r} dr - ξ_1
+ are unbounded from above by virtue of the lemma.
+ Let c ≥ 0. The same arguments as in [2] lead to the conclusion that Q_1 and
+ Q_1/A_2 + Q_2/M_2 are unbounded from above on the non-null set {ξ_1 < 0, ξ_2 < 0}.
+ 2c. Π((0, ∞)) = 0, Π(|h|) = ∞.
+ Let c < 0. Again we use a decomposition of V depending on the choice of ε ∈ (0, 1). Put
+ V^ε := I_{x≥-ε} h ∗ (µ - ν) + I_{x≥-ε} (ln(1 + x) - h) ∗ µ,    (5.10)
+ Ṽ^ε := I_{x<-ε} h ∗ (µ - ν) + I_{x<-ε} (ln(1 + x) - h) ∗ µ,    (5.11)
+ L := I_{x<-ε} ln(1 + x) ∗ µ and Π_ε := Π(I_{x<-ε} |h|) ↑ ∞ as ε → 0. Then
+ Ṽ^ε_t = I_{x<-ε} ln(1 + x) ∗ µ_t - I_{x<-ε} h ∗ ν_t = L_t + Π_ε t.
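+ The last identity is a one-line computation: h(x) = x I_{|x|≤1}, so h < 0 and |h| = -h
+ on {x < -ε}, and the compensator term gives
+ \[
+   - I_{\{x<-\varepsilon\}} h \ast \nu_t \,=\, -\,t\,\Pi\big(I_{\{x<-\varepsilon\}} h\big)
+   \,=\, t\,\Pi\big(I_{\{x<-\varepsilon\}} |h|\big) \,=\, \Pi_\varepsilon t .
+ \]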
+ To prove that Q_1 is unbounded from above, we argue as follows. As in the previous
+ subcase 2b, we reduce the problem to checking that the random variable
+ η̃ := \int_0^t e^{-Ṽ^ε_r} dr - K e^{-Ṽ^ε_t + 1}
+ is unbounded from above. Let t_1 := t_2/2, where t_2 := 1/Π_ε. Note that t_2 ≤ t when
+ ε > 0 is sufficiently small. On the set {L_t = L_{t_1}} we have that
+ η̃ ≥ (t_2 - t_1) e^{|L_{t_1}|} e^{-Π_ε t_2} - K e^{|L_{t_1}| + 1} e^{-Π_ε t}
+ = e^{|L_{t_1}|} (1/(2eΠ_ε) - K e^{1 - Π_ε t}) ≥ e^{|L_{t_1}|}/(4eΠ_ε)
+ for sufficiently small ε. Since the r.v. |L_{t_1}| is unbounded from above, so is Q_1.
+ On the set {L_t = 0, ξ_1 ≤ K}, non-null for any ε > 0 and sufficiently large K,
+ we have the bound
+ Q_1/A_1 ≥ (|c|/Π_ε)(e^{Π_ε t} - 1) - K.
+ It follows that Q_1/A_1 is unbounded from above.
+ The case c ≥ 0 is treated as in [2].
+ 2d. Π((-∞, 0)) = 0, 0 < Π(h) < ∞, and F((t, ∞)) > 0 for every t > 0.
+ In this subcase V_t = L_t - bt, where L_t := ln(1 + x) ∗ µ_t is an increasing process
+ and b := Π(h) - a. Note that b > 0: otherwise V ≥ 0, and then E e^{-βV_1} < 1 for
+ a non-deterministic V, contradicting the existence of β > 0 with ln E e^{-βV_1} = 0.
+ Let c < 0. Take an arbitrary N > 0. Let s > 0 be the solution of the equation
+ (|c|/b)(e^{bs} - 1) = N. Take K large enough to ensure that P[ξ ≤ K] > 0. Choose
+ t > s such that F((s, t)) > 0. On the non-null set
+ {L_s = 0, T_1 ∈ (s, t), e^{L_t - L_s} ≥ K e^{bt}, ξ < K}
+ we have that
+ Q_1 ≥ (|c|/b)(e^{bs} - 1) - e^{bt - L_t} ξ ≥ N - 1.
+ Thus, Q_1 is unbounded from above.
+ To prove that Q_1/A_1 is unbounded from above, we take ε > 0 and K > 0 such that
+ F((ε, ∞)) > 0 and F_ξ((0, K)) > 0. Setting c_ε := (|c|/b)(e^{-bε} - e^{-2bε}), we have,
+ on the non-null set
+ {T_1 > ε, L_{T_1-ε} = 0, L_{T_1} ≥ ln((N + K)/c_ε), ξ < K},
+ that
+ Q_1/A_1 ≥ (|c|/b) e^{L_{T_1}} e^{-bT_1} (e^{b(T_1-ε)} - 1) - ξ ≥ c_ε e^{L_{T_1}} - K ≥ N.
+ So, by Lemma 5.1, Y_∞ is unbounded.
+ If c ≥ 0, we proceed as in [2], using the assumption that F_ξ charges every
+ neighborhood of zero.
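+ The threshold s used above is explicit: (|c|/b)(e^{bs} - 1) = N gives s = b^{-1} ln(1 + Nb/|c|).
+ A short check with made-up values of c, b, N:
+ import math
+ c, b, N = -1.0, 0.5, 10.0                          # toy parameters, our assumption
+ s = math.log(1.0 + N * b / abs(c)) / b             # closed-form solution for s
+ print(s, (abs(c) / b) * (math.exp(b * s) - 1.0))   # the second number recovers N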
+ 2e. Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, and F((t, ∞)) > 0 for every t > 0.
+ We have again V_t = L_t - bt, but now the jump process L is decreasing and the
+ constant b < 0.
+ Let c < 0. Fix N > 0. Let s, t > 0 be such that F((s, t)) > 0. On the non-null set
+ {T_1 ∈ (s, t), |L_{s/2}| ≥ N, L_{s/2} = L_t, ξ < e^{-|L_{s/2}|}}
+ we have
+ Q_1 ≥ |c| (T_1/2) e^{|L_{T_1/2}| - |b|T_1} - e^{|L_{T_1/2}| - |b|T_1} ξ ≥ |c| (s/2) e^{N - |b|t} - 1.
+ Since N is arbitrary, Q_1 is unbounded from above.
+ For any t > 0 and K > 0, on the non-null set {T_1 ≥ t, L_t = 0, ξ ≤ K},
+ Q_1/A_1 ≥ |c/b| (e^{|b|t} - 1) - K.
+ This implies that Q_1/A_1 is unbounded from above.
+ If c ≥ 0, we again proceed as in [2].
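+ A toy computation, assuming additionally that L has no jumps on [0, T_1] (so that
+ V_r = |b| r exactly and Q_1/A_1 = (|c|/|b|)(e^{|b|T_1} - 1) - ξ), shows how the last
+ lower bound grows with T_1; all numbers are ours:
+ import math
+ c, b, t, K = -1.0, -0.5, 2.0, 3.0       # subcase 2e toys: c < 0, b < 0
+ for T1 in (2.0, 4.0, 8.0):              # values of T_1 >= t
+     q_over_a = (abs(c) / abs(b)) * (math.exp(abs(b) * T1) - 1.0) - K  # worst case xi = K
+     print(T1, q_over_a)                 # exceeds |c/b|(e^{|b|t} - 1) - K and keeps growing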
+ Acknowledgements
+ The research is funded by the RSF grant no. 20-68-47030 “Econometric and probabilistic
+ methods for the analysis of financial markets with complex structure”.
+ References
+ 1. Albrecher, H., Constantinescu, C., Thomann, E.: Asymptotic results for renewal risk models with risky investments. Stoch. Proc. Appl. 122, 3767–3789 (2012)
+ 2. Eberlein, E., Kabanov, Yu., Schmidt, T.: Ruin probabilities for a Sparre Andersen model with investments. Stoch. Proc. Appl. 144, 72–84 (2022)
+ 3. Goldie, C.M.: Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1, 126–166 (1991)
+ 4. Grandell, I.: Aspects of Risk Theory. Springer, Berlin (1990)
+ 5. Kabanov, Yu., Pergamenshchikov, S.: In the insurance business risky investments are dangerous: the case of negative risk sums. Finance Stoch. 20, 355–379 (2016)
+ 6. Kabanov, Yu., Pergamenshchikov, S.: Ruin probabilities for a Lévy-driven generalised Ornstein–Uhlenbeck process. Finance Stoch. 24, 39–69 (2020)
+ 7. Kabanov, Yu., Pukhlyakov, N.: Ruin probabilities with investments: smoothness, IDE and ODE, asymptotic behavior. J. Appl. Probab. 59, 556–570 (2020)
+ 8. Paulsen, J.: Risk theory in a stochastic economic environment. Stoch. Proc. Appl. 46, 327–361 (1993)
+ 9. Paulsen, J.: Stochastic Calculus with Applications to Risk Theory. Lecture Notes, Univ. of Bergen and Univ. of Copenhagen (1996)
+ 10. Paulsen, J.: Sharp conditions for certain ruin in a risk process with stochastic return on investments. Stoch. Proc. Appl. 75, 135–148 (1998)
+ 11. Paulsen, J., Gjessing, H.K.: Ruin theory with stochastic return on investments. Adv. Appl. Probab. 29, 965–985 (1997)
DdA0T4oBgHgl3EQfAv9Y/content/tmp_files/load_file.txt ADDED
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf,len=433
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
3
+ page_content='01966v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
4
+ page_content='PR] 5 Jan 2023 Noname manuscript No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
5
+ page_content=' (will be inserted by the editor) Ruin Probabilities for a Sparre Andersen Model with Investments: the Case of Annuity Payments Yuri Kabanov · Platon Promyslov January 6, 2023 Dedicated to the memory of Tomas Bj¨ork.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
6
+ page_content=' Abstract This note is a complement to the paper by Eberlein, Kabanov, and Schmidt on the asymptotic of the ruin probability in a Sparre Andersen non-life insurance model with investments a risky asset whose price follows a geometric L´evy process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
7
+ page_content=' Using the techniques of semi-Markov processes we extend the result of the mentioned paper to the case of annuities and models with two-sided jumps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
8
+ page_content=' Keywords Ruin probabilities · Sparre Andersen model · Actuarial models with investments · Renewal processes · Annuities · Distributional equations Mathematics Subject Classification (2010) 60G44 JEL Classification G22 · G23 1 Introduction In the classical Sparre Andersen model of insurance company the counts of claims form a renewal process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
9
+ page_content=' In recent studies, see [1], [2] and references therein, this model was enriched by the assumption that the capital reserve of the insurance com- pany is fully invested in a risky asset whose price evolves as a geometric L´evy pro- cess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
10
+ page_content=' In the paper [2] by Eberlein, Kabanov, and Schmidt it was considered the non- life insurance version of such a model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
11
+ page_content=' It was shown that under rather mild hypothe- Lomonosov Moscow State University, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences Moscow, Russia, and Universit´e de Franche-Comt´e, Laboratoire de Math´ematiques, UMR CNRS 6623, 16 Route de Gray, 25030 Besanc¸on, France E-mail: ykabanov@univ-fcomte.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
12
+ page_content='fr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
13
+ page_content=' Lomonosov Moscow State University, Moscow, Russia E-mail: platon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
14
+ page_content='promyslov@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
15
+ page_content='com 2 Yuri Kabanov, Platon Promyslov ses on the business process the asymptotic behavior of the (ultimate) ruin probability is essentially the same as in the Cram´er–Lundberg model with risky investments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
16
+ page_content=' Namely, the ruin probability decays, up to a multiplicative constant, as the function u−β where u, the initial capital, tends to infinity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
17
+ page_content=' The decay rate β depends only of characteristics of the price process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
18
+ page_content=' The method of analysis in [2] is based heavily on the assumption that the risk process has only downward jumps and, therefore, crosses the zero level only by a jump.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
19
+ page_content=' This specific feature allows a straightforward reduction to a discrete-time Markovian framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
20
+ page_content=' The approach of [2] left an open question whether the results hold also in the case of upward jumps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
21
+ page_content=' This is a feature of the annuity model when the risk process crosses the zero level in a continuous way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
22
+ page_content=' In a less popular mixed model with two-sided jumps the crossing may happen in both way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
23
+ page_content=' Of course, a positive answer is expected here: this was already established for the Cram´er–Lundberg models with investments analyzed by Kabanov, Pergamenshchikov,and Pukhlyakov, [5], [7], as well as in very general L´evy Ornstein–Uhlenbeck models introduced and studied by Paulsen, see [8], [9], [10], [11], and a more recent paper [6] by Kabanov and Pergamenshchikov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
24
+ page_content=' Our note, based on the study [2], gives a positive answer for a Sparre Andersen model with investments in its annuity version, with the upward jumps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
25
+ page_content=' We discuss briefly the needed changes leading to a result for a model with upward and downward jumps used serving to describe the evolution of the capital reserve of a company with two types of the business activity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
26
+ page_content=' Our techniques is based on the imbedding of a semi-Markov process into a Markov one by increasing the dimensionality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
27
+ page_content=' In the paper we use standard notations of stochastic calculus and concepts dis- cussed in details in [6], [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
28
+ page_content=' 2 The model The Sparre Andersen model with risky investments considered contains two ingredi- ents: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
29
+ page_content=' The price process of a risky financial asset S = (St)t≥0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
30
+ page_content=' It is of the form S = E(R) where E is the stochastic exponential, R is a L´evy process with the L´evy triplet (a, σ2, Π) and such that Π((−∞, −1]) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
31
+ page_content=' The latter condition ensures that the jumps ∆R > −1, hence, the price S > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
32
+ page_content=' In such a case, S = eV where V = ln S is again a L´evy process which can be given by the formula Vt = at − 1 2σ2t + σWt + h ∗ (µ − ν)t + (ln(1 + x) − h) ∗ µt, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
33
+ page_content='1) where h(x) := xI{|x|≤1}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
34
+ page_content=' The L´evy triplet of V is (aV , σ2, ΠV ) with aV = a − σ2 2 + Π(h(ln(1 + x)) − h) and ΠV = Πϕ−1, ϕ : x �→ ln(1 + x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
35
+ page_content=' It is assume that R is non-deterministic, that is, at least one of the parameters σ2 or Π is not zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
36
+ page_content=' Ruin Probabilities for a Sparre Andersen Model with Investments 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
37
+ page_content=' The ”business process“.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
38
+ page_content=' It is an independent of S compound renewal process P = (Pt).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
39
+ page_content=' Classically, it can be written in the form Pt = ct + Nt � i=1 ξi, (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
40
+ page_content='2) where N = (Nt) is a counting renewal process with the interarrival times (lengths of the inter jump intervals) Ui := Ti − Ti−1, i ≥ 2, forming an i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
41
+ page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
42
+ page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
43
+ page_content=' sequence independent of the i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
44
+ page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
45
+ page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
46
+ page_content=' sequence of random variables ξi = ∆PTi, i ≥ 1, with the common law Fξ, Fξ({0}) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
47
+ page_content=' In the sequel, a “generic“ r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
48
+ page_content='v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
49
+ page_content=' with such a law is denoted by ξ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
50
+ page_content=' As usual, T0 := 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
51
+ page_content=' The common law of Ui we denoted by F and use the same character for its distribution function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
52
+ page_content=' The risk process X = Xu, u > 0, is defined as the solution of the non-homogene- ous linear stochastic equation Xt = u + � t 0 Xs−dRs + Pt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
53
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
54
+ page_content='3) The ruin probability is the function of the initial capital Ψ(u) := P[τ u < ∞] where τ u := inf{t : Xu t ≤ 0}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
55
+ page_content=' The cases of major interest are: c > 0 and ξi < 0 (a non-life insurance model, considered in [2]) and c < 0 and ξi > 0 (annuities payments model).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
56
+ page_content=' The latter case studied here, is often interpreted as a model of venture company paying salary and selling innovations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
57
+ page_content=' The case where Fξ charges both half-axes can be viewed as a model of company combined two types of activity, see, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
58
+ page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
59
+ page_content=', [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
60
+ page_content=' and we study also the case where ξi may take positive and negative values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
61
+ page_content=' If c ≥ 0 and ξ > 0 the ruin never happens and this case is excluded from consid- erations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
62
+ page_content=' Standing assumption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
63
+ page_content=' The cumulant generating function H : q → ln E e−qVT1 of the random variable VT1 has a root β > 0 not laying on the boundary of the effective domain of H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
64
+ page_content=' That is, if the int dom H = (q, ¯q), there is a unique root β ∈ (0, ¯q).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
65
+ page_content=' We are looking for conditions under which 0 < lim inf u→∞ uβΨ(u) ≤ lim sup u→∞ uβΨ(u) < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
66
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
67
+ page_content='4) The paper [2] treats the case of the non-life insurance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
68
+ page_content=' We formulate its main result in a more transparent form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
69
+ page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
70
+ page_content='1 ([2]) Suppose that the drift c ≥ 0, the law Fξ is concentrated on (−∞, 0), E[|ξ|β] < ∞, and E[eεT1] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
71
+ page_content=' Then (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
72
+ page_content='4) holds if at least one of the following conditions are fulfilled: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
73
+ page_content=' σ ̸= 0 or ξ is unbounded from below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
74
+ page_content=' 2a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
75
+ page_content=' Π((−1, 0)) > 0 and Π((0, ∞)) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
76
+ page_content=' 2b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
77
+ page_content=' Π((−1, 0)) = 0 and Π(h) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
78
+ page_content=' 2c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
79
+ page_content=' Π((0, ∞)) = 0 and Π(|h|) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
80
+ page_content=' 2d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
81
+ page_content=' Π((−∞, 0)) = 0, 0 < Π(h) < ∞, F((0, t)) > 0 for every t > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
82
+ page_content=' 2e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
83
+ page_content=' Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((0, t)) > 0 for every t > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
84
+ page_content=' 4 Yuri Kabanov, Platon Promyslov The proof in [2] used heavily the assumption that the business process has a posi- tive drift and negative claims corresponding to the non-life insurance setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
85
+ page_content=' In such a case the ruin may happen only at an instant of jump and, therefore, one needs to monitor the risk process only at T1, T2, and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
86
+ page_content=' Such a reduction to a discrete-time ruin model does not work if ξi > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
87
+ page_content=' In our paper we consider the annuity model of the Sparre Andersen type where the ruin occurs because exhausting resources and the risk process reaches zero in a continuous way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
88
+ page_content=' The main result can be formulated as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
89
+ page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
90
+ page_content='2 Suppose that the drift c < 0, the law Fξ is concentrated on (0, ∞), E[ξβ] < ∞, and E[eεT1] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
91
+ page_content=' Then (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
92
+ page_content='4) holds if at least one of the following conditions are fulfilled: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
93
+ page_content=' σ ̸= 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
94
+ page_content=' 2a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
95
+ page_content=' Π((−1, 0)) > 0 and Π((0, ∞)) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
96
+ page_content=' 2b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
97
+ page_content=' Π((−1, 0)) = 0 and Π(h) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
98
+ page_content=' 2c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
99
+ page_content=' Π((0, ∞)) = 0 and Π(|h|) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
100
+ page_content=' 2d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
101
+ page_content=' Π((−1, 0)) = 0, 0 < Π(h) < ∞, F((t, ∞)) > 0 for every t > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
102
+ page_content=' 2e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
103
+ page_content=' Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((t, ∞)) > 0 for every t > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
104
+ page_content=' For the mixed case we have the following result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
105
+ page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
106
+ page_content='3 Suppose that the drift c ∈ R, the law Fξ charges both half-lines (−∞, 0) and (0, ∞), E[|ξ|β] < ∞, and E[eεT1] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
107
+ page_content=' Then (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
108
+ page_content='4) holds if at least one of the following conditions are fulfilled: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
109
+ page_content=' σ ̸= 0 or |ξ| is unbounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
110
+ page_content=' 2a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
111
+ page_content=' Π((−1, 0)) > 0 and Π((0, ∞)) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
112
+ page_content=' 2b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
113
+ page_content=' Π((−1, 0)) = 0 and Π(h) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
114
+ page_content=' 2c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
115
+ page_content=' Π((0, ∞)) = 0 and Π(|h|) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
116
+ page_content=' 2d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
117
+ page_content=' Π((−1, 0)) = 0, 0 < Π(h) < ∞, F((t, ∞)) > 0 for every t > 0 in the case c < 0 and Fξ((0, ε)) > 0 for every ε > 0 in the case c ≥ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
118
+ page_content=' 2e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
119
+ page_content=' Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, F((t, ∞)) > 0 for every t > 0 in the case c < 0 and Fξ((0, ε)) > 0 for every ε > 0 in the case c ≥ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
120
+ page_content=' The annuity and the mixed setting require a different approach inspired by the the- ory of semi-Markov processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
121
+ page_content=' Namely, we consider the business process P as the component of the two-dimensional Markov process (P, D) where the second compo- nent D = Dr is a “clock”, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
122
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
123
+ page_content=' a process measuring the elapsed time after the instant of last claim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
124
+ page_content=' We assume that the law of U = T1 may be different from the common law of the further interarrival times: at the instant zero a portion r of the interarrival time is already elapsed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
125
+ page_content=' This feature admits obvious justifications: e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
126
+ page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
127
+ page_content=', the venture company may change the governance when a project was still in progress.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
128
+ page_content=' Here and throughout the paper we use the superscript r to emphasize that the law of a random variable or a process depends on r, skipping usually r = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
129
+ page_content=' Formally, the “clock”, Dr = (Dr t ), is a process with the initial value Dr 0 = r, Dr t = r+t on the interval [0, T1), and Dr t := t−T r n on all other interarrival intervals [T r n, T r n+1), n ≥ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
130
+ page_content=' That is, the “clock” restarts from zero at each instant T r n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
131
+ page_content=' We Ruin Probabilities for a Sparre Andersen Model with Investments 5 denote by F r the law of the first interarrival time T r 1 = T r 1 − T0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
132
+ page_content=' In accordance with our convention F 0 = F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
133
+ page_content=' Alternatively, Dr can be representing as the solution of the linear equation Dr t = r + t − � [0,t] Dr s−dNs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
134
+ page_content=' Typically, P[T r 1 > t] = P[Ti > t + r]/P[Ti > r], i > 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
135
+ page_content=' In the case of exponential distribution F r = F for all r ≥ 0 (“absence of memory”).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
136
+ page_content=' We assume that F r ≥ F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
137
+ page_content=' Recall that the assumed independence of P r and R implies that the joint quadratic characteristic [P r, R] is zero and the ruin process Xu,r can be written in the form resembling the Cauchy formula for solutions of liner differential equations: Xu,r t = eVt(u − Y r t ), (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
138
+ page_content='5) where Y r t := − � (0,t] E−1 s− (R)dP r s = − � (0,t] e−Vs−dP r s .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
139
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
140
+ page_content='6) The strict positivity of the process E(R) = eV implies that the ruin time τ u,r := inf{t ≥ 0 : Xu,r t ≤ 0} = inf{t ≥ 0 : Y r t ≥ u}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
141
+ page_content=' The crucial element of our study is the following Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
142
+ page_content='4 Suppose that Y r t → Y r ∞ almost surely as t → ∞ where Y r ∞ is a finite random variable such that ¯G(u, r) := P[Y r ∞ > u] > 0 for every u > 0 and r ≥ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
143
+ page_content=' If ¯G∗ := infq ¯G(0, q) > 0, then ¯G(u, r) ≤ Ψ(u, r) = ¯G(u, r) E � ¯G(Xu,r τ u,r, Dr τ u,r) | τ u,r < ∞ � ≤ 1 ¯G∗ ¯G(u, r).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
144
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
145
+ page_content='7) Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
146
+ page_content=' Let τ be an arbitrary stopping time with respect to the filtration (FP,D,R t ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
147
+ page_content=' As we assume that the finite limit Y r ∞ exists, the random variable Y r τ,∞ := � − limN→∞ � (τ,τ+N] e−(Vs−−Vτ )dP r s , τ < ∞, 0, τ = ∞, is well defined.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
148
+ page_content=' On the set {τ < ∞} Y r τ,∞ = eVτ (Y r ∞ − Y r τ ) = Xu,r τ + eVτ (Y r ∞ − u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
149
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
150
+ page_content='8) Let ζ be a FP,D,R τ measurable random variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
151
+ page_content=' Using the strong Markov property, we get that P � Y r τ,∞ > ζ, τ < ∞ � = E � ¯G(ζ, Dr τ)I{τ<∞} � (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
152
+ page_content='9) 6 Yuri Kabanov, Platon Promyslov Noting that Ψ(u, r) := P [τu,r < ∞] ≥ P [Y r ∞ > u] > 0, we deduce from here using (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
153
+ page_content='8) that ¯G(u, r) = P [Y r ∞ > u, τ u,r < ∞] = P � Y r τ u,r,∞ > Xu,r τ u,r, τ u,r < ∞ � = P[τ u,r < ∞]E � ¯G(Xu,r τ u,r, Dr τ u,r) | τ u,r < ∞ � ≥ P[τ u,r < ∞]E � ¯G(0, Dr τ u,r) | τ u,r < ∞ � ≥ P[τ u,r < ∞] inf q ¯G(0, q) and get the result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
154
+ page_content=' ✷ In view of the above lemma the proof of the main theorem is reduced to estab- lishing the existence of finite limits Y r ∞ and finding the asymptotic of the tail of their distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
155
+ page_content=' Let us introduce the notations Qr k := − � (T r k−1,T r k ] e −(Vs−−VT r k−1 )dP r s , M r k := e −(VT r k −VT r k−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
156
+ page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
157
+ page_content='10) Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
158
+ page_content='5 The random variables Y r ∞ admit the representations Y r ∞ = Qr 1 + M r 1 ˜Y r ∞, where Qr 1 := − � [0,T r 1 ] e−Vs−dP r s , M r 1 := e−VT r 1 , (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
159
+ page_content='11) (Qr 1, M r 1) and ˜Y r ∞ are independent, and the laws of ˜Y r ∞ and Y 0 ∞ coincide.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
160
+ page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
161
+ page_content=' Note that Y r T r n = − � [0,T r 1 ] e−Vs−dP r s − n � k=2 e VT r k−1 � (T r k−1,T r k ] e −(Vs−−VT r k−1)dP r s = Qr 1 + M r 1 � Qr 2 + n � k=3 M r 2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
162
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
163
+ page_content='M r k−1Qr k � , where the random variable in the parentheses is independent of (Qr 1, M r 1) and has the same distribution as Y 0 n−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
164
+ page_content=' ✷ Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
165
+ page_content='6 Suppose that Y∞ is unbounded from above.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
166
+ page_content=' If c < 0, then inf q ¯G(0, q) ≥ E[ ¯G(ξ, 0)] > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
167
+ page_content=' If c ∈ R and the distribution function F r ≤ F, then inf q ¯G(0, q) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
168
+ page_content=' Ruin Probabilities for a Sparre Andersen Model with Investments 7 Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
169
+ page_content=' Using Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
170
+ page_content='5 we have: ¯G(0, r) = P[Y r ∞ > 0] = P[Qr 1/M r 1 + ˜Y r ∞ > 0] = � P � |c|eVt � [0,t] e−Vsds − ξ1 + ˜Y r ∞ > 0 � FT r 1 (dt) ≥ P[ ˜Y r ∞ > ξ1] = � P[ ˜Y r ∞ > x]Fξ(dx) = � ¯G(x, 0)Fξ(dx) > 0 since Y∞ is unbounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
171
+ page_content=' The inspection of the proof reveals that the majority of arguments does work with minor changes also for the case where c is of an arbitrary sign and the law Fξ charges (−∞, 0) and (0, ∞).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
172
+ page_content=' In particular, the proof that the finite limit Y∞ exists remains the same.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
173
+ page_content=' Put ft := |ξ1| + |c|te2V ∗ t , where V ∗ t := sups≤t |Vs|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
174
+ page_content=' Then |Qr 1|/M r 1 ≤ |ξ1| + |c|eVT r 1 � [0,T r 1 ] e−Vsds ≤ fT r 1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
175
+ page_content=' It follows that ¯G(0, r) = P[ ˜Y r ∞ > −Qr 1/M r 1 ] = E ¯G(−Qr 1/M r 1, 0) ≥ E ¯G(|Qr 1|/M r 1, 0) ≥ E � ¯G(ft, 0)F r(dt) = −E � F r(t)d ¯G(ft, 0) ≥ −E � F(t)d ¯G(ft, 0) ≥ E � ¯G(ft, 0)F(dt) > 0, where we use the property F r ≥ F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
176
+ page_content=' Thus, infr ¯G(0, r) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
177
+ page_content=' ✷ 3 Tails of solutions of distributional equations As a number of results on the ruin with investments, the proof is based on the implicit renewal theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
178
+ page_content=' As in [2] we shall use the following formulation combining several useful facts: Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
179
+ page_content='1 Suppose that for some β > 0, E[M β] = 1, E[M β (ln M)+] < ∞, E[|Q|β] < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
180
+ page_content=' (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
181
+ page_content='1) Let Y∞ be the solution of the distributional equation Y∞ d= Q + MY∞ and let ¯G(u) := P[Y∞ > u].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
182
+ page_content=' Then lim sup uβ ¯G(u) < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
183
+ page_content=' If the random variable Y∞ is unbounded from above, then lim inf uβ ¯G(u) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
184
+ page_content=' In the previous section we introduce a process Y = (Yt).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
185
+ page_content=' Assuming that it has at infinity a limit Y∞, which is a finite unbounded from above random variable, we have proved that it solves the required distributional equation and its tail function gives lower and upper bounds for the ruin probability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
186
+ page_content=' It remains to check that the hypotheses of Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
187
+ page_content='2 ensure the assumed properties and get the result applying the above theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
188
+ page_content=' We do this in the next sections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
189
+ page_content=' 8 Yuri Kabanov, Platon Promyslov 4 The existence of the limit Y r ∞ First, we recall several results from [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
190
+ page_content=' Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
191
+ page_content='1 ( [2], Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
192
+ page_content='1) Let T > 0 be a random variable independent of R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
193
+ page_content=' Suppose that E[eεT ] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
194
+ page_content=' Let β ∈ (0, ¯q) be the root of the equation H(q) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
195
+ page_content=' If q ∈ [β, ¯q) is such that H(q) ≤ ε/2, then E � sup s≤T e−qVs � < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
196
+ page_content=' (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
197
+ page_content='1) Corollary 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
198
+ page_content='2 Suppose that E[eεT1] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
199
+ page_content=' Let �Q1 := sup t≤T1 |e−V− · Pt|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
200
+ page_content=' If E[|ξ1|β] < ∞, then E[| �Qβ 1] < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
201
+ page_content=' Though the above assertion is a bit more general than Corollary 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
202
+ page_content='2 in [2], the proof is exactly the same.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
203
+ page_content=' Note also that it does not depend on the sign of c or ξ1 and needs only the integrability of |ξ1|β.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
204
+ page_content=' It implies, in particular, that E[|Qβ 1|] < ∞ Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
205
+ page_content='3 Suppose that E[eεT1] < ∞ and E[|ξ1|β∧ε∧1] < ∞ for some ε > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
206
+ page_content=' Then Yt → Y∞ almost surely as t → ∞ where Y∞ is a finite random variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
207
+ page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
208
+ page_content=' The convergence a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
209
+ page_content='s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
210
+ page_content=' of the sequence YTn, n ≥ 1, to a finite r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
211
+ page_content='v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
212
+ page_content=' Y∞ has been proven in Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
213
+ page_content='1 of [2] as well as the fact that ρ := E[M p 1 ] < 1 for any p ∈ (0, β ∧ ε ∧ 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
214
+ page_content=' Put In := (Tn−1, Tn] and ∆n := sup v∈In ����� � (Tn−1,v] e−Vs−dPs ����� = n−1 � i=1 Mi sup v∈In ����� � (Tn−1,v] e−(Vs−−VTn−1)dPs ����� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
215
+ page_content=' By virtue of the Borel–Cantelli lemma, to get the announced result it is sufficient to show that for every δ > 0 ∞ � n=1 P[∆n ≥ δ] < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
216
+ page_content=' But this is true because the Chebyshev inequality and the Corollary 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
217
+ page_content='2 imply that P[∆n ≥ δ] ≤ δ−pρpE[ �Q1|p].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
218
+ page_content=' ✷ By Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
219
+ page_content='5 the sequence Y r T r n converges a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DdA0T4oBgHgl3EQfAv9Y/content/2301.01966v1.pdf'}
220
…a.s. to Y^r_∞, and the same arguments as above allow us to conclude that Y^r_t also converges.

5 When is the distribution of Y∞ unbounded from above?

The question in the title of this section is studied in [2] for the non-life insurance case, i.e. when c < 0 and F_ξ((0, ∞)) = 1. In the present paper we provide sufficient conditions for unboundedness from above in all new cases, using the techniques developed in the mentioned paper. The approach is based on the following elementary observation: if f : X × Y → R is a measurable function, and the random variables η and ζ are independent with laws F_η and F_ζ, then the random variable f(η, ζ) is unbounded from above provided that there exists a measurable set X_0 ⊆ X with F_η(X_0) > 0 such that the random variable f(x, ζ) is unbounded from above for every x ∈ X_0.

Let A_n := M_1 · · · M_n for n ≥ 1, and A_0 := 1. A tractable sufficient condition is given by the following

Lemma 5.1 ([2], Lemma 5.1). If there exists n ≥ 1 such that the random variables Q_1 and (Q_1 + · · · + A_{n−1} Q_n)/A_n are unbounded from above, then Y_∞ is unbounded from above.

It usually works already with n = 1, but sometimes we need it with n = 2.
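Spelled out, the two instances of this condition used below are (recall that A_1 = M_1 and A_2 = M_1 M_2):

\[
n = 1:\ \ Q_1 \ \text{and}\ \ \frac{Q_1}{A_1};
\qquad
n = 2:\ \ Q_1 \ \text{and}\ \ \frac{Q_1 + A_1 Q_2}{A_2} = \frac{Q_1}{A_2} + \frac{Q_2}{M_2}.
\]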
A short look at the expressions

Q_1 = -c \int_0^{T_1} e^{-V_r}\,dr - e^{-V_{T_1}}\,\xi_1,   (5.1)

Q_1/A_1 = -c\,e^{V_{T_1}} \int_0^{T_1} e^{-V_r}\,dr - \xi_1,   (5.2)

Q_1/A_2 + Q_2/M_2 = -c\,e^{V_{T_2}} \int_0^{T_2} e^{-V_r}\,dr - \xi_1\,e^{V_{T_2}-V_{T_1}} - \xi_2   (5.3)

shows that Y_∞ is unbounded from above when ξ is unbounded from below (of course, the latter property is not fulfilled for the annuity model). Using the above sufficient condition for unboundedness, we examine various cases.

1. Let σ ≠ 0. In this case the following lemma is helpful:

Lemma 5.2. Let K > 0, σ ≠ 0 and 0 ≤ s < t. Then the random variables

ζ := K e^{σW_t} - \int_0^t e^{σW_r}\,dr, \qquad ˜ζ := K e^{σ(W_t - W_s)} - e^{σW_t} \int_0^t e^{σW_r}\,dr,   (5.4)

are unbounded from below and from above.

The property that ζ and ˜ζ are unbounded from above has been proven in [2], Lemma 5.2. The unboundedness from below can be established by similar arguments. It is also clear that if K = 0, then ζ and ˜ζ are unbounded from below.
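The claims of Lemma 5.2 are easy to probe numerically. The following Monte Carlo sketch is ours, not part of the original argument: it simulates ζ = K e^{σW_t} − \int_0^t e^{σW_r} dr on a uniform time grid and prints extreme sample quantiles, whose spread in both directions keeps growing as the number of paths increases; all parameter values are illustrative.

```python
import numpy as np

def sample_zeta(K=1.0, sigma=1.0, t=1.0, n_steps=500, n_paths=10_000, seed=0):
    """Monte Carlo samples of zeta = K*exp(sigma*W_t) - int_0^t exp(sigma*W_r) dr."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
    W = np.cumsum(dW, axis=1)                                   # W at the grid points
    integral = np.exp(sigma * W).sum(axis=1) * dt               # Riemann sum of the integral
    return K * np.exp(sigma * W[:, -1]) - integral

zeta = sample_zeta()
print("0.1% and 99.9% sample quantiles:", np.quantile(zeta, [0.001, 0.999]))
```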
The process V̄ := V − σW is independent of the Wiener process W. If c < 0, then

Q_1 \ge |c| \inf_{r \le T_1} e^{-\bar V_r} \int_0^{T_1} e^{-σW_r}\,dr - \xi_1\, e^{-\bar V_{T_1}}\, e^{-σW_{T_1}}.

Using conditioning with respect to V̄, ξ_1, T_1 and the previous lemma, we get that Q_1 is unbounded from above. Since

Q_1/A_1 \ge |c|\, e^{\bar V_{T_1}} \inf_{r \le T_1} e^{-\bar V_r}\, e^{σW_{T_1}} \int_0^{T_1} e^{-σW_r}\,dr - \xi_1,

we conclude in the same way that Q_1/A_1 is unbounded from above. If c ≥ 0, then necessarily F_ξ((−∞, 0)) > 0 (recall that we exclude the case c ≥ 0, ξ > 0, in which ruin is impossible). Lemma 5.2 implies that the random variables Q_1 and Q_1/A_2 + Q_2/M_2 are unbounded from above.

2. Let σ = 0 and let ξ be bounded from below. We treat several subcases separately.

2a. Π((−1, 0)) > 0 and Π((0, ∞)) > 0. Fix ε > 0 such that Π((−1, −ε)) > 0 and Π((ε, ∞)) > 0 and put

V^{(1)} := I_{\{-1<x<-ε\}} \ln(1+x) * µ + I_{\{x>ε\}} \ln(1+x) * µ.

Then the processes V^{(1)} and V^{(2)} := V − V^{(1)} are independent. Note that V^{(1)} is the sum of two independent compound Poisson processes with negative and positive jumps, respectively, and the absolute values of the jumps are larger than some constant c_ε > 0.

Lemma 5.3. Let K > 0, t > 0. Then the random variable

ζ := K e^{-V^{(1)}_t} - \int_0^t e^{-V^{(1)}_r}\,dr   (5.5)

is unbounded from above and from below, and the random variable

˜ζ := e^{-V^{(1)}_t} \int_0^t e^{-V^{(1)}_r}\,dr

is unbounded from above.

Proof. The arguments are simple and we explain only the idea. One can consider trajectories where V^{(1)} has many negative jumps in a neighborhood of zero while all positive jumps are concentrated in a neighborhood of t. Choosing suitable parameters and using the independence of the processes with positive and negative jumps, we obtain that, with strictly positive probability, the first term in the definition of ζ is arbitrarily close to zero while the integral is arbitrarily large. Thus, ζ is unbounded from below. Symmetric arguments lead to the conclusion that ζ is unbounded from above. ✷

Let c < 0. Then the following bounds are obvious:

Q_1 \ge |c| \inf_{r \le T_1} e^{-V^{(2)}_r} \int_0^{T_1} e^{-V^{(1)}_r}\,dr - \xi_1\, e^{-V^{(2)}_{T_1}}\, e^{-V^{(1)}_{T_1}},

Q_1/A_1 \ge |c|\, e^{V^{(2)}_{T_1}} \inf_{r \le T_1} e^{-V^{(2)}_r}\, e^{V^{(1)}_{T_1}} \int_0^{T_1} e^{-V^{(1)}_r}\,dr - \xi_1.

By conditioning with respect to the random variables V^{(2)}, T_1, ξ_1, which are independent of V^{(1)}, and using Lemma 5.3, we easily obtain that the random variables Q_1 and Q_1/A_1 are unbounded from above and, by Lemma 5.1, so is Y_∞. Let c ≥ 0. The same arguments as in [2] show that Q_1 and Q_1/A_2 + Q_2/M_2 are unbounded from above on the non-null set {ξ_1 < 0, ξ_2 < 0}.
2b. Π((−1, 0)) = 0, Π(h) = ∞. We use a decomposition of V depending on the choice of ε ∈ (0, 1). Namely, put

V^ε := I_{\{x \le ε\}} h * (µ − ν) + I_{\{x \le ε\}} (\ln(1+x) − h) * µ,   (5.6)

Ṽ^ε := I_{\{x > ε\}} h * (µ − ν) + I_{\{x > ε\}} (\ln(1+x) − h) * µ.   (5.7)

Note that V_t = at + V^ε_t + Ṽ^ε_t and Ṽ^ε = I_{\{x>ε\}} \ln(1+x) * µ − I_{\{x>ε\}} h * ν.

Lemma 5.4. Let K > 0, t > 0. Then the random variables

η := \int_0^t e^{-V_r}\,dr - K e^{-V_t}, \qquad η′ := e^{V_t} \int_0^t e^{-V_r}\,dr   (5.8)

are unbounded from above.

Proof. Without loss of generality we assume that a = 0. Fix N > 0 and choose ε > 0 small enough to ensure that Π(I_{\{x>ε\}} h) ≥ N. Let Γ_ε := {sup_{s≤t} |V^ε_s| ≤ 1}. Denote by J^ε and J̄^ε the two processes in (5.6). Using the Doob inequality and the elementary bound x − ln(1+x) ≤ x²/2 for x > 0, we get that

P(sup_{s≤t} |V^ε_s| > 1) ≤ P(sup_{s≤t} |J^ε_s| > 1/2) + P(|J̄^ε_t| > 1/2)
  ≤ 2E(sup_{s≤t} |J^ε_s|) + 2E(|J̄^ε_t|) ≤ 2 (I_{\{x≤ε\}} h² * ν_t)^{1/2} + I_{\{x≤ε\}} h² * ν_t → 0, ε → 0.

Thus, the set Γ_ε is non-null, at least for sufficiently small ε, and on this set

η ≥ (1/e) \int_0^t e^{-Ṽ^ε_r}\,dr - K e^{-Ṽ^ε_t + 1}.   (5.9)

On the intersection Γ_ε ∩ {I_{\{x>ε\}} h * µ_{t/2} = 0, \ln(1+ε)\,µ((t/2, t] × (ε, 1]) ≥ Nt + 1} we have

η ≥ (1/e) \int_0^{t/2} e^{Nr}\,dr - K e = (1/(eN)) (e^{Nt/2} − 1) − K e.

Due to the independence of V^ε and Ṽ^ε this intersection is a non-null set. Since N is arbitrarily large, the required property of η holds. The analysis of η′ follows the same lines. At the first stage we replace V by Ṽ and compensate the linear decrease of V by a large number of positive jumps on the second half of the interval [0, t]. ✷

Let c < 0. Then the random variables

Q_1 = |c| \int_0^{T_1} e^{-V_r}\,dr - e^{-V_{T_1}} ξ_1, \qquad Q_1/A_1 = |c|\, e^{V_{T_1}} \int_0^{T_1} e^{-V_r}\,dr - ξ_1

are unbounded from above by virtue of the lemma. Let c ≥ 0. The same arguments as in [2] lead to the conclusion that Q_1 and Q_1/A_2 + Q_2/M_2 are unbounded from above on the non-null set {ξ_1 < 0, ξ_2 < 0}.
2c. Π((0, ∞)) = 0, Π(|h|) = ∞. Let c < 0. Again we use a decomposition of V depending on the choice of ε ∈ (0, 1). Put

V^ε := I_{\{x \ge -ε\}} h * (µ − ν) + I_{\{x \ge -ε\}} (\ln(1+x) − h) * µ,   (5.10)

Ṽ^ε := I_{\{x < -ε\}} h * (µ − ν) + I_{\{x < -ε\}} (\ln(1+x) − h) * µ,   (5.11)

L := I_{\{x < -ε\}} \ln(1+x) * µ, and Π_ε := Π(I_{\{x < -ε\}} |h|) ↑ ∞ as ε → 0. Then

Ṽ^ε_t = I_{\{x < -ε\}} \ln(1+x) * µ_t − I_{\{x < -ε\}} h * ν_t = L_t + Π_ε t.

To prove that Q_1 is unbounded from above, we argue as follows. As in the previous subcase 2b, we reduce the problem to checking that the random variable

η̃ = \int_0^t e^{-Ṽ^ε_r}\,dr - K e^{-Ṽ^ε_t}

is unbounded from above. Let t_1 := t_2/2, where t_2 := 1/Π_ε. Note that t_2 ≤ t when ε > 0 is sufficiently small. On the set {L_t = L_{t_1}} we have that

η̃ ≥ (t_2 − t_1) e^{|L_{t_1}|} e^{-Π_ε t_2} − K e^{|L_{t_1}|} e^{-Π_ε t} = e^{|L_{t_1}|} (1/(2eΠ_ε) − K e\, e^{-Π_ε t}) ≥ e^{|L_{t_1}|}/(4eΠ_ε)

for sufficiently small ε. Since the r.v. |L_{t_1}| is unbounded from above, so is Q_1. On the set {L_t = 0, ξ_1 ≤ K}, non-null for any ε > 0 and sufficiently large K, we have the bound

Q_1/A_1 ≥ (|c|/Π_ε) (e^{Π_ε t} − 1) − K.

It follows that Q_1/A_1 is unbounded from above.
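The last bound is elementary arithmetic: on {L_t = 0} there are no large negative jumps before t, so Ṽ^ε grows linearly. A sketch, suppressing the small-jump component V^ε and the drift as in the proof of Lemma 5.4:

\[
V_r = \Pi_\varepsilon r \ \ (r \le t)
\quad\Longrightarrow\quad
e^{V_t} \int_0^t e^{-V_r}\,dr
= e^{\Pi_\varepsilon t}\,\frac{1 - e^{-\Pi_\varepsilon t}}{\Pi_\varepsilon}
= \frac{e^{\Pi_\varepsilon t} - 1}{\Pi_\varepsilon},
\]

which, combined with Q_1/A_1 = |c| e^{V_{T_1}} \int_0^{T_1} e^{-V_r}\,dr − ξ_1 and ξ_1 ≤ K, gives the stated lower bound.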
The case c ≥ 0 is treated as in [2].

2d. Π((−∞, 0)) = 0, 0 < Π(h) < ∞, and F((t, ∞)) > 0 for every t > 0. In this subcase V_t = L_t − bt, where L_t := \ln(1+x) * µ_t is an increasing process and b := Π(h) − a. Note that b > 0, since otherwise we get a contradiction with the existence of β > 0 such that \ln E e^{-βV_1} = 0.

Let c < 0. Take an arbitrary N > 0. Let s > 0 be the solution of the equation (|c|/b)(e^{bs} − 1) = N. Take K large enough to ensure that P[ξ ≤ K] > 0. Choose t > s such that F((s, t)) > 0. On the non-null set

{L_s = 0, T_1 ∈ (s, t), e^{L_t − L_s} ≥ K e^{bt}, ξ < K}

we have that Q_1 ≥ (|c|/b)(e^{bs} − 1) − e^{bt − L_t} ξ ≥ N − 1. Thus, Q_1 is unbounded from above. To prove that Q_1/A_1 is unbounded from above, we take ε > 0 and K > 0 such that F((t, ∞)) > 0 and F_ξ((0, K)) > 0. Setting c_ε := (|c|/b)(e^{-bε} − e^{-2bε}), we have on the non-null set

{T_1 > ε, L_{T_1 − ε} = 0, L_{T_1} ≥ \ln((N + K)/c_ε), ξ < K}

that Q_1/A_1 ≥ (|c|/b) e^{L_{T_1}} e^{-bT_1} (e^{b(T_1 − ε)} − 1) − ξ ≥ c_ε e^{L_t} − K ≥ N. So, by Lemma 5.1, Y_∞ is unbounded. If c ≥ 0, we proceed as in [2], using the assumption that F charges any neighborhood of zero.

2e. Π((0, ∞)) = 0, 0 < Π(|h|) < ∞, and F((t, ∞)) > 0 for every t > 0. We have again V_t = L_t − bt, but now the jump process L is decreasing and the constant b < 0.

Let c < 0. Fix N > 0. Let s, t > 0 be such that F((s, t)) > 0. On the non-null set

{T_1 ∈ (s, t), |L_{s/2}| ≥ N, L_{s/2} = L_t, ξ < e^{|L_{s/2}|}}

we have

Q_1 ≥ |c| (T_1/2) e^{|L_{T_1/2}| − |b|T_1} − e^{|L_{T_1/2}| − |b|T_1} ξ ≥ |c| (s/2) e^{N − |b|t} − 1.

Since N is arbitrary, Q_1 is unbounded from above. For any t > 0 and K > 0, on the non-null set {T_1 ≥ t, L_t = 0, ξ ≤ K},

Q_1/A_1 ≥ |c/b| (e^{|b|t} − 1) − K.

This implies that Q_1/A_1 is unbounded from above. If c ≥ 0, we again proceed as in [2].

Acknowledgements

The research is funded by the grant of RSF n° 20-68-47030 "Econometric and probabilistic methods for the analysis of financial markets with complex structure".
References

1. Albrecher, H., Constantinescu, C., Thomann, E.: Asymptotic results for renewal risk models with risky investments. Stoch. Proc. Appl. 122, 3767–3789 (2012)
2. Eberlein, E., Kabanov, Yu., Schmidt, T.: Ruin probabilities for a Sparre Andersen model with investments. Stoch. Proc. Appl. 144, 72–84 (2022)
3. Goldie, C.M.: Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1, 126–166 (1991)
4. Grandell, I.: Aspects of Risk Theory. Springer, Berlin (1990)
5. Kabanov, Yu., Pergamenshchikov, S.: In the insurance business risky investments are dangerous: the case of negative risk sums. Finance Stoch. 20, 355–379 (2016)
6. Kabanov, Yu., Pergamenshchikov, S.: Ruin probabilities for a Lévy-driven generalised Ornstein–Uhlenbeck process. Finance Stoch. 24, 39–69 (2020)
7. Kabanov, Yu., Pukhlyakov, N.: Ruin probabilities with investments: smoothness, IDE and ODE, asymptotic behavior. J. Appl. Probab. 59, 556–570 (2020)
8. Paulsen, J.: Risk theory in a stochastic economic environment. Stoch. Proc. Appl. 46, 327–361 (1993)
9. Paulsen, J.: Stochastic Calculus with Applications to Risk Theory. Lecture Notes, Univ. of Bergen and Univ. of Copenhagen (1996)
10. Paulsen, J.: Sharp conditions for certain ruin in a risk process with stochastic return on investments. Stoch. Proc. Appl. 75, 135–148 (1998)
11. Paulsen, J., Gjessing, H.K.: Ruin theory with stochastic return on investments. Adv. Appl. Probab. 29, 965–985 (1997)
E9AzT4oBgHgl3EQfw_6R/content/tmp_files/2301.01731v1.pdf.txt ADDED
@@ -0,0 +1,1116 @@
GUAP: Graph Universal Attack Through Adversarial Patching

Xiao Zang1, Jie Chen2* and Bo Yuan1 (* Contact Author)
1Department of Electrical and Computer Engineering, Rutgers University
2MIT-IBM Watson AI Lab, IBM Research
Abstract

Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks; yet their predictive capability may be severely compromised under adversarially designed, unnoticeable perturbations to the graph structure and/or node data. Most of the current work on graph adversarial attacks aims at lowering the overall prediction accuracy, but we argue that the resulting abnormal model performance may catch attention easily and invite quick counterattack. Moreover, attacks through modification of existing graph data may be hard to conduct if good security protocols are implemented. In this work, we consider an easier attack that is harder to notice: adversarially patching the graph with new nodes and edges. The attack is universal: it targets a single node each time and flips its connection to the same set of patch nodes. The attack is unnoticeable: it does not modify the predictions of nodes other than the target. We develop an algorithm, named GUAP, that achieves a high attack success rate while preserving the prediction accuracy. GUAP is fast to train by employing a sampling strategy. We demonstrate that a 5% sampling in each epoch yields a 20x speedup in training, with only a slight degradation in attack performance. Additionally, we show that the adversarial patch trained with the graph convolutional network transfers well to other GNNs, such as the graph attention network.
1 Introduction

Graph structured data are ubiquitous, with examples ranging from molecules, social networks, and power systems to knowledge graphs. Graph representation learning is one of the key areas of machine learning, with several extensively explored downstream tasks including node classification, graph classification, and community detection. Over the past decade, a plethora of learning methods has been proposed, ranging from unsupervised embedding approaches (e.g., DeepWalk [Perozzi et al., 2014] and node2vec [Grover and Leskovec, 2016]) to supervised/semi-supervised graph neural network (GNN) models (e.g., GCN [Kipf and Welling, 2017] and GAT [Veličković et al., 2017]). GNN models steadily improve the performance of downstream tasks and achieve state-of-the-art results.

The seminal work of [Szegedy et al., 2013] and [Goodfellow et al., 2014] points out that despite achieving high prediction accuracy, deep models are fragile to adversarially manipulated inputs, stirring a proliferation of research on designing adversarial attacks and defense schemes against them. GNNs, as an emerging class of deep models tailored to graph structured data, also urge scrutiny. A major development in this context focuses on node classification, in part because of its economic and societal importance. For example, bad actors in a financial network may hide themselves through manipulating contacts and transactions with benign actors, devastating the predictive power of GNNs in identifying illicit activities.

Much prior work [Dai et al., 2018; Wu et al., 2019; Zügner and Günnemann, 2019] studying adversarial attacks on GNNs aims at lowering the classification accuracy on all nodes in the graph, through either poisoning the training data to weaken training, or modifying the test data to mislead trained models. However, in many practical scenarios, taking control of existing data proves to be challenging, and thus modifying the graph data is less realistic.

In this work, we consider attacking a trained model through adversarially patching the graph data with new nodes and edges. These new edges must involve the new nodes; they should not change the connections between existing ones. For example, in a social network setting, the patching amounts to creating new accounts and setting their friendships. The key is that such new nodes are adversarial and their effect is secretive: when a target is being attacked, its connections to the patch nodes are all flipped so that its prediction is changed, while the predictions of other nodes are not. See Figure 1 for illustration.

[Figure 1: Illustration of GUAP. A set of patch nodes {13, 14, 15} and edges are inserted. One attacks node 7 through flipping its connections with the patch. Predictions of other nodes remain unchanged.]

The idea of adversarial patching exists in prior graph-attack work. Greedy-GAN [Wang et al., 2018] adopts a greedy approach and NIPA [Sun et al., 2019] uses reinforcement learning to compute the new edges. Our work differs from them in several aspects. First, these methods aim at lowering the prediction accuracy whereas ours barely does. Hence, they often need a large patch (e.g., 20% of the original node set in [Wang et al., 2018] and 10% in [Sun et al., 2019], for the Cora data set) but our patch is rather small (e.g., 1%). Second, these methods attack the entire node set all at once whereas ours targets a single node each time. Third, even though the work of [Wang et al., 2018] additionally considers attacking single targets, such an attack needs a new optimization whenever the target changes, incurring expensive computation. On the contrary, our work computes the patch once and for all and uses the flipping mechanism to perform the attack, which is computationally economical. Our attack is of the universal type.

We propose an algorithm, named GUAP, to compute the adversarial patch. It consists of two parts: node generation and edge training. Features of the new nodes are random; they are generated based on the statistics of those of the original graph. We find that the random generation is robust, in the sense that once the patch is computed, regenerating the node features using the same mechanism barely affects the attack performance. For edge training, we treat the connections involving the new nodes as parameters to optimize. The optimization achieves two goals: (i) it alters the prediction of the attack target and (ii) it maintains the predictions of other nodes. The latter goal distinguishes our work from most of the prior work.

We summarize the contributions as follows:
1. We present a novel attack scenario, which patches given graph data without modifying its original content and attacks one target at a time through flipping its connections to the patch nodes.
2. We propose a universal attack algorithm, which achieves a high attack success rate (ASR) while maintaining the original prediction accuracy.
3. We show that attack training can be sped up through sampling the training set in each epoch, without sacrificing much of the attack performance.
4. We demonstrate that our method admits good transferability of attack performance to a model different from the one used for training.
2 Related Work

Universal attack. Universal attacks compute input-independent perturbations to fool the classifier. They most often appear in the computer vision literature. The work of [Moosavi-Dezfooli et al., 2017] perturbs every pixel of an image, whereas the work of [Brown et al., 2017] computes an adversarial patch that is attached to an image at random locations. For graphs, research is sporadic. The work of [Zang et al., 2020] selects a set of anchor nodes so that attacking a target amounts to flipping the connections between the target and the anchors.

Graph adversarial attack. Much work is devoted to the modification of the graph structure, resulting in poor quality of node embeddings [Chen et al., 2018b; Xu et al., 2019a; Wang and Gong, 2019; Bojchevski and Günnemann, 2018; Dai et al., 2018; Xu et al., 2019b]. Nettack [Zügner et al., 2018] modifies not only the graph structure but also the node features. Specifically, targeting each node, Nettack modifies the node feature entries or the graph structure step by step, to maximize the prediction loss of the attacked node in a greedy manner. Modifying the graph is generally a discrete optimization problem, which invites greedy algorithms, but the work of [Liu et al., 2019; Bose et al., 2019] proposes a probabilistic framework under which continuous optimization is performed. The work of [Zügner and Günnemann, 2019] poisons the graph by treating it as a hyperparameter and optimizing through hypergradient updates. Closest to our work are [Wang et al., 2018] and [Sun et al., 2019], both of which inject adversarial nodes into the graph. The former greedily inserts nodes and uses a discriminator to compute node features so that one cannot distinguish new nodes from the original ones. The latter computes adversarial nodes by using reinforcement learning. Neither approach attacks a single node through connection flipping as we do.
3 Preliminaries
We use GCN [Kipf and Welling, 2017] as the attack model. Denote by G = (A, X) the given graph, where A is the n×n adjacency matrix and X is the n×d node feature matrix. Throughout, we assume that the graph is unweighted; thus, A is binary. Denote by f(A, X) the GCN model, which reads

Z := f(A, X) = \mathrm{softmax}\big(\hat A \cdot \mathrm{ReLU}(\hat A X W^{(0)}) \cdot W^{(1)}\big),   (1)

where \hat A = \tilde D^{-1/2} \tilde A \tilde D^{-1/2} is the normalized adjacency matrix, with \tilde A = A + I and \tilde D = \mathrm{diag}(\sum_j \tilde A_{ij}). The matrices W^{(0)} and W^{(1)} are trainable parameters whose sizes are respectively d×d′ and d′×K, where K is the number of classes. Consequently, Z is the n×K output matrix, whose ith row is the probability vector for the ith node. Let V_L be the training set and Y be the labels. The training of GCN minimizes the cross-entropy loss

\mathcal{L} = -\sum_{i \in V_L} \sum_{k=1}^{K} 1\{Y_i = k\} \ln Z_{ik}.   (2)
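As a concrete reference for Equation (1), here is a minimal NumPy sketch of the two-layer GCN forward pass. It is our illustration rather than the authors' code: it uses dense matrices and assumes already-trained weights W0 and W1, whereas a practical implementation would use sparse operations.

```python
import numpy as np

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN of Eq. (1): softmax(A_hat @ ReLU(A_hat @ X @ W0) @ W1)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A~ D^-1/2
    H = np.maximum(A_hat @ X @ W0, 0.0)            # hidden layer with ReLU, n x d'
    logits = A_hat @ H @ W1                        # n x K
    Z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)        # row-wise softmax
```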
4 Graph Universal Attack Through Adversarial Patching

Denote by G_new = (A_new, X_new) the new graph with m patch nodes. For convenience we order them after the original nodes, so that

A_new = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix} \qquad \text{and} \qquad X_new = \begin{pmatrix} X \\ X_{patch} \end{pmatrix}.

Here, C is n×m, denoting the connections between the original nodes and the patch ones; B is m×m, denoting the connections between the patch nodes themselves; and X_patch is m×d, denoting the feature matrix of the patch nodes. We discuss the generation of X_patch and the computation of C and B in the following subsections, respectively.

Additionally, we only consider undirected graphs in this paper. This is because, even for directed ones, a majority of graph neural networks (e.g., GCN [Kipf and Welling, 2017]) remove the edge directions and take the symmetric adjacency matrix as input. Our method is evaluated on three undirected graph benchmark data sets.
4.1 Node Generation

Realistic node features may be generated by using a generative model (e.g., a GAN) [Wang et al., 2018], but learning from existing nodes suffers from many challenges, including high dimensionality and a small training set. We opt for a simple mechanism that is sufficiently robust.

We treat each feature dimension independently. In general, for numeric features we fit a normal distribution for each feature dimension and sample from it. Without a priori knowledge, a normal distribution appears to be the most straightforward parameterization. Depending on the data set, more accurate distributions may be fit or even learned.

For some of the data sets we experiment with, the node features are binary. Hence, we perform binarization and make the feature value 0 if the Gaussian sample is smaller than 0.5, or 1 otherwise. If the training set contains 1 with probability p and 0 with probability 1 − p, then the fitted normal distribution has mean p and variance p(1 − p). Thus, the new samples take 1 with probability

\frac{1}{2}\Big(1 - \mathrm{erf}\Big(\frac{1/2 - p}{\sqrt{2p(1-p)}}\Big)\Big),

which is approximately p. In other words, the general approach of fitting a normal distribution covers well the special case of the Bernoulli distribution.
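A minimal sketch of this generation mechanism, in our own notation (X is the original n×d feature matrix and m the number of patch nodes):

```python
import numpy as np

def generate_patch_features(X, m, binary=False, seed=0):
    """Sample patch-node features from per-dimension fitted normal distributions."""
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)                  # per-dimension mean (equals p for 0/1 data)
    sigma = X.std(axis=0)                # per-dimension std (sqrt(p(1-p)) for 0/1 data)
    X_patch = rng.normal(mu, sigma, size=(m, X.shape[1]))
    if binary:
        X_patch = (X_patch >= 0.5).astype(X.dtype)   # binarize at the 0.5 threshold
    return X_patch
```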
4.2 Edge Training

Denote by l̂(A, X, i) the predicted label of node i given graph adjacency matrix A and node feature matrix X. It is where the largest entry of f(A, X)_i resides; i.e., l̂(A, X, i) = arg max f(A, X)_i. Our attack aims at two goals: changing the prediction of i while preserving those of other nodes. Both goals involve the graph adjacency matrix A′_new when node i is being attacked. They may be mathematically summarized as: for each i in the training set V_L,

l̂(A′_new, X_new, i) ≠ l̂(A, X, i), \qquad l̂(A′_new, X_new, j) = l̂(A, X, j) \quad ∀ j ≠ i.   (3)

Note that A′_new is i-dependent, but we suppress the dependency in notation to avoid cluttering.

We elaborate how A′_new is computed from A_new. Let p = [0, . . . , 0, 1, . . . , 1] be an (n + m)-vector, where the first n entries are 0 and the rest are 1. We call p the attack vector, since the 1 entries will be used to flip the connections with the patch nodes. We extend p to the attack matrix P, whose ith row and ith column equal p and whose other entries are zero. Thus, P_{ij} denotes whether the connection between nodes i and j is flipped. One easily derives that

A′_new := attack(A_new, i) = (1 − P) ∘ A_new + P ∘ (1_0 − A_new),   (4)

where ∘ stands for the element-wise product, 1 is the matrix of all ones, and 1_0 is analogous except that the diagonal is set to zero.

Throughout training, we will also need to revert the attacked graph back to the patched graph. Such an "unattack" operation is simple to conduct by using the attack matrix to flip back:

A_new := unattack(A′_new, i) = attack(A′_new, i).   (5)
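Equations (4) and (5) amount to flipping the m entries of row i (and column i) that correspond to the patch nodes. A small NumPy sketch of ours, with A_new the (n+m)×(n+m) patched adjacency matrix:

```python
import numpy as np

def attack(A_new, i, n):
    """Flip the connections between node i and the patch nodes (Eq. (4)).

    Applying the function twice restores the input, so it also serves
    as unattack (Eq. (5)).
    """
    A = A_new.copy()
    A[i, n:] = 1.0 - A[i, n:]    # flip row entries toward the patch block
    A[n:, i] = 1.0 - A[n:, i]    # mirror the flip to keep A symmetric
    if i >= n:
        A[i, i] = 0.0            # the 1_0 matrix keeps the diagonal at zero
    return A
```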
+ Outer Loop: GUAP
358
+ Recall that Anew =
359
+ � A
360
+ C
361
+ CT
362
+ B
363
+
364
+ . The overall algorithm is to
365
+ start with an initial Anew (specifically, B = 0 and C = 0) and
366
+ iteratively update it by using certain perturbation ∆Anew =
367
+
368
+ 0
369
+ ∆C
370
+ ∆CT
371
+ ∆B
372
+
373
+ that reflects the two goals summarized in (3).
374
+ See Algorithm 1.
375
Concretely, the training is conducted in several epochs, each of which iterates over the training set V_L. At node i, we compute the attacked adjacency matrix A′_new and check if i's prediction changes. If not, we use an inner procedure, IGP (to be elaborated subsequently), to generate a perturbation ΔA_new. Then we update A′_new with this perturbation and revert it to the unattacked matrix A_new. Because the perturbation may gradually modify A_new to an incomparable magnitude, we apply L2 projection as well as clipping to prevent A_new from exploding. The L2 projection is applied to each patch node individually, so that the vector of edges to such a node has an L2 norm of at most radius. We also set the diagonal of B to zero to prevent self-loops.

Algorithm 1 Graph Universal Attack Through Adversarial Patching (GUAP)

Input: A, X
Output: Adjacency matrix A_new of the patched graph and node features X_new

  Initialize A_new and generate X_new
  epoch ← 0
  while epoch < max_epoch do
      for node i in training set do
          A′_new ← attack(A_new, i)
          if l̂(A′_new, X_new, i) = l̂(A, X, i) then
              ΔA_new ← IGP(A′_new, X_new, i)
              A′_new ← A′_new + ΔA_new
              A_new ← unattack(A′_new, i)
              A_new ← L2-projection(A_new, radius)
              A_new ← A_new.clip(0, 1)
              A_new.diagonal ← 0
          end if
      end for
      A_new ← (A_new > 0.5) ? 1 : 0
      Compute ASR using Equation (6) and record the highest value
      epoch ← epoch + 1
  end while
  return A_new at the epoch of highest ASR and X_new
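For concreteness, the projection and clipping steps of Algorithm 1 may look as follows (a sketch under our own naming, assuming NumPy; the released implementation may differ):

import numpy as np

def l2_projection(A_new, n, radius):
    """Project each patch node's edge vector onto an L2 ball of the given radius.

    Rows n.. of A_new correspond to patch nodes; each such row holds that
    patch node's (weighted) connections to all other nodes.
    """
    A_new = A_new.copy()
    for j in range(n, A_new.shape[0]):        # iterate over patch nodes
        norm = np.linalg.norm(A_new[j, :])
        if norm > radius:
            A_new[j, :] *= radius / norm      # shrink onto the L2 ball
            A_new[:, j] = A_new[j, :]         # keep the matrix symmetric
    A_new = np.clip(A_new, 0.0, 1.0)          # clipping, as in Algorithm 1
    np.fill_diagonal(A_new[n:, n:], 0.0)      # zero diagonal of B: no self-loops
    return A_new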
After the entire training set is iterated, the B and C blocks contain real values within (0, 1). We binarize them to maintain unweightedness. Then, the attack success rate

    ASR(V_L) := (1/|V_L|) Σ_{i=1}^{|V_L|} 1{ l̂(A′_new, X_new, i) ≠ l̂(A, X, i) }        (6)

is computed as the metric of attack performance.
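In code, Equation (6) reduces to a short loop (again a sketch with our own naming; predict stands in for l̂, and attack is the helper sketched after Equation (5)):

def attack_success_rate(A, X, A_new, X_new, train_nodes, predict, n):
    """ASR of Equation (6): fraction of training nodes whose label flips."""
    flipped = sum(
        predict(attack(A_new, i, n), X_new, i) != predict(A, X, i)
        for i in train_nodes
    )
    return flipped / len(train_nodes)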
Inner Loop: IGP

The inner procedure, iterative graph perturbation (IGP), computes a perturbation ΔA_new to the current attacked matrix A′_new to gear it toward the two goals summarized in (3). For the first goal, the strategy is to push the prediction toward the decision boundary of another class; whereas for the second goal, the strategy is to progress toward a smaller loss for all nodes except i:

    L′_new := − Σ_{j ∈ V_L \ i} Σ_{k=1}^{K} 1{Y_j = k} ln f(A′_new, X_new)_{jk}.        (7)

The procedure is summarized in Algorithm 2.
Algorithm 2 Iterative Graph Perturbation (IGP)

Input: Attacked adjacency matrix A′_new, feature matrix X_new, node i
Output: Perturbation ΔA_new

  Initialize empty ΔA_new
  E′_new ← A′_new
  iter ← 0
  pred ← l̂(A, X, i)
  while l̂(A′_new, X_new, i) = pred and iter < max_iter do
      v ← (|Δf_k| / ‖Δw_k‖₂²) Δw_k according to Equation (8)
      v[0 : n] ← 0
      ΔA_new[i, :] ← ΔA_new[i, :] + (1 + overshoot) · v, and analogously for ΔA_new[:, i]
      E′_new ← A′_new + ΔA_new
      E′_new ← E′_new.clip(0, 1)
      grad ← ∇L′_new(E′_new)        # see loss function (7)
      grad[0 : n, 0 : n] ← 0
      grad[i, :] ← 0, and analogously for grad[:, i]
      grad ← (grad + gradᵀ)/2
      ΔA_new ← ΔA_new − step · grad
      iter ← iter + 1
  end while
  return ΔA_new

Algorithm 2 is a while-loop that iteratively computes the perturbation until the prediction of node i changes (or the iteration count reaches the maximum). The loop contains two parts, corresponding to the two goals respectively. The first part intends to attack i. Denote the prediction by pred; i.e., pred = l̂(A, X, i). According to [Moosavi-Dezfooli et al., 2016; Zang et al., 2020], the minimum perturbation v on the ith row (and column) of A′_new that sends node i to the decision boundary of the closest class k can be calculated as

    k = arg min_{c ≠ pred} |Δf_c| / ‖Δw_c‖₂,    v = (|Δf_k| / ‖Δw_k‖₂²) Δw_k,        (8)

where Δf_c = f(A_new, X_new)_{i,c} − f(A_new, X_new)_{i,pred} and Δw_c = ∇f(A_new, X_new)_{i,c} − ∇f(A_new, X_new)_{i,pred}. Here, the gradient is taken with respect to the ith row (and symmetrically column) of A_new. We set the first n entries of v to zero because the original graph should not change. We also use a small overshoot constant to send node i to the other side of the decision boundary. We introduce a temporary notation E′_new to denote the updated A′_new for subsequent use. We also apply clipping, similar to what is done in the outer loop.

The second part intends to lower the loss (7). We calculate its gradient grad at E′_new and set the first n × n block to zero. We also set the ith row and column to zero because the prediction of node i is not to be preserved. After numerical symmetrization, we update the perturbation ΔA_new along the gradient descent direction, completing one iteration of the while-loop.
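The first part of the loop can be sketched as follows. This is our own PyTorch-flavored illustration of Equation (8), assuming a helper scores_fn that returns node i's class scores as a differentiable function of its adjacency row; none of these names come from the released code:

import torch

def minimal_class_perturbation(scores_fn, a_row, pred):
    """DeepFool-style step of Equation (8): the minimal change to node i's
    adjacency row that reaches the nearest class decision boundary.

    scores_fn(a_row) must return a 1D tensor of class scores for node i,
    differentiable with respect to a_row.
    """
    a_row = a_row.clone().requires_grad_(True)
    scores = scores_fn(a_row)
    grads = [
        torch.autograd.grad(scores[c], a_row, retain_graph=True)[0]
        for c in range(scores.shape[0])
    ]
    best_ratio, v = None, None
    for c in range(scores.shape[0]):
        if c == pred:
            continue
        df = scores[c] - scores[pred]     # Δf_c
        dw = grads[c] - grads[pred]       # Δw_c
        ratio = (df.abs() / dw.norm()).item()
        if best_ratio is None or ratio < best_ratio:   # nearest class k
            best_ratio = ratio
            v = (df.abs() / dw.norm() ** 2 * dw).detach()
    return v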
5 Experiments

In this section, we perform a set of comprehensive experiments to demonstrate the effectiveness of GUAP. We investigate the patch size, show the speedup of training under the sampling strategy, compare with several related methods, and present transferability results. Code is available at https://anonymous.4open.science/r/ffd4fad9-367f-4a2a-bc65-1a7fe23d9d7f/.
5.1 Data Sets and Details

We use three commonly used benchmark data sets: Cora, Citeseer, and Pol.Blogs. Their information is summarized in Table 1. Training is based on the standard GCN model, and hence we also list its test accuracy in the table.

Table 1: Node classification data sets. Only the largest connected component (LCC) is considered.

    Statistics        Cora       Citeseer   Pol.Blogs
    # Nodes (LCC)     2708       3327       1222
    # Edges (LCC)     5278       4676       16714
    # Classes         7          6          2
    Train/Test Set    140/1000   120/1000   121/1101
    Accuracy (GCN)    81.4%      70.4%      94.3%
The hyperparameters for Algorithms 1 and 2 are: max_epoch = 50, max_iter = 30, radius = 10, overshoot = 0.02, and step = 10. All experiments are repeated ten times under the same hyperparameters.
5.2 Compared Methods

Universal attacks on graphs are rarely studied; hence, we compare against a combination of an existing universal attack method, a non-universal method, and variants of the proposed method.
1. Graph Universal Attack (GUA) [Zang et al., 2020]. As opposed to adversarial patching, GUA seeks a set of anchor nodes from the graph and attacks a target through flipping its connections to these anchors. For a fair comparison, the number of anchors is the same as the patch size in GUAP.

2. Fast Gradient Attack (FGA) [Chen et al., 2018b]. FGA is not a universal attack method. Targeting each node, it iteratively modifies the patch connection with the largest absolute gradient value. For a fair comparison, FGA can modify up to the same number of patch edges as GUAP.

3. GUAP without patching edges. This variant of GUAP introduces only patch nodes but no edges. In other words, when a node is attacked, it will be connected through edges to all patch nodes.

4. GUAP with randomly patched edges. Rather than performing the sophisticated edge training, this variant introduces random edges to the patch nodes; the existence of an edge follows a Bernoulli distribution with a certain success probability (see the sketch at the end of this subsection). We experiment with two cases: one such that the number of new edges is approximately the same as that in GUAP, and the other merely setting probability = 0.5, introducing many more edges.

5. GUAP with regenerated node features. This variant first computes the patched graph as does GUAP; then, it regenerates the patch node features. Note that the training of the patched graph relies on the initial features. Hence, it is interesting to see how the change of features affects the attack.

Table 2: Average ASR of GUAP with and without clipping. The percentages of patch nodes are 1%, 1%, and 5% for Cora, Citeseer, and Pol.Blogs, respectively. The projection radius ξ = 10.

    Method               Cora      Citeseer   Pol.Blogs
    GUAP w/o clipping    82.24%    83.41%     45.76%
    GUAP w/ clipping     91.37%    85.05%     53.24%
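The random-edge baseline in item 4 can be generated, for instance, as follows (a sketch under our own naming, not the released implementation):

import numpy as np

def random_patch_edges(n, m, prob, rng=None):
    """Fill the C (n x m) and B (m x m) blocks with Bernoulli(prob) edges."""
    rng = np.random.default_rng() if rng is None else rng
    C = (rng.random((n, m)) < prob).astype(float)
    B = np.triu((rng.random((m, m)) < prob).astype(float), k=1)
    B = B + B.T     # symmetric with zero diagonal: no self-loops
    return C, B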
5.3 Results

We use two metrics for evaluation: ASR and ΔAcc (change of prediction accuracy).

Patch size. First we determine a reasonable patch size. Figure 2 reveals a common pattern across data sets: the ASR increases as more and more nodes are patched into the graph, before it peaks, whereas the accuracy is fairly stable. The ASR curve climbs quickly, indicating that a small patch size suffices to achieve a good ASR. We thus set the patch size to 1%, 1%, and 5% of the original node set for Cora, Citeseer, and Pol.Blogs, respectively.
The L2 Projection Radius. We then keep the plateaued percentages of patch nodes found in Figure 2 and investigate the influence of one important hyperparameter: the L2 projection radius ξ. In Figure 3, we report the average ASR, prediction accuracy, and number of patch edges as ξ increases. When ξ = 10, the ASRs achieve the highest values on the three benchmarks while preserving the overall prediction accuracy. Beyond that, the ASR does not increase further.

Moreover, the number of patch edges climbs up quickly with larger ξ, because the number of patch edges is implicitly controlled by the projection radius: increasing ξ densifies the patch. In real situations, we can adjust the projection radius to balance the trade-off between ASR and the number of patch edges, so that the edge density of the added patch is indistinguishable from real ones. Nevertheless, in subsequent experiments, we adopt ξ = 10 for the highest ASR, regardless of the density of patch edges.
Necessity of clipping. Following [Zang et al., 2020], we also adopt clipping to encourage the stability of results. In Table 2, we list the average ASR of the two variants of GUAP, with clipping versus without. Clipping significantly increases the ASR of GUAP and is therefore a necessary ingredient of the method for achieving high attack performance.
Training cost and acceleration. Next we investigate the computational cost. An estimate is O(max_epoch · |V_L| · m(m + 2n)), where the factor |V_L| (the training set size) comes from the for-loop in Algorithm 1, whereas the factor m(m + 2n) (the difference in matrix size between the original graph and the patched graph) comes from the inner procedure, Algorithm 2. Because the node set size n is given and the patch size m is implicitly controlled by the desired ASR (see the preceding experiment), one factor we may adjust to scale the cost better is the length of the for-loop.
[Figure 2: ASR and accuracy as the number of patch nodes increases (in percentage of the node set size). Panels: (a) Cora, (b) Citeseer, (c) Pol.Blogs.]

[Figure 3: Average number of patch edges, ASR, and accuracy as the L2 projection radius ξ increases. Panels: (a) Cora, (b) Citeseer, (c) Pol.Blogs.]
Inside each epoch, rather than iterating over the entire training set, we deal with a random subset only. Table 3 shows that as one uses a smaller subset, the training time reduces proportionally, whereas the ASR suffers only slightly and the accuracy barely changes. Hence, for a large graph with a large training set, the sampling scheme effectively accelerates training.
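The acceleration amounts to replacing the training set by a random per-epoch subset in the for-loop of Algorithm 1; a minimal sketch (names ours) follows:

import numpy as np

def sample_training_nodes(train_nodes, sample_rate, rng=None):
    """Draw the random per-epoch subset that shortens the for-loop."""
    rng = np.random.default_rng() if rng is None else rng
    k = max(1, int(len(train_nodes) * sample_rate))
    return rng.choice(train_nodes, size=k, replace=False)

# Inside each epoch of Algorithm 1:
#   for i in sample_training_nodes(train_nodes, 0.05):   # ~20x speedup
#       ...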
Comparison with related methods. Now we compare GUAP with several of its variants, as well as with GUA and FGA; see Table 4. Like GUAP, GUA preserves the prediction accuracy but achieves a lower ASR. The preservation of accuracy indicates that most nodes have a robust neighborhood, so compromising only the target as one of the neighbors affects little. The observation similarly applies to GUAP without patching any edges, although in this case the ASR drops significantly. The observation also applies to GUAP with randomly patched edges, because the number of such edges is quite small; in this case the ASR also suffers, although to a lesser extent than when no edges are patched. However, when more and more random edges are patched, these edges play an increasingly significant role in the neighborhood, leading to a substantial compromise in prediction accuracy. Next, GUAP with regenerated node features barely changes the ASR and the accuracy. This observation, together with the earlier ones, indicates that node features are much less important than edges, whose training therefore pays off. The non-universal attack method FGA also barely changes the accuracy, but it achieves a higher ASR than GUAP in some cases and a lower one in others.
Nettack [Zügner et al., 2018] is a popular non-universal attack method. Due to its high computational cost for a comparable perturbation, it is infeasible to conduct experiments with Nettack in a fair setting. Here, we highlight the computational costs. GUAP takes time O(max_epoch · |V_L| · m(m + 2n)); |V_L| is typically a small fraction of n, grows much more slowly than n, and can be further reduced by sampling. On the other hand, to attack all nodes, Nettack costs O(n²(E · T + F)), where E and F represent the numbers of edge and feature perturbations, respectively, and T is the average size of a 2-hop neighborhood. In practice, the n² factor renders Nettack rather slow to run if the aim is to attack all nodes.
Transferability to other models. We apply the patch trained with GCN to other GNN models: GAT [Veličković et al., 2017], FastGCN [Chen et al., 2018a], and AS-GCN [Huang et al., 2018], as well as two embedding models, node2vec [Grover and Leskovec, 2016] and DeepWalk [Perozzi et al., 2014]. Table 5 summarizes the results. GAT is developed based on GCN through incorporating the attention mechanism. Node2vec and DeepWalk update node embeddings by exploring the local structure via random walks. Different from the other models, instead of using the whole graph, FastGCN and AS-GCN use importance sampling to select layer-wise nodes to reduce training cost.
Table 3: ASR and change of accuracy under different sampling rates of the training set. Values in parentheses next to the data set name denote the patch set size as a percentage of the node set size.

                         Cora (1%)          Citeseer (1%)      Pol.Blogs (5%)
    Sample Rate          ASR      ΔAcc      ASR      ΔAcc      ASR      ΔAcc
    100%                 91.37%   −0.12%    85.05%   −0.14%    53.24%   +0.26%
    40% (3x speedup)     89.57%   −0.17%    86.93%   −0.14%    52.92%   +0.35%
    20% (5x speedup)     87.25%   −0.17%    87.53%   −0.19%    53.01%   +0.26%
    10% (10x speedup)    82.42%   +0.01%    83.41%   −0.15%    52.78%   +0.36%
    5% (20x speedup)     80.01%   −0.16%    79.43%   −0.09%    52.77%   +0.36%
Table 4: Comparison with related methods. Values in parentheses next to the data set name denote the patch set size.

                                 Cora (29)           Citeseer (33)       Pol.Blogs (45)
    Baselines                    ASR      ΔAcc       ASR      ΔAcc       ASR      ΔAcc
    GUA                          86.48%   −0.07%     82.23%   −0.07%     48.36%   +0.38%
    GUAP w/o patch edges         28.34%   −0.01%     25.02%   −0.01%     14.62%   +0.39%
    GUAP w/ random edges         58.27%   −0.79%     62.73%   −0.88%     19.99%   +0.31%
    GUAP w/ more rand. edges     68.81%   −46.03%    77.74%   −48.47%    18.69%   −17.26%
    GUAP                         91.42%   −0.11%     85.03%   −0.15%     51.10%   +0.36%
    GUAP + regen. features       91.41%   −0.02%     85.00%   −0.02%     51.08%   +0.36%
    FGA (not universal)          94.90%   −0.66%     92.91%   −0.20%     42.74%   −0.14%
Table 5: Attack performance when using the patched graph trained with GCN on other models. The patch set percentage is 1%, 1%, and 5% for Cora, Citeseer, and Pol.Blogs, respectively.

    Methods              Cora      Citeseer   Pol.Blogs
    GCN (ASR)            91.37%    85.05%     53.24%
    GCN (ΔAcc)           −0.12%    −0.14%     +0.26%
    GAT (ASR)            90.91%    85.04%     40.02%
    GAT (ΔAcc)           −0.36%    −0.19%     −0.04%
    node2vec (ASR)       74.89%    84.24%     43.07%
    node2vec (ΔAcc)      +2.58%    +3.66%     +2.83%
    DeepWalk (ASR)       81.02%    82.41%     41.32%
    DeepWalk (ΔAcc)      −41.42%   −21.51%    −3.50%
    FastGCN (ASR)        41.39%    34.74%     36.59%
    FastGCN (ΔAcc)       −2.43%    −0.40%     −2.08%
    AS-GCN (ASR)         36.68%    31.09%     39.24%
    AS-GCN (ΔAcc)        −2.42%    −1.46%     −2.24%
One sees that the attack performance is well maintained on GAT, except for the ASR on Pol.Blogs. For this exception and all cases of node2vec and DeepWalk, the ASR is still reasonably similar to the GCN case. However, the node2vec accuracy surprisingly increases, and the DeepWalk accuracy significantly drops.

Additionally, both FastGCN and AS-GCN reveal robustness against our attack, which is by and large owing to their use of sampling. Such an observation is not surprising: the patch is quite small, constituting only 1% of the nodes in Cora and Citeseer. Consequently, the patch nodes are likely to be ignored in sampling, which voids the attack. Furthermore, the patches do not negatively impact the overall accuracy significantly.

Based on the above findings, we see that the patch optimized for GCN is not guaranteed to work similarly on all other models, although it does perform equally well on GAT and reasonably closely, in terms of ASR, on DeepWalk and node2vec. Such a result is expected, since GAT has a similar architecture to GCN whereas the other models operate quite differently. Overall, we conclude that our approach transfers well to neural architectures similar to the one it was trained on.
6 Conclusion

In this paper, we consider a novel type of graph universal attack that does not modify the existing nodes and edges and does not change the predictions of nodes other than the target. The attack adversarially patches a small number of new nodes and edges to the original graph and compromises any target through flipping its connections to the patch. We develop an algorithm, GUAP, to find such a patch and demonstrate a high attack success rate. We show that the algorithm can be accelerated by sampling the training set in each epoch without sacrificing attack performance, hinting at feasibility for large graphs; for example, 5% sampling leads to a 20x speedup in training. GUAP achieves a higher ASR than the recently proposed universal attack GUA. Moreover, the patch trained with GCN can be used to effectively attack other models, such as GAT, as well.
References

[Bojchevski and Günnemann, 2018] Aleksandar Bojchevski and Stephan Günnemann. Adversarial attacks on node embeddings via graph poisoning. arXiv preprint arXiv:1809.01093, 2018.

[Bose et al., 2019] Avishek Joey Bose, Andre Cianflone, and William Hamilton. Generalizable adversarial attacks using generative models. arXiv preprint arXiv:1905.10864, 2019.

[Brown et al., 2017] Tom Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.

[Chen et al., 2018a] Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.

[Chen et al., 2018b] Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797, 2018.

[Dai et al., 2018] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371, 2018.

[Goodfellow et al., 2014] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

[Grover and Leskovec, 2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864. ACM, 2016.

[Huang et al., 2018] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In Advances in Neural Information Processing Systems, pages 4558–4567, 2018.

[Kipf and Welling, 2017] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.

[Liu et al., 2019] Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, and Cho-Jui Hsieh. A unified framework for data poisoning attack to graph-based semi-supervised learning. arXiv preprint arXiv:1910.14147, 2019.

[Moosavi-Dezfooli et al., 2016] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.

[Moosavi-Dezfooli et al., 2017] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1765–1773, 2017.

[Perozzi et al., 2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710. ACM, 2014.

[Sun et al., 2019] Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant Honavar. Node injection attacks on graphs via reinforcement learning. arXiv preprint arXiv:1909.06543, 2019.

[Szegedy et al., 2013] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

[Veličković et al., 2017] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.

[Wang and Gong, 2019] Binghui Wang and Neil Zhenqiang Gong. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 2023–2040, 2019.

[Wang et al., 2018] Xiaoyun Wang, Joe Eaton, Cho-Jui Hsieh, and Felix Wu. Attack graph convolutional networks by adding fake nodes. arXiv preprint arXiv:1810.10751, 2018.

[Wu et al., 2019] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples for graph data: Deep insights into attack and defense. In International Joint Conference on Artificial Intelligence, IJCAI, pages 4816–4823, 2019.

[Xu et al., 2019a] Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil Jain. Adversarial attacks and defenses in images, graphs and text: A review. arXiv preprint arXiv:1909.08072, 2019.

[Xu et al., 2019b] Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214, 2019.

[Zang et al., 2020] Xiao Zang, Yi Xie, Jie Chen, and Bo Yuan. Graph universal adversarial attacks: A few bad actors ruin graph learning models. arXiv preprint arXiv:2002.04784, 2020.

[Zügner and Günnemann, 2019] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412, 2019.

[Zügner et al., 2018] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856. ACM, 2018.