diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjjhp" "b/data_all_eng_slimpj/shuffled/split2/finalzzjjhp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjjhp" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe introduce the task of canonicalized-alignment, a universal first step for multi-purpose garment manipulation pipelines.\nBy funneling a diverse array of crumpled clothing into a small set of high-visibility configurations, this task addresses much of the complexity associated with cloth's infinite DOF state space and severe self-occlusion and thus simplifies downstream tasks. After training in simulation with our novel factorized reward formulation for canonicalized alignment our learned policy generalizes to the real-world robot system and can be directly used for garment ironing and folding.\nDue to imperfect cloth simulators, we hypothesize that canonicalized-alignment performance can be improved if a real-world supervision signal could be derived to enable real-world finetuning, and believe this is an interesting direction for future work.\n\\section{Introduction}\n\n\n\n\n\nWhy is everyday cloth manipulation conceptually simple for humans?\nTo us, there are only a few meaningful states shirts could\n\nIn-the-wild cloth manipulation is made easy if the cloth is setup mindful of the downstream manipulation.\nConcretely, this involves canonicalizing the garment, which unfolds the garment into a standard configuration thus exposing all keypoints, while ensuring the downstream task manipulation can kinematically reach task-relevant parts of the cloth.\nRigid-goal conditioned canonicalizing are a fundamental skill for robotic cloth manipulation systems, because this task-agnostic skill \\emph{funnels} severely self-occluded and crumpled garment configurations found in-the-wild into highly structured configurations, allowing for simple task-specific policies to predictably and robustly manipulate the garments.\n\n\n\n\\iffalse\n \n \n \n Consider how we go through clothes pile after laundry.\n \n One-by-one, we would align the garments at a specific pose, with a large swing then a small adjustment or two, before executing a routine folding algorithm.\n All the severely self-occluded and crumple states clothes could assume are turned into a standard configuration.\n This task-agnostic canonicalization step enables simple (i.e., hardcoded) task-specific routines to robustly and predictably manipulate the garments.\n \n\n \n \n \n In-the-wild cloth manipulation is made easy through canonicalization:\n \n by first bringing severely self-occluded crumpled garments into their standard canonical configuration [at a specific pose], downstream tasks such as ironing and folding becomes easy.\n\n \n Every day cloth manipulation\n effortlessly\n because conceptually involves only two different modes,\n first is canonicalizing garment into a standard state from the infinite number of configs\n then downstream cloth manipulation relies on highly regular repeatable actions (e.g. 
a folding algorithm).\n\n From these observations, we ask the following question:\n ``How can we define a unified learning framework over multiple primitives and cloths''\n\n\n\\fi\n\nMany prior works have designed\nWhile prior works have tackled visual goal conditioned manipulation, we argue their problem formulation is too general for typical cloth manipulation tasks.\nOur key observation is goal-conditioned cloth manipulation is much simpler from a canonical state, given\n\nIn summary, our approach has three main contributions:\n\\begin{enumerate}\n \\item \\textbf{A canonicalization objective factorization}, which not only give a better landscape for learning and greedy interaction policies\\footnote{don't like this word, too vague, also need to back up by experiments later.} but also enables rigid goal-conditioning (i.e., SE2 transforms) of the canonical pose by design.\n \\item \\textbf{A multi-primitive, multi-arm setup for canonicalization}, which enables efficient unfolding using high velocity flings while reliably reaching garments' canonical configurations with high precision\\footnote{high precision means different things in different contexts, ideally swap out for something else} pick and places.\n \\item \\textbf{A real-world shirt ironing and folding system}, which demonstrates our key observation that cloth manipulation in the wild is made tractable given a robust and efficient rigid-goal conditioned canonicalization.\n\\end{enumerate}\n<<<<<<< HEAD\n\n\n\n\n\\section{Introduction}\n\n\nWhy hasn't the garments manufacturing received the same level of success in automation as robotic manufacturing in other industries?\nDue to cloths' infinite degrees of freedom, severe self-occlusion, and the possible crumpled states they can assume during many manufacturing stages, defining simple, repetitive controllers with low failure rates (which characterizes non-deformable manufacturing pipelines) is a challenging feat.\nIn this work, we ask: ``how \\emph{should} category-level cloth manipulation pipelines be designed, which enables maximum flexibility to arbitrary initial configurations (\\textit{i}.\\textit{e}. 
crumpled initial states) and manipulation task?''

Recently, real-world cloth manipulation has received a lot of attention.
Some of the earliest cloth manipulation works designed elaborate state machines for core cloth manipulation tasks in manufacturing, such as cloth smoothing~\cite{ken2022speedfolding} and folding~\cite{zhou2021folding}, but only for a specific instance, configuration, and task.
More recently, fully self-supervised approaches learn to unfold~\cite{ha2021flingbot} and tackle visual goal-conditioned manipulation of a single cloth instance~\cite{lee2020learning} from arbitrary initial configurations.
On the other hand, approaches which use human demonstrations for smoothing~\cite{ken2022speedfolding} and folding~\cite{zhou2021folding} require costly human demonstrations/annotations.

Our main observation is that, given appropriately-aligned and canonicalized garments and a garment category-level keypoint detector, keypoint-based cloth-manipulation state machines are sufficient for many manufacturing pipelines.
In other words, instead of learning arbitrary cloth manipulation tasks from arbitrary initial configurations, we hypothesize that it is sufficient to learn a robust canonicalization and alignment policy.
This is because it \emph{funnels} unstructured and self-occluded cloth configurations into highly structured states with clearly visible keypoints, enabling simple heuristics to work with a high success rate.

We argue that learning a robust, task-agnostic, garment-category-level canonicalization and alignment policy is an enabler for simple downstream task-specific state machines.
To this end, we propose a canonicalization and alignment objective, which factorizes into two terms, one which accounts for the deformation cost and another for the planar rigid transform cost of the garment.
We demonstrate that this objective function enables better canonicalization and alignment compared to a naive unfactorized per-vertex cost objective function.
We implement a self-supervised multi-primitive, dual-arm system for efficient unfolding using high-velocity flings and fine-grained adjustments using quasi-static pick and places.
We directly deploy our simulation-trained policy in the real world, where we demonstrate its effectiveness in a steam-ironing and folding pipeline.
These downstream applications exploit the highly structured configurations produced by our policy.

In summary, our approach has three main contributions:
\begin{enumerate}
    \item \textbf{A canonicalization and alignment objective factorization}, which we show empirically correlates better with downstream cloth manipulation success.
    \item \textbf{A multi-primitive, multi-arm setup for canonicalization and alignment}, which uses dynamic flings and quasi-static pick \& places.
    \item \textbf{A real-world shirt steam-ironing and folding system}, which demonstrates our key observation that garment-category-level cloth manipulation from arbitrary initial states is made tractable given a robust canonicalization and alignment.
\end{enumerate}
\section{Introduction}

Why has garment manipulation proved more difficult to automate than more typical rigid and articulated objects? We argue that two key factors are severe self-occlusion, which is present in the large set of possible crumpled states, and the infinite degrees of freedom inherent to clothing. 
As a result, it is impractical to manually define manipulation policies which result in reliable manipulation \u2014 a cornerstone of current automated non-deformable manufacturing pipelines.\n\nIn this work, we explore bridging the gap between existing approaches to automation and the challenging domain of clothing. We show that when a robot first learns to robustly manipulate arbitrarily configured clothing items into a specific pose, downstream manipulation skills can be made to work significantly more reliably on deformable clothing.\n\n\n\nRecently, real-world cloth manipulation has received significant attention.\n\\todo{need to cite earlier works, like Peiter Abbeel's folding with PR2}Some of the earliest cloth manipulation work employed elaborate state machines for core cloth manipulation tasks in manufacturing, such as cloth smoothing~\\cite{ken2022speedfolding} and folding~\\cite{zhou2021folding}, but only for a specific clothing instance, configuration, and task.\n\nMore recently, learning-based approaches have shown success in creating more general cloth manipulation behavior. One such approach consists of supervised methods that use human demonstrations for smoothing~\\cite{ken2022speedfolding} and folding~\\cite{zhou2021folding} behaviors but require costly human demonstrations\/annotations. Another recent line of work employs fully self-supervised learning and has shown success in learning to unfold~\\cite{ha2021flingbot} and in tackling visual goal-conditioned manipulation of a single cloth instance~\\cite{lee2020learning} from arbitrary initial configurations.\n\n\nOur main observation is that if given appropriately-aligned and canonicalized garments\\todo{should define this, first time use the term}, keypoint-based task-specific state machines are sufficient for many garment manipulation tasks.\nInstead of learning arbitrary cloth manipulation tasks from arbitrary initial configurations, we hypothesize that it is sufficient to learn a robust canonicalization and alignment policy from which other manipulation skills may be chained. This is because such a policy \\emph{funnels} unstructured and self-occluded cloth configurations into highly structured states with clearly visible key points, enabling simple heuristics to work with a high success rate.\n\nTo this end, we define a new ``canonicalized-alignment'' task for garment manipulation, where the goal is to transform a garment from its arbitrary initial state into a canonical shape (defined by its category) and align it with a particular rigid pose defined by rotation and translation in a 2D plane.\n\nThe end result is a decomposition of garment manipulation into two factorized pieces. 
The first of which funnels a chaotic and diverse set of cloth configurations into a narrow distribution in a structured region of state space while the second piece consists of downstream behaviors that are able to rely on friendly initial configurations and high observability.\n\n\n\n\n\\todo{feels redundant}Enabled by canonicalized-alignment, we propose \\textbf{CASSIE}, a recipe for multi-purpose garment manipulation, which includes \\underline{c}anonicalization, \\underline{a}lignment, and \\underline{s}tate machines.\nBy learning to canonicalize, our approach can turn arbitrarily crumpled initial configurations into standard configurations where all the garment's key-points are visible.\nBy simultaneously learning alignment, the canonicalized garment can be appropriately positioned and oriented for the downstream state machine's kinematic constraints.\nLastly, by using a mature off-the-shelf key-point detector to extract a key-point garment representation, our proposed recipe enables the usage of key-point-based task-specific state machines.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/teaser.png} \\vspace{-4mm}\n \\caption{Transformed-canonicalization funnels a wide distribution of possible initial configurations into a small set of well structured and highly observable states.} \\vspace{-5mm}\n \\label{fig:task}\n\\end{figure}\n\nOur primary contribution is the introduction of the ``canonicalized-alignment'' task for garment manipulation, which can be serve as \\textit{universal} starting point for many downstream garment manipulation applications. To achieve this, we propose a learned multi-arm, multi-primitive manipulation policy that strategically choose between dynamic flings and quasi-static pick\\&place actions to efficiently and precisely transform the garment into its canonicalized and aligned configuration. \nThis policy is based on a novel factorized objective function that effectively guided the learning and avoid ambiguities and local minima which plague the generic goal-reaching formulations by decoupling shape and pose.\nWe evaluate our approach's effectiveness by applying CASSIE to multiple downstream garment manipulation tasks in the real-world on a physical robot, including category-level garment folding and ironing and we show empirically that incorporation of canonicalized-alignment significantly improves the reliability of downstream tasks. These results suggest that robust canonicalized-alignment provides a practical step forward toward general-purpose garment manipulation from arbitrary states for diverse tasks by significantly reducing the difficulty of learning performant downstream manipulation behavior.\n\\section{Introduction}\n\nWhy has garment manipulation proved more difficult to automate than more typical rigid and articulated objects?\nWe argue that two key factors are severe self occlusion, which is present in the large set of possible crumpled states, and the infinite degrees of freedom inherent to clothing. \nAs a result, it is impractical to manually define manipulation policies that achieve reliable manipulation \u2014 a cornerstone of current automated non-deformable manufacturing pipelines.\nIn this work, we explore bridging the gap between existing approaches to automation and the challenging domain of clothing.\nWe show that when a robot first manipulates arbitrarily configured clothing items into a predefined configuration (\\textit{i}.\\textit{e}. 
canonicalization) at an appropriate pose (\\textit{i}.\\textit{e}. alignment), downstream manipulation skills work significantly more reliably.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/teaser.png} \\vspace{-6mm}\n \\caption{Canonicalized-Alignment \\emph{funnels} the large space of possible cloth configurations into a much smaller and better structured set of highly-visible states that greatly simplifies downstream tasks such as ironing or folding.} \n \\vspace{-5mm}\n \\label{fig:teaser}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/method_overview_v2.jpg}\n \\vspace{-7mm}\n \\caption{\\textbf{Approach Overview.} a) A batch of scaled and rotated observations are created from a top-down RGB image of the workspace and then concatenated with a scale-invariant coordinate map. b) The batch of inputs is fed through the factorized network architecture, producing a batch of rotated and scaled value maps for each primitive. c) All primitive batches are concatenated and the maximum value pixel parameterizes the action to be executed.}\n \\vspace{-7mm}\n \\label{fig:pipeline}\n\\end{figure*}\n\nRecently, real-world cloth manipulation has received significant attention.\nSome of the earliest cloth manipulation work explored manually designed heuristics which worked well for specific clothing types, configurations, and tasks, such as cloth unfolding~\\cite{cusumano2011bringing,maitin2010cloth,triantafyllou2016geometric,osawa2007unfolding}, smoothing~\\cite{Sun2013AHA,willimon2011model}, folding~\\cite{zhou2021folding,maitin2010cloth,osawa2006clothes,miller2011parametrized}, \nbut their strong assumptions initial states, fiducial markers, specialized tools, or cloth type\/shape do not generalize.\nMore recently, learning-based approaches have shown success in more general cloth manipulation behavior. \nOne line of work has explored supervised-learning from human demonstrations for smoothing~\\cite{ken2022speedfolding} and folding~\\cite{zhou2021folding}, but those methods required costly human demonstrations\/annotations. \nAnother recent line of work employs fully self-supervised learning and has shown success in learning to unfold~\\cite{ha2021flingbot} (but doesn't generalize to other tasks) and in tackling visual goal-conditioned manipulation of a single square cloth instance~\\cite{lee2020learning}.\n\nInstead of learning arbitrary monolithic cloth manipulation tasks, we hypothesize that it is more efficient to learn a robust task-agnostic canonicalization and alignment policy from which other task-specific manipulation skills may be chained.\nThis is because such a policy \\emph{funnels} unstructured and self-occluded cloth configurations into structured states with clearly visible key points (Fig.~\\ref{fig:teaser}, middle), reducing the complexity of the task-specific downstream policy, and enabling even simple heuristics to work with a high success rate.\n\nTo this end, we define a new ``canonicalized-alignment'' task for garment manipulation, where the goal is to transform a garment from its arbitrary initial state into a canonical shape (defined by its category) and align it with a particular 2D translation and rotation.\nThe end result is a decomposition of garment manipulation into two factorized parts.\nThe first part funnels a diverse set of cloth configurations into SE-2 transforms of a small set of states with high visibility. 
\nThe second part consists of downstream behaviors which relies on kinematically feasible transforms of structured initial configurations and full keypoint observability to achieve high task success rates.\n\n\nOur primary contribution is the introduction of ``canonicalized-alignment'', a garment manipulation task which serves as a cloth funnel for reducing general-purpose garment manipulation complexity.\nWe achieve this by the following technical contributions: \n\\begin{itemize}\n \\item We propose a learned multi-arm, multi-primitive manipulation policy that strategically chooses between dynamic flings and quasi-static pick\\&place actions to efficiently and precisely transform the garment into its canonicalized and aligned configuration.\n \n \\item To train the policy, we proposed a novel factorized reward function that avoids adverse local minima which plague the generic goal-reaching formulations by decoupling deformable shape and rigid pose.\n \n \\item We evaluate our approach in multiple downstream garment manipulation tasks in the real-world on a physical robot, including folding and ironing.\n\\end{itemize}\nOur experiments show that incorporation of canonicalized-alignment significantly reduces the complexity of downstream applications, suggesting that robust canonicalized-alignment provides a practical step forward toward multi-purpose garment manipulation from arbitrary states for diverse tasks.\n\n\n\\section{Introduction}\n\n\n\\textcolor{red}{ALPER:} \\textcolor{red}{I'm struggling to begin this introduction}\n\nUnfolding and folding laundry, aligning sheets of carbon-fiber on a mold at a factory, and positioning sheets on a person during a surgical operation are only some of the tasks that require goal conditioned cloth manipulation.\n\n\\textcolor{red}{ALPER:} \\textcolor{red}{Big jump but I do want to mention this before introducing it in challenges}\nAlthough many prior works have studied folding of wearable garments, most of them make the assumption that the cloth begins in a perfectly \\textit{canonicalized}\\cite{huang2022mesh} state. In reality, clothes in natural environments are crumpled and self-occluded. What's the point of having a robot fold your clothes if you have to flatten them first? \n\nDespite the importance of cloth manipulation to home and industrial settings, making a general and robust robotic manipulation framework has not been fully addressed due to the the following challenges:\n\n\\begin{enumerate}\n\n \\item \\textbf{Generality}\n \n Previous works that have investigated the task of cloth folding typically rely on strong assumptions about the cloth, such as full keypoint visibility, canonicalized initial state, or a single\/square cloth instance. Since clothes come in vastly different categories, shapes, and sizes, as well as unstructured and self-occluded states, these methods cannot generalize to most natural cloth configurations.\n \n \\item \\textbf{Precision-Efficiency Trade-off}\n \n \n Much of the existing cloth manipulation literature has been successful in using quasi-static action primitives to achieve folding, smoothing, canonicalization, and unfolding. Although single-arm quasi-static action primitives (e.g. pick and place, pick and drag) are precise and predictable, they are inefficient against crumpled and self-occluded clothes. As an alternative, recent works have shown that dynamic action primitives (e.g. 
flinging, blowing-air) can be extremely efficient at unfolding; however, these dynamic actions lack the precision required for more general goal-conditioned general cloth manipulation.\n \n \\textcolor{red}{ALPER:} \\textcolor{red}{This is sort of what I previously had, I like some of the words in it but it doesn't fit the \"challenges\" section but it's more like \"what we need\":}\n Efficient cloth manipulation requires both coarse-grained and fine-grained actions. Without large, coarse grained actions such as a fling, the task becomes difficult to solve, and self-occlusion becomes an issue. Without fine-grained actions, the model cannot quite reach the full goal(e.g. arms still stick out in random directions, corners folded in, etc.)\n \n \n \\item \\textbf{Large State Space and Loss of Reward Equivariances}\n \n Cloth manipulation presents a learning challenge because a single piece of cloth can deform into practically infinite configurations, not to mention that clothes come in different shapes and sizes. Prior unfolding works in cloth unfolding \\cite{ha2021flingbot} \\cite{huang2022mesh}, have taken advantage of equivariant reward definitions, but equivariances lose their validity in the goal-conditioned setting. Given that we want to goal-condition a cloth, the supervision must depend on both the position and the rotation of the cloth relative to the goal.\n \n\\end{enumerate}\n\nTo address the challenges posed above, we propose the follownig contributions:\n\n\\begin{enumerate}\n \\item \\textbf{Factorization of Goal Conditioning}\n \n We assert that all goal-conditioned deformable object manipulation tasks can be factorized into the \\textit{deformable goal} and the \\textit{rigid goal}. Abstractly, the deformable goal concerns the shape and the deformation of the cloth, and it is invariant to translations and rotations. The \\textit{rigid goal} complements the deformable goal by specifying the position and the rotation of the cloth.\n \n Our work is centered around the following insights: 1. Majority useful goal-states for garments in the real world are rigid transformations of a very few deformable goals. 2. Many of these deformable goals can be achieved starting from the canonicalized deformable goal. 3. Since cloth canonicalization eliminates self-occlusion and enforces lots of structure, getting from the canonicalized state to a useful deformable goal can be achieved using demonstrations or a heuristic. \n \n \\textcolor{red}{ALPER:} \\textcolor{red}{Remark: I'm not sure if we want to include this but I think it's interesting that canonicalization is 1. universal and has a clear objective 2. deals with unstructured states, whereas folding 1. has no clear objective function, it has many local optima demonstrated by a wide variety of existing folding techniques 2. requires structure. Therefore it makes more sense to \\textbf{learn how to canonicalize} and \\textbf{demonstrate\/hardcode how to fold. 
}}\n \n Therefore, we demonstrate that we can accomplish category-level goal-conditioning by first training a model to canonicalize a garment category at a given rigid goal, thereby funneling all instances of that category in a structured state, and then using a simple keypoint detection model and a heuristic to fold the garment.\n \n \\item \\textbf{Combinatorial Action Primitives}\n \n We propose a simple yet efficient universal multi-primitive cloth manipulation framework that unites both dynamic and quasi-static action primitives under the same reward \\textcolor{red}{ALPER:} \\textcolor{red}{Unification under the same reward was mentioned by Huy, is this something that's novel or worth emphasizing?}. The resulting model outperforms all single-primitive baselines in terms of both speed and performance. \\textcolor{red}{ALPER:} \\textcolor{red}{Is it worth highlighting that it's easy to add new primitives}\n \n \\item \\textbf{Factorization Reward and Architecture}\n \n Unfolding the arm of a shirt near the goal and away from the goal must result in vastly different supervision under the rigid goal-conditioned reward; however, we can still learn from the commonality of unfolding the arm in both examples. To accomplish this, we factorize our reward into rigid and deformable components, and we similarly factorize our networks to learn each reward separately. Doing so disentangles the supervision and results in a more equivariant reward formulation. \\textcolor{red}{ALPER:} \\textcolor{red}{Not saying anything about the inherent value of the reward itself or the fact that it's more aligned with human preferences yet, I will confirm these things as my multi-shirt abelations conclude, what I wrote is the worst case scenario}\n\n\\end{enumerate}\n\n\\section{Method}\n\nOur method is divided into two distinct stages:\n\n\\begin{enumerate}\n \\item The rigid goal-conditioned canonicalization stage, for which we train a self-supervised model inside the PyFlex cloth simulator \\cite{lin2020softgym}. The goal of rigid goal-conditioned canonicalization stage is to take an arbitrarily crumpled cloth, and to execute a combination of dynamic and quasi-static actions that will canonicalize the cloth at the goal rotation and translation, ensuring keypoint visibility and satisfying the kinematic constraints for downstream tasks.\n \\item A simple, user-defined follow-up folding stage parametrized by the cloth keypoints to demonstrate the viability of heuristic folding policies after RGC.\n\\end{enumerate}\n\n\n\n\\subsection{Action Primitive Definitions}\n\nTo achieve both efficient and precise rigid goal-conditioned canonicalization in the first stage, our approach combines Pick \\& Fling and Pick \\& Place primitives.\n\nThe Fling \\& Place primitive is largely influenced by FlingBot \\cite{ha2021flingbot} with a few modifications suited to our task.\nThe key difference is that our primitive is not in a fixed direction and position. The direction of the fling is determined by the rotation of the-goal, and the position of the fling depends on the translation of the rigid-goal. This ensures that we can move our model's viewport around for rigid-goal conditioning.\n\nThe Fling \\& Place primitive involves a sequence of picking, lifting, stretching, and flinging actions. First, picking primitive performs a bimanual top-down grasp on the $\\textbf{x}_{\\text{1}}, \\textbf{x}_{\\text{2}}\\in \\mathbb{R}^3$ coordinates, which correspond to left and right grasp locations. 
Second, the arms lift the cloth until there is a $5\,cm$ margin with the ground, as opposed to using a fixed fling height, and the height of the cloth from the ground is recorded. This ensures that smaller clothes do not fold back onto themselves during flinging. Third, the cloth is stretched as much as possible, with a maximum stretch width of $0.7\,m$. Finally, the arms move forward by $0.70\,m$ and then move linearly back down onto the workspace surface at $1.4\,ms^{-1}$ by $0.70\,m + \frac{\text{Cloth Height}}{2}$ to ensure that the cloth is centered on the viewport.

The Pick \& Place primitive involves a sequence of picking, lifting, dragging, and placing actions. Given $\mathbf{x}_{\text{1}}, \mathbf{x}_{\text{2}} \in \mathbb{R}^3$, the cloth is top-down pinch grasped at $\mathbf{x}_{\text{1}}$, lifted by $0.1\,m$, then dragged to the xy coordinate of $\mathbf{x}_{\text{2}}$ and placed back down onto the board surface.

\subsection{Reward Formulation For Rigid Goal-Conditioned Canonicalization}

For each cloth instance, the goal is determined by the ground-truth canonicalized vertices of the same cloth mesh. Obtaining the canonicalized vertices is simple, since cloth meshes in the CLOTH3D dataset, as well as many other meshes on the internet, come in a canonicalized pose. To create goal configurations, we simply flatten the cloth at the center of the workspace.

Let the flattened two-dimensional vertices of a cloth's configuration be defined as $\{\mathbf{v}_i\}$ and the corresponding flattened goal vertices be defined as $\{\mathbf{g}_i\}$.

A naive way to define a reward is to take the mean L2 distance of all vertices with respect to the corresponding goal vertices. Since the magnitude of this quantity differs based on cloth size, we normalize it by dividing by $\sqrt{HW}$, where $H$ and $W$ are the height and the width of the canonicalized cloth, respectively. This gives us the baseline unfactorized reward:

\begin{equation}
    R_{\textrm{Unf}} = -\frac{
    \sum_i \lvert\lvert \textbf{g}_i - \textbf{v}_i\rvert\rvert
    }{\sqrt{HW}}
    \label{eq:unfactorized}
\end{equation}

Despite being a viable dense reward, the L2 reward has the following deficiencies:
\begin{enumerate}
    \item\textbf{Entangled supervision:} Under the naive L2 reward, adjusting the arm of a shirt at the corner of the board results in vastly different supervision than adjusting the arm at the center of the board. How is our model supposed to learn how to adjust the arms of a shirt?
    \item\textbf{Rough reward landscape:} Actions that end up shifting most of the cloth body result in extremely sharp changes in reward, whereas smaller actions become insignificant. This makes it hard to land final adjustments to the cloth.
\end{enumerate}

To address these problems, we propose the \textit{factorized reward}, which is the sum of the \textit{rigid reward}, representing how far the cloth's overall translation and rotation are from the goal pose, and the \textit{deformable reward}, representing how far the cloth is from its canonicalized configuration in terms of shape or deformation.

To calculate the two components, we introduce a new point cloud $\{\mathbf{g'}_{i}\}$, which is obtained by superimposing the goal vertices $\{\mathbf{g}_{i}\}$ onto the cloth's workspace vertices using the ICP (iterative closest point) algorithm. 
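To make this superimposition concrete, the following is a minimal sketch of how $\{\mathbf{g'}_{i}\}$ and the two reward terms could be computed with NumPy. It is a simplified stand-in rather than our exact implementation: it assumes known vertex correspondences and uses a single closed-form planar (Procrustes) fit in place of the iterative ICP procedure, and all function and variable names are illustrative.

\begin{verbatim}
import numpy as np

def best_planar_alignment(g, v):
    # Least-squares planar rotation + translation that maps the goal
    # vertices g onto the current vertices v (known correspondences).
    # g, v: (N, 2) arrays of flattened vertex positions.
    g_mean, v_mean = g.mean(axis=0), v.mean(axis=0)
    H = (g - g_mean).T @ (v - v_mean)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = v_mean - R @ g_mean
    return g @ R.T + t                       # g' = R g + t

def factorized_reward(g, v, height, width, alpha=0.6):
    # Rigid (pose) + deformable (shape) reward, normalized by cloth size.
    g_prime = best_planar_alignment(g, v)
    norm = np.sqrt(height * width)
    r_rigid = -np.linalg.norm(g_prime - g, axis=1).sum() / norm
    r_deform = -np.linalg.norm(v - g_prime, axis=1).sum() / norm
    return (1.0 - alpha) * r_rigid + alpha * r_deform
\end{verbatim}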
Note that $\{\mathbf{g'}_{i}\}$ is, by construction, the combination of a planar translation and rotation of $\{\mathbf{g}_{i}\}$. We now define the factorized reward as follows:

\begin{align} 
    \label{eq:factorized}
    R_{\textrm{Fac}} & = (1-\alpha) R_{\textrm{Rigid}} + \alpha R_{\textrm{Deform}}
\end{align}

where the rigid cost is given by 
\begin{equation}
    R_{\textrm{Rigid}} = 
    -\frac{\sum_i \lvert\lvert \mathbf{g'}_i - \mathbf{g}_i\rvert\rvert}
    {\sqrt{HW}}
    \label{eq:rigid}
\end{equation}
and the deformable cost is given by 
\begin{equation}
    R_{\textrm{Deform}} = 
    -\frac{\sum_i \lvert\lvert \mathbf{v}_i - \mathbf{g'}_i\rvert\rvert}{
    \sqrt{HW}
    }
    \label{eq:deform}
\end{equation}

\begin{figure}
    \centering
    \includegraphics[width=\linewidth]{figures/method_reward_v1.pdf}
    \caption{\textbf{Reward Computation.} The goal configuration $\{\mathbf{g}_i\}$ is superimposed onto the current configuration $\{\mathbf{v}_i\}$ to obtain the best-aligned configuration $\{\mathbf{g'}_i\}$. The rigid reward measures the pose offset between $\{\mathbf{g'}_i\}$ and $\{\mathbf{g}_i\}$, while the deformable reward measures the shape mismatch between $\{\mathbf{v}_i\}$ and $\{\mathbf{g'}_i\}$.}
    \label{fig:task}
\end{figure}

The deformable distance asks, ``given that the cloth is perfectly positioned and rotated, how bad is the shape?'', while the rigid distance asks, ``if the cloth were perfectly canonicalized in place, how far off would it be from one that is canonicalized at the goal?''

$\alpha$ is a tunable parameter which determines how much emphasis is put onto the deformable reward.
Our factorized architecture, which we introduce in the next subsection, predicts each of these terms from a separate network head, so $\alpha$ can be tuned at test time to optimize a desired metric. In our experiments, we found that $\alpha=0.6$ maximizes the final IoU performance for long-sleeve shirts.

\subsection{Learning Reward-Maximizing Actions}

The goal of our policy is to take a top-down RGB image of the workspace, and then execute the action that maximizes the single-step delta-reward it receives. To unify the primitive parametrizations, enforce constraints easily, and equip our model with a strong inductive bias with respect to scale, translation, and rotation, we use a spatial action map approach inspired by FlingBot~\cite{ha2021flingbot}.

In a spatial action map, actions are parametrized over the pixels of the input image, and the policy learns a dense value map over these pixels; by rotating and scaling the input observation, this representation exploits translational, rotational, and scale equivariances.

To extend the spatial action maps to multiple primitives, we propose a network architecture with one common encoder and $N$ decoder heads, where $N$ is the number of primitives.

For model inference, the top-down square $128 \times 128$ RGB image of the viewport is captured, concatenated with a scale-invariant positional encoding, and then a batch of scaled and rotated observations is generated from this image, resulting in $B$ \textit{transformed} observations. 
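To illustrate how such a transformed observation stack can be generated, below is a minimal sketch using OpenCV-style affine warps. The helper name is illustrative, and warping each channel separately is only one way to keep the positional-encoding channels consistent with the RGB channels; it is a sketch rather than a specification of our system.

\begin{verbatim}
import numpy as np
import cv2

def transformed_observation_batch(obs, scales, num_rotations):
    # obs: (H, W, C) top-down observation, already concatenated with
    # the positional encoding. Returns (len(scales)*num_rotations, H, W, C).
    H, W = obs.shape[:2]
    center = (W / 2.0, H / 2.0)
    angles = np.linspace(0.0, 360.0, num_rotations, endpoint=False)
    batch = []
    for s in scales:
        for a in angles:
            # Rotation + zoom about the image center in one affine warp.
            M = cv2.getRotationMatrix2D(center, a, s)
            warped = np.dstack([cv2.warpAffine(obs[..., c], M, (W, H))
                                for c in range(obs.shape[-1])])
            batch.append(warped)
    return np.stack(batch)

# e.g., transformed_observation_batch(obs, [0.75, 1.0, 1.5, 2.0, 2.5, 3.0], 16)
# produces the batch of 96 transformed observations described below.
\end{verbatim}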
In our experiments we use the cross product of scales $[0.75, 1.0, 1.5, 2.0, 2.5, 3.0]$ and 16 rotations spanning $360$ degrees, resulting in $B= 6 \\times 16 = 96$ observations.\n\nAfter a $B \\times 128 \\times 128$ value map is generated by each of the $N$ primitive decoders, pixels that result in an invalid action are filtered out, and the maximum value pixel among the $N \\times B \\times 128 \\times 128$ value maps are chosen, and the chosen pixel coordinate, the chosen action primitive, and the delta rewards are recorded.\n\n\nGiven the pixel coordinate $(x,y)$ executed on some transformed observation, $(x,y+10)$ and $(x,y-10)$ are selected as the 2D pixel corresponding to the action primitive parameters $\\mathbf{x}_1$ and $\\mathbf{x}_2$. These pixel coordinates are transformed back into the world space coordinates, and the top-down depth of the coordinate is appended as the $z$ coordinate to obtain the three dimensional locations that parametrize our action primitives.\n\nThe specified action is executed, and the step statistics are saved into a replay buffer for off-policy training.\n\nDuring the off-policy training stage, a batch of $(S_i, A_i, R_i)_{i=1, \\cdots , N}$ tuples are sampled for each action primitive $i$, where $S_i$ represents the top-down network observation, $A_i$ represents the chosen pixel coordinate $(x, y)$, and $R_i$ represents the delta-reward received by executing the action primitive $i$ on the pixel $A_i$ with respect to the scaled and rotated reference frame of the observation $S_i$.\n\n\n\nThe loss is accumulated by sequentially forwarding each observation $S_i$ through the network, decoding the observation with the $i$ th head into a value map $V_i$, and then adding to the total loss the L2 distance between $V_i[x,y]$ and $R_i$. Once the loss is accumulated for each primitive, the gradient step is taken. In our approach, we use Adam optimizer with lr=1e-4.\n\nIn addition to predicting delta-rewards using a network, we also investigate learning rigid and deformable rewards using two separate networks, with one encoder and $N$ decoders each. For inference, the $N$ value maps from each network are added together by the weight $\\alpha$ according to factorized reward formulation [\\ref{eq:factorized}], and the maximum pixel is chosen for execution. The training procedure is identical to the one mentioned above, where one network is supervised on delta-rigid-rewards and the other one on delta-deformable-rewards.\nFor more details, we refer to readers the code [Link]\n\nWe train our model using the aformentioned setup until convergence, for 12,500 episodes with 8 steps per episode. This process takes about 48 hours on 4 RTX3090 GPUs.\n\n\n\\subsection{Follow-up Folding Heuristic}\n\nUnlike unfolding, folding a cloth is a structured, step-by-step task that does not involve self-occlusion. In addition, folding methods vary based on habits and preferences, so it does not make sense to train an entire model to fold a cloth in one specific way.\n\nIn our approach, we use a simple heuristic that identifies cloth keypoints using a UNet, and then executes a user-defined closed-loop policy based on these keypoints.\n\nTo train the UNet keypoint model for a given cloth category, we take about 100 128x128 pixel images of high-coverage observations from a training replay buffer, and hand-label them using a custom GUI. 
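Because only on the order of a hundred labeled frames are collected, the keypoint model relies on heavy augmentation, and the geometric augmentations must be applied jointly to the image and its keypoint labels. The following is a minimal sketch of one such joint augmentation step; the random ranges and helper name are illustrative assumptions rather than our exact settings.

\begin{verbatim}
import numpy as np
import cv2

def augment(image, keypoints, max_angle=180.0):
    # image: (H, W, 3) uint8 observation; keypoints: (K, 2) pixel (x, y) coords.
    H, W = image.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((W / 2.0, H / 2.0), angle, 1.0)
    image_aug = cv2.warpAffine(image, M, (W, H))
    # Apply the same affine transform to the keypoint coordinates.
    pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])  # (K, 3)
    keypoints_aug = pts @ M.T                                   # (K, 2)
    # Simple photometric jitter for sim-to-real robustness.
    gain = np.random.uniform(0.8, 1.2)
    image_aug = np.clip(image_aug.astype(np.float32) * gain,
                        0, 255).astype(np.uint8)
    return image_aug, keypoints_aug
\end{verbatim}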
With this heavily augmented training pipeline, we are able to train a model that can robustly identify the keypoints of a canonicalized cloth with high sim-to-real generalizability. The canonicalized pose makes the keypoint identification almost trivial.

After identifying the cloth keypoints, we execute a user-defined pick-and-place policy parametrized by the keypoint locations.

\section{Method}

\subsection{A Two-staged Recipe for Garment Processing}

Commonly cited reasons for the challenges of cloth manipulation are garments' infinite degrees of freedom (DOF) and severely crumpled configurations, which imply a large state space and partial observability, respectively.
Thus, the simple, low-failure-rate state machines which have dominated non-deformable manufacturing pipelines do not trivially extend to garment pipelines.
This is because such human-designed state machines typically assume a low-dimensional state space and highly structured configurations, which require limited-to-no sensing.
Are we then forced to take a fully end-to-end learning approach for each cloth manipulation task we care about?

We propose to factorize a cloth-manipulation pipeline into two stages.
In the first stage, the robot system executes a learned \emph{task-agnostic} canonicalization \& alignment policy, which leaves the garment in a standard pose with all keypoints visible.
In the second stage, the robot executes a \emph{task-specific} human-specified state machine, which operates purely from key-points.
We argue that this factorization comes with three benefits:
\begin{enumerate}
    \item \textbf{Arbitrary initial configuration via canonicalization}:
    Since cloths readily deform and crumple during any stage of the manufacturing pipeline, simple manually-designed state machines do not have the structure they require for reliable manipulation.
    In contrast, our first stage performs canonicalization, which \emph{funnels} the large space of cloth configurations into the highly structured, fully observable configurations which such state machines require, enabling our pipeline to work on arbitrary initial configurations.
    \item \textbf{Downstream task-awareness via alignment}:
    Manufacturing pipelines may involve many different tools (\textit{e}.\textit{g}. 
iron, sewing machines) placed on different sides of a workspace.\n Thus, depending on the downstream task, it is insufficient for canonicalized cloths to be at any position and orientation on the workspace~\\cite{huang2022mesh}\n In contrast, our first stage also performs garment alignment, which enables our system to canonicalize the cloth at a specified planar pose which is kinematically most suitable for downstream tasks.\n \\item \\textbf{Garment-level generalization via Key-point-based State Machines}:\n Engineering and debugging a task-specific state-machine operating on key-points to achieve low failure rates is much more straight forward and, thus, less costly than that for a behaviour-cloning, imitation learning, or reinforcement learning approach.\n This is because the only data-driven part of this second-stage builds upon the maturity of off-the-shelf key-point detectors and the existing user-friendly frameworks (which abstracts away the semi-supervised, data-augmentation machinery which makes them work so robustly).\n While a key-point-based cloth representation contains less information than an image~\\cite{ha2021flingbot} or mesh~\\cite{}, we argue that this is a feature, not a bug.\n This is because such key-point representations effectively reduces the infinite DOF down to a handful meaninful ones, which makes manually writing a state machine for a new garment manipulation task simple.\n With a good canonicalization and alignment, we hypothesize that our key-point-based state machines would achieve much better intra-garment-class generalization than data-driven approaches\\todo{ideally, we have an approach which compares behaviour cloning with state machine based approaches}.\n\\end{enumerate}\nWe name our two-staged garment manufacturing pipeline recipe, which includes \\underline{C}anonicalization, \\underline{A}lignment, and a \\underline{S}tate machine, \\textbf{CASPAr}.\nNext, we will discuss how we formulate the canonicalization \\& alignment objective (Sec.~\\ref{sec:method:can_align_task}), how we learn the task (Sec.~\\ref{sec:spatial_policy}), and how we implement the system and state-machines (Sec.~\\ref{sec:method:heuristics}).\n\n\n\n\\subsection{The Canonicalization \\& Alignment Task}\n\\label{sec:method:can_align_task}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/method_reward_v2.pdf}\n \\caption{\\textbf{Reward Computation.} TBD: Awaiting review.}\n \\label{fig:task}\n\\end{figure}\n\n\\mypara{Problem Statement.}\nFix a garment class $c \\in \\{\\textrm{shirt},\\textrm{pants},\\textrm{skirts}\\}$.\nGiven a garment instance in class $c$ with $N$ vertices, let $\\mathbf{v}= \\{v_i\\}_{i\\in N}$ denote the local configuration of the cloth, which is the set of vertex planar positions $ v_i \\in \\mathbb{R}^2$ in the cloth's local frame $T = (x,y,\\theta)$.\nHere, $(x,y)\\in \\mathbb{R}^2$ is the origin and $\\theta \\in [-180^\\circ,180^\\circ]$ is the orientation of the cloth's frame with respect to the global (workspace) framework.\nDenote by $ \\mathbf{g} = \\{ g_i \\}_{i\\in N}$ the local canonical configuration of the cloth, with each $g_i \\in \\mathbb{R}^2$ in the cloth's local frame.\nHere, $\\mathbf{g}$ for each garment instance the conventional configuration for class $c$, such as the T-shaped pose when $c=\\textrm{shirt}$ and the upside-down V-shaped pose $c=\\textrm{pants}$.\nLet the target planar pose $T^\\prime = (x^\\prime,y^\\prime, \\theta^\\prime)$ describe the global planar position $(x^\\prime,y^\\prime) 
\\in \\mathbb{R}^2$ and rotation $\\theta^\\prime\\in [-180^\\circ,180^\\circ]$ at which the garment instance's pose $T$ should be located.\nUsing only top-down views of the garment instance, the goal of canonicalization is to manipulate the garment instance such that $\\mathbf{v} = \\mathbf{g}$.\nSimilarly, from only $I$, the goal of alignment is to ensure that $T = T^\\prime$.\n\n\\mypara{Naive Reward Formulation.}\nClearly, a global canonical configuration $\\mathbf{g}^\\prime$ can be derived from $\\mathbf{g}$ and a specified target planar pose $T^\\prime$, and similarly from the current local configuration $\\mathbf{v}$ to the global canonical configuration $\\mathbf{v}^\\prime$.\nThis leads to the following straight-forward objective for canonicalization and alignment\n\\begin{equation}\n R_{\\textrm{Unf}} = -\\lvert\\lvert \\mathbf{g}^\\prime - \\mathbf{v}^\\prime \\rvert\\rvert_{2}\n \\label{eq:unfactorized}\n\\end{equation}\nBy summing up the $L_2$ distances between all vertices, this objective formulation is applicable to generic cloth manipulation since $\\mathbf{g}^\\prime$ could be any cloth configuration of interest.\nHowever, we argue that this leads to two downsides:\n\\begin{enumerate}\n \\item \\textbf{Entangled supervision.}\n Reaching a canonicalized configuration at the center versus the corner of the workspace could give vastly different $R_{\\textrm{Unf}}$ values depending on what the target planar pose $T^\\prime$ is.\n This makes it difficult for the policy to tell whether it should make a planar transformation of the cloth configuration (such as shifting entire cloth to the right) or a deformable adjustment (such as flipping shirt's sleeve outwards).\n \\item \\textbf{Over-emphasis on cloth pose.}\n Actions that shift the cloth result in sharp and large changes in $R_{\\textrm{Unf}}$, while actions of smaller magnitudes become insignificant.\n Since such small adjustment actions are required to bring a poorly canonicalized cloth to a well-canonicalized one, $R_{\\textrm{Unf}}$ fails to put enough emphasis on the canonicalization subtask.\n\\end{enumerate}\n\n\\mypara{Factorized Reward Formulation.}\nIn contrast, we propose a reward factorization (Fig.~\\ref{fig:reward_formulation}), which retains the \\underline{c}anonicalization and \\underline{a}lignment nature of the task,\n\\begin{align} \n \\label{eq:factorized}\n R_{\\textrm{CA}} & = (1-\\alpha) R_{\\textrm{C}} + \\alpha R_{\\textrm{A}}\n\\end{align}\nwhere $\\alpha \\in (0,1)$ is a weighing hyperparameter, the canonicalization reward is given by \n\\begin{equation}\n R_{\\textrm{C}} = \n - \\lvert\\lvert \\mathbf{v} - \\mathbf{g}\\rvert\\rvert_2\n \\label{eq:rigid}\n\\end{equation}\nand the alignment reward is given by \n\\begin{equation}\n R_{\\textrm{A}} = \n -\\lvert\\lvert \\mathbf{g}^\\prime - \\mathbf{g}\\rvert\\rvert_2\n \\label{eq:deform}\n\\end{equation}\nHere, $R_{\\textrm{C}}$ encodes ``if the cloth is perfectly positioned and oriented, how bad is the deformable shape?'', while $R_{\\textrm{A}}$ asks, ``if the cloth was perfectly canonicalized, how bad is the planar pose offset?''.\nWith this factorization, we can provide separate supervision $R_{\\textrm{C}}$ and $R_{\\textrm{A}}$ during training, while acting with respect to $R_{\\textrm{CA}}$ during data-collection\/inference time.\nThis helps the policy distinguish how actions separately affect the cloth shape or planar pose.\nFurther, with a tunable $\\alpha$, we can emphasize $R_{\\textrm{C}}$ more than $R_{\\textrm{A}}$, which we have 
observed quantitatively correlates better with downstream task success and qualitatively aligns with a better canonicalization.\n\n\\mypara{Objective Implementation.}\nIn our experiments, we observed that $\\alpha = 0.6$ performs best.\nTo account for different cloth sizes, we normalize all $R_{\\textrm{C}}$, $R_{\\textrm{A}}$ and $R_{\\textrm{Unf}}$ by the geometric mean of the cloth's height and width in a canonicalized configuration.\nWe define a cloth's planar pose $T$ as the planar transformation for which $\\lvert\\lvert\\mathbf{v}^\\prime - \\mathbf{g}^\\prime \\rvert \\rvert$ is minimized, and approximate this using a modified version of ICP that utilizes the ground-truth correspondence between the current mesh and the goal mesh. In this modified method, vertices that are closer to their goal coordinates below a certain threshold are used to superimpose the current vertices onto the goal vertices, and this process iterates until convergence.\n\n\n\\subsection{Spatial Action Maps for Canonicalization and Alignment}\n\\label{sec:spatial_policy}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/pipeline.png}\n \\caption{\\textbf{Approach Overview.} }\n \\label{fig:pipeline}\n\\end{figure*}\n\nCoarse-grain, high-velocity multi-arm flings can efficiently unfold and align garments from crumpled initial states~\\cite{ha2021flingbot}, but is insufficient for fine-grained adjustments to achieve good canonicalization.\nInstead, a multi-arm, multi-primitive system which combines quasi-static and dynamic actions can achieve the best of both worlds.\nTo unify the primitive parametrizations, easily enforce constraints, and equip our model with strong inductive bias with respect to scale, translation, and rotations, we use a spatial action maps policy.\n\n\\mypara{Spatial Action Maps} is a convolutional neural network~\\cite{lecun2010convolutional} (CNN) based policy for learning value maps~\\cite{Wu_2020} where actions are defined on the pixel grid.\nThrough its simple and practical exploitation of these translational, rotational, and scale equivariances, spatial action maps is a popular framework for learning robotic policies~\\cite{}.\nWe refer the reader to \\citet{ha2021flingbot} for more details.\n\nWe extend \\citeauthor{ha2021flingbot}'s spatial action maps approach as follows.\nGiven a top-down view $I\\in \\mathbb{R}^{H\\times W \\times 3}$ of the workspace (Fig.~\\ref{fig:pipeline}a), we concatenate a $H \\times W \\times 2$ normalized coordinate map to $I$ to help the network reason about the cloth's alignment.\nThe resulting map is transformed to 16 rotations spanning $360^\\circ$ and scales in $[0.75, 1.0, 1.5, 2.0, 2.5, 3.0]$, which effectively enforces the arm-cross-over and grasp-width safety constraints.\nTo enable multiple primitives, we propose a factorized network architecture (Fig.~\\ref{fig:pipeline}b) with two common encoder, one for each task's reward ($R_{\\textrm{C}}$, $R_{\\textrm{A}}$) and multiple decoder heads, one for each primitive.\nThe decoded value map for each primitive from each encoder is combined using \\eqref{eq:factorized}, and the highest value action (over all action parameters and primitives) is chosen (Fig.~\\ref{fig:pipeline}c), with $\\epsilon$-greedy exploration.\n\n\\mypara{Implementation.}\nIn our experiments, we consider two primitives, quasi-static pick\\&place and dynamic flings.\nWe use $(H,W) = (128, 128)$, and a $\\epsilon_v$ value exploration and $\\epsilon_a$ action primitive exploration schedule with 
halflife $\\epsilon_v=2500$ steps and $\\epsilon_a=5000$ steps.\nOur value networks' predictions are supervised to the delta-reward values -- which is the difference in $R_{\\textrm{CA}}$ before and after an action is taken -- using the MSE loss and the Adam~\\cite{kingma2014adam} optimizer with a learning rate of $1e-4$.\nWe train our model until convergence, for 12,500 episodes with 8 optimization steps per episode, which takes about 48 hours on 4 NVIDIA RTX3090s.\n\n\n\\subsection{Task-specific State Machines for Ironing and Unfolding}\n\\label{sec:method:heuristics}\n\n[OUTLINE]\n\nSetup for state machines:\n\\begin{itemize}\n \\item Run canonicalization \\& alignment policy, described in previous section, with a specified target pose $T^\\prime$\n \\item Use DeepLabv3 to detect keypoints on the garment\n \\item Execute the task-specific key-point based state machine\n \\item Key benefit is that the highly structured canonicalized states enables the usage of state machines, which are simple to define and debug for engineers.\n\\end{itemize}\n\nDeepLabv3\n\\begin{itemize}\n \\item Emphasize that this is mature technology, used in many other fields, and has also been used in robotics (e.g. IRP).\n \\item We collected XXX data points for each garment class, and trained one model for each garment class.\n \\item Qualitatively, we observed that this key-point model generalized to novel garment instances.\n \\item We provide quantitative numbers for DeepLabv3 results on held-out cloth meshes in supplementary material.\n\\end{itemize}\n\nIroning state machine for shirts:\n\\begin{itemize}\n \\item The ironing state machine is the simplest in that it does not require any perception. A third arm holding an iron moves back and forth between two way points.\n \\item Our policy performs 4 canonicalization \\& alignment, one for each orientation, such that the iron covers multiple sides of the garment\n \\item Ironing Score: The binary mask overlay between the iron's path and the garment.\n\\end{itemize}\n\nFolding state machine for shirts:\n\\begin{itemize}\n \\item Folding consists of 2 steps:\n \\begin{enumerate}\n \\item Grasp wrist keypoints, then bring them to the far waist keypoints.\n \\item Grasp shoulder keypoints, then bring them to the waist keypoints.\n \\end{enumerate}\n \\item Folding Score: composed of three scores:\n \\begin{enumerate}\n \\item Shirt Body IoU score: get the shirt's body mask with the target shirt body mask, computed before step 1 of the folding state machine.\n \\item Shirt Arm IoU score: similarly for shirt arms, computed between step 1 and step 2 of the folding state machine.\n \\item Final folded IoU, computed after step 2 of the folding state machine.\n \\end{enumerate}\n The final folding score is the mean of these three numbers.\n\\end{itemize}\n\\section{Method}\n\\label{sec:method}\n\n\\subsection{A Multi-Purpose Garment Manipulation Pipeline}\n\\label{sec:method:recipe}\n\n\nWe propose a factorized approach to multi-purpose garment manipulation from arbitrary states that decomposes the process into two steps.\nFirst, the robot executes a learned \\emph{task-agnostic} canonicalized-alignment policy, which leaves the garment in a known configuration predefined for the clothing category at a specified 2D rotation and translation.\nSecond, the robot executes a \\emph{task-specific} keypoint based policy, which could be as simple as a manually-designed heuristic.\nThis approach confers three primary benefits:\n\\begin{itemize}\n \\item \\textbf{Arbitrary 
initial configuration}:\n Canonicalization \\emph{funnels} the large space of possible cloth configurations into a narrow distribution of highly-structured fully-observable configurations from which downstream policies can more easily operate.\n \\item \\textbf{Downstream task-awareness}:\n Flexible goal-conditioned alignment allows the canonicalized cloths to be placed at specified positions and orientations that are kinematically appropriate for particular downstream tasks.\n \\item \\textbf{Clothing category generalization}:\n A keypoint-based cloth representation effectively reduces the observation space from having to represent the infinite DoF down to a few meaningful keypoints.\n Further, cloths are always in a known canonicalized configuration.\n These two properties combined not only simplifies learning downstream task-specific manipulation policies, but also makes it possible to engineer heuristics that work reliably for a clothing category.\n\\end{itemize}\nNext, we will discuss how the canonicalized-alignment task is formulated (Sec.~\\ref{sec:method:can_align_task}), learned (Sec.~\\ref{sec:spatial_policy}), and implemented alongside the several downstream task policies (Sec.~\\ref{sec:method:heuristics}).\n\n\n\n\\subsection{The Canonicalization \\& Alignment Task}\n\\label{sec:method:can_align_task}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/method_reward_v2.png}\n \\vspace{-6mm}\n \\caption{\\textbf{Reward Computation.}\n From the goal configuration $\\mathbf{g}$ (green) and current configuration $\\mathbf{v}$ (magenta), we compute a best-alignment configuration $\\mathbf{g}^\\prime$ (orange).\n \n Then, the average vertex distance between $\\mathbf{g}^\\prime$ and $\\mathbf{g}$ (where only rigid transforms matter) gives the alignment reward $R_{\\textrm{A}}$, while that between $\\mathbf{g}$ and $\\mathbf{v}$ (where only deformation matters) gives the canonicalization reward $R_{\\textrm{C}}$.\n \n Our factorized reward $R_{\\textrm{CA}}$ is a weighed sum between $R_{\\textrm{A}}$ and $R_{\\textrm{C}}$.\n \n }\n \\vspace{-6mm}\n \\label{fig:reward_formulation}\n\\end{figure}\n\n\\mypara{Problem Formulation.}\nGiven a clothing item in some clothing category in an arbitrary initial configuration, the goal of canonicalization is to reach the human-defined standard deformable configuration for that clothing category, such as a T-shaped configuration for shirts and the upside-down V-shaped configuration for pants.\nNote that this only accounts for the garment's shape, but not its pose in the workspace.\nThis means canonicalization alone can't ensure that downstream tasks are kinematically feasible.\nTo address this shortcoming, the goal of ``canonicalized-alignment'' is to reach canonicalization at a specific planar position and rotation in the workspace.\n\n\\mypara{Naive Reward Formulation.}\nGiven a simulated cloth instance with $N$ vertices, let $\\mathbf{v} = \\{v_{i}\\}_{i\\in[N]}$ denote the current configuration of the cloth (Fig.~\\ref{fig:reward_formulation}, magenta), where each $v_i \\in \\mathbb{R}^3$ is the position of the $i$th cloth vertex.\nGiven a goal configuration $\\mathbf{g} = \\{g_{i}\\}_{i\\in[N]}$, the average per-vertex distance between $\\mathbf{g}$ and $\\mathbf{v}$ gives a generic goal-conditioned cloth manipulation cost.\nIn the specific case where $\\mathbf{g}$ is a canonicalized configuration of the cloth at the goal position and rotation (Fig.~\\ref{fig:reward_formulation}, green), we have the following 
straightforward canonicalized-alignment reward \n\\begin{equation}\n R_{\\textrm{Unf}} = -\\lvert\\lvert \\mathbf{g} - \\mathbf{v} \\rvert\\rvert_{2}\n \\label{eq:unfactorized}\n\\end{equation}\nClearly, $R_{\\textrm{Unf}}$ is consistent, in that a policy which achieves $R_{\\textrm{Unf}}=0$ achieves perfect canonicalized-alignment.\nHowever, this formulation has two primary downsides:\n\\begin{enumerate}\n \\item \\textbf{Entangled supervision.}\n When $R_{\\textrm{Unf}}$ is low, it can be difficult for the policy to tell whether it should make a planar transformation of the cloth configuration (such as shifting entire cloth to the right) or a deformable adjustment (such as flipping a shirt's sleeve outwards).\n \\item \\textbf{Over-emphasis on cloth pose.}\n Actions that shift the cloth result in sharp and large changes in $R_{\\textrm{Unf}}$, while actions of smaller magnitudes become insignificant.\n Since such small adjustment actions are required to bring a poorly canonicalized cloth to a well-canonicalized one, $R_{\\textrm{Unf}}$ fails to put enough emphasis on the canonicalization subtask, and leads to a problematic local minima in policy learning.\n\\end{enumerate}\n\n\n\\mypara{Factorized Reward Formulation.}\nTo alleviate these shortcomings, we propose a reward factorization, that expresses the canonicalization $R_{\\textrm{C}}$ and alignment $R_{\\textrm{A}}$ aspects of the task separately:\n\\begin{align} \n \\label{eq:factorized}\n R_{\\textrm{CA}} & = (1-\\alpha) R_{\\textrm{C}} + \\alpha R_{\\textrm{A}}\n\\end{align}\nwhere $\\alpha \\in (0,1)$ is a hyperparameter.\nWith this factorization, we can provide separate supervision $R_{\\textrm{C}}$ and $R_{\\textrm{A}}$ during training, while acting with respect to $R_{\\textrm{CA}}$ during data-collection and inference.\nThis helps the policy distinguish how actions separately affect the cloth shape or planar pose. \nFurther, with a tunable $\\alpha$, we can emphasize $R_{\\textrm{C}}$ more than $R_{\\textrm{A}}$, which leads to a better canonicalization.\n\nTo factorize the reward, we propose to compute a transform $T$, which transforms $\\mathbf{g}$ into a best-aligned goal configuration $\\mathbf{g}^\\prime$ (Fig.~\\ref{fig:reward_formulation}, orange).\nGiven such a $\\mathbf{g}^\\prime$, its distance to $\\mathbf{v}$ accounts only for their deformable shape mismatch, which serves as the canonicalization reward,\n\\begin{equation}\n R_{\\textrm{C}} = \n - \\lvert\\lvert \\mathbf{v} - \\mathbf{g}^\\prime\\rvert\\rvert_2\n \\label{eq:rigid}\n\\end{equation}\nMeanwhile, by $T$'s definition, the distance between $\\mathbf{g}^\\prime$ and $ \\mathbf{g}$ accounts only for the mismatch in planar position and rotation, which serves as the alignment reward,\n\\begin{equation}\n R_{\\textrm{A}} = \n -\\lvert\\lvert \\mathbf{g}^\\prime - \\mathbf{g}\\rvert\\rvert_2.\n \\label{eq:deform}\n\\end{equation}\n\n\n\\mypara{Factorization Implementation.}\nTo find $\\mathbf{g}^\\prime$, we have observed that naively minimizing the average per-vertex distance between $\\mathbf{g}$ and $\\mathbf{v}$ is extremely sensitive to outliers, so does not give us the best alignment.\nSuch outliers arise due to mismatches in $\\mathbf{g}$'s and $\\mathbf{v}$'s deformable shapes where small protrusions with large offsets (\\textit{e}.\\textit{g}. 
a shirt's arm folded inwards) could significantly shift the minimum distance configuration.\nTo filter out such outliers, we optimize the transform $T$ which minimizes this distance for only a subset of points, where point $i$ is included if $\\lvert\\lvert g_i - v_i\\rvert\\rvert_2 \\leq \\tau$ for some scalar threshold $\\tau$ then apply $T$ to $\\mathbf{g}$ to get $\\mathbf{g}^\\prime$.\nWe repeat this minimization and filter procedure in iterations, using the previous iteration's $\\mathbf{g}^\\prime$ as the current $\\mathbf{g}$, until convergence.\n\nIn our experiments, we observed that $\\alpha = 0.6$ and $\\tau=0.3$ performs best.\nTo account for different cloth sizes, we normalize all $R_{\\textrm{C}}$, $R_{\\textrm{A}}$, $R_{\\textrm{Unf}}$, and $\\tau$ by the geometric mean of the cloth's height and width in a canonicalized configuration.\nSince most garments are mirror-symmetric, we select the highest reward from either the goal configuration or its mirror-flip in the goal's local frame.\n\n\n\\subsection{Multi-Primitive Spatial Action Maps Policy}\n\\label{sec:spatial_policy}\n\nCoarse-grain dynamic multi-arm flings can efficiently unfold and align garments from crumpled states~\\cite{ha2021flingbot}, but are insufficient for the fine-grained adjustments required to achieve canonicalization.\nTo overcome this challenge, we propose a multi-arm, multi-primitive system that combines quasi-static and dynamic actions, which enables both efficient \\emph{and} fine-grained manipulation. To unify the primitive parametrizations and easily enforce constraints, we use a spatial action maps policy.\n\n\\mypara{Spatial Action Maps} is a convolutional neural network~\\cite{lecun2010convolutional} (CNN) policy for learning value maps~\\cite{Wu_2020} where actions are defined on a pixel grid.\nThrough its simple and effective exploitation of translational, rotational, and scale equivariances, spatial action maps is a popular framework for learning robotic policies~\\cite{wu2020spatial,ha2021flingbot,xu2022dextairity}.\n\nWe extend FlingBot~\\cite{ha2021flingbot}'s spatial action maps approach as follows.\nGiven a $H\\times W \\times 3$ top-down view of the workspace (Fig.~\\ref{fig:pipeline}a), we rotate and scale it to form a stack of transformed observations of shape $K \\times H\\times W \\times 3$.\nTo help the network reason about the cloth's alignment, we concatenate a $K \\times H \\times W \\times 2$ scale-invariant, normalized (between -1 and 1) positional encoding to the transformed observation stack.\nTo enable multiple primitives, we propose a factorized network architecture (Fig.~\\ref{fig:pipeline}b) with two encoders, one for each task's reward ($R_{\\textrm{C}}$, $R_{\\textrm{A}}$), where each encoder has two decoder heads, one for each primitive.\nThe encoders take in the transformed observation stack, and the decoders output value maps, one for each reward-primitive pair.\nThe value maps are combined using \\eqref{eq:factorized}, and the highest value action (over all action parameters and primitives) is chosen (Fig.~\\ref{fig:pipeline}c).\n\n\\mypara{System Implementation.} \nIn our experiments, we consider two primitives, quasi-static pick\\&place and dynamic flings.\nWe use $(H,W) = (128, 128)$\nand a decaying $\\epsilon$-greedy with $\\epsilon = 1$ for exploration of\n1) action primitives (\\textit{i}.\\textit{e}. fling v.s. 
pick\\&place) with half life 5000, and \n2) action parameters within each primitive with half life 2500.\nBy constraining the observation's transforms to 16 rotations spanning $360^\\circ$ and scales in $\\{0.75, 1.0, 1.5, 2.0, 2.5, 3.0\\}$ (giving $K=96$), we can ensure that arms neither collide nor cross-over each other. \nOur value networks' predictions are supervised using the delta-reward values -- which is the difference in $R_{\\textrm{CA}}$ before and after an action is taken -- using the MSE loss and the Adam~\\cite{kingma2014adam} optimizer with a learning rate of $1e-4$.\nWe train our model for 12,500 episodes which takes 2 days on 4 NVIDIA RTX3090s.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{figures\/qualitative_categories_v1.jpg}\n \\vspace{-3mm}\n \\caption{\n \\textbf{Canonicalized-Alignment of Multiple Categories.}\n In each row, we demonstrate a sequence of 5 actions taken by the model corresponding to a clothing category in simulation.}\n \\vspace{-6mm}\n \\label{fig:canal:sim}\n\\end{figure}\n\n\n\\subsection{Keypoint-based Task Heuristics}\n\\label{sec:method:heuristics}\n\nCompared to learning-based approaches, heuristics are highly interpretable and thus simple to define.\nHere, we demonstrate that it's possible to use heuristics for shirt ironing with no keypoints and folding with a small set of keypoints.\n\n\\mypara{Keypoint Detection.}\nWe collect 200 cloth configurations from simulation with coverage at least $60\\%$ and trained a DeepLabv3~\\cite{chen2019rethinking} detector for each garment class.\nUsing a random 80\/20 training\/evaluation split, we observed that this detector generalizes well to novel garment instances with average error of 5\/128 pixels.\nSetting up a keypoint detector model for a new clothing category takes roughly 1 hour.\nAfter detection, each keypoint is depth-projected into 3D points and transformed into the workspace frame of reference.\nBy representing cloths as a set keypoints, we sidestep their infinite DoF by using a few meaningful keypoints as the representation, which makes it simpler to define heuristics over them.\nFor instance, long sleeve shirts have six keypoints for two sleeves, shoulders, and waists.\n\n\\mypara{Ironing Heuristic.}\nFor garment manipulation pipelines, specialized tools are placed at a fixed location in the workspace.\nFor ironing, the extra tools involved are the ironing board and the arm holding the iron (Fig.~\\ref{fig:downstream:real} left).\nGiven a well canonicalized and aligned shirt, an open-loop ironing primitive where the end effector moves from one end of the ironing board and back without any perception can be sufficient.\nIn our setup, we use two transforms such that the left and right side of the shirt is on the ironing board respectively.\n\n\\mypara{Folding Heuristic.}\nFirst, the sleeves are folded towards the waist using a dual-arm pick and place action.\nHere, the pick point is the sleeve keypoint, while the place point is the quarter and three-quarter point along the waist line (computed from the waist keypoints).\nSince not all shirt arms are long enough to reach the waist points, the place points are constrained to be an arm's length distance away from the shoulder keypoints.\nThe arm length can be computed from keypoints as the minimum distance between the sleeve and the shoulder keypoints over the left and right arms.\nIn the second step, with the arms folded in, the shoulder keypoints are picked and placed at the waist keypoints 
(Fig.~\\ref{fig:downstream:real} right).\n\\section{Related work}\n\nPrior works have explored the topics of goal-conditionining and manipulation within the domain of deformable objects. Not only are these systems limited in their manipulation capabilities (i.e. only using quasi-static or only using dynamic actions), but they also make strong assumptions about the cloth that's being manipulated (e.g. square cloth, full visibility, access to cloth image under goal configuration). \n\n{\\color{red} I'm not sure how to group these approaches. 1 and 2 are related, but 3 and 4 are about a completely different part of the task}\n\n\\label{sec:citations}\n\n\n\\textbf{Reinforcement Learning for Cloth Manipulation}\n\\cite{wu2019withoutdemonstrations}\\cite{jangir2019dynamic}\n\\cite{tsurumine2019deeprl}\nSimplifying assumptions about square cloths or physically marked landmarks on fixed shirt dot dot dot.\n\n\n\n\\textbf{Cloth Manipulation From Expert Demonstrations} \\newline\nSince defining the goal by itself is a challenge in cloth manipulation, prior reinforcement learning approaches \\cite{seita2019imitation, ganapathi2020learning} have leveraged expert demonstrations. While expert demonstrations simultaneously sidestep the exploration problem and the goal conditioning problem, they either rely on simplified instances such as a square cloth \\cite{seita2019imitation} or expensive human input \\cite{ganapathi2020learning}.\n\n\n\\textbf{Single Primitive Cloth Manipulation} \\newline\n Bypassing the dependency on an expert, self-supervised cloth manipulation has been demonstrated in unfolding with a factorized pick \\& place action space by \\cite{wu2019withoutdemonstrations} and goal conditioned folding for simplified cloth instances (square cloth, rope) {\\color{red} they only have unfolding btw }\n using spatial action maps~\\cite{Wu_2020,zeng2019tossingbot,zeng2018learning} by \\cite{lee2020learning}.\nHowever, all these approaches operate entirely in quasi-static action spaces. A recent approach \\cite{ha2021flingbot} has demonstrated the effectiveness of pairing dynamic actions with spatial action maps. Although dynamic actions are useful for unfolding a cloth and reducing self-occlusion, by themselves they are not sufficient for goal-conditioned cloth manipulation. \n\n\n\n\\textbf{Instance-level Goal Conditioning} \\newline\n A learning framework developed by Hoque et. al. \\cite{hoque2020robotic} first learns a cloth's visual dynamics model from random interactions, and then is able to take a visual goal image and take actions towards the goal state; however, this method assumes manipulation a square cloth. 
Even if visual dynamics are replaced by a more generalized framework such as optical flow \\cite{weng2021fabricflownet}, which allows visual goal-conditioning on arbitrary cloth instances, it assumes that we have access to an image of a cloth in a specific goal state, which is impossible to obtain for novel cloth instances under self-occlusion.\n \n \\textbf{Reward Engineering For Goal-conditioned Cloth Manipulation?}\n \n\n\n\n\n\n\n\n\n\\section{Related Work}\n\n\\textbf{Heuristic-based cloth manipulation.}\nHeuristic-based manipulation pipelines -- where action selection and planning is manually designed -- can produce impressive results.\nHowever, the generality and robustness of these approaches is limited due to strong assumptions regarding pre-canonicalized initial state~\\cite{li2019learning,miller2011parametrized}, fiducial markers~\\cite{bersch2011bimanual}, specialized tools~\\cite{osawa2006clothes}, and cloth type and shape~\\citep{Sun2013AHA,doumanoglou2016folding,maitin2010cloth,tanaka2007origami,balkcom2008robotic,willimon2011model,cusumano2011bringing,stria2014garment,osawa2007unfolding}.\n\n\n\n\n\\textbf{Learning-based cloth unfolding.}\nLearning-based methods can self-discover the best policies for a distribution of cloths using real-world self-supervision~\\cite{ha2021flingbot,xu2022dextairity} or simulator states~\\cite{huang2022mesh,lin2022learning}. \nWhile these approaches have been successfully applied to cloth unfolding~\\cite{ha2021flingbot,xu2022dextairity} or canonicalization~\\cite{huang2022mesh}, they do not consider canonicalized-alignment.\nThis limits their applicability since heuristic-based pipelines cope poorly with unmet cloth assumptions or kinematic constraints. \n\n\n\n\\textbf{Goal-conditioned cloth manipulation.}\nTowards generic goal-conditioned cloth manipulation, prior works have investigated reinforcement learning~\\cite{matas2018sim,hietala2021closing,jangir2020dynamic,tsurumine2019deep}, real-world self-supervised learning~\\cite{lee2020learning} and imitation learning~\\cite{seita2020deep}.\nHowever, these methods often struggle to bridge the sim2real gap~\\cite{jangir2020dynamic}, generalize across cloth instances~\\cite{lee2020learning,weng2022fabricflownet,seita2020deep} or generalize between garment types~\\cite{tanaka2018emd, hietala2021closing, tsurumine2019deep}.\nFurthermore, all goal-conditioned works do not address how goal vertices\/key points\/images can be obtained for a completely novel cloth instance.\nInstead, our proposed approach can accommodate different garment categories and generalize to a variety of novel real-world garment instances from simulation training.\n\n\n\n\n\n\\section{Results}\n\n\\textbf{In simulation}, we conduct ablation studies of reward formulation (Sec.~\\ref{sec:eval:reward_formulation}) and action primitives (Sec.~\\ref{sec:eval:combined_primitives}).\nNext, we demonstrated our approach on five garment categories from Cloth3D~\\cite{bertiche2020cloth3d} (Sec.~\\ref{sec:eval:multicategory}) and a folding downstream task (Sec.~\\ref{sec:eval:downstream}).\n\\textbf{In the real world}, we include primitive and reward comparisons for the long-sleeve shirt category on canonicalized-alignment (Tab.~\\ref{tab:primitive:real}, Fig.~\\ref{fig:canal:real}), folding and ironing. \n\n\n\n\n\\begin{figure}\n \\centering\n \\vspace{-2mm}\n \\includegraphics[width=0.98\\linewidth]{figures\/new_comparison_v3.png}\n \\vspace{-3mm}\n \\caption{\\textbf{Reward Comparisons. 
}\n We qualitatively compare the final configuration achieved by policies trained with our factorized and the unfactorized reward formulation.\n For the qualitative comparisons, the IoU of the final frame of various rollouts are shown in top-left corner of each square.\n \n \n }\n \\label{fig:reward_comparison}\n \\vspace{-5mm}\n\\end{figure}\n\n\\subsection{Metrics}\nAfter running each policy for 10 steps, we evaluate the final \n$R_{\\textrm{Unf}}$~\\eqref{eq:unfactorized}, $R_{\\textrm{A}}$~\\eqref{eq:rigid}, and $R_{\\textrm{C}}$~\\eqref{eq:deform} in the episode.\nWhile these rewards account for the full configuration of the garment, they measure distance based on effectively ground-truth vertex correspondence.\nThis means they give larger distances for radially symmetric garments, like skirts and dresses, and they can't readily be computed on real world garments.\nTo address these shortcomings, we also evaluate the IoU and percent coverage from the current cloth binary image mask and the goal cloth binary image mask.\nIn all tables, $\\downarrow$ indicates that lower is better while $\\uparrow$ indicates that larger values are preferred.\n\n\\subsection{Task Generation}\n\nOur task datasets contain randomized initial configurations of a filtered\\footnote{\n Since CLOTH3D meshes are automatically generated, we manually filter unrealistic mesh examples (\\textit{e}.\\textit{g}. arms as wide as cloth body), and we ensured all cloth meshes are shorter than 0.7m. \n} subset of meshes from Cloth3D~\\cite{bertiche2020cloth3d}, whose configurations are generated as follows:\n\\begin{enumerate}\n \\item \\textbf{Hard Tasks} have low coverage and severe self-occlusion.\n They are generated by randomly rotating the cloth, picking a random point on the cloth, dropping it from a random height in $[0.5, 1.5]\\mathrm{m}$, and then translating the cloth by a random distance in $[0.0, 0.2]\\mathrm{m}$. \n \\item \\textbf{Easy Tasks} have high coverage to test policies' abilities to perform small adjustments crucial to canonicalization. 
\n They are generated by starting with the canonicalized configuration, and dragging a random point on the cloth by an angle uniformly sampled from $[0, 360]$ degrees by a distance uniformly sampled in $[0.5, 1]\\mathrm{m}$.\n\\end{enumerate}\nEach garment category has 2000 training and 400 testing tasks with unseen meshes, with a 75-25 and 50-50 split between hard and easy tasks respectively.\n\n\\subsection{Reward Ablation}\n\\label{sec:eval:reward_formulation}\nWe compare the canonicalized-alignment performance between the unfactorized~\\eqref{eq:unfactorized} and our factorized reward formulation~\\eqref{eq:factorized} on the long sleeve category.\nWe observe that our approach consistently does best on all metrics (Tab.~\\ref{tab:reward_formulation}), \nreflected in qualitatively more consistent canonicalized-alignment (Fig.~\\ref{fig:reward_comparison}).\nWe hypothesize that the $R_{\\textrm{Unf}}$ baseline struggles to canonicalize properly due to faint supervision on small deformable adjustments.\nMeanwhile, our approach can emphasize canonicalization more with $\\alpha = 0.6$.\n\n\\begin{table}\n\\vspace{-2mm}\n\\caption{ \n \\footnotesize\n \\centering\n \\textbf{Reward Ablation on Hard Tasks}\n \n }\n \\centering\n \\vspace{-2mm}\n \\begin{tabular}{l|ccccc}\n \\toprule\n \\textbf{Metric} & $\\mathbf{R_{\\textrm{Unf}}} \\downarrow$ & $\\mathbf{R_{\\textrm{A}}}\\downarrow$ & $\\mathbf{R_{\\textrm{C}}} \\downarrow$ & \\textbf{IoU} $\\uparrow$ & \\textbf{Cov.}$\\uparrow$ \\\\\n \\midrule\n $\\mathbf{R_{\\textrm{Unf}}}$ & 0.093 & 0.069 & 0.064 & 0.684 & 0.879 \\\\\n $\\mathbf{R_{\\textrm{CA}}}$ ($\\alpha=0.6$) & \\textbf{0.075} & \\textbf{0.051} & \\textbf{0.052} & \\textbf{0.728} & \\textbf{0.887} \\\\\n \\bottomrule\n \\end{tabular}\n \n \\vspace{-3mm}\n \\label{tab:reward_formulation}\n\\end{table}\n\n\n\n\\subsection{Effectiveness of Combined Primitives}\n\\label{sec:eval:combined_primitives}\n\nWhile high-velocity dynamic actions enable efficient unfolding, precise quasi-static actions are necessary for fine-grained adjustments involved in canonicalization.\nWe compare two single primitive systems, only Aligned-Fling (Flingbot~\\cite{ha2021flingbot}'s fling with fling direction and location specified by the target alignment) and only Pick \\& Place (P\\&P), with our combined primitive system on canonicalized-alignment of long sleeve shirts.\n\n\\input{tables\/primitive_comparison_sim_v1.tex}\n\nOn easy tasks in simulation (Tab.~\\ref{tab:primitive:sim:easy}, top), P\\&P and our combined primitive system perform similarly, confirming that quasi-static actions alone are effective for small adjustments.\nIn contrast, the fling-only system performs poorly because the imprecise flinging actions are ill-suited for cases where fine-grained manipulation is required (\\textit{e}.\\textit{g}. 
Fig.~\\ref{fig:pipeline}).\n\nFor hard tasks (Tab.~\\ref{tab:primitive:sim:easy} bottom in simulation, Tab.~\\ref{tab:primitive:real} in real), we observe that both single primitive baselines perform poorly.\nNotably, Fling achieves a similar performance between easy and hard tasks in simulation, suggesting that the effect of coarse grain high-velocity actions can be effective for unfolding but is not versatile enough for canonicalization.\nMeanwhile, the P\\&P baseline performed significantly worse on hard tasks in simulation, confirming the findings reported in FlingBot~\\cite{ha2021flingbot}.\nOur combined primitive system achieves the best performance, demonstrating the synergy between coarse-grain high-velocity flings and fine-grain quasi-static actions for canonicalized-alignment.\n\nDue to imperfect cloth simulators, deformable dynamics such as arms twisting (instance \\#2, last column in Fig.~\\ref{fig:canal:real}) and stretching (instance \\#3, last column in Fig.~\\ref{fig:canal:real}) are never observed in simulation.\nDespite this sim2real gap, our real world qualitative results (Fig.~\\ref{fig:canal:real}) confirmed our simulation findings.\nSpecifically, we found that on average, our combined primitive approach achieved an IoU of 0.65, while other single primitive baselines\/ablations reached 0.56 or less.\n\n\\subsection{Canonicalized-Alignment on Garment Category}\n\\label{sec:eval:multicategory}\n\nUsing the same learning and system configuration, we trained new models on 4 other garment categories, one model per garment category.\nFrom the quantitative evaluation in Tab.~\\ref{tab:multicategory}, our approach achieves around 70 IoU for all categories, which demonstrates the generality of our problem formulation with respect to garment categories.\nFurther, from Fig.~\\ref{fig:canal:sim}, our policy has learned that while flinging is crucial for quickly unfolding crumpled garments, a few pick\\&places are usually required to achieve good canonicalization.\n\n\\input{tables\/multicategory_v1.tex}\n\n\n\\subsection{Application in Downstream Cloth Manipulation}\n\\label{sec:eval:downstream}\nA primary motivation for this work is the improvement of downstream tasks; we hypothesize that effective canonicalized-alignment will significantly reduce the complexity of subsequent manipulation skills.\nTo this end, we study two garment manipulation tasks, ironing and folding (Sec.~\\ref{sec:method:heuristics}).\nFor folding, we measured the final $R_{\\textrm{Unf}}$ achieved by each approach when the goal configuration is the ground-truth folded configuration at a specified alignment.\nWe also include a folding success rate which is a thresholded $R_{\\textrm{Unf}}$, where the boundary is chosen by qualitatively.\n\n\n\n\\textbf{Canonicalized-Alignment Improves Downstream Tasks.}\nFrom Tab.~\\ref{tab:folding_sim}, we observe that manually-designed folding heuristic completely fails at the task if starting from random initial configurations.\nWhile success rate improves with FlingBot~\\cite{ha2021flingbot}'s unfolded configurations, achieving high-coverage configurations (the goal of unfolding) does not always give structured, high-visibility configurations required by heuristics (Fig.~\\ref{fig:downstream:folding}), so FlingBot~\\cite{ha2021flingbot} still performs poorly.\nWith canonicalized-alignment, policies trained with $R_{\\textrm{Unf}}$ and $R_{\\textrm{CA}}$ achieve success rates of 84.9 and 87.8 respectively, demonstrating the importance of canonicalized-alignment reducing 
downstream task complexity, such that even simple manually designed task heuristics can work well.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{figures\/sim_folding_v1.png}\\vspace{-3mm}\n \\caption{\\textbf{Simulation Folding Qualitative Comparison.} We compare our folding heuristic on our method's canonicalization results vs. FlingBot's unfolding results. We qualitatively (images) and quantitatively (histogram) demonstrate that not all high-coverage configurations ensure folding success.}\n \\label{fig:downstream:folding}\n \n\\end{figure}\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{figures\/realworld_canal_v1.jpg} \\vspace{-3mm}\n \\caption{\n \\textbf{Real-world Canonicalized-Alignment.}\n High-coverage configurations achieved by Flingbot aren't always aligned, which is improved with our Aligned-Fling.\n However, using only coarse-grained Aligned-Fling fails to fully canonicalize the shirt, and using only fine-grained Pick \\& Place fails to fully-unfold the shirt as can be seen by the average of the final cloth masks (bottom row).\n }\n \\label{fig:canal:real} \\vspace{-3mm}\n\\end{figure}\n\n\\input{tables\/primitive_comparison_v1}\n\n\\input{tables\/folding_performance_v1.tex}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{figures\/real_downstream.jpg} \\vspace{-3mm}\n \\caption{\n \\textbf{Real-world Ironing \\& Folding.}\n Reliable canonicalized-alignment not only gives high-visibility starting configurations, which reduces complexity for downstream tasks like folding (step 1), but can also be called multiple times with different target alignments, which is useful in ironing (step 2 \\& 3).\n More video results on the \\href{https:\/\/clothfunnels.cs.columbia.edu}{project website}.\n }\n \\label{fig:downstream:real} \\vspace{-3mm}\n\\end{figure}\n\n\n\\label{sec:result}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe OPERA experiment \\cite{1}, located at the Gran Sasso laboratory (LNGS), aims at observing the $\\nu_{\\mu}\\rightarrow\\nu_{\\tau}$ neutrino oscillation in the direct appearance mode in the CERN neutrino to Gran Sasso (CNGS) \\cite{3a,3b} beam by detecting the decay of the $\\tau$ produced in charged current (CC) interactions. A detailed description of the detector can be found in \\cite{1,2a,2b,2c,2d,2e,2f}. The OPERA detector consists of two identical Super Modules (SM), each of them consisting of a target area and a muon spectrometer, as shown in Fig. \\ref{strauss-fig1}. The target area consists of alternating layers of scintillator strip planes and target walls. The muon spectrometer is used to reconstruct and identify muons from $\\nu_{\\mu}$-CC interactions and estimate their momentum and charge. \n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig01-eps-converted-to.pdf}}}\n\\caption{Picture of the OPERA detector, with a view of a reconstructed neutrino interaction occurring in the 2nd Super Module.}\n\\label{strauss-fig1}\n\\end{myfigure}\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig14-eps-converted-to.pdf}}}\n\\caption{The CNGS neutrino beamline. Figure from \\cite{6}.}\n\\label{author-fig4}\n\\end{myfigure}\n\nThe target walls are trays in which target units of $10\\times12.5\\mathrm{ cm^2}$ and a depth of 10\\,$X_0$ in lead (7.9\\,cm)\t with a mass of around 10~kg each are stored: they are also refered to as bricks. 
A brick is formed by alternating layers of lead plates and emulsion films (2 emulsion layers separated by a plastic base) building an emulsion cloud chamber (ECC). This provides high granularity and high mass, which is ideal for $\\nu_{\\tau}$ interaction detection. Fig. \\ref{strauss-fig3} left shows an image of the unwrapped ECC brick. On the right, the arrangement of the scintillator strip planes (Target Tracker, TT) and the ECC is shown. Note an extra pair of emulsion films in a removable box called a changeable sheet (CS), shown in blue. The total mass of each target area is about 625\\,tons, leading to a target mass of 1.25\\,ktons for 145'000 bricks.\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig03-eps-converted-to.pdf}\\includegraphics{thomasstrauss_2012_0n_fig05-eps-converted-to.pdf}}}\n\\caption{$\\tau$ detection principle in OPERA.}\n\\label{strauss-fig3}\n\\end{myfigure}\n\n\\section{DAQ and analysis}\nThe information from the reconstructed event, recorded by the electronic detectors, is used to predict the most probable ECC for the neutrino interaction vertex \\cite{4}. A display of a reconstructed event is indicated in Fig. \\ref{strauss-fig1}. Fig. \\ref{strauss-fig2} shows the procedure for localizing the event vertex in the ECC, by extrapolating the reconstructed tracks from the electronic detector to the CS emulsion films. \n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig02-eps-converted-to.pdf}}}\n\\caption{Event detection principle in the OPERA experiment. The candidate interaction brick is determined from the prediction of the electronic detector (blue). The changeable sheet (CS) is used to confirm the prediction. From the CS result, the tracks are followed up to the interaction point inside the ECC.}\n\\label{strauss-fig2}\n\\end{myfigure}\n\nThe signal recorded in the CS films will confirm the prediction from the electronic detector, or will act as a veto and trigger a search in neighboring bricks to find the correct ECC in which the neutrino interaction is contained. After a positive CS result, the ECC will be unpacked and the emulsion films are developed and sent to one of the various scanning stations in Japan and Europe. Dedicated automatic scanning systems allow us to follow the tracks from the CS prediction up to their stopping point. Around these stopping points, a volume of 1\\,cm$^2$ times 15\\,emulsion films will be scanned to find the interaction vertex. As illustrated in Fig. \\ref{strauss-fig2}, only track segments in the active emulsion volume are visible and a reconstruction of the event is needed to find tracks and vertices. A dedicated procedure, called a ``decay search\" is used to search for possibly interesting topologies, like the $\\tau$-decay pictured in Fig. \\ref{strauss-fig2}. The accuracy of the track reconstruction goes from cm in the electronic detector, down to mm for the CS analysis and to micrometric precision in the final vertex reconstruction (after aligning the ECC emulsion plates with passing-through cosmic-ray tracks).\n\\newline\n\\subsection{Tau detection} $\\tau$ detection is only possible due to the micrometric resolution of the emulsion films, as it allows us to separate the primary neutrino interaction vertex from the decay vertex of the $\\tau$ particle. The most prominent background for $\\tau$ decay is either hadron scattering or charged charm decays. 
The background from hadron scattering can be controlled by cuts applied to the event kinematics. The background due to charm can be reduced by identifying the muon at the primary vertex, as charm will occur primarily in $\\nu_{\\mu}$-CC interactions (for further details see \\cite{2e,4}). After topological and kinematical cuts are applied, the number of background events in the nominal events sample is anticipated to be 0.7 at the end of the experiment.\n\nIn 2010 the first $\\nu_{\\tau}$ candidate was reconstructed inside the OPERA emulsion. It was recorded in an event classified as a neutral current, as no muon was identified in the electronic detector. To crosscheck the $\\tau$ hypothesis, all tracks were followed downstream of the vertex until their stopping or their re-interaction point. They were all attributed to be hadrons, and no soft muon ($E<2$\\,GeV) was found. In 2010, the expected total number of $\\tau$ candidates was 0.9, while the expected background was less than 0.1 events. More details on the analysis are presented in \\cite{2e}. In Fig. \\ref{strauss-fig8} shows a picture of this event. Track number 4, labeled as the parent, is the $\\tau$, decaying into one charged daughter track. The two showers are most likely connected to the decay vertex of the $\\tau$, rather than activity from the primary interaction. Thus the decay is compatible with: $\\tau\\rightarrow\\eta^-(\\pi^-+\\pi_0)\\nu_{\\tau}$. \n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig08-eps-converted-to.pdf}}}\n\\caption{Display of the 2010 $\\tau^-$ candidate event. Top left: view transverse to the neutrino\ndirection. Top right: same view zoomed on the vertices. Bottom: longitudinal view. Figure from \\cite{2e}.}\n\\label{strauss-fig8}\n\\end{myfigure}\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig09-eps-converted-to.pdf}}}\n\\caption{MC distribution of: a) the kink angle for $\\tau$ decays, b) the path length of the $\\tau$, c) the momentum of the decay daughter, d) the total transverse momentum $P_T$ of the detected daughter particles of $\\tau$ decays with respect to the parent track. The red band shows the 68.3\\,\\% of values allowed for the candidate event, and the dark red line indicates the\nmost probable value. The dark shaded area represents the excluded region corresponding to the a priori $\\tau$ selection cuts. Figure from \\cite{2e}.}\n\\label{strauss-fig9}\n\\end{myfigure}\n\nFig. \\ref{strauss-fig9} shows the cuts used for the selection criteria defined at the time of the proposal and the kinematic variables of the $\\tau$ decay observed by the OPERA experiment. At the time of this conference, the number of expected $\\tau$ candidates was 1.7, with 0.5 events expected in the single-prong channel. The expected background for the analyzed event sample corresponding to $4.9\\times10^{19}$\\,protons on target (pot) was $0.16\\pm0.05$ events. \n\nAt the time of writing these proceedings, a second $\\tau$ candidate appeared \\cite{7a}.\n\n\\subsection{Physics run performance and data analysis status}\nSince 2007, the OPERA experiment has collected a total of $18.5\\times10^{19}$\\,pot. This corresponds to about 15'000 interactions in the target areas of the experiment. Fig. 
\\ref{strauss-fig6} shows from top to bottom, as a function of time, the integrated number of events occurring in the target (showing the CNGS shutdown periods), with their vertex reconstructed by the electronic detectors, for which at least one brick has been extracted, for which at least one CS has been analysed, for which this analysis has been positive (track stubs corresponding to the event have been found), for which the brick has been analysed, for which the vertex has been located, for which the decay search has been completed.\n\nThe efficiency of the analysis of the most probable CS is rather low, about 65\\,\\%, and is significantly lower for NC events, among which $\\nu_{\\tau}$ interactions are the most likely to be found. To recover this loss, multiple brick extraction is performed; this brings the final efficiency of observing tracks of the event in the CS to about 74\\,\\%. The efficiency for locating an event seen on the CS in the brick is about 70\\,\\%. At the time of this conference, a total of 4611 events has been localised in the bricks and the decay search has been completed for 4126 of them.\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig07-eps-converted-to.pdf}}}\n\\caption{Events recorded and analyzed in the OPERA experiment from 2008 until 2012.}\n\\label{strauss-fig6}\n\\end{myfigure}\nAfter an ECC has been identified to be the most likely interaction brick, the ECC is developed and sent to one of the scanning laboratories, where it is scanned within a short time. The efficiency for locating an event within the ECC is about 70\\,\\% with respect to the number of positive CS results. After the event has been located, a dedicated decay search is performed to obtain a data sample which can be compared to MC and which provides uniform data quality from all laboratories. This decay search includes the search for decay daughters and a reconstruction of the kinematics of the particles at the vertex. At the time of this conference, 4611 events have been localized in the ECC, with a total of 4126 CC and NC events having completed the decay search.\n\n\\subsection{Charm decay topologies in neutrino interactions}\nIn about 5\\,\\% of the $\\nu_{\\mu}$-CC interactions, the production of a charmed particle at the primary vertex takes place. Charmed particles have lifetimes similar to the $\\tau$ and similar decay channels. Thus charm events provide a subsample of decay topologies similar to $\\tau$ decay, for which the detection efficiency can be estimated based on MC simulation. A study of a high purity selection of charm events in 2008 and 2009 shows agreement with the data \\cite{5}. One-prong charm decay candidates are retained if the charged daughter particle has a momentum larger than 1 GeV\/c. This leads to an efficiency of $\\epsilon_{\\mbox{short}} = 0.31 \\pm 0.02 (\\mbox{stat.})\\pm 0.03 (\\mbox{syst.})$ for short and $\\epsilon_{\\mbox{long}} = 0.61 \\pm 0.05 (\\mbox{stat.}) \\pm 0.06 (\\mbox{syst.})$ for long charm decays, wherein `long' means the production and the decay vertices are not located in the same lead plate. The number of events for which the charm search is complete is 2167 CC interactions. In these we expect $51\\pm7.5$ charm candidates, with a background of $5.3\\pm2.3$ events. The number of observed candidates is 49, which is in agreement with expectations. 
\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig10-eps-converted-to.pdf}}}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig11-eps-converted-to.pdf}}}\n\\caption{Display of one of the charm events. Top: View of the reconstructed event in the emulsion. Bottom: Zoom in the vertex region of the primary and secondary vertex. }\n\\label{strauss-fig10}\n\\end{myfigure}\n\nFig. \\ref{strauss-fig10} shows a charm decay detected in the OPERA experiment. In the electronic detector reconstruction, two muons were observed, one charged positively, the other negatively. The $\\mu^-$ is attached to the primary vertex, while the $\\mu^+$ is connected to the decay vertex. This topology corresponds with a charged charm decaying into a muon and the measured kinematic parameters are a flight length of 1330\\,$\\mu$m and a kink angle of 209\\,mrad. The impact parameter (IP) of the $\\mu^+$ with respect to the primary vertex is 262\\,$\\mu$m, and its momentum is measured as 2.2\\,GeV\/c. This accounts for a transverse momentum ($P_T$) of 0.46\\,GeV\/c.\n\n\\section{$\\nu$-velocity measurement}\nDue to the time structure of the CERN SPS beam, the OPERA experiment is able to trigger on the proton spill hitting the CNGS target. As a result, the electronic detector provides a time signal of the recorded events, which can be used to measure the neutrino velocity in the CNGS beam. One needs to measure with precision of some ns the flight time (time of flight - TOF) between CERN and LNGS, and the distance between reference points in the detector and the CNGS. The concept of the neutrino time of flight measurement is illustrated in Fig. \\ref{strauss-fig15}. \n\n\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig15-eps-converted-to.pdf}}}\n\\caption{Scheme of the time of flight measurement. Figure from \\cite{6}.}\n\\label{strauss-fig15}\n\\end{myfigure}\nThe procedures are explained in great detail in \\cite{6}. Since the time of the conference, an instrumental mistake has been identified that makes the results presented at this conference obsolete. Updated results taken from \\cite{7b} are presented below.\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig16-eps-converted-to.pdf}}}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig17-eps-converted-to.pdf}}}\n\\caption{Top: Scheme of the timing sytem at CERN. Bottom: Scheme of the timing system at LNGS. Figures from \\cite{6}.}\n\\label{strauss-fig16}\n\\end{myfigure}\n\nFig. \\ref{strauss-fig16} shows the timing systems both at CERN and LNGS, which allowed time calibration between both sites within accuracy of $\\pm4$\\,ns.\n\nThe distance between CERN and the OPERA detector was measured via GPS geodesy and extrapolation down to the location of both the CNGS target and the OPERA detector with terrestrial traverse methods. The effective baseline is measured as $731278.0\\pm0.2$\\,m.\n\nThe proton wave form for each SPS extraction was measured with a beam current transformer (BCT). The sum of the wave forms restricted to those associated to a neutrino interaction in OPERA was used as PDF for the time distribution of the events within the extraction. The maximum likelihood method was used to extract the time shift between the two distributions, i.e. the neutrino time of flight. 
Internal NC and CC interactions in the OPERA target and external CC interactions occurring in the upstream rock from the 2009, 2010 and 2011 CNG runs were used for this analysis. As shown in Fig. \\ref{strauss-fig20}, it is measured to be: $$\\delta t = \\mbox{TOF}_c - \\mbox{TOF}_\\nu = (6.5\\pm7.4(\\mbox{stat.})\\pm ^{+8.3}_{-8.0}(\\mbox{syst.}))\\,\\mbox{ns.}$$ Modifying the analysis by using each neutrino interaction waveform as PDF instead of their sum gives a comparable result of\n$$\\delta t = (3.5\\pm5.6(\\mbox{stat.})\\pm ^{+9.4}_{-9.1}(\\mbox{syst.}))\\,\\mbox{ns.}$$ No energy dependence was observed.\n\n\\begin{myfigure}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig19-eps-converted-to.pdf}}}\n\\centerline{\\resizebox{70mm}{!}{\\includegraphics{thomasstrauss_2012_0n_fig20-eps-converted-to.pdf}}}\n\\caption{Top: Comparison of the measured neutrino interaction time distributions (data points) and the proton PDF (red and blue line) for the two SPS extractions resulting from the maximum likelihood analysis. Bottom: Blow-up of the leading edge (left plot) and the trailing edge (right plot) of the measured neutrino interaction time distributions (data points) and the proton PDF (red line) for the first SPS extraction after correcting for $\\delta t=6.5$\\,ns. Within errors, this second extraction is equal to the first one. Figures from \\cite{6}.}\n\\label{strauss-fig20}\n\\end{myfigure}\n\nTo cross-check for systematic effects, a dedicated bunched beam run was performed, where the SPS proton delivery is split into 3\\,ns long spills, separated by 524\\,ns in time during autumn 2011, and a similar mode of 3\\,ns with 100\\,ns separation in spring 2012. \n\nThe value of $\\delta t$ obtained in 2011 by using timing information provided by the target tracker is $1.9\\pm3.7$\\,ns; it is $0.8\\pm3.5$\\,ns when based on the spectrometer data \\cite{6}. For the 2012 run, the corresponding values of $\\delta t$ are $\\delta t =(-1.6\\pm1.1(stat.)^{+6.1}_{-3.7})$\\,ns \\cite{7b}. All results are in agreement with the measurement from standard CNGS beam operation.\n\n\\section{Conclusions}\nThe OPERA experiment detected two $\\tau$ neutrino events appearing in the CNGS beam. Further, we measured the neutrino velocity to be in agreement with the speed of light in a vacuum to the O($10^{-6}$). Other short decay topologies like $\\nu_e$ or charm decays can also be detected and are in agreement with MC expectations, thus providing a benchmark for validating the $\\tau$ efficiency expectations.\n\n\n\\thanks\nFirstly, I thank the organizers of the workshop and the OPERA PTB for the possibility to join this workshop. The OPERA collaboration thanks CERN, INFN and LNGS for their support and work. 
In addition OPERA is grateful for funding from the following national agencies: \n\nFonds de la Recherche Scientique - FNRS and Institut Interuniversitaire des Sciences Nucl\\'eaires for Belgium; MoSES for Croatia; CNRS and IN2P3 for France; BMBF for Germany; INFN for Italy; JSPS (Japan Society for the Promotion of Science), MEXT (Ministry of Education, Culture, Sports, Science and Technology), QFPU (Global COE program of Nagoya University, ``Quest for Fundamental Principles in the Universe\", supported by JSPS and MEXT) and Promotion and Mutual Aid Corporation for Private Schools of Japan for Japan; The Swiss National Science Foundation (SNF), the University of Bern and ETH Zurich for Switzerland; the Russian Foundation for Basic Research (grant 09-02-00300 a), the Programs of the Presidium of the Russian Academy of Sciences ``Neutrino Physics\" and ``Experimental and theoretical researches of fundamental interactions connected with work on the accelerator of CERN\", the support programs of leading schools (grant 3517.2010.2), and the Ministry of Education and Science of the Russian Federation for Russia; the Korea Research Foundation Grant (KRF-2008-313-C00201) for Korea; and TUBITAK ``Scientific and Technological Research Council of Turkey\", for Turkey. In addition the OPERA collaboration thanks the technical collaborators and the IN2P3 Computing Centre (CC-IN2P3).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn equilibrium, it is not possible to get a steady current in a periodic potential \nwithout the application of external forces. This is in conformity with the second law of \nthermodynamics. However, in recent times it has become a popular theme to obtain such \ncurrents in nonequilibrium situations. The currents so obtained in asymmetric but \nperiodic potentials with no imposed directional preferences in the presence of \nthermal noise are called thermal ratchet currents\\cite{feyn, reim}. The usual \nmethods one generally employs to realise such currents are: Rock the system periodically \nalternately with constant forces $|F|$ and $-|F|$ for equal short intervals \\cite{magn}, \nor flash the potential either periodically or randomly with two unequal (for example, \nzero and nonzero) amplitudes \\cite{prost}. However, the method of obtaining current in \nsymmetric periodic potentials subject to asymmetric periodic forcings with zero mean per \nperiod has also been established \\cite{mcm1}. In most of the cases the particle motion is \nconsidered overdamped which is more suitable to model particle (such \nas a molecular motor) transport in biological systems where large friction plays important \nrole \\cite{magn, prost, svob}. There is yet another model wherein the friction is \nconsidered nonuniform in space and in particular periodically varying similar to the \npotential function. The ratchet current could be obtained, in this inhomogeneous case, \neven in symmetric periodic potentials driven periodically and symmetrically in \ntime\\cite{pareek}. So far such currents are obtained in overdamped\\cite{dan1} \ninhomogeneous systems and the possibility of obtaining such currents in underdamped \nsystems\\cite{mcm2} had only been indicated earlier. This model is very appropriate to \ndescribe Josephson junctions \\cite{falco} (where the frictional inhomogeneity bears an \nexact analogy to the coupling between concomitant Cooper pair\nand the quasiparticle tunnelings), and motion of adatoms on \ncrystal surfaces\\cite{wahn}. 
In this part (I) of the work we present detailed \ncharacteristics of the ratchet current in inhomogeneous (nonuniform friction) inertial \nsystems when driven adiabatically. And in the next part (II) we shall present results \nfor similar systems when driven at finite frequency.\n\nAs will be described in the next section, the steady state mean velocity $\\bar v$ of a\nparticle under the action of a constant force $F$ is calculated using the matrix\ncontinued fraction method (MCFM). The algebraic sum \n$\\Sigma \\bar v$ ($=\\bar v(|F|)+\\bar v(-|F|)$)\n of these mean velocities $\\bar v$ with applied forces $|F|$ and $-|F|$ is termed here \nas the ratchet current in the adiabatic limit. Since the phase difference $\\phi (\\ne 0,\nn\\pi)$ between the sinusoidal potential $V(x)$ and the friction coefficient $\\gamma(x)$\nis the only parameter in the model that breaks the directional symmetry of the problem, \nthe ratchet current is expected to be small. However, $\\Sigma \\bar v$ turns out to be an \ninteresting complicated function of $F$, the temperature $T$, and the mean friction \ncoefficient $\\gamma_0$. This is not surprising and can be understood quite simply on \nphysical grounds as discussed elsewhere\\cite{wanda}.\n\nIn the same work\\cite{wanda} the variation of the ratchet current $\\Sigma \\bar v$ as a \nfunction of applied force $|F|$ for various values of the average friction coefficient \n$\\gamma_0$ at temperature $T=0.4$ has been presented. (The same choice of dimensionless\nparameters, $T$, etc, are elaborated in Sec. II below.) It shows that at the smallest\nexplored $\\gamma_0 (=0.035)$, $\\Sigma \\bar v$ peaks, with value $\\simeq -0.08$, close to\n$F=0.09 \\ll F_c(=1)$, the critical field. The current is thus thermally assisted. The\nvariation of $\\Sigma \\bar v$ is very sharp: Appreciable current could be seen only in\nthe range $0.05\\leq |F|\\leq 0.12$. The current strength reduces almost to zero for\n$F=0.12$. Beyond this $F$ no meaningful $\\bar v$ could be obtained from MCFM. As\n$\\gamma_0$ is increased the range of $F$, where appreciable $\\Sigma \\bar v$ results,\nwidens monotonically. However, the peak current becomes maximum for \n$\\gamma_0 \\simeq 0.4$, with\nmaximum current $\\simeq -0.28$ at $|F|\\simeq 0.75$, and then gradually decreases as\n$\\gamma_0$ is increased further. By a close observation of the reported figure\n(3, ref. \\cite{wanda}) one can conclude that $\\Sigma \\bar v$ at $T=0.4$ will show\nnonmonotonic variation with $\\gamma_0$ at carefully chosen $F$ values, which as we \npresent in the following is, indeed, the case.\n\nFor a given applied force $F$, and the temperature $T$ the current $\\Sigma \\bar v$\ninitially increases, attains a peak, and then gradually decreases as $\\gamma_0$ is \nincreased. The result is reminiscent of the well-known result\\cite{kramers} obtained \nby Kramers, however, in an entirely different context of particle flux across a \npotential barrier. In all situations $\\Sigma \\bar v$ increases with $|F|$, attains a \npeak, and gradually decreases. With appropriately chosen parameters $\\gamma_0$ and $T$, \n $\\Sigma \\bar v$ shows current reversal as well and consequently as $|F|$ \nis increased further $\\Sigma \\bar v$ changes its direction and increases in magnitude \neventually again decreases in magnitude after attaining a second peak. 
Interestingly, \n$\\Sigma \\bar v$ shows very similar behaviour also as a function of temperature $T$ in \nsome parts of the parameter space $(\\gamma_0, F)$.\n\nAs the value (and direction) of $\\Sigma \\bar v$ changes one naturally asks about how it \nis correlated with the changes in energies of the system. It turns out that the average \nkinetic energy of the particle increases monotonically with $|F|$, whereas the average\npotential energy shows a nonmonotonic behaviour. Again, in this case too we consider\nthe mean of the average kinetic (as also of the potential) energies for $F=\\pm|F|$. The\naverage kinetic energy overwhelms the potential energy as $F$ is increased forcing \nthe average total energy also to increase monotonically. However, the contribution of the \naverage potential energy of the particle is reflected in the derivative of the average\ntotal energy with respect to $F$. These derivatives show distinctive behaviour where\n$\\Sigma \\bar v$ peaks as a function of $F$.\n\nIn order to be useful for practical purposes the system must have good performance \ncharacteristics. We calculate the efficiency of energy transduction of this inhomogeneous\ninertial system driven adiabatically and symmetrically and moving in a symmetric \nperiodic potential. The system yields small current and hence the work done against an \napplied load is also small if at all it is possible to obtain. Naturally, the \nthermodynamic efficiency of the system is also small and it is feasible to obtain only \nin a very small region of parameter space. However, even without an applied load the \nsystem continuously performs work against the frictional force and in order to take it \ninto account a quantity called Stokes efficiency has been \ndefined\\cite{wang, seki, parr}. We calculate the Stokes efficiency of our system. \nThis efficiency also has a complex behaviour and, in particular, it is nonmonotonic. \nIt is also possible, interestingly, to observe increasing Stokes efficiency \nwith temperature contrary to what is observed in usual macroscopic heat engines. This \nkind of behaviour has been shown to occur in overdamped inhomogeneous but asymmetrically \ndriven systems\\cite{raish}.\n\nAdmittedly, the quantitative effect of the inhomogeneity is not very large. Nevertheless, \nour investigation shows that many important results can be obtained, some even \ncounterintuitive, albeit at a small but nonnegligible scale, using system inhomogeneity. \nThe ratchet current, the efficiency of performance, and so on, can be improved by orders \nof magnitude, if zero mean asymmetric periodic drive is employed instead of a symmetric \ndrive. In this case, however, inhomogeneity does not play a decisive role. Similar \nresults can be obtained without system inhomogeneity as the asymmetric drive plays the \nmajor role.\n\nThe above mentioned results will be elaborated upon in section III. 
The paper will be \nconcluded with a discussion in section IV.\n\n\\section{The matrix continued fraction method for inhomogeneous inertial systems}\nThe motion of a particle of mass $m$, moving in a periodic potential $V(x)=-V_0 sin(kx)$\nin a medium with friction coefficient $\\gamma (x)=\\gamma_0(1-\\lambda sin(kx+\\phi))$\n($0<\\lambda<1$) and subjected to a constant uniform force $F$, is described by the \nLangevin equation\\cite{pareek, amj1},\n\\begin{equation}\nm\\frac{d^{2}x}{dt^{2}}=-\\gamma (x)\\frac{dx}{dt}-\\frac{\\partial{V(x)}}{\\partial\nx}+F+\\sqrt{\\gamma(x)T}\\xi(t).\n\\end{equation}\nHere $T$ is the temperature in units of the Boltzmann constant $k_B$. The Gaussian \ndistributed fluctuating forces $\\xi (t)$ satisfy the statistics: $<\\xi (t)>=0$,\nand $<\\xi (t)\\xi(t^{'})>=2\\delta(t-t^{'})$. For covenience, we write down Eq.(2.1) in \ndimensionless units by setting $m=1$, $V_0=1$, $k=1$ so that, for example, the scaled\n$\\gamma_0$ is expressed in terms of the old $\\sqrt{mV_0}k$, etc. The reduced Langevin \nequation, with reduced variables denoted again by the same symbols, is written as\n\\begin{equation}\n\\frac{d^{2}x}{dt^{2}}=-\\gamma(x)\\frac{dx}{dt}\n+cos x +F+\\sqrt{\\gamma(x) T}\\xi(t),\n\\end{equation}\nwhere $\\gamma(x)=\\gamma_0(1-\\lambda sin(x+\\phi))$. Thus the periodicity of the potential\n$V(x)$ and also the friction coefficient $\\gamma$ is $2\\pi$. The barrier height between\nany two consecutive wells of $V(x)$ persists for all $F<1$ and it just disappears at \nthe critical field value $F_c=1$. The noise variable, with the same symbol $\\xi$, \nsatisfies exactly similar statistics as earlier. The Fokker-Planck equation corresponding \nto Eq.(2.2) is given as\n\\begin{equation}\n\\frac{\\partial W(x,v,t)}{\\partial t} = {\\cal{L}}_{FP}W(x,v,t),\n\\end{equation}\nwhere the Fokker-Planck operator\n\\begin{eqnarray}\n{\\cal{L}}_{FP} & = & -v\\frac{\\partial}{\\partial x}+\\gamma_0(1-\\lambda sin(x+\\phi))\n\\frac{\\partial}\n{\\partial v}v - (cosx+F)\\frac{\\partial}{\\partial v}+ \\nonumber \\\\\n & &\\gamma_0T(1-\\lambda sin (x+\\phi))\\frac{\\partial^{2}}{\\partial v^{2}},\n\\end{eqnarray}\nand $W(x,v,t)$ is the probablity distribution of finding the particle at position $x$\nwith velocity $v=\\frac{dx}{dt}$ at time $t$.\n\nIt is hard to solve the equation in the general case. However, our aim is to solve Eqs.\n (2.3-4) in the steady state case. We apply MCFM developed by Risken, \net. al. \\cite{risk}, and used by others to suit various circumstances \\cite{ferra}. 
The \nbasic idea behind the method is to expand the distribution function $W(x,v,t)$ in series \nin terms of products of (Hermite) functions $\\psi_n(v)$, which are explicit functions of \nvelocity $v$, and $C_n(x,t)$ (as coefficients) which are explicit functions of position \n$x$ and time $t$:\n\\begin{equation}\nW(x,v,t)=(2\\pi T)^{\\frac{-1}{4}}e{^\\frac{-v^{2}}{4T}}\\sum_{n=0}^\n{\\infty}C_n(x,t)\\psi_n(v),\n\\end{equation}\nwith\n\\begin{equation}\n\\psi_n=\\frac{(b^{\\dagger})^n\\psi_0}{\\sqrt{n!}}, \n\\end{equation}\nand\n\\begin{equation}\n\\psi_0=(2\\pi)^\\frac{-1}{4}T^\\frac{-1}{4}e^\\frac{-v^2}{4T}.\n\\end{equation}\nIt turns out that $\\psi_n(v)$ $(n=0, 1, 2, ...)$ are the eigenfunctions of the operator\n$-\\gamma(x)b^\\dagger b$:\n\\begin{equation}\n-\\gamma(x)b^\\dagger b \\psi_n(v) = -\\gamma(x) n \\psi_n(v),\n\\end{equation}\nwhere $b$ and $b^\\dagger$ are the bosonic-like annihilation and creation operators \nsatisfying\n\\begin{equation}\nbb^\\dagger-b^\\dagger b=1,\n\\end{equation}\nand defined as\n\\begin{equation}\nb=\\sqrt{T}\\frac{\\partial}{\\partial v}+\\frac{1}{2}\\frac{v}{\\sqrt{T}},\n\\end{equation}\n\\begin{equation}\nb^\\dagger=-\\sqrt{T}\\frac{\\partial}{\\partial v}+\\frac{1}{2}\\frac{v}{\\sqrt{T}}.\n\\end{equation}\n(Note: $b^\\dagger \\psi_n(v)=\\sqrt{n+1}\\psi_{n+1}(v)$ and \n$b \\psi_n(v)=\\sqrt{n}\\psi_{n-1}(v)$.)\n \nSubstituting (2.5) in the Fokker-Planck equation (2.3) and multiplying from left by \n$e^\\frac{v^2}{4T}$ on both sides of Eq. (2.3) and on simplifying, we get,\n\\begin{equation}\n\\sum_{m=0}^{\\infty}\\frac{\\partial C_m(x,t)}{\\partial t}\\psi_m =\n\\bar{\\cal{L}}_{FP}\\sum_{m=0}^{\\infty}C_m(x,t)\\psi_m(v),\n\\end{equation}\nwhere \n\\begin{equation}\n\\bar{\\cal{L}}_{FP}=e^{\\frac{v^2}{4T}}{\\cal{L}}_{FP}e^{\\frac{-v^2}{4T}}.\n\\end{equation}\nThe operator $\\bar{\\cal{L}}_{FP}$ is given by\n\\begin{equation}\n\\bar{\\cal{L}}_{FP}=-\\gamma(x)b^{\\dagger} b -bD-b^{\\dagger} \\hat D,\n\\end{equation}\nwhere the operators $D$ and $\\hat D$ are defined as,\n\\begin{equation}\nD=\\sqrt{T}\\frac{\\partial}{\\partial x},\n\\end{equation}\n\\begin{equation}\n\\hat D=\\sqrt{T}\\frac{\\partial}{\\partial x} + \\frac{V^{'}-F}{\\sqrt{T}},\n\\end{equation}\nwith the prime on the potential $V$ denoting differentiation with respect to $x$.\n\nMultiplying both sides of Eq.(2.12) from left by $\\psi_n$ and integrating over the\nvelocity variable and using the orthonormality properties of the functions $\\psi_n$, we \nget the equation on $C_n(x,t)$:\n\\begin{equation}\n\\frac{\\partial C_n(x,t)}{\\partial t}+\nn\\gamma(x)C_n(x,t)+\\sqrt{n+1}DC_{n+1}(x,t)+\\sqrt{n}\\hat{D}C_{n-1}(x,t)=0.\n\\end{equation}\nIn the stationary (steady) state case, we thus have,\n\\begin{equation}\nn\\gamma(x)C_n(x)+\\sqrt{n+1}DC_{n+1}(x)+\\sqrt{n}\\hat{D}C_{n-1}(x)=0\n\\end{equation}\ngiving a recursion relation on $C_n(x)$ ($n=0, 1, 2, ...$). The first ($n=0$) of the\nrecursion relations reveals that $C_1$ is a constant.\n\nIn this steady state case, the normalization condition is given by\n\\begin{equation}\n\\int_{0}^{2\\pi}\\int_{-\\infty}^{\\infty}W(x,v)dx dv = \\int_{0}^{2\\pi}C_0 dx =1,\n\\end{equation}\ngiving a condition on the coefficient $C_0(x)$. 
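As a quick numerical sanity check on the Hermite-function machinery introduced above (our own illustration, not part of the original derivation), the sketch below verifies the ladder relation $b^{\\dagger}\\psi_n=\\sqrt{n+1}\\,\\psi_{n+1}$ for the closed form $\\psi_n(v)=\\mathrm{He}_n(v/\\sqrt{T})\\,\\psi_0(v)/\\sqrt{n!}$ that follows from Eqs. (2.6), (2.7) and (2.11), with $\\mathrm{He}_n$ the probabilists' Hermite polynomials; the tolerances and finite-difference step are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\nfrom numpy.polynomial.hermite_e import hermeval\nfrom math import factorial, sqrt, pi\n\nT = 0.4  # illustrative temperature\n\ndef psi(n, v):\n    # psi_n(v) = (2 pi T)^(-1/4) He_n(v/sqrt(T)) exp(-v^2/(4T)) / sqrt(n!)\n    c = np.zeros(n + 1); c[n] = 1.0\n    return ((2 * pi * T) ** -0.25 * hermeval(v / sqrt(T), c)\n            * np.exp(-v ** 2 / (4 * T)) / sqrt(factorial(n)))\n\ndef b_dagger(f, v, h=1e-5):\n    # b^dagger = -sqrt(T) d/dv + v/(2 sqrt(T)), derivative by central differences\n    return -sqrt(T) * (f(v + h) - f(v - h)) / (2 * h) + v / (2 * sqrt(T)) * f(v)\n\nv = np.linspace(-3.0, 3.0, 7)\nfor n in range(4):\n    assert np.allclose(b_dagger(lambda u: psi(n, u), v),\n                       sqrt(n + 1) * psi(n + 1, v), atol=1e-6)\nprint('ladder relation verified')\n\\end{verbatim}\nWith the basis fixed, everything that follows reduces to algebra on the expansion coefficients $C_n(x)$.\n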
The mean velocity $\\bar v(F)$, the \naverage potential energy $E_{pot}$, and the average kinetic energy $E_{kin}$, are\nobtained, respectively, as,\n\\begin{equation}\n\\bar v = \\int_{0}^{2\\pi} \\int_{-\\infty}^{\\infty}v W(x,v)dx dv = 2\\pi \\sqrt{T}C_1,\n\\end{equation}\n\\begin{equation}\nE_{pot}=\\int_{-\\pi}^{\\pi}V(x)C_0(x) dx,\n\\end{equation}\nand\n\\begin{equation}\nE_{kin}=<\\frac{v^2}{2}>=\\frac{1}{2}+\\frac{1}{\\sqrt{2}}\\int_{0}^{2\\pi}C_2(x)dx.\n\\end{equation}\nThus, all the physical quantities of our interest can be calculated provided $C_0$, $C_1$, \nand $C_2$ are known from Eq.(2.18).\n\nWriting $C_n(x) =\\sum_{q}C_n^qe^{iqx}$, $V(x)=\\sum_{q}V^qe^{iqx}$, and\n$\\gamma (x)=\\sum_{q}\\gamma ^q e^{iq(x+\\phi)}$, where $i=\\sqrt{-1}$ and $q$'s as integer\nindices the recurrence relation (2.18) becomes,\n\\begin{eqnarray}\n\\lefteqn{0=} \\nonumber \\\\\n & & n\\sum_{p}\\gamma^{q-p}e^{i(q-p)\\phi}C_n^p \n+ \\sqrt{n+1}\\sqrt{T}\\sum_{p}(ip)\\delta_{q,p}C_{n+1}^p \n+\\sqrt{n}\\sqrt{T}\\sum_{p}(ip)\\delta_{q,p}C_{n-1}^p \\nonumber \\\\ \n& & -\\frac{\\sqrt{n}}{\\sqrt{T}}\\sum_{p}F\\delta_{q,p}C_{n-1}^p\n+\\frac{\\sqrt{n}}{\\sqrt{T}}\\sum_{p}i(q-p)V^{q-p}C_{n-1}^p.\n\\end{eqnarray}\n\nDefining the matrices $\\mathbf{\\gamma}, \\mathbf{D} , \\mathbf{1}$, and $\\mathbf{V^{'}}$ \nwith components\n\\begin{eqnarray}\n(\\mathbf{\\gamma})^{q,p}=\\gamma^{q-p}e^{i(q-p)\\phi},\\\\\n(\\mathbf{D})^{q,p}=ip\\delta_{q,p},\\\\\n(\\mathbf{1})^{q,p}=\\delta_{q,p},\\\\\n(\\mathbf{V^{'}})^{q,p}=i(q-p)V^{q-p},\n\\end{eqnarray}\nand the column matrices $\\mathbf{C_n}, n=0,1,2, ..$ with components\n\\begin{equation}\n(\\mathbf{C_n})^q=C_n^q,\n\\end{equation}\nthe recurrence relation can be written as,\n\\begin{equation}\nn\\mathbf{\\gamma C_n} + \\sqrt{n+1}\\sqrt{T}\\mathbf{DC_{n+1}} +\n\\sqrt{n}(\\sqrt{T}\\mathbf{D}+\\frac{1}{\\sqrt{T}}[\\mathbf{V^{'}}-F\\mathbf{1}])\\mathbf{\nC_{n-1}}=0.\n\\end{equation}\nIntroducing the matrices $\\mathbf{M_n}, (n=0, 1, 2, ..)$ such that\n\\begin{equation}\n\\sqrt{n}\\mathbf{DC_n}=\\mathbf{M_nC_{n-1}}\n\\end{equation}\nthe recursion relation (2.29) for $n=2$ becomes\n\\begin{equation}\n\\mathbf{M_2C_1}(=\\sqrt{2}\\mathbf{DC_2})=\n-\\mathbf{D}(\\mathbf{\\gamma}+\\frac{\\sqrt{T}}{2}\\mathbf{M_3})^{-1}\n(\\sqrt{T}\\mathbf{D}+\\frac{1}{\\sqrt{T}}[\\mathbf{V^{'}}-F\\mathbf{1}])\\mathbf{C_1}.\n\\end{equation}\nHence, for nonzero $\\mathbf{C_1}$, $\\mathbf{M_2}$ can be given in terms of $\\mathbf{M_3}$\nas\n\\begin{equation}\n\\mathbf{M_2}=\\mathbf{D}(\\mathbf{\\gamma}+\\frac{\\sqrt{T}}{2}\\mathbf{M_3})^{-1}\n(\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}} ] - \\sqrt{T}\\mathbf{D}).\n\\end{equation}\nand similarly, one can write down for $\\mathbf{M_3}$, $\\mathbf{M_4}$, and so on. And, in \ngeneral,\n\\begin{equation}\n\\mathbf{M_n}=\\mathbf{D}(\\mathbf{\\gamma}+\\frac{\\sqrt{T}}{n}\\mathbf{M_{n+1}})^{-1}\n(\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}} ] - \\sqrt{T}\\mathbf{D}),\n\\end{equation}\nfor $2\\leq n\\leq N-1$, if the series of recursion relations is terminated at $n=N$ \n($\\mathbf{C_n}=\\mathbf{0}$ for $n\\geq N+1$). Thus, the last ($n=N$) relation becomes,\n\\begin{equation}\n\\mathbf{M_N}=\\mathbf{D}\\mathbf{\\gamma}^{-1}(\n\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}}]-\\sqrt{T}\\mathbf{D}),\n\\end{equation}\nfor $N\\geq 2$. Hence, $\\mathbf{M_{N-1}}$, $\\mathbf{M_{N-2}}$, ... 
$\\mathbf{M_2}$ can be\ngiven entirely in terms of $\\mathbf{\\gamma}$, $\\mathbf{V^{'}}$, and $F\\mathbf{1}$.\nUsing $\\sqrt{2}\\mathbf{DC_2}=\\mathbf{M_2C_1}$, the recursion relation $(n=1)$\\\\ \n$\\mathbf{\\gamma C_1}+\\sqrt{2}\\sqrt{T}\\mathbf{DC_2}+\n(\\sqrt{T}\\mathbf{D}+\\frac{1}{\\sqrt{T}}[\\mathbf{V^{'}}-F\\mathbf{1}])\\mathbf{C_0}=0$ \\\\\nyields\n\\begin{equation}\n\\mathbf{C_0}=\\mathbf{HC_1},\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathbf{H}=(\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}}]-\\sqrt{T}\\mathbf{D})^{-1}(\n\\mathbf\\gamma + \\sqrt{T}\\mathbf{M_2}).\n\\end{equation}\nSince $C_1$ is a constant,\n\\begin{equation}\nC_0^q=\\sum_{p}(\\mathbf{H})^{q,p}(\\mathbf{C_1})^p = (\\mathbf{H})^{q,0}C_1.\n\\end{equation}\nFrom the normalization condition (2.19), we obtain $2\\pi (\\mathbf{C_0})^0 =1$, and hence \nfrom Eq.(2.36),\n\\begin{equation}\nC_1=\\frac{1}{2\\pi(\\mathbf{H})^{0,0}}.\n\\end{equation}\nTherefore, once $\\mathbf{H}$ is calculated $C_1$ is known and hence all the components\n$C_0^q=(\\mathbf{C_0})^q$ are evaluated.\n\n\n\\section{Numerical results}\nIn order to calculate the physical quantities of interest, namely, the mean velocity\n$\\bar v$, the average kinetic and potential energies for given values of $F$, $\\gamma_0$,\nand temperature $T$ one needs to evaluate the matrix $\\mathbf{H}$. For our special case, \nthe potential $V(x)$ and $\\gamma(x)$ each has only two Fourier components. Therefore, the \ndimension of $\\mathbf{H}$ depends on the number $Q$ of Fourier components to be kept for\nthe coefficients $C_n(x)$. Also, the degree of complication increases with the number\n$N$ of recursion relations one requires to be considered. The acceptable solution will\nrequire the matrix $\\mathbf{H}$ to remain essentially unchanged as $Q$ and $N$ are \nchanged. All these numbers depend on the values of $F$, $\\gamma_0$, and the temperature \n$T$. Typically, in the worst case, it is sufficient if $Q<30$ and $N<2000$ is taken for \nsmall $\\gamma_0 \\ll 1$ in the appropriate parameter space range of ($F, T$). Outside this \nrange no \nacceptable solution is found. It turned out that the paramater space $(\\gamma_0, F, T)$ \nrange where solutions could be extracted was just the region that yielded finite ratchet \ncurrent $\\Sigma \\bar v$. In all our calculations for which the results are described in \nthe following we have set $\\phi=0.35$ and $\\lambda=0.9$.\n\n\\subsection{The ratchet current}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig1.eps}}\n\\caption{Shows the ratchet current $\\Sigma{v}$ as a function of $\\gamma_{0}$ for two \nvalues of $F$. The inset shows the variation of current vs $\\gamma_{0}$ for $F =$ 0.1.}\n\\label{fig.1}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig2.eps}}\n\\caption{Ratchet current versus F is shown for $\\gamma_0=0.1$ and two values of temperature. \nThe inset shows variation of $\\Sigma{\\bar v}$ for smaller values of $F$ at $ T = $0.6. The \ndotted line for zero current is shown just to guide the eye.}\n\\label{fig.2}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig3.eps}}\n\\caption{Shows the variation of $\\mu(|F|)$ and $\\mu(-|F|)$ with $F$ for $\\gamma_0 =$ 0.5. 
\nThe little vertical marking ($|$) indicates the position of current maximum.}\n\\label{fig.3}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig4.eps}}\n\\caption{The variation of ratchet current is shown for $\\gamma_0 =$ 0.035 for different \nvalues of $F$. $\\Sigma \\bar v$ decreases slowly with $T(>4)$. The inset shows two current \nreversals for $\\gamma_0 =$ 0.1, $F= 0.1$. The zero $\\Sigma \\bar v$ line is shown to \nguide the eye.}\n\\label{fig.4}\n\\end{figure}\n\nFigure 1 shows how $\\Sigma \\bar v$ changes with $\\gamma_0$ at $T=0.4$. Contrary to usual \nperception current increases as the system becomes more damped. This trend is seen for \nlower $\\gamma_0$ values. However, in the higher range of $\\gamma_0$, as expected, the \ncurrent diminishes with damping. $\\bar v(|F|)$ and $\\bar v(-|F|)$ are both monotonically \ndecreasing in magnitude with $\\gamma_0$. It is only their algebraic sum $\\Sigma \\bar v$\nthat shows the nonmonotonic behaviour. It is purely an effect brought about by the \nsymmetry breaking due to nonzero phase difference $\\phi$ and any analogy with potential \nbarrier crossing phenomenon at low damping described by Kramers is superfluous.\n\nIn Fig.2, the variation of $\\Sigma \\bar v$ as a function of $|F|$ for $\\gamma_0=0.1$ is\nshown for two values of temperature $T=0.6$ and $2.0$. It is to be noted that $T=2.0$\ncorresponds to an energy exactly equal to the potential barrier height at $F=0$. One \ncan see that in the scale of the figure the current $\\Sigma \\bar v$ remains negative for \n$T=0.6$, whereas at $T=2.0$ the current is positive at low $|F|$ but it changes sign and \nremains negative at higher $|F|$ values for the same phase lag $\\phi$. This current \nreversal, however, is neither confined to higher $T$ nor to\nhigher $\\gamma_0$ values. The inset of the figure shows that at smaller $|F|$ region \n$\\Sigma \\bar v$ is positive and then it becomes negative as $|F|$ is increased. Of \ncourse, in this case the positive current is an order of magnitude smaller and hence is \nnot visible in the larger figure. A similar result is obtained for $\\gamma_0=0.035$ where\nagain the negative current at larger $|F|$ dominate but positve currents at low $|F|$\nregion are also appreciable. Here again $\\bar v(|F|)$ and $\\bar v(-|F|)$ both increase\nmonotonically with $|F|$ only their relative values change giving rise to nonmonotonic \nbehaviour of $\\Sigma \\bar v$, including its direction reversal. At \nhigher values of $|F|$ the current should saturate with a small value since the mobility \n$\\mu = |\\frac{dv}{dF}|$ for both $\\pm F$ tend to $\\gamma_0^{-1}$. The variation of $\\mu$ \nas a function of $|F|$ is shown in Fig.3. It shows that at the position of the current \nmaximum $\\mu(|F|)=\\mu(-|F|)$ as it should be.\n\nThe variation of $\\Sigma \\bar v$ as a function of temperature $T$ at various values of\napplied force $|F|$, as shown in Fig.4, reveals that current can also increase with \ntemperature. Moreover, one can achieve current reversals as well. In Fig.4, plots are given \nfor $\\gamma_0=0.035$ and $|F|=0.025$, and 0.075. $|F|=0.075$ yields large currents and shows one current reversal. For larger $|F|$ the current remains negative for all $T$. This \nis not shown here to preserve clarity of the figure. However, in order to highlight the \nexistence of two current reversals we plot the graph for $\\gamma_0=0.1$, and $|F|=0.1$ \nin the inset of the figure. 
For low values of $|F|$, for example $|F|=0.025$, the \ncurrent always remains positive, though small, which is consistent with the observation \nmade with reference to Fig.2. It is noteworthy that the magnitude of current obtained \nhere is quite large compared to what has been reported\\cite{raish} for overdamped \nsystems.\n\n\\subsection{The energetics}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig5.eps}}\n\\caption{Shows the variation of the average kinetic, potential and total energies at \n$T= 0.4$ for $\\gamma_0 = 0.4$ as a function of $F$.}\n\\label{fig.5}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=15cm\n\\centerline{\\epsfbox{fig6.eps}}\n\\caption{In fig.$[a]$ the slopes of the average total energies are shown for various \nvalues of $\\gamma_0 < 1.0$. The graphs for $\\gamma_0 =$ 0.2, 0.4, 0.5 and 0.75 \nare scaled by factors of 10, 20, 25 and 30, respectively, to make them visible in the\nfigure with the graph for $\\gamma_0 =$0.1. Fig.$[b]$ shows plots for $\\gamma_0 > 1.0$. \nThe slopes for $\\gamma_0 =$ 2.0, 4.0 and 5.0 are blown up by 4, 5 and 5 times, \nrespectively, to scale with the curve for $\\gamma_0 =$1.0. The little vertical markings \non the curves show positions of the current maxima.}\n\\label{fig.6}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig7.eps}}\n\\caption{$T-\\gamma_{0}$ plane showing two disjointed regions with possibility of negative curvatures of the total energy $E_{tot} (|F|)$ separated by a region without the negative curvature of $E_{tot} (|F|)$.}\n\\label{fig.7}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig8.eps}}\n\\caption{Stokes efficiency versus F for different values of $\\gamma_0$. The inset shows \n$\\eta_{Stokes}$ and $\\Sigma{v}$ plotted for $\\gamma_0 =$ 0.75.}\n\\label{fig.8}\n\\end{figure}\n\\begin{figure}[]\n\\newpage\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig9.eps}}\n\\caption{Shows the variation of Stokes efficiency with $T$ for various values of $F$ \nfor $\\gamma_0 =$ 0.4. The little vertical markings on the efficiency curves indicate the \npositions of the current maxima. The inset shows efficiency and current for $F =$ 0.7}.\n\\label{fig.9}\n\\end{figure}\n\nThe average potential and kinetic energies can be easily calculated. The expressions\nare given in the Appendix. Fig.5 shows the variation of the average kinetic ($E_{kin}$),\npotential ($E_{pot}$), and total ($E_{tot}$) energies at $T=0.4$ for $\\gamma_0=0.4$ as a\nfunction of $|F|$. Monotonically increasing nature of $E_{kin}$, and $E_{tot}$ and\nnonmonotonic behaviour of $E_{pot}$ are seen for all (explored) $\\gamma_0$ and $T$ values.\nInterestingly, the slope of $E_{tot}$ shows a shoulder peak which after reaching a minimum\nrises again, Fig. 6. For a given temperature the shoulder peak occurs for small $\\gamma_0$ \nand large $\\gamma_0$ values. However, there is an intermediate range of $\\gamma_0$ \nvalues where the slope does not show this peaking behaviour; the total energy does not\nacquire negative curvature in this range of $\\gamma_0$. The slope gives valuable \ninformation about the variation of the ratchet current $\\Sigma \\bar v$ as shown in Fig.6.\nFor small $\\gamma_0 < 1$ values the slope peaks at an intermediate value of $|F|$, and \nthis is precisely where the current maximum occurs. 
For $\\gamma_0 > 1$ the positions of \nthe current maximum and that of the slope of the total energy are clearly well separated.\nIn fact, the current maximum occurs, for these large $\\gamma_0$ values, even beyond the \nposition of the minimum (after the peak) of the slope of the total energy. At this point \nit is worth mentioning that for not too large temperatures for $\\gamma_0 < 1$ the current \npeak occurs at $|F|<1$ whereas for all $\\gamma_0 > 1$ the current peaks invariably occur \nat $|F|>1$. Recall that $|F|=1$ is the critical field at which the potential barrier to \nmotion just disappears.\nA careful observation of the results at various temperatures $T$, thus, allows one to\nconstruct a diagram in the ($T-\\gamma_0$) plane, Fig.7, with two disjointed regions with \npossibility of negative curvatures of $E_{tot}(|F_0|)$ separated by a region with only \npositive curvatures. In the \"negative curvature\" region with $\\gamma_0<1$ the slope of \n$E_{pot}$ peaks close to where $\\Sigma \\bar v$ becomes maximum. Whereas in the other \nnegative curvature region with $\\gamma_0>1$ current maximum occurs far away from the \nposition ($F_0$) where the slope of $E_{tot}(|F_0|)$ peaks. It shows that one can \nclearly separate low damping ($\\gamma_0<1$) region from the high damping ($\\gamma_0>1$) \ncase with each showing qualitatively different characteristic behaviour of particle motion. \n\n\\subsection{The efficiency of performance}\nThe currents obtained by application of symmetric drives $\\pm|F|$ are small and not \nsustainable against applied loads of perceptible magnitude. Therefore it is not feasible \nto calculate work done by the system against the load and hence it is difficult to study \nits behaviour as the other parameters vary. However, the particle keeps moving, essentially \nwith average velocity $\\Sigma \\bar v$, against the frictional force and hence performs \nsome work, though not useful practically. How efficiently the system does this work is \nobtained by comparing the magnitude of this work ($=\\gamma_0(\\sum \\bar v)^2$) with the \ninput energy supplied to it. A measure of this quantity called the Stokes efficiency is \ndefined and given (in our case) as\\cite{wang, seki, parr, raish},\n\\begin{equation}\n\\eta_{S}=\\frac{\\gamma_0(\\sum \\bar v)^2}{|F||(\\bar v(|F|)-\\bar v(-|F|))|}.\n\\end{equation} \nIn Fig.8, we plot $\\eta_{S}$ as a function of $|F|$ for various $\\gamma_0$ values at \n$T=0.4$. $\\eta_{S}$ behaves in a very similar fashion as the corresponding current does. \nHowever, $\\eta_{S}$ and current do not peak at the same value of F as will be clear from \nthe inset of the figure where $\\eta_{S}$ and $\\Sigma \\bar v$ are plotted together for \n$\\gamma_0=0.75$. It is remarkable that with such a modest approach one can find regions \nin the parameter space $(\\gamma_0,|F|)$ where the efficiency of performance exceeds two \nper cent. It shows that exploiting system inhomogeneity to obtain ratchet current in \ninertial systems may be useful for practical purposes too.\n\nThe variation of Stokes efficiency $\\eta_{S}$ with $T$ is shown in Fig.9 for $|F|=\n0.66, 0.70$, and 0.80 for which the efficiency and current are appreciable. It is \nnoteworthy that at low temperatures the performance of the system improves with rise in \ntemperature and only after attaining a peak it declines with further rise in temperature. 
\nIn this case, however, the efficiency and current seem to peak at the same temperature as \ncan be seen from the inset of the figure. For $F=0.66$ we could not probe the low temperature\nbehaviour because the MCFM does not give sensible results in this region.\n\n\\subsection{The asymmetrically driven ratchet}\n\\begin{figure}[]\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig10.eps}}\n\\caption{Stokes efficiency as a function of the asymmetry parameter $\\epsilon$ for \ndifferent combinations of values of $F$ and $\\gamma_0$.}\n\\label{fig.10}\n\\end{figure}\n\\begin{figure}[]\n\\epsfxsize=9cm\n\\centerline{\\epsfbox{fig11.eps}}\n\\caption{The thermodynamic efficiency $\\eta_{th}$ versus load for two values of $F$ with \nfixed $\\epsilon =$ 0.5, $\\gamma_0 =$ 0.035 and $T =$ 0.4.}\n\\label{fig.11}\n\\end{figure}\n\nIt is clear that in general the efficiency of performance of the inhomogeneous \ninertial ratchets is small except in some special sections of the parameter space\n$(\\gamma_0, |F|, T)$. This efficiency can be dramatically enhanced if the ratchet is \ndriven asymmetrically with an asymmetry parameter $\\epsilon$ ($0\\leq \\epsilon<1$). Here the \ndriving forces in the positive and in the negative directions are not the same: the force\n$F=\\frac{1+\\epsilon}{1-\\epsilon}|F|$ is applied in the positive direction for a fraction\nof time $\\tau=\\frac{1-\\epsilon}{2}$ whereas $F=-|F|$ is applied for a fraction of time\n$1-\\tau=\\frac{1+\\epsilon}{2}$. The ratchet current is now given by\n\\begin{equation}\n\\Sigma \\bar v=\\bar v^{+}+\\bar v^{-}\n\\end{equation}\nwith\n\\begin{equation}\n\\bar v^{+}=\\frac{1-\\epsilon}{2}\\bar v(\\frac{1+\\epsilon}{1-\\epsilon}|F|),\n\\end{equation}\n\\begin{equation}\n\\bar v^{-}= \\frac{1+\\epsilon}{2}\\bar v(-|F|).\n\\end{equation}\nThe Stokes efficiency of the thermal ratchet motion is defined similarly to Eq. (3.1):\n\\begin{equation}\n\\eta_{S}=\\frac{\\gamma_0 (\\sum \\bar v)^2}{\n|F||\\bar v^{+}(\\frac{1+\\epsilon}{1-\\epsilon})-\\bar v^{-}|}.\n\\end{equation}\nIn this case the ratchet current, in a typical situation, is quite large. \nCorrespondingly, the Stokes efficiency also becomes large. Fig.10 shows the variation of \n$\\eta_{S}$ as a function of the asymmetry parameter $\\epsilon$ for the parameter combinations\n (a) $|F|=$0.017, $\\gamma_{0}=$0.035; (b) $|F|=$0.05, $\\gamma_{0}=$0.1; and (c) $|F|=$0.27, $\\gamma_{0}=$0.4, all at $T=0.4$. Clearly, a ratchet operation \nefficiency of more than 10 per cent becomes achievable. Since the currents are large \none can also drive these ratchet currents against applied loads, $L$. Hence, the \nthermodynamic efficiency $\\eta_{th}$, defined in this case as \\cite{seki, parr, raish},\n\\begin{equation}\n\\eta_{th}=\\frac{L\\sum \\bar v}{|F||\\bar v^{+}(\\frac{1+\\epsilon}\n{1-\\epsilon})-\\bar v^{-}|} \n\\end{equation}\ncan be calculated. The variation of $\\eta_{th}$ as a function of $L$ for $\\gamma_0=0.035$\nat $T=0.4$ is shown in Fig.11 for $|F|=0.03$ and 0.053. From the figure one can see\nthat the thermodynamic efficiency can be substantial. One can find an optimum load, $L$, and \nan optimum asymmetry parameter, $\\epsilon$, of the field drive where the ratchet performs most\nefficiently. It should, however, be noted that the effect of the asymmetric drive is \noverwhelming and the system inhomogeneity plays only a secondary role in this case. One \ncan achieve almost similar results without invoking system inhomogeneity, as has been seen \nin the overdamped case\\cite{raish}. 
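For completeness, the bookkeeping behind Eqs. (3.2)-(3.6) can be summarized in a short sketch. The snippet below is only an illustration of ours: it assumes a routine \\texttt{mean\\_velocity(F)} returning $\\bar v(F)$ is available (for instance from the MCFM solution of section II, or from a direct Langevin simulation), and the argument names are placeholders.\n\\begin{verbatim}\ndef stokes_efficiency_symmetric(mean_velocity, F, gamma0):\n    # Eq. (3.1): symmetric adiabatic drive +/- |F|\n    vp, vm = mean_velocity(abs(F)), mean_velocity(-abs(F))\n    return gamma0 * (vp + vm) ** 2 / (abs(F) * abs(vp - vm))\n\ndef stokes_efficiency_asymmetric(mean_velocity, F, eps, gamma0):\n    # Eqs. (3.2)-(3.5): force +|F|(1+eps)/(1-eps) for a fraction (1-eps)/2\n    # of the time and -|F| for the remaining fraction (1+eps)/2\n    v_plus = (1 - eps) / 2 * mean_velocity((1 + eps) / (1 - eps) * abs(F))\n    v_minus = (1 + eps) / 2 * mean_velocity(-abs(F))\n    sigma_v = v_plus + v_minus\n    inp = abs(F) * abs(v_plus * (1 + eps) / (1 - eps) - v_minus)\n    return gamma0 * sigma_v ** 2 / inp\n\\end{verbatim}\nThe thermodynamic efficiency of Eq. (3.6) is obtained from the same ingredients by replacing the numerator with $L\\Sigma \\bar v$.\n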
\n \n\\section{Discussion and Conclusion}\n\nAll the results for inertial particle motion in a symmetric periodic potential driven\nsymmetrically (and, of course, adiabatically), such as the ratchet current and its \nnonmonotonic behaviour (including its direction reversals) as a function of $|F|, T$, \nand $\\gamma_0$, are possible here only because the friction was taken to be space-dependent. \nThe periodic variation of the friction in space is just a model and we have not provided \nany microscopic basis for it. However, it has been shown earlier that in order to describe\nthe motion of an adsorbed particle on a crystalline surface (of the same atoms as the \nadsorbed one) by a Fokker-Planck equation suitable for one-dimensional particle motion,\none necessarily has to consider space-dependent friction\\cite{wahn}. It turned out from\ntheir microscopic theory that the space dependence has to be periodic, with the same period as the average\npotential created by the crystalline (surface) substrate for the adatom, and in antiphase\n($\\phi=\\pi$) with it. In other words, the friction will be large where the substrate \npotential is small. The possibility of obtaining current due to diffusion \ninhomogeneity\\cite{landa} and frictional inhomogeneity\\cite{amj1} has been argued \nearlier. The phase difference $\\phi\\ne n\\pi$ ($n=0, 1, 2, ...$) is necessary to break the \nspatial symmetry in order to get ratchet current. We have found that the ratchet current \ndepends on the value of $\\phi$. For instance, taking $\\phi=0.4$ gives a different and \nlarger current than what we have reported here for $\\phi=0.35$ at the same point in the \nparameter space of ($|F|, T, \\gamma_0$). \n\nThe increase of the ratchet current, and also of the ratchet efficiency, as a function of\n$T$ in the low temperature range occurs because the particle has a large potential barrier to surmount \nat small $|F|$ and increasing temperature helps in achieving that. However, in the larger\ntemperature range the second current peak shown in Fig.4 cannot be explained\nas simply; it arises because of the competition between the temperature and friction\neffects, as argued elsewhere\\cite{wanda}.\n\nThe behaviour of the slope of the total energy vis-a-vis the position of the ratchet current\npeak shows a clear demarcation occurring around $\\gamma_0=1$ in the ($T-\\gamma_0$) \nplane. The investigation of the time scales of the running and locked states of the particle \nmotion may provide a clue to this remarkable behaviour. We are carrying out numerical \nLangevin dynamics simulations on this aspect of the problem and the results will be \nreported elsewhere. We conclude by reiterating that particle motion in inhomogeneous \nsystems has the potential to be a rich field of research. \n\\section*{ACKNOWLEDGEMENT}\n\nThe authors thank BRNS, Department of Atomic Energy, Govt. of India, for partial\nfinancial support. MCM thanks A.M. Jayannavar for discussions, and the Abdus Salam ICTP,\nTrieste, Italy for providing the opportunity to visit, where the paper was written.\n\n\\section*{APPENDIX}\nUsing Eqs. (2.36) and (2.37), the potential energy expression (2.21) is given by\n\\begin{displaymath}\nE_{pot}=\\sum_{q}V^{(-q)}\\frac{(\\mathbf H)^{q,0}}{(\\mathbf H)^{0,0}}.\n\\end{displaymath}\nThe kinetic energy expression (2.22) can be written as\n\\begin{equation}\nE_{kin}=\\frac{1}{2}+{\\sqrt{2}}\\pi(\\mathbf {C_2})^0. \\nonumber\n\\end{equation}\nFrom Eqs. 
(2.29) and (2.32) one can write down a general equation relating \n$\\mathbf{C_n}$ and $\\mathbf{C_{n-1}} (n\\geq 1)$ as\n\\begin{displaymath}\n\\mathbf{C_n}=\\frac{1}{\\sqrt{n}}\\mathbf{A_nC_{n-1}},\n\\end{displaymath}\nwhere the matrices $\\mathbf{A_m} (m=0, 1, 2, ...)$ are given by\n\\begin{displaymath}\n\\mathbf{A_m}=(\\gamma+\\sqrt{T}(m+1)^{-1}\\mathbf{DA_{m+1}})^{-1}(\n\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}}]-\\sqrt{T}\\mathbf{D}).\n\\end{displaymath}\nIf the recursion is terminated at $m=N$ $(\\mathbf{A_{N+1}}=\\mathbf{0})$\n\\begin{displaymath}\n\\mathbf{A_N}=\\gamma^{-1}(\\frac{1}{\\sqrt{T}}[F\\mathbf{1}-\\mathbf{V^{'}}]\n-\\sqrt{T}\\mathbf{D}),\n\\end{displaymath}\nfor $N\\geq 2$. $\\mathbf{C_2}$ is thus calculated as\n\\begin{displaymath}\n\\mathbf{C_2}=\\frac{1}{\\sqrt{2}}\\mathbf{A_1A_0C_0}.\n\\end{displaymath}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nGiven the wide spread use of deep networks \\cite{gan} across various applications and their black box nature, explainability in artificial intelligence (XAI) has been one of the problems at the forefront in AI research recently \\cite{montavon2017methods,patternet,lime,CEM,simple}. Darpa's call for creating interpretable solutions \\cite{xai} and the General Data Protection Regulation (GDPR) passed in Europe \\cite{gdpr} which requires businesses to provide understandable justifications to their users for decisions that may affect them has made this need even more acute.\n\nThere have been many (posthoc) interpretability methods proposed to interpret decisions of neural networks \\cite{Ormas,patternet,saliency,bach2015pixel,CEM,simple} which assume complete access to the model. Locally interpretable model-agnostic explanation method (LIME) \\cite{lime} is amongst the few that can provide local explanations for any model with just query access.\nThis is an extremely attractive feature as it can be used in settings where the model owner may not want to expose the inner details of the model but may desire local explanations using say a remote service. Another application is in interpreting decisions not just of neural networks but other models such as random forests, boosted trees and ensembles of heterogeneous models which are known to perform quite well in many domains that use structured data \\cite{kaggle}.\n\nIn this paper, we thus propose the model agnostic contrastive explanations method (MACEM) that requires only query access to the classification model with particular focus on structured data. Structured data can be composed of real and categorical features, and we provide a principled way of creating contrastive explanations for such data. Contrastive explanations are a rich form of explanation where one conveys not only what is (minimally) sufficient to justify the class of an input i.e. pertinent positives (PPs), but also what should be (minimally) necessarily absent to maintain the original classification i.e. pertinent negatives (PNs) \\cite{CEM}. Such explanations are commonly used in social settings as well as in domains such as medicine and criminology \\cite{pertneg}. For example, a patient with symptoms of cough, cold and fever (PPs) could have flu or pneumonia. However, the absence of chills or mucous (PNs) would indicate that the person has flu rather than pneumonia. Thus, in addition to the symptoms that were present, the symptoms that are absent are also critical in arriving at a decision. 
As such, these types of explanations are also sought after in the financial industry, where in the recently completed FICO explainability challenge \\cite{FICO} it was explicitly stated that if a loan was rejected it would be highly desirable for an explanation to elucidate what changes to the loan application (i.e. input) would have led to its acceptance. Not to mention that the data in the challenge was in a structured format.\n\nAdditionally, working with experts across multiple industries (finance, healthcare, manufacturing, utility), we have found that they want explanation methods to satisfy two main criteria: a) be model agnostic so that one can explain a model on their private cloud through just query access and b) be trustworthy in that the method closely captures what the model is trying to do. For b) they are moving away from proxy model approaches such as LIME, since it does NOT meet their regulatory standards.\n\\begin{wrapfigure}{r}{0.55\\textwidth}\n\\vspace{-0.5cm}\n\\begin{center}\n \\includegraphics[width=0.55\\textwidth]{PPPNEg.png} \n \\caption{Above we see an example explanation for a loan application from the German Credit dataset that was rejected by a black box model (a tree). We depict the important features for PPs and PNs. Our PPs convey that even if the person didn't need a co-applicant and had lower credit card debt the application would still be rejected. In contrast, our PNs inform us that if the person's checking account had more money in it, the loan installment rate was lower, and there were no people that he\/she was responsible for, then the loan would have been accepted.}\n \\label{VisualPPPN}\n \\end{center}\n\\vspace{-0.5cm}\n\\end{wrapfigure}\nBeing contrastive is also extremely useful as it helps them understand the sensitivities of the model's decisions. Given this, we believe our contribution is timely and significant.\n\nIn previous works \\cite{CEM}, a method to produce such explanations was proposed. However, the method was restricted to differentiable models such as deep neural networks and strong (implicit) assumptions were made in terms of the semantics of the data used to train the models and obtain explanations. In particular, the following are the key differences between our current and the prior work:\\\\\n \\textbf{i) Gradients not available:} In this work we want to create contrastive explanations with only query access to any classification model. This is a significant step given that the prior work a) assumed complete access to the model and b) could be used only for differentiable models like deep neural networks.\\\\% Our method can be used for any classification model (viz. decision trees, forests, ensembles), where we estimate gradients (with theoretically bounded bias) using only oracle access.\\\\\n \\textbf{ii) Using (and estimating) base values:} To compute PPs and PNs one needs to know what it means for a feature to be absent i.e., what value for a feature indicates there is no signal or is essentially the least interesting value for that feature. We refer to such values as \\emph{base values}. In the prior work on contrastive explanations \\cite{CEM} the value 0 for a feature was considered as the base value, which can be unrealistic.\n Ideally, the user should provide us with these values. In this paper we adapt our methods to utilize such base values and also propose ways to estimate them in situations where they are not provided. It is important to note here that the existence of base values is implicitly assumed in most explainability research. 
For example, explanations for images involve highlighting\/selecting pixels \\cite{saliency,bach2015pixel,lime}, which implicitly assumes that a blank image carries zero information, although it too may be classified into some class with high confidence.\\\\\n \\textbf{iii) Handling categorical features:} In the prior work all features were considered to be real valued and no special consideration was given to handling categorical features. However, in this work we remain cognizant of the fact that categorical features are fundamentally different from real valued ones and propose a principled as well as scalable approach to handle them in our explanations.\\\\\n \\textbf{iv) Computing PPs and PNs:} Given the above differences, we propose new ways of computing PPs and PNs that are consistent with their intuitive definitions mentioned before. \\emph{As such, we define a PP for an input $\\mathbf{x}$ as the sparsest example (w.r.t. base values) whose feature values are no farther from the base values than those of $\\mathbf{x}$, with it lying in the same class as $\\mathbf{x}$. Consequently, a PN for $\\mathbf{x}$ is defined as an example that is closest to $\\mathbf{x}$ but whose feature values are at least as far away from the base values as those of $\\mathbf{x}$, with it lying in a different class}. Important features for an example PP and PN for a loan application in the German Credit dataset are depicted in figure \\ref{VisualPPPN}.\n\n\\vspace{-0.3cm}\n\\section{Related Work}\n\\vspace{-0.3cm}\nTrust and transparency of AI systems have received a lot of attention recently \\cite{xai}. Explainability is considered to be one of the cornerstones for building trustworthy systems and has been a particular focus of the research community \\cite{lipton2016mythos,tip}. Researchers are trying to build better performing interpretable models \\cite{decl,twl,Caruana:2015,irt,bastani2017interpreting,simple} as well as improved methods to understand black box models such as deep neural networks \\cite{lime,bach2015pixel,CEM}.\n\nThe survey \\cite{montavon2017methods}, which is mainly focused on deep learning explainability methods, looks broadly at i) prototype selection methods \\cite{nguyen2016synthesizing,nguyen2016multifaceted} to explain a particular class and ii) methods that highlight relevant features for a given input \\cite{bach2015pixel,patternet,lime,unifiedPI}. There are other works that fall under (i) such as \\cite{l2c,proto} as well as those that fall under (ii) for vision \\cite{selvaraju2016grad,saliency,Ormas} and NLP applications \\cite{lei2016rationalizing}. Most of these works, though, do not provide contrastive explanations in a model agnostic setting. There are also interesting works which try to quantify interpretability \\cite{QEval,tip} and suggest methods for doing so.\n\nTwo of the most relevant recent works, besides \\cite{CEM} which we have already contrasted with, are \\cite{anchors,symbolic}. In \\cite{anchors}, the authors try to find sufficient conditions to justify a classification that are global in nature. For example, the presence of the word \"bad\" in a sentence would automatically indicate negative sentiment irrespective of the other words. As such, they do not find input specific minimally sufficient values that would maintain the classification or minimal values that would change the classification. Such global anchors also may not always be present in the data that one is interested in. 
The other work \\cite{symbolic} tries to provide (stable) suggestions more than local explanations for decisions based on a neural network. Moreover, the approach is restricted to neural networks using rectified linear units and is feasible primarily for smallish to medium sized neural networks in asymmetric binary settings, where suggestions are sought for a specific class (viz. loan rejected) and not the other (viz. loan accepted).\n\n\\vspace{-0.3cm}\n\\section{MACEM Method}\n\\vspace{-0.3cm}\n Let $\\mathcal{X}$ denote the feasible data space and let $(\\mathbf{x}_0,t_0)$ denote an input example $\\mathbf{x}_0 \\in \\mathcal{X}$ and its inferred class label $t_0$ obtained from a black-box classification model. The modified example $\\mathbf{x} \\in \\mathcal{X}$ based on $\\mathbf{x}_0$ is defined as $\\mathbf{x}=\\mathbf{x}_0 + \\boldsymbol{\\delta}$, where $\\boldsymbol{\\delta}$ is a perturbation applied to $\\mathbf{x}_0$. Our method of finding pertinent positives\/negatives is formulated as an optimization problem over the perturbation variable $\\boldsymbol{\\delta}$ that is used to explain the model's prediction results. We denote the prediction of the model on the example $\\mathbf{x}$ by $\\zeta(\\mathbf{x})$, where $\\zeta(\\cdot)$ is any function that outputs a vector of confidence scores over all classes, such as the log value of prediction probability. Let $c,~\\beta,~\\gamma$ be non-negative regularization parameters.\n\n\\begin{algorithm}[htbp]\n \\caption{Model Agnostic Contrastive Explanations Method (MACEM)}\n \\label{cem}\n\\begin{algorithmic}\n\\STATE \\textbf{Input:} Black box model $\\mathcal{M}$, base values $\\mathbf{b}$, allowed space $\\mathcal{X}$, example $(\\mathbf{x}_0,t_0)$, and (estimate of) input probability distribution $p(x)$ (optional).\\\\\n\\STATE 1) Find PPs $\\boldsymbol{\\delta}^{\\textnormal{pos}}$ by solving equation \\ref{eqn_per_pos} and PNs $\\boldsymbol{\\delta}^{\\textnormal{neg}}$ by solving equation \\ref{eqn_per_neg}.\n\n \\STATE 2) Return $\\boldsymbol{\\delta}^{\\textnormal{pos}}$ and $\\boldsymbol{\\delta}^{\\textnormal{neg}}$. \\COMMENT{Explanation: The input $\\mathbf{x}_0$ would still be classified into class $t_0$ even if it were (closer to base values) $\\mathbf{x}_0+\\boldsymbol{\\delta}^{\\textnormal{pos}}$. However, its class would change if it were perturbed (away from original values) by $\\boldsymbol{\\delta}^{\\textnormal{neg}}$, i.e., if it became $\\mathbf{x}_0+\\boldsymbol{\\delta}^{\\textnormal{neg}}$.\n}\n\\end{algorithmic}\n\\end{algorithm}\n\\vspace{-0.5cm}\n\\subsection{Computing Pertinent Positives}\n\\vspace{-0.2cm}\nAssume an example $\\mathbf{x}_0$ has $d$ features each with base values $\\{b_i\\}_{i=1}^d$. Let $\\Delta_{PP}$ denote the space $\\{\\boldsymbol{\\delta}: | \\mathbf{x}_0+\\boldsymbol{\\delta} - \\mathbf{b} | \t\\preceq | \\mathbf{x}_0 - \\mathbf{b} | \\textnormal{~and~} \\mathbf{x}_0 + \\boldsymbol{\\delta} \\in \\mathcal{X} \\}$, where $\\mathbf{b}=[b_1,\\ldots,b_d]$, and $|\\cdot|$ and $\\preceq$ implies element-wise absolute value and inequality, respectively. 
To solve for PP, we propose the following:\n\n\\begin{align}\n\\label{eqn_per_pos}\n\\boldsymbol{\\delta}^{\\textnormal{pos}} \\leftarrow \\argmin_{\\boldsymbol{\\delta} \\in \\Delta_{PP}}~&c \\cdot \\max \\{ \\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i - [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0}, - \\kappa \\}+ \\beta \\|\\mathbf{x}_0 + \\boldsymbol{\\delta} - \\mathbf{b} \\|_1 \\nonumber\\\\ &+ \\|\\mathbf{x}_0 + \\boldsymbol{\\delta} - \\mathbf{b}\\|_2^2-\\gamma p(\\mathbf{x}_0 + \\boldsymbol{\\delta}).\n\\end{align}\n The first term\n is a designed loss function that encourages the modified example $\\mathbf{x}=\\mathbf{x}_0+\\boldsymbol{\\delta}$ relative to the base value vector $\\mathbf{b}$, defined as $\\mathbf{x}-\\mathbf{b}$,\n to be predicted as the same class as the original label $t_0=\\arg \\max_i [\\zeta(\\mathbf{x}_0)]_i$\nThe loss function\nis a hinge-like loss and\nthe term $\\kappa \\geq 0$ controls the gap between $[\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0}$ and the other most probable class. In particular, the loss attains its minimal value when $[\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0}$ is $\\kappa$ larger than $\\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i$. The parameter $c \\geq 0$ is the regularization coefficient associated with the first term\nThe second and third terms in \\eqref{eqn_per_pos} are jointly called the elastic-net regularizer \\cite{zou2005regularization}, which aids in selecting a set of highly relevant features from $\\mathbf{x}-\\mathbf{b}$, and the parameter $\\beta \\geq 0$ controls the sparsity of the vector $\\mathbf{x}-\\mathbf{b}$\n\nOptionally, the input distribution $p(\\mathbf{x})$ also maybe estimated from the data\nwhich could be used to further direct the search so that we produce realistic or high probability $\\mathbf{x}$.\n\n\n\\subsection{Computing Pertinent Negatives}\nAnalogous to PP, for PN let $\\Delta_{PN}$ denote the space $\\{\\boldsymbol{\\delta}: | \\mathbf{x}_0+\\boldsymbol{\\delta} - \\mathbf{b} | \t\\succ | \\mathbf{x}_0 - \\mathbf{b} | \\textnormal{~and~} \\mathbf{x}_0 + \\boldsymbol{\\delta} \\in \\mathcal{X} \\}$. To solve for PN, we propose the following problem formulation:\n\\begin{align}\n\\label{eqn_per_neg}\n\\boldsymbol{\\delta}^{\\textnormal{neg}}\\leftarrow\\argmin_{\\boldsymbol{\\delta} \\in \\Delta_{PN}} ~c \\cdot\\max \\{ [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0} - \\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i, - \\kappa \\}+\\beta \\|\\boldsymbol{\\delta} \\|_1 + \\|\\boldsymbol{\\delta} \\|_2^2- \\gamma p(\\mathbf{x}_0 + \\boldsymbol{\\delta})\n\\end{align}\nIn other words, for PN, we aim to find the least modified changes in $\\boldsymbol{\\delta} \\in \\Delta_{PN}$, evaluated by the elastic-net loss on $\\boldsymbol{\\delta}$,\nsuch that its addition to $\\mathbf{x}_0$ leads to a different prediction from $t_0$, \n\n\\subsection{Method Details}\n\n\nWe now describe the details of how the optimization of the above objectives is implemented along with estimation and modeling of certain key aspects\n\n\\subsubsection{Optimization Procedure}\n\n\n\nHere we first illustrate how FISTA solves for PP and PN, assuming the gradient is available. This is very similar to previous work \\cite{CEM} with the main difference lying in the projection operators $\\Delta_{PN}$ and $\\Delta_{PP}$.\n FISTA is an efficient solver for optimization problems involving $L_1$ regularization. 
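Before describing the solver, note that the only model-dependent quantities in (\\ref{eqn_per_pos}) and (\\ref{eqn_per_neg}) are the hinge-like class-score terms, and these require nothing more than query access to $\\zeta(\\cdot)$. A minimal sketch of the two terms is given below (our own illustration; the wrapper \\texttt{zeta}, which returns the vector of class scores for an input, is a placeholder).\n\\begin{verbatim}\nimport numpy as np\n\ndef hinge_pp(zeta, x0, delta, t0, kappa):\n    # first term of the PP objective: penalize x0 + delta leaving class t0\n    scores = zeta(x0 + delta)\n    other = np.max(np.delete(scores, t0))\n    return max(other - scores[t0], -kappa)\n\ndef hinge_pn(zeta, x0, delta, t0, kappa):\n    # first term of the PN objective: penalize x0 + delta staying in class t0\n    scores = zeta(x0 + delta)\n    other = np.max(np.delete(scores, t0))\n    return max(scores[t0] - other, -kappa)\n\\end{verbatim}\nThe remaining terms in the two objectives (the elastic-net regularizer and the optional density term $p(\\cdot)$) do not involve the black box at all.\n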
Take pertinent negative as an example, let $g (\\boldsymbol{\\delta}) = \\max \\{ [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0} - \\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i, - \\kappa \\} + \\|\\boldsymbol{\\delta}\\|_2^2 - \\gamma p(\\mathbf{x}_0 + \\boldsymbol{\\delta}) $ denote the objective function of (\\ref{eqn_per_neg}) without the $L_1$ regularization term.\nGiven the initial iterate $\\boldsymbol{\\delta}^{(0)}=\\mathbf{0}$,\nprojected FISTA iteratively updates the perturbation $I$ times by\n\\begin{equation}\n\\boldsymbol{\\delta}^{(k+1)}=\\Pi_{\\Delta_{PN}} \\{ S_{\\beta}(\\mathbf{y}^{(k)}-\\alpha_k \\nabla g(\\mathbf{y}^{(k)})) \\};~~~\n \\mathbf{y}^{(k+1)}=\\Pi_{\\Delta_{PN}} \\{ \\boldsymbol{\\delta}^{(k+1)}+\\frac{k}{k+3} (\\boldsymbol{\\delta}^{(k+1)} - \\boldsymbol{\\delta}^{(k)}) \\},\n\\end{equation}\nwhere $\\Pi_{\\Delta_{PN}}$ denotes the vector projection onto the set $\\Delta_{PN}$,\n$\\alpha_k$ is the step size, $\\mathbf{y}^{(k)}$ is a slack variable accounting for momentum acceleration with $\\mathbf{y}^{(0)}=\\boldsymbol{\\delta}^{(0)}$, and $S_{\\beta}: \\mathbb{R}^{p} \\mapsto \\mathbb{R}^{p}$ is an element-wise shrinkage-thresholding function which is 0 if $\\forall i \\in \\{1,\\ldots,d\\}$ $|\\mathbf{z}_i| \\leq \\beta$, else takes the values $\\mathbf{z}_i - \\beta$ or $\\mathbf{z}_i + \\beta$ for $\\mathbf{z}_i > \\beta$ or $\\mathbf{z}_i < -\\beta$ respectively.\nThe final perturbation $\\boldsymbol{\\delta}^{(k^*)}$ for pertinent negative analysis is selected from the set $\\{ \\boldsymbol{\\delta}^{(k)}\\}_{k=1}^I$ such that $f^{\\textnormal{neg}}_{\\kappa}(\\mathbf{x}_0,\\boldsymbol{\\delta}^{(k^*)})=0$ and $k^* = \\arg \\min_{k \\in \\{1,\\ldots, I\\}} \\beta \\| \\boldsymbol{\\delta}\\|_1 + \\| \\boldsymbol{\\delta}\\|_2^2 $.\nA similar projected FISTA approach is applied to PP analysis. \n\n\n\\subsubsection{Gradient Estimation}\nIn the black-box setting, in order to balance the model query complexity and algorithmic convergence rate using zeroth-order optimization, in this paper we use a two-point evaluation based gradient estimator averaged over $q$ different random directions \\cite{duchi2015optimal,liu2017zeroth,liu2018zeroth}. Specifically, given a scalar function $f(\\cdot)$, its gradient at a point $\\mathbf{x} \\in \\mathbb{R}^d$ is estimated by \n\\begin{align}\n\\label{eqn_grad_est}\n\\widehat{\\nabla} f (\\mathbf{x}) = \\frac{d}{q \\mu} \\sum_{j=1}^q \\frac{f(\\mathbf{x}+\\mu \\mathbf{u}_j) - f(\\mathbf{x})}{\\mu} \\cdot \\mathbf{u}_j,\n\\end{align}\nwhere $\\{\\mathbf{u}_j\\}_{j=1}^q$ is a set of i.i.d. random directions drawn uniformly from a unit sphere, and $\\mu>0$ is a smoothing parameter.\n\nThe estimation error between $\\widehat{\\nabla} f (\\mathbf{x})$ and the true gradient $\\nabla f(\\mathbf{x})$ can be analyzed through a smoothed function $f_\\mu (\\mathbf{x})=\\mathbb{E}_{\\mathbf{u} \\in U_b} [f(\\mathbf{x}+\\mu \\mathbf{u})]$, where $U_b$ is a uniform distribution over the unit Euclidean ball. Assume $f$ is an $L$-smooth function, that is, its gradient $\\nabla f$ is $L$-Lipschitz continuous.\nIt has been shown in \\cite{liu2018zeroth} that $\\widehat{\\nabla} f (\\mathbf{x})$ is an unbiased estimator of the gradient $\\nabla f_\\mu (\\mathbf{x})$, i.e., $\\mathbb{E}_{\\mathbf{u}} [\\widehat{\\nabla} f (\\mathbf{x})]=\\nabla f_{\\mu} (\\mathbf{x})$. 
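A minimal sketch of the estimator in (\\ref{eqn_grad_est}) is given below (our own illustration; the query function \\texttt{f}, the number of directions \\texttt{q} and the smoothing parameter \\texttt{mu} are placeholders, and the $1/\\mu$ factor is applied only once, following \\cite{liu2018zeroth}).\n\\begin{verbatim}\nimport numpy as np\n\ndef grad_estimate(f, x, q=50, mu=0.01, seed=0):\n    # two-point estimate averaged over q random directions on the unit sphere\n    rng = np.random.default_rng(seed)\n    d = x.shape[0]\n    fx = f(x)\n    g = np.zeros(d)\n    for _ in range(q):\n        u = rng.standard_normal(d)\n        u /= np.linalg.norm(u)\n        g += (f(x + mu * u) - fx) / mu * u\n    return d / q * g\n\\end{verbatim}\nIn the experiments reported later, $q=50$ random directions are used at every descent step.\n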
Moreover, using the bounded error between $f$ and $f_\\mu$, one can show that the mean squared error between $\\widehat{\\nabla} f (\\mathbf{x})$ and $\\nabla f (\\mathbf{x})$ is upper bounded by \n\\begin{align}\n\\label{eqn_bound}\n\\mathbb{E}_{\\mathbf{u}} [ \\| \\widehat{\\nabla} f (\\mathbf{x})- \\nabla f (\\mathbf{x})\\|_2^2 ]\\leq O \\left(\\frac{q+d}{q} \\right) \\|\\nabla f (\\mathbf{x})\\|_2^2 + O(\\mu^2 L^2 d^2).\n\\end{align}\n\n\n\n\\subsubsection{Determining Base Values}\nAs mentioned before, ideally, we would want base values as well as allowed ranges or limits for all features be specified by the user. This should in all likelihood provide the most useful explanations. However, this may not always be feasible given the dimensionality of the data and the level of expertise of the user. In such situations we compute base values using our best judgment.\n\nFor real valued features, we set the base value to be the median value of the feature. This possibly is the least interesting value for that feature as well as being robust to outliers\nMedians also make intuitive sense where for sparse features $0$ would rightly be chosen as the base value as opposed to some other value which would be the case for means.\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n \\centering \n \\includegraphics[width=0.5\\textwidth]{FMA.png} \n \\caption{Above we see a categorical feature taking three values A, B and C with frequencies 11, 6 and 1 respectively as indicated on the vertical axis. Our mapping function in equation 11 for FMA maps these frequencies and hence the categorical values to 0, 0.5 and 1 in the $[0,1]$ interval. The red horizontal lines depict the function $h(.)$ showcasing the range of values that map back to either A, B or C.}\n \\label{FMA}\n\\vspace{-0.75cm}\n\\end{wrapfigure}\nFor categorical features, we set the base value to be the mode for that feature. Here we use the fact that the mode is the least informative value for a categorical feature \\cite{inftheory} and rarer values are likely to be more interesting to the user. For example in a dataset containing health records most people will probably not have cancer and so having cancer is something that should stand out as it indicates a state away from the norm. Such states or behaviors we believe carry information that is more likely to surprise the user and draw attention, which could be a prelude to further actions.\n\n\\subsubsection{Modeling Categorical Features}\nGiven that categorical features do not impose an explicit ordering like real features do along with the fact that only the observed values have semantic meaning, there is a need to be model them differently when obtaining explanations. We now present a strategy that accomplishes this in an efficient manner compared to one-hot encoding where one could have an explosion in the feature space if there are many categorical features with each having many possible discrete values. Both these strategies are described next\n\n\n\n \n \n \n\n\n\\noindent{\\textbf{Frequency Map Approach (FMA):}} In this approach we want to directly leverage the optimization procedure described above where we need to define an ordered set\/interval in which to find the perturbations $\\delta$ for both PPs and PNs.\n\n\\noindent\\textit{Mapping:} As described above for categorical features we set the base value to be the mode of the values that occur. Given this a natural ordering can be created based on the frequencies of the different values. 
Thus, the least frequent value would be considered to be the farthest from the base value. Based on this we can map the $k$ discrete values $v_{i1}, ..., v_{ik}$ of the $i^{th}$ feature occurring with frequencies\/counts $c_{i1}, ..., c_{ik}$ to real values ${r}_{i1}, ..., {r}_{ik}$ respectively in the $[0,1]$ interval using the following mapping for any $j\\in \\{1, ..., k\\}$:\n $ {r}_{ij}=\\frac{c_{\\text{max}}-c_{ij}}{c_{\\text{max}}-1}$\nwhere $c_{\\text{max}}=\\max\\limits_{j\\in \\{1, ..., k\\}} c_{ij}$. This maps the discrete value with the highest frequency to 0 making it the base value, while all other values with decreasing frequencies lie monotonically away from 0. Every candidate value has to have a frequency of at least 1 and so every discrete value gets mapped to the $[0,1]$ interval. We divide by $c_{\\text{max}}-1$, rather than $c_{\\text{max}}-c_{\\text{min}}$, where $c_{\\text{min}}=\\min\\limits_{j\\in \\{1, ..., k\\}} c_{ij}$ since, we do not want values that occur with almost equal frequency to be pushed to the edges of the interval as based on our modeling they are of similar interest.\n\n\\noindent\\textit{Method and Interpretation:} Based on the equation for $r_{ij}$, we run our algorithm for categorical features in the interval $[0,1]$, where every time we query the model we round the $\\mathbf{x}_0+\\boldsymbol{\\delta}$ to the closest $r_{ij}$ so that a \\emph{valid} categorical value $v_{ij}$ can be mapped back to and sent as part of the query input.\n\nThe question now is what are we exactly doing in the mathematical sense. It turns out that rather than optimizing $f^{\\textnormal{neg}}_{\\kappa}(\\mathbf{x}_0,\\boldsymbol{\\delta})=\\max \\{ [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0} - \\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i, - \\kappa \\}$ or $f^{\\textnormal{pos}}_{\\kappa}(\\mathbf{x}_0,\\boldsymbol{\\delta})=\\max \\{ \\max_{i \\neq t_0} [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_i - [\\zeta(\\mathbf{x}_0+\\boldsymbol{\\delta})]_{t_0}, - \\kappa \\}$, we are optimizing $f^{\\textnormal{neg}}_{\\kappa}(h(\\mathbf{x}_0,\\boldsymbol{\\delta}))$ or $f^{\\textnormal{pos}}_{\\kappa}(h(\\mathbf{x}_0,\\boldsymbol{\\delta}))$ respectively, where $h(.)$ is the identity map for real features,\nbut a step function defined over the $[0,1]$ interval for categorical features. Let $h_i(.)$ denote the application of the function $h(.)$ to the categorical feature $i$. If $\\mathbf{x} = \\mathbf{x}_0+\\boldsymbol{\\delta}$ and $\\mathbf{x}_i$ denotes the value of the feature in the mapped $[0,1]$ interval then,\n $h_i(\\mathbf{x}_0,\\boldsymbol{\\delta}) = v_{ij}$, if $|\\mathbf{x}_i-r_{ij}| \\le |\\mathbf{x}_i-r_{im}|$ $\\forall m\\in\\{1, ..., k\\}$ and $m\\neq j$\nwhere $|.|$ denotes absolute value.\nAn example function $h(.)$ is depicted in figure \\ref{FMA}, where we see how real values are mapped back to valid categorical values by rounding to the closest $r_{ij}$.\n\n\\noindent\\textbf{Simplex Sampling Approach (SSA):} In this method of handling categorical variables, we will assume that a one-hot encoding of the input. Let $\\mathbf{x}=[\\mathbf{x}_C~\\mathbf{x}_R]$ be the input feature vector where $\\mathbf{x}_C$ denotes the categorical part while $\\mathbf{x}_R$ denotes the set of real features. Let there be $C$ categorical features in total. 
Then for all $c \\in [1:C], x_c \\in [1:d_c]$ where $x_c$ is the $c$-th categorical variable and it takes one of $d_c$ values.\n Note that we imply no ordering amongst the $d_c$ values. Generically they are assumed to have distinct integer values from $1$ till $d_c$. \nWe assume that input $\\mathbf{x}$ is processed into $\\mathbf{\\tilde{x}} = [\\mathbf{\\tilde{x}}_C ~ \\mathbf{\\tilde{x}}_R] $ where $\\mathbf{\\tilde{x}}_R = \\mathbf{x}_R$ while $\\mathbf{\\tilde{x}}_C \\in \\mathbb{R}^{1 \\times \\prod_{c \\in C} d_c }$. Each component $\\mathbf{\\tilde{x}}_c \\in \\mathbb{R}^{1 \\times d_c} $ is set to $\\mathbf{e}_i$, the canonical unit vector with $1$ in the $i$-th coordinate, if $x_c$ takes the value $i$. \n\n\\textit{Interpretation:} Now, we provide an interpretation when every categorical component $c$ lies in the $d_c$ dimensional simplex, i.e. when $\\mathbf{\\tilde{x}}_c \\in \\Delta_{d_c} $. Here, $\\Delta_N$ denotes the $N$-dimensional simplex. The actual function can be evaluated only on the inputs where each categorical component takes values from one of the corner points on the simplex, namely $\\mathbf{e}_i, ~ i \\in [1:d_c]$. Therefore, we interpolate the function when $\\mathbf{\\tilde{x}}_c$ is assigned a real vector in the simplex. \n\nLet $f(\\cdot)$ capture the soft output of the classifier when the one-hot encoded categorical variables take the values at the corner points of their respective simplices. Now, we extend the definition of $f$ as follows:\n\n$f([\\mathbf{\\tilde{x}}_C~\\mathbf{x}_R]) = \n\\mathbb{E}_{\\mathbf{e}_{i_c} \\sim \\mathbf{\\tilde{x}}_c,~\\forall c \\in [1:C] } \\left[ f(\\mathbf{e}_{i_1}, \\ldots \\mathbf{e}_{i_c}, \\ldots \\mathbf{e}_{i_{C}}, \\mathbf{x}_R)\\right].$\n\nEssentially, we sample the $c$-th unit vector from the distribution represented by $\\mathbf{\\tilde{x}_c} \\in \\Delta_{d_c}$ on the simplex independent of other categorical variables. The function value is the expected value of the functional evaluated on unit vectors obtained from this product distribution along with the fixed real coordinates $\\mathbf{x}_R$.\n\nWhen we perform the gradient descent as a part of algorithm \\ref{cem}, we actually do a projected gradient descent for $\\mathbf{\\tilde{x}}_C $ in the product of simplices $\\Delta_{d_1} \\times \\ldots \\Delta_{d_c}$. We cannot evaluate the function exactly, hence we can average over a certain number of samples drawn from the product distribution for every function evaluation on a candidate $\\mathbf{\\tilde{x}}$.\\\\\n\n\\noindent\\textbf{FMA vs SSA Tradeoffs:} As such the SSA strategy has stronger theoretical grounding, however from a practical standpoint it requires a lot of additional averaging through sampling for every function evaluation along with repeated projections to the simplices defined for every categorical feature during gradient descent to optimize the objective in Algorithm \\ref{cem}. \nFMA can also take more general format of inputs as they don't need to be one-hot-encoded which guards against further explosion of the feature space which could potentially result from categorical features having many distinct values.\n\n\\section{Experiments}\nWe now empirically evaluate our approach on 5 public datasets covering diverse domains. The datasets along with their domains are as follows. Finance: German Credit \\cite{uci} and FICO \\cite{FICO}, Astronomy: Digital Sky Survey \\cite{kaggleskysurvey}, Health care: Vertebral Column \\cite{uci} and Neuroscience: Olfaction \\cite{olfs}. 
A summary of the datasets is given in table \\ref{tab:data}. All the datasets except olfaction have class labels. However, as done in previous studies \\cite{tip}, we use \\emph{Pleasantness} as the target which is binarized using a threshold of 50. That is molecules with a rating $>$ 50 (on a 100 point scale) are categorized as being pleasant to smell, while the rest are deemed as unpleasant.\n\n\\begin{wraptable}{r}{0.6\\textwidth}\n\\vspace{-0.5cm}\n\\centering\n\\caption{Dataset characteristics, where $N$ denotes dataset size and $d$ is the dimensionality.}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\small{Dataset} & \\small{$N$} & \\small{$d$} & \\small{\\# of} & \\small{Domain} \\\\\n && & \\small{Classes}& \\\\\n \\hline\n \\small{German Credit} & \\small{1000} & \\small{20} & \\small{2} & \\small{Finance} \\\\\n \\hline\n \\small{FICO} & \\small{10459} & \\small{24} & \\small{2} & \\small{Finance} \\\\\n \\hline\n \\small{Sky Survey} & \\small{10000} & \\small{17} & \\small{3} & \\small{Astronomy} \\\\\n \\hline\n \\small{Vertibral Column} & \\small{310} & \\small{6} & \\small{3} & \\small{Health care} \\\\\n \\hline\n\\small{Olfaction} & \\small{476} & \\small{4869} & \\small{2} & \\small{Neuroscience} \\\\\n \\hline\n \\end{tabular}\n \\label{tab:data}\n\\vspace{-0.25cm}\n\\end{wraptable}\nWe test the methods for two (black box) classification models namely, CART decision trees (depth $\\le$ 5) and random forest (size 100). In all cases we use a random 75\\% of the dataset for training and a remaining 25\\% as test. We repeat this 10 times and average the results. The explanations are generated for the test points and correspondingly evaluated. Given the generality and efficiency of the FMA approach as stated above, we use that in our implementation of MACEM. For all datasets and all features the ranges were set based on the maximum and minimum values seen in the datasets. The base values for the German Credit, Sky Survey, Vertibral Column dataset and FICO were set to median for the real features and mode for the categorical ones. For FICO the special values (viz. -9, -7, -8) were made 0. The base values for all features in the olfaction dataset were set at 0. We do not learn the underlying distribution so $\\gamma$ is set to 0 and the other parameters are found using cross-validation. We generate 50 random samples for estimating gradients at each step and run our search for 100 steps. We compare with LIME (\\url{https:\/\/github.com\/marcotcr\/lime}) which is arguably the most popular method to generate local explanations in a model agnostic fashion for (especially non-neural network) classifiers trained on structured data. We quantitatively evaluate our results on all 5 datasets as described next. We also provide qualitative evaluations for German Credit and the Olfaction dataset based on studying specific cases along with obtaining expert feedback.\n\n \n \n \n \n \n \n \n\n \n \n\n\n\n\n\n\\vspace{-0.2mm}\n\\subsection{Quantitative Evaluation Metrics}\n\\vspace{-0.2mm}\nWe define three quantitative metrics: Correct Classification Percentage (CCP), Correct Feature Ranking (CFR) and Correct Feature Importance Percentage (CFIP) where we evaluate feature importances as well as how accurate the explanations are in predicting the same or different classes for PPs and PNs respectively.\n\nFor LIME we create proxies for PPs and PNs that are intuitively similar to ours for a fair comparison. 
The PP proxy for LIME is created by replacing all the negatively correlated features by base values in the original example, while maintaining the positively correlated feature values. For PNs, we create a proxy by setting all the positively correlated feature values to base values, while maintaining the negatively correlated features.\nFor our PPs, we compute feature importance by taking the absolute difference of each feature value to the corresponding base value and dividing by the features standard deviation. For PNs, we take the absolute value of the change of the perturbed feature and divide again by standard deviation. For LIME, we use the absolute value of the coefficients of the PP\/PN proxies.\n\n\\noindent\\textbf{Correct Classification Percentage (CCP):} For this metric we compute what percentage of our PPs actually lie in the same (predicted) class as the original example and for PNs what percentage of the new examples lie in a different class than the predicted one of the original example. Formally, if $(x_1,t_1), ..., (x_n,t_n)$ denote $n$ examples with $t_i$ being the predicted class label for $x_i$ and $PP_i$, $PN_i$ being the respective pertinent positives and negatives for it, then if $\\lambda(.)$ denotes an indicator function which is 1 if the condition inside it is true and 0 otherwise, we have (higher values better)\n\\begin{align}\n\\label{ccp}\n CCP_{PP}=\\sum_i\\frac{\\lambda\\left(\\max [\\zeta(PP_i)]=t_i\\right)}{n}\\times 100, ~~~\n CCP_{PN}=\\sum_i\\frac{\\lambda\\left(\\max [\\zeta(PN_i)]\\neq t_i\\right)}{n}\\times 100\n\\end{align}\n\n\\noindent\\textbf{Correct Feature Ranking (CFR):} For this metric we want to evaluate how good a particular explanation method's feature ranking is. For PPs\/PNs we independently set the top-k features to base\/original values and in each case note the class probability of classifying that input into the black box models predicted class. We then rank the features based on these probabilities in descending order and denote this ranking by $r^*_{PP}$ (or $r^*_{PN}$). Our CFR metric is then a correlation between the explanation models ranking of the features ($r_{PP}$ or $r_{PN}$) and this ranking. Higher the CFR better the method. If $\\rho(.,.)$ indicates the correlation between two lists then,\n\\begin{equation}\nCFR_{PP} = \\rho(r_{PP},r^*_{PP}),~~~CFR_{PN} = \\rho(r_{PN},r^*_{PN})\n\\end{equation}\nThe intuition is that most important features if eliminated should produce the most drop in predicting the input in the desired class. This is similar in spirit to ablation studies for images \\cite{Ormas} or the faithfulness metric proposed in \\cite{selfEx}.\n\n\\noindent\\textbf{Correct Feature Importance Percentage (CFIP):} Here we want to validate if the features that are actually important for an ideal PP and PN are the ones we identify.\nOf course, we do not know the globally optimal PP or PN for each input. So we use proxies of the ideal to compare with, which are training inputs that are closest to the base values and satisfy the PP\/PN criteria as defined before.\nIf our (correct) PPs and PNs are closer to the base values than these proxies, then we use them as the golden standard. We compute the CFIP score as follows: Let $f^*_{PP}(x)$ and $f^*_{PN}(x)$ denote the set of features in the tree corresponding to the ideal (proxy) PPs and PNs for an input $\\mathbf{x}$ as described before. 
Let $f_{PP}(x)$ and $f_{PN}(x)$ denote the top-k important features (assuming $k$ is tree path length) based on our method or LIME then,\n\\begin{align}\n\\label{cfi}\n CFIP_{PP}=\\frac{100}{n}\\sum_i\\frac{|f_{PP}(x_i)\\cap f^*_{PP}(x_i)|_{\\text{card}}}{k}, ~~~\n CFIP_{PN}=\\frac{100}{n}\\sum_i\\frac{|f_{PN}(x_i)\\cap f^*_{PN}(x_i)|_{\\text{card}}}{k}\n\\end{align}\nwhere, $|.|_{\\text{card}}$ denotes cardinality of the set. Here too higher values for both $CFIP_{PP}$ and $CFIP_{PN}$ are desirable.\n\\vspace{-0.3cm}\n\\begin{table*}[htbp]\n\\centering\n\\caption{Below we see the quantitative results for CCP metric. The statistically significant best results are presented in bold based on paired t-test.}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\small{Dataset}} & \\multicolumn{2}{c|}{\\small{$CCP_{PP}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CCP_{PN}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CCP_{PP}$ (Forest)}} & \\multicolumn{2}{c|}{\\small{$CCP_{PN}$ (Forest)}} \\\\\n & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME}\\\\\n \\hline\n \\small{German Credit} & \\small{\\textbf{100}} & \\small{96.0} & \\small{\\textbf{100}} & \\small{10.2} & \\small{\\textbf{100}} & \\small{89.6} & \\small{\\textbf{100}} & \\small{9.0} \\\\\n \\hline\n \\small{FICO} & \\small{\\textbf{100}} & \\small{91.45} & \\small{\\textbf{100}}& \\small{49.31} & \\small{\\textbf{100}} & \\small{46.47} & \\small{\\textbf{100}} & \\small{40.82}\\\\\n \\hline\n \\small{Sky Survey}& \\small{\\textbf{100}} & \\small{58.48}& \\small{\\textbf{100}}& \\small{25.01} & \\small{\\textbf{100}}& \\small{47.32} & \\small{\\textbf{100}} & \\small{39.78}\\\\\n \\hline\n \\small{Vertibral Column} & \\small{\\textbf{100}} & \\small{33.33} & \\small{\\textbf{100}}& \\small{44.87} & \\small{\\textbf{100}} & \\small{91.02}& \\small{\\textbf{100}}& \\small{19.23}\\\\\n \\hline\n\\small{Olfaction} & \\small{\\textbf{100}} & \\small{79.22} & \\small{\\textbf{100}}& \\small{13.22} & \\small{\\textbf{100}} & \\small{74.19}& \\small{\\textbf{100}}& \\small{19.21}\\\\\n \\hline\n \\end{tabular}\n \\label{tab:quant}\n\\vspace{-0.1cm}\n\\end{table*}\n\\vspace{-0.3cm}\n\\begin{table*}[htbp]\n\\centering\n\\caption{Below we see the quantitative results for CFR metric. 
The statistically significant best results are presented in bold based on paired t-test.}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\small{Dataset}} & \\multicolumn{2}{c|}{\\small{$CFR_{PP}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CFR_{PN}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CFR_{PP}$ (Forest)}} & \\multicolumn{2}{c|}{\\small{$CFR_{PN}$ (Forest)}} \\\\\n & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME}\\\\\n \\hline\n \\small{German Credit} & \\small{\\textbf{0.68}} & \\small{0.64} & \\small{\\textbf{0.70}} & \\small{0.68} & \\small{\\textbf{0.43}} & \\small{0.35} & \\small{\\textbf{0.48}} & \\small{\\textbf{0.46}} \\\\\n \\hline\n \\small{FICO} & \\small{\\textbf{0.68}} & \\small{0.49} & \\small{\\textbf{0.74}}& \\small{0.70} & \\small{\\textbf{0.29}} & \\small{0.09} & \\small{\\textbf{0.54}} & \\small{0.31}\\\\\n \\hline\n \\small{Sky Survey}& \\small{\\textbf{0.55}} & \\small{0.48}& \\small{\\textbf{0.81}}& \\small{0.53} & \\small{\\textbf{0.42}}& \\small{0.35} & \\small{\\textbf{0.54}} & \\small{0.36}\\\\\n \\hline\n \\small{Vertibral Column} & \\small{\\textbf{0.63}} & \\small{\\textbf{0.64}} & \\small{\\textbf{0.75}}& \\small{0.66} & \\small{\\textbf{0.20}} & \\small{-0.02}& \\small{\\textbf{0.33}}& \\small{0.23}\\\\\n \\hline\n\\small{Olfaction} & \\small{\\textbf{0.71}} & \\small{0.58} & \\small{\\textbf{0.78}}& \\small{0.59} & \\small{\\textbf{0.73}} & \\small{0.62}& \\small{\\textbf{0.82}}& \\small{0.65}\\\\\n \\hline\n \\end{tabular}\n \\label{tab:quant2}\n\\vspace{-0.1cm}\n\\end{table*}\n\n\\begin{table*}[htbp]\n\\centering\n\\caption{Below we see the quantitative results for CFIP metric. The statistically significant best results are presented in bold based on paired t-test.}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirow{2}{*}{\\small{Dataset}} & \\multicolumn{2}{c|}{\\small{$CFIP_{PP}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CFIP_{PN}$ (Tree)}} & \\multicolumn{2}{c|}{\\small{$CFIP_{PP}$ (Forest)}} & \\multicolumn{2}{c|}{\\small{$CFIP_{PN}$ (Forest)}} \\\\\n & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME} & \\small{MACEM} & \\small{LIME}\\\\\n \\hline\n \\small{German Credit} & \\small{\\textbf{83.73}} & \\small{68.98} & \\small{\\textbf{62.32}} & \\small{48.28} & \\small{\\textbf{71.26}} & \\small{30.65} & \\small{\\textbf{30.13}} & \\small{13.98} \\\\\n \\hline\n \\small{FICO} & \\small{\\textbf{79.13}} & \\small{78.96} & \\small{\\textbf{82.92}}& \\small{58.76} & \\small{\\textbf{89.92}} & \\small{33.56} & \\small{\\textbf{54.98}} & \\small{33.25}\\\\\n \\hline\n \\small{Sky Survey}& \\small{\\textbf{98.05}} & \\small{76.55}& \\small{\\textbf{93.52}}& \\small{68.55} & \\small{\\textbf{78.96}}& \\small{\\textbf{80.18}} & \\small{\\textbf{89.54}} & \\small{\\textbf{88.26}}\\\\\n \\hline\n \\small{Vertibral Column} & \\small{\\textbf{85.95}} & \\small{68.24} & \\small{\\textbf{77.38}}& \\small{64.98} & \\small{\\textbf{94.23}} & \\small{86.12}& \\small{\\textbf{100.00}}& \\small{89.45}\\\\\n \\hline\n\\small{Olfaction} & \\small{\\textbf{87.19}} & \\small{83.92} & \\small{\\textbf{72.82}}& \\small{47.96} & \\small{\\textbf{82.23}} & \\small{77.56}& \\small{\\textbf{74.72}}& \\small{52.28}\\\\\n \\hline\n \\end{tabular}\n \\label{tab:quant3}\n\\vspace{-0.1cm}\n\\end{table*}\n\n\n\\vspace{-0.2cm}\n\\subsection{Quantitative Evaluation}\n\\vspace{-0.1cm}\nFirst looking at Table \\ref{tab:quant} we 
observe that our method MACEM as designed, on all datasets produces PPs and PNs that lie in the same or different class as the original input respectively. This is indicated by metrics $CCP_{PP}$ and $CCP_{PN}$ where we are 100\\% accurate. This observation is reassuring as it means that whenever we return a PP or PN for an input it is valid. Although LIME has reasonable performance for PPs (much worse for PNs) no such promise can be made.\n\\begin{wrapfigure}{r}{0.49\\textwidth}\n \\centering \n \\includegraphics[width=0.49\\textwidth]{GermanCreditExample.png}\\\\ \n \\includegraphics[width=0.49\\textwidth]{OlfactionExample.png}\n \\vspace{-0.5cm}\n \\caption{Above we compare the actual tree path (blue arrows) and the corresponding PP\/PN important features for an input in the a) German Credit dataset and b) Olfaction dataset. The PP columns (center) and the PN columns (right) list the top 3 features highlighted by MACEM for the corresponding PPs and PNs. The PP feature importance reduces top to bottom, while the PN feature importance reduces bottom to top. The red arrows indicate PP and PN features that match the features in the tree path for the respective inputs. }\n \\label{GCeg}\n\\vspace{-1cm}\n\\end{wrapfigure}\nLooking at Table \\ref{tab:quant2} we observe that the feature ranking obtained by our method seems to be more indicative of the true feature importances than LIME, as it correlates better with the ablation based ranking in almost all cases as conveyed by the higher CFR score. We are the best performer in most cases with high correlation values depicting that we accurately capture (relative) feature importances.\n\nWe now look at how efficient the different methods are in picking the correct features for PPs and PNs. This is reflected in the $CFIP_{PP}$ and $CFIP_{PN}$ metrics in Table \\ref{tab:quant3}. We are still better than LIME in both cases. Our PP results are generally better than PN. A reason for this maybe that when we run our algorithm (based on FISTA) most initial movements, especially for inputs in the interior of the decision boundary, will result in not changing the class, so the candidate PNs to choose from eventually (where we output the sparsest) will be much smaller than the candidate PPs for many inputs. This leads to a much richer choice from which to choose the (sparsest) PP resulting in them being more accurate\n\n\\vspace{-0.2cm}\n\\subsection{Qualitative Evaluation}\n\\vspace{-0.1cm}\n\nWe now look at specific inputs from two of the datasets and garner expert feedback relating to the insights conveyed by the models and our explanations for them.\n\nThe smallest change to alter the class will probably occur by changing values of the lower level features in the tree path, which is what we observe. Hence, both PPs and PNs seem to capture complementary information and together seem to be important in providing a holistic explanation.\n\\noindent\\textbf{Comparing with Tree Paths:} In figure \\ref{GCeg}, we see two examples the left from German Credit dataset and the right from Olfaction dataset. In both these examples we compare the top features highlighted by our method with actual tree paths for those inputs. In figure \\ref{GCeg}a, we see that our top-2 PP features coincide with the top-2 features in the tree path. While the bottom-2 features in the tree path correspond to the most and $3^{rd}$ most important PN feature. A qualitatively similar result is seen in figure \\ref{GCeg}b for the olfaction dataset. 
We observe this for over 80\\% inputs in these datasets. What this suggests is that our PP features tend to capture more global or higher level features, while our PNs capture more detailed information. This makes intuitive sense since for PPs we are searching for the sparsest input that lies in a particular class and although the input (absolute) feature values have to upper bound the PPs feature values, this is a reasonably weak constraint leading to many inputs in a class having the same PPs, thus giving it a global feel. On the other hand, a PN is very much input dependent as we are searching for minimal (addition only) perturbations that will change the class. \n\n\\noindent\\textbf{Human Expert Evaluations:}\nWe asked an expert in Finance and another in Neuroscience to validate our decision trees (so that their intuition is consistent with the model) and explanations. Each expert was given 50 randomly chosen explanations from the test set and they were blinded to which method produced these explanations. They were then given the binary task of categorizing if each explanation was reasonable or not.\n\nThe expert in Finance considered the features selected by our decision tree to be predictive. He felt that 41 of our PPs made sense, while around 34 PPs from LIME were reasonable. Regarding PNs, he thought 39 of ours would really constitute a class change, while 15 from LIME had any such indication. An expert from Neuroscience who works on olfaction also considered our tree model to be reasonable as the features it selected from the Dragon input features \\cite{olfs} had relations to molecular size, paclitaxel and citronellyl phenylacetate, which are indicative of pleasantness. In this case, 43 of our PPs and 34 of LIMEs were reasonable. Again there was a big gap in PN quality, where he considered 40 of our PNs to be reasonable versus 18 of LIMEs. For random forests the numbers were qualitatively similar. For the finance dataset, 44 of ours and 27 of LIMEs PPs made sense to the expert. While 38 of our PNs and 19 of LIMEs were reasonable. For the olfaction dataset, 41 of our PPs and 32 of LIMEs were reasonable, while 39 of our PNs and 20 of LIMEs made sense.\n\n\\vspace{-0.4cm}\n\\section{Discussion}\n\\vspace{-0.2cm}\nIn this paper we provided a model agnostic black box contrastive explanation method specifically tailored for structured data that is able to handle real as well as categorical features in a meaningful and scalable manner. We saw that our method is quantitatively as well as qualitatively superior to LIME and provides more complete explanations.\n\nIn the future, we would like to extend our approach here to be applicable to also unstructured data. In a certain sense, the current approach could be applied to such data if it is already vectorized or the text is embedded in a feature space. In such cases, although minimally changing a sentence to another lying in a different class would be hard, one could identify important words or phrases which a language model or human could use as the basis for creating a valid sentence\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Voronoi diagram is one of the most useful representations for tessellations in computational geometry. The fundamental concepts, generalizations, and applications have been studied widely, e.g., \\cite{BAurenhammer,BOkabe}. Here we are concerned with weighted Voronoi diagrams called Laguerre Voronoi diagrams, which are also known as a power diagrams. 
This class of Voronoi diagrams are important because the boundaries are linear instead of curved.\n\nThe Laguerre Voronoi diagram was introduced by \\cite{Imi,Aurenhammer}. Briefly, for a set $S$ of $n$ spheres $s_i=(\\textbf{x}_i, r_i)$ in $\\mathbb{R}^d$, where $\\textbf{x}_i$ is the position of the center and $r_i$ is the radius, which is equivalent to the generator weight, the Laguerre distance of $\\textbf{x}\\in \\mathbb{R}^d$ from $s_i$ is \n\\begin{equation}\\label{LagDisOr}\nd_{\\text{L}}(\\textbf{x},s_i)=\\|\\textbf{x}-\\textbf{x}_i\\|^2-r_i^2.\n\\end{equation}\nIn \\cite{Aurenhammer2,Aurenhammer3} the correspondence between the Laguerre Voronoi diagram in $\\mathbb{R}^d$ and a polyhedron in $\\mathbb{R}^{d+1}$ was investigated. Then using the correspondence, an algorithm for constructing the Laguerre Voronoi diagram was presented in \\cite{Imi,Aurenhammer}, and a robust version was presented in \\cite{Sugihara1}. The Laguerre Voronoi diagram can be extended for tessellations on a sphere as defined in \\cite{Sugihara2}. Applications of the spherical Laguerre Voronoi diagram were also studied in \\cite{Mach,Chaidee3} for modeling objects with spheres.\n\nIt is sometimes necessary to consider the inverse of the above problem: the determination of whether or not a given tessellation is the Voronoi diagram. If it is, we can recover the corresponding generators. This problem is known as the Voronoi recognition problem, and was studied in \\cite{BLoeb,JAsh,JAurenhammer,JHartvigsen,JSchoenberg,JAloupis}. On the other hand, if the tessellation cannot be represented by a Voronoi diagram, we approximate it with the best fitting Voronoi diagram; this is called the Voronoi approximation problem. Examples of the Voronoi approximation problems are found in \\cite{JHonda1,JHonda2,JSuzuki,JEvan,Chaidee1}, for unweighted Voronoi diagrams. These approximations have a number of useful applications, for instance, if we have a tessellation that is found in the real world, and we can approximate the tessellation with a Voronoi diagram, we can use this Voronoi diagram as a model for understanding the pattern's formation.\n\nThe inverse problems were widely studied for the case of ordinary Voronoi diagrams, whereas relatively little work has been done for Laguerre diagrams. Duan et al. \\cite{Duan} considered the inverse problem for the planar Laguerre Voronoi diagram, and presented an algorithm for recovering the generators and their weights from a tessellation. Lautensack \\cite{Lau} and Lyckegaard et al. \\cite{Lyckegaard} used Laguerre Voronoi diagrams to study the relation between structures and their physical properties. Recently, Spettl et al. \\cite{Duan2} fitted Laguerre Voronoi diagrams to tomographic image data. In the case of the spherical Laguerre Voronoi diagram, Chaidee and Sugihara \\cite{Chaidee3} provided a framework for approximating the weights of spherical Laguerre Voronoi diagrams when the location of the Voronoi generators are known.\n\nIn this paper, we focus on the spherical Laguerre Voronoi diagram recognition problem. Our goal is to judge whether or not a given spherical tessellation is a spherical Laguerre Voronoi diagram. Remark that in the case of the ordinary Voronoi diagram recognition problem, the generator positions are unique whereas there are many sets of the generating circles generating the same Laguerre Voronoi diagram. With this reason, the Laguerre Voronoi diagram recognition problem is more difficult than the ordinary Voronoi diagram recognition problem. 
By the nonuniqueness property of generating circles, for each spherical Laguerre Voronoi diagram, there is a class of polyhedra whose projections coincide with the spherical Laguerre Voronoi diagram. Therefore, for a given tessellation, if we find these polyhedra, we can judge it is a spherical Laguerre Voronoi diagram, and can recover the generators and their weights. Otherwise, we judge that the given tessellation is not a spherical Laguerre Voronoi diagram.\n\nThis paper is organized as follows. In Section 2, we provide the definitions and theorems which are necessary for our study. The recognition problem is also mathematically defined in this section. In Section 3, we focus on the properties of projective transformations which transform a polyhedron associated with a spherical Laguerre Voronoi diagram to other polyhedra in the same class. In Section 4, we give algorithms for constructing a polyhedron from a given tessellation and for judging whether a given tessellation is a spherical Laguerre Voronoi diagram. Finally, we summarize our research in Section 5, and give suggestions for future work.\n\n\n\\section{Preliminaries}\n\nIn this section, we provide the necessary definitions and theories on the spherical Laguerre Voronoi diagram. Then we state the problem considered in this paper.\n\nWe first present some fundamental definitions from spherical geometry.\n\nLet $U$ be a unit sphere whose center is located at the origin $O(0, 0, 0)$ of a Cartesian coordinate system. \n\nFor two distinct points $p, p_i \\in U$ with position vectors $\\textbf{x}, \\textbf{x}_i$, respectively, the \\textit{geodesic arc} $\\wideparen{e}_{p,p_i}$ of $p, p_i$ is defined as the shortest arc between $p$ and $p_i$ of the great circle passing through $p$ and $p_i$. The \\textit{geodesic arc length}, also called the \\textit{geodesic distance}, is given as follows,\n\\begin{equation}\n\\tilde{d}(p, p_i) = \\arccos(\\textbf{x}^{\\text{T}}\\textbf{x}_i)\\leq \\pi. \\label{GeoDist}\n\\end{equation}\n\nIn this paper, we focus on spherical tessellations. We define the spherical polygon as follows.\n\nLet $(q_1, ..., q_m)$ be a sequence of distinct vertices on the sphere such that the geodesic arcs $\\wideparen{e}_{q_i,q_i+1}$ ($i=1, ..., m$; $q_{m+1}$ is read as $q_1$) do not intersect except at the vertices. The left area enclosed by the collection of these geodesic arcs is called the \\textit{spherical polygon} $Q(q_1, ..., q_m)$. $Q(q_1, ..., q_m)$ is abbreviated as $Q$ hereafter.\n\nThe spherical polygon $Q$ is said to be \\textit{convex} if and only if no geodesic arc joining the two points in $Q$ goes outside of $Q$.\n\n$\\mathcal{T}$ is said to be a \\textit{spherical tessellation} if $\\mathcal{T}$ is a decomposition of $U$ into a countable number of spherical polygons whose interiors are pairwise disjoint. $\\mathcal{T}$ is said to be \\textit{convex} if all of these spherical polygons are convex.\n\nNext, we consider a special case of a spherical tessellation. Let $G=\\{p_1, ..., p_n\\}$ be a set of points on the sphere $U$. Assignment of each point on $U$ to the nearest point in $G$ with respect to the geodesic distance forms a tessellation, which is called the spherical Voronoi diagram on the sphere $U$. Algorithms for constructing the spherical Voronoi diagram were provided in \\cite{Renka,Sugihara2}.\n\nWe can generalize the spherical Voronoi diagram to the spherical Laguerre Voronoi diagram. The necessary definitions and theorems were originally presented in \\cite{Sugihara1,Sugihara2}. 
We briefly introduce those definitions and theorems as follows.\n\nFor a given sphere $U$ and point $p_i \\in G$, the spherical circle $\\tilde{c}_i$ centered at $p_i$ is defined by\n\\begin{equation}\\label{SC}\n\\tilde{c}_i=\\{p\\in U|\\tilde{d}(p_i, p)=r_i\\},\n\\end{equation}\nwhere $0\\leq r_i <\\pi\/2$. $r_i$ is called the radius of the spherical circle $\\tilde{c}_i$.\n\nFollowing \\cite{Sugihara2}, we define the Laguerre proximity, the distance measured from an arbitrary point $p$ on the sphere $U$ to a spherical circle $\\tilde{c}_i$, as follows,\n\\begin{equation}\\label{RealCircle}\n\\tilde{d}_L(p, \\tilde{c}_i)=\\frac{\\cos \\left(\\tilde{d}(p_i, p)\\right)}{\\cos \\left(r_i\\right)}.\n\\end{equation}\nLet $\\tilde{c}_i$ and $\\tilde{c}_j$ be two circles. The Laguerre bisector of $\\tilde{c}_i$ and $\\tilde{c}_j$ is defined by\n\\begin{equation}\nB_L(\\tilde{c}_i, \\tilde{c}_j)=\\{p\\in U|\\tilde{d}_L(p, \\tilde{c}_i)=\\tilde{d}_L(p, \\tilde{c}_j)\\}. \\label{LagBi}\n\\end{equation}\nFor a set $\\tilde{G}=\\{\\tilde{c}_1, ..., \\tilde{c}_n\\}$ of $n$ spherical circles on $U$, we define the region\n\\begin{equation}\n\\tilde{R}(\\tilde{G}, \\tilde{c}_i)=\\{p\\in U|\\tilde{d}_L(p, \\tilde{c}_i)<\\tilde{d}_L(p, \\tilde{c}_j), j\\neq i\\}.\n\\end{equation}\nThe regions $\\tilde{R}(\\tilde{G}, \\tilde{c}_1), ..., \\tilde{R}(\\tilde{G}, \\tilde{c}_n)$, together with their boundaries constitute a tessellation, which is called the spherical Laguerre Voronoi diagram of $U$. Figure \\ref{SLVD} shows an example of the spherical Laguerre Voronoi diagram. This is a stereo diagram. If the right diagram is viewed with the left eye and the left diagram with the right eye, the sphere can be seen; the upper pair represents the front hemisphere and the lower pair represents the rear hemisphere.\n\n\\begin{figure}[h]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.7]{slv50}\n\t\t\\caption{Stereographic images of a spherical Laguerre Voronoi diagram.} \\label{SLVD}\n\t\\end{center}\n\\end{figure}\n\nIn \\cite{Sugihara1,Sugihara2}, an algorithm for constructing the spherical Laguerre Voronoi diagram and its dual, i.e., the spherical Laguerre Delaunay diagram, was proposed. For a spherical circle $\\tilde{c}_i$ of $U$, let $\\pi(\\tilde{c}_i)$ be the plane passing through $\\tilde{c}_i$, and let $H(\\tilde{c}_i)$ be the halfspace bounded by $\\pi(\\tilde{c}_i)$ and containing $O$. Let $\\ell_{i,j}$ be the line of intersection of $\\pi(\\tilde{c}_i)$ and $\\pi(\\tilde{c}_j)$.\n\nThe Laguerre bisector shown in (\\ref{LagBi}) is characterized by the following theorems.\n\n\\begin{theorem}[\\cite{Sugihara2}]\\label{SugiThm1}\n\tThe Laguerre bisector $B_L(\\tilde{c}_i,\\tilde{c}_j)$ is a great circle, and it crosses the geodesic arc connecting the two centers $p_i$ and $p_j$ at right angles.\n\\end{theorem}\n\n\\begin{theorem}[\\cite{Sugihara2}]\\label{SugiThm2}\n\tThe bisector $B_L(\\tilde{c}_i,\\tilde{c}_j)$ is the intersection of $U$ and the plane containing $\\ell_{i,j}$ and $O$.\n\\end{theorem}\n\nFor a set $\\tilde{G}$ of spherical circles, the spherical Laguerre Voronoi diagram is constructed by the following process. For circles $\\tilde{c}_i$, we construct the planes $\\pi(\\tilde{c}_i)$ and the half spaces $H(\\tilde{c}_i)$, and the intersection of all halfspaces. We finally project the edge of the resulting polyhedron onto $U$, with the center of the projection at $O$. 
Then we have the spherical Laguerre Voronoi diagram.\n\nThroughout this paper, the term \\textit{spherical polygon} is abbreviated as \\textit{polygon}. Let $\\mathcal{T}$ be the given convex spherical tessellation on the unit sphere $U$ having $n$ polygons. We assume that all vertices of $\\mathcal{T}$ are of degree 3.\n\nThe main concern of this paper is to judge whether or not a given tessellation $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram.\n\n\\section{Polyhedron Transformation}\n\nFrom the definition and theorems of the spherical Laguerre Voronoi diagram in the previous section, the following proposition state the correspondence between the spherical Laguerre Voronoi diagram and a polyhedron.\n\\begin{proposition}\\label{twoWay}\n\t$\\mathcal{L}$ is a spherical Laguerre Voronoi diagram if and only if there is a convex polyhedron $\\mathcal{P}$ containing the center of the sphere whose central projection coincides with $\\mathcal{L}$.\n\\end{proposition}\n\\begin{proof}\n\tWe firstly prove the necessity of the condition. Let $\\mathcal{L}$ be a spherical Laguerre Voronoi diagram. Hence, the generating circles exist. Therefore, we get the intersection of halfspaces with boundary planes passing through the spherical circles including the center of the sphere. The projection of this polyhedron onto $U$ coincides with $\\mathcal{L}$ because of Theorem \\ref{SugiThm2}.\n\t\n\tInversely, let $\\mathcal{L}$ be a given spherical tessellation and $\\mathcal{P}$ be a convex polyhedron containing the center of the sphere such that its central projection coincides with $\\mathcal{L}$. If there exists a plane $P_i$ such that $P_i$ does not intersect $U$, shrink the polyhedron $\\mathcal{P}$ so that $P_i$ intersects $U$ for all $i$. Let $\\tilde{c}_i$ be the intersection of the plane containing the face $P_i$ with $U$ for all $i$. Then, by Theorem \\ref{SugiThm2}, $\\mathcal{L}$ is the spherical Laguerre Voronoi diagram for $\\tilde{c}_i$'s as the generators.\n\t\n\\end{proof}\n\nTherefore, for each spherical Laguerre Voronoi diagram $\\mathcal{L}$, there is a class of polyhedra whose central projections coincide with $\\mathcal{L}$. To solve the spherical Laguerre Voronoi recognition problem, we will construct an algorithm to find those polyhedra.\n\nTo find these polyhedra, we study the transformation of the polyhedron that preserves the projection onto the sphere $U$. Let $P^3(\\mathbb{R})$ be the three-dimensional projective space. The following properties are the requirements for the transformation.\n\n\\begin{definition}(Projection Preservation Property)\\label{PPP}\n\tLet $f$ be a transformation from $P^3(\\mathbb{R})$ to $P^3(\\mathbb{R})$. $f$ is said to be the \\textit{projection preserving mapping with respect to the origin $O$} if $f$ satisfies the following properties:\n\t\\begin{enumerate}\n\t\t\\item $f(O)=O$;\n\t\t\\item For any point $v\\in P^3(\\mathbb{R})$, $v$ and $f(v)$ are on the same line passing through $O$.\n\t\\end{enumerate}\n\\end{definition}\n\nLet $\\textbf{v}_a=(t_a, x_a, y_a, z_a) \\in P^3(\\mathbb{R})$ be a homogeneous coordinate representation of a vertex of the polyhedron $\\mathcal{P}$ in the projective space. 
We define a map $f:P^3(\\mathbb{R})\\rightarrow P^3(\\mathbb{R})$ by\n\\begin{equation}\\label{map}f(\\textbf{v}_a)=\\begin{pmatrix}\n\\alpha & \\beta & \\gamma & \\delta \\\\ \n0 & \\eta & 0 & 0 \\\\ \n0 & 0 & \\eta & 0 \\\\ \n0 & 0 & 0 & \\eta\n\\end{pmatrix}\\textbf{v}_a \\end{equation}\nwhere $\\alpha, \\beta, \\gamma, \\delta, \\eta \\in \\mathbb{R}$ and $\\alpha\\neq 0, \\eta\\neq 0$.\n\n\\begin{theorem}\\label{transMapThm}\n\tThe mapping $f$ defined by equation (\\ref{map}) is a projection preserving mapping.\n\\end{theorem}\n\\begin{proof}\n\t\n\tWe now prove that the map $f$ satisfies the conditions in Definition \\ref{PPP}. It is easy to verify that $f(O)=O$. It remains to be shown that $f(\\textbf{v}_a)=\\textbf{v}_a'$, where $\\textbf{v}_a'$ lies on the line passing through the origin $L_a$ of $\\textbf{v}_a$.\n\t\n\tFor $\\textbf{v}_a=(t_a, x_a, y_a, z_a) \\in P^3(\\mathbb{R})$, we have\n\t\\begin{equation}\n\tf(\\textbf{v}_a)=\\begin{pmatrix}\n\t\\alpha & \\beta & \\gamma & \\delta \\\\ \n\t0 & \\eta & 0 & 0 \\\\ \n\t0 & 0 & \\eta & 0 \\\\ \n\t0 & 0 & 0 & \\eta\n\t\\end{pmatrix}\\begin{pmatrix}\n\tt_a \\\\ \n\tx_a \\\\ \n\ty_a \\\\ \n\tz_a\n\t\\end{pmatrix} = \\begin{pmatrix}\n\t\\varLambda \\\\ \n\t\\eta x_a \\\\ \n\t\\eta y_a \\\\ \n\t\\eta z_a\n\t\\end{pmatrix} \\label{transMap}\n\t\\end{equation}\n\twhere $\\varLambda=\\alpha t_a + \\beta x_a + \\gamma y_a + \\delta z_a$.\n\t\n\tNote that for the transformed point $\\textbf{v}_a'=(\\varLambda, \\eta x_a, \\eta y_a, \\eta z_a)$ in the projective space $P^3(\\mathbb{R})$, \n\t$$\\left(\\frac{\\eta x_a}{\\varLambda}, \\frac{\\eta y_a}{\\varLambda}, \\frac{\\eta z_a}{\\varLambda}\\right)= \\frac{\\eta t_a}{\\varLambda}\\left(\\frac{x_a}{t_a}, \\frac{y_a}{t_a}, \\frac{z_a}{t_a}\\right)$$\n\tis a point in the space $\\mathbb{R}^3$ which implies that $\\textbf{v}_a'$ lies on the same line as $\\textbf{v}_a$, which concludes the proof.\n\\end{proof}\n\nSince the transformation is a projective map, it preserves the planarity of faces of a polyhedron. Also, by the reason that $f$ satisfies the projection preservation property, the transformed point $f(v)$ is on the same line as $v$ passing through $O$. However, the transformation as defined in (\\ref{map}) does not guarantee the convexity of the projected polyhedron because some vertices may be mapped to the other side of the origin. Note that if $\\alpha=\\eta=1$ and $\\beta=\\gamma=\\delta=0$, the map $f$ is an identity map. The set of the transformations of the form (\\ref{map}) form a continuous group, and hence for each convex polyhedron $\\mathcal{P}$, there exists a positive constant $\\epsilon$ such that for any $-\\epsilon \\leq \\alpha-1, \\eta-1, \\beta, \\gamma, \\delta \\leq \\epsilon$, the transformation (\\ref{map}) maps $\\mathcal{P}$ to a convex polyhedron.\n\nNote that the transformation is uniquely determined if we fix the five parameters $\\alpha, \\beta, \\gamma, \\delta$, and $\\eta$. However, the homogeneous coordinate representation itself has one degree of freedom to represent each point. Hence, the choice of the polyhedron transformed from $\\mathcal{P}$ by (\\ref{map}) has four degrees of freedom.\n\n\\section{Algorithms}\n\n\\subsection{Recognition procedure}\n\nIn the previous section, the existence of the class of polyhedra whose projections coincide with the given spherical Laguerre Voronoi diagram is proved by Theorem \\ref{transMapThm}. In this section, we propose an algorithm for constructing a polyhedron with respect to a given tessellation. 
Let $\\mathcal{T}$ be a given spherical tessellation. If a polygon $i$ is adjacent to a polygon $j$, then there exists a geodesic arc $\\wideparen{e}_{i,j}$ which is a tessellation edge partitioning polygons $i$ and $j$. The tessellation vertex that is the intersection of edges $\\wideparen{e}_{i,j}$, $\\wideparen{e}_{j,k}$, and $\\wideparen{e}_{i,k}$ is denoted by $v_{i,j,k}$, as shown in Figure \\ref{polygons}. During the algorithm, $\\ell_{i,j}$ is defined as the line intersecting $\\pi(\\tilde{c}_i)$ and $\\pi(\\tilde{c}_j)$.\n\n\\begin{figure}[h]\n\t\\begin{center}\n\t\t\\definecolor{uququq}{rgb}{0.25098039215686274,0.25098039215686274,0.25098039215686274}\n\t\t\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\t\t\\clip(2.8160640470324694,3.5067554088250974) rectangle (9.718902215139163,6.965008986114204);\n\t\t\\draw [shift={(6.342008502719588,3.617125273275333)}] plot[domain=0.035538821603734644:3.1108262356690495,variable=\\t]({1.*3.168612589062205*cos(\\t r)+0.*3.168612589062205*sin(\\t r)},{0.*3.168612589062205*cos(\\t r)+1.*3.168612589062205*sin(\\t r)});\n\t\t\\draw [shift={(11.743458897258435,5.290836630761076)}] plot[domain=2.8644792148819374:3.229205627611263,variable=\\t]({1.*5.449456835313343*cos(\\t r)+0.*5.449456835313343*sin(\\t r)},{0.*5.449456835313343*cos(\\t r)+1.*5.449456835313343*sin(\\t r)});\n\t\t\\draw [shift={(8.476872026504891,7.202718578526532)}] plot[domain=3.9767765086173488:4.666125717960333,variable=\\t]({1.*3.2218106032764178*cos(\\t r)+0.*3.2218106032764178*sin(\\t r)},{0.*3.2218106032764178*cos(\\t r)+1.*3.2218106032764178*sin(\\t r)});\n\t\t\\draw [shift={(5.341385632169549,8.590198620619178)}] plot[domain=4.419859512679194:4.9646990170726895,variable=\\t]({1.*3.902136057996358*cos(\\t r)+0.*3.902136057996358*sin(\\t r)},{0.*3.902136057996358*cos(\\t r)+1.*3.902136057996358*sin(\\t r)});\n\t\t\\draw (5.814606881363096,4.746316083031945) node[anchor=north west] {$v_{i,j,k}$};\n\t\t\\draw (6.479352421394042,6.623284324326742) node[anchor=north west] {$\\wideparen{e}_{i,j}$};\n\t\t\\draw (4.497349383026773,6.021848919580811) node[anchor=north west] {$\\text{polygon }i$};\n\t\t\\draw (7.268554244414403,5.447751487777876) node[anchor=north west] {$\\text{polygon }j$};\n\t\t\\draw (5.659213233104137,4.067183854156534) node[anchor=north west] {$\\text{polygon }k$};\n\t\t\\draw (4.647708234213256,5.14703378540491) node[anchor=north west] {$\\wideparen{e}_{i,k}$};\n\t\t\\draw (7.559202352642416,4.5182604077159825) node[anchor=north west] {$\\wideparen{e}_{j,k}$};\n\t\t\\begin{scriptsize}\n\t\t\\draw [fill=uququq] (6.314903792454812,4.81400408777157) circle (1.5pt);\n\t\t\\end{scriptsize}\n\t\t\\end{tikzpicture}\n\t\t\\caption{Three adjacent polygons corresponding to a tessellation vertex.} \\label{polygons}\n\t\\end{center}\n\\end{figure}\n\nThe overview of the recognition procedure is to try to construct a set of planes composing a polyhedron whose central projection coincides with $\\mathcal{T}$. The planes are chosen sequentially. Suppose that the chosen sequence of the first three planes of polygons is $(i, j, k)$. 
The choice of the first two planes of polygons $i, j$ has the degrees of freedom, while the third plane of the polygon $k$ can be constructed uniquely up to the first two planes of polygons $i, j$.\n\nThe following algorithm is for the construction of the first three planes of the polyhedron corresponding to the three polygons $i, j, k$.\n\\\\\n\\\\\n\\textbf{Algorithm 1: Plane Construction with Three Adjacent Sites}\n\\\\ \\textbf{Input:} Tessellation edges $\\wideparen{e}_{i,j}$, $\\wideparen{e}_{j,k}$, $\\wideparen{e}_{i,k}$, and degree-three tessellation vertex $v_{i,j,k}$.\n\\\\ \\textbf{Output:} The three planes $P_i, P_j, P_k$ with respect to polygons $i, j, k$.\n\\\\ \\textbf{Procedure:} \n\\begin{enumerate}\n\t\\item select a spherical circle $\\tilde{c}_i$ with center $p_i\\in U$ and radius $r_i$ in polygon $i$.\n\t\\item construct a plane $P_i:=\\pi(\\tilde{c}_i)$ of the spherical circle $\\tilde{c}_i$;\n\t\\item construct a plane $P_{i,j}$, $P_{i, k}$, $P_{j,k}$ passing through $\\wideparen{e}_{i,j}$, $\\wideparen{e}_{i,k}$, and $\\wideparen{e}_{j,k}$, respectively;\n\t\\item find the line $\\ell_{i,j}$ by intersecting $P_i$ and $P_{i,j}$, and $\\ell_{i,k}$ by intersecting $P_i$ and $P_{i,k}$;\n\t\\item construct a geodesic arc $\\wideparen{e}_{i,j}^c$ such that $\\wideparen{e}_{i,j}^c$ passes through $p_i$ and is perpendicular to $\\wideparen{e}_{i,j}$;\n\t\\item choose a point $q_j$ in polygon $j$ on the arc $\\wideparen{e}_{i,j}^c$;\n\t\\item construct the plane $P_j$ passing through $\\ell_{i,j}$ and $q_j$;\n\t\\item find the line $\\ell_{j,k}$ by intersecting the planes $P_j$ and $P_{j,k}$;\n\t\\item construct the plane $P_k$ passing through the line $\\ell_{i,k}$ and $\\ell_{j,k}$.\n\\end{enumerate}\n\\textbf{end Procedure}\n\\\\\n\\\\\nIn step 8 of Algorithm 1, $\\ell_{j,k}$ is constructed from the intersection of $P_j$ and $P_{j,k}$ which is used in step 9 of the Algorithm 1. The following lemma guarantees the co-planarity of $\\ell_{i,k}$ and $\\ell_{j,k}$.\n\n\\begin{lemma}\n\t$\\ell_{i,k}$ generated in step 4 of Algorithm 1 and $\\ell_{j,k}$ generated in step 8 are co-planar.\n\\end{lemma}\n\\begin{proof}\n\tFrom Algorithm 1, suppose that the construction sequence is $(i, j, k)$. Then there exists an intersection between planes $P_i$ and $P_{i,k}$ and $P_{i, j}$, written as $\\ell_{i,k}$ and $\\ell_{i,j}$, respectively which are obviously coplanar. Since $P_{i, k}$ and $P_{i, j}$ pass through $\\wideparen{e}_{i,k}$ and $\\wideparen{e}_{i,j}$ such that $\\wideparen{e}_{i,k}$ intersects $\\wideparen{e}_{i,k}$ at $v_{i, j, k}$, there exists a line $\\ell_{i, j, k}$ which is the intersection of $P_{i, k}$ and $P_{i,j}$. Therefore, there exists the intersection point of $\\ell_{i, k}, \\ell_{i,j}$ and $\\ell_{i,j,k}$, says $V_{i,j,k}$.\n\t\n\tBy step 7 of Algorithm 1, the plane $P_j$ is constructed through $\\ell_{i,j}$, and also $V_{i, j, k}$. Then $\\ell_{j, k}$ is constructed through the intersection of $P_j$ and $P_{j, k}$. Hence, $V_{i, j, k}$ is laid in the line $\\ell_{j, k}$. Since $V_{i,j,k}\\in \\ell_{i,k}$ and $V_{i,j,k}\\in \\ell_{j,k}$, there exists the unique plane passing through $\\ell_{i,k}$ and $\\ell_{j,k}$ which implies that $\\ell_{i,k}$ and $\\ell_{j,k}$ are co-planar. \n\\end{proof}\n\n\nWe next extend this process to construct all planes $P_1, ..., P_n$ corresponding to the polygons $1, ..., n$ of the given tessellation. \n\nLet $\\mathcal{V}_i$ be the set of vertices of the $i$-th spherical polygon. 
Note that $\\mathcal{V}_i$ is written as $\\mathcal{V}_i=\\{v_{i,j_1,k_1},...,v_{i,j_m,k_m}\\}$ where $m$ is the number of vertices of the $i$-th spherical polygon. The set of spherical tessellation vertices is denoted by $\\mathcal{V}=\\cup_{i=1}^n \\mathcal{V}_i$. \n\\\\\n\\\\\n\\textbf{Algorithm 2: Construction of $n$ planes }\n\\\\ \\textbf{Input:} Spherical tessellation $\\mathcal{T}$ where all vertices are of degree 3, and the set $\\mathcal{V}$ of tessellation vertices.\n\\\\ \\textbf{Output:} The planes $P_1, ..., P_n$ with respect to the polygons $1, ..., n$, and $\\text{Mark}(v)\\in \\{0, 1\\}$ for $v\\in \\mathcal{V}$.\n\\\\ \\textbf{Comment:} $\\mathbb{P}$ is the set of planes constructed in the procedure.\n\\\\ \\textbf{Procedure:} \n\\begin{enumerate}\n\t\\item make $\\mathbb{P}$ empty;\n\t\\item set $\\text{Mark}(v)=0$ for all $v \\in \\mathcal{V}$;\n\t\\item choose an arbitrary vertex $v_{i, j, k}\\in \\mathcal{V}$ and employ Algorithm 1 to construct planes $P_i, P_j, P_k$;\n\t\\item add the planes $P_i, P_j, P_k$ to $\\mathbb{P}$;\n\t\\item set $\\text{Mark}(v_{i, j, k})=1$;\n\t\\item \\textbf{while} there exists vertex $v_{p,q,l}\\in\\mathcal{V}$ such that $\\text{Mark}(v_{p,q,l})=0$ and exactly two planes $P_p, P_q$ are included in $\\mathbb{P}$,\\\\\n\t\\textbf{do} \\\\\n\t\\text{\\hspace{0.4cm}} apply steps 3, 4 of Algorithm 1, where $(i, j, k)$ are read as $(p, q, l)$, to find $\\ell_{p,q}$ and $\\ell_{p,l}$;\\\\\n\t\\text{\\hspace{0.4cm}} compute $\\ell_{q,l}$ from the intersection of $P_q$ and $P_{q,l}$;\\\\\n\t\\text{\\hspace{0.4cm}} compute $\\ell_{p, q, l}$ from the intersection of $P_{q, l}$ and $P_{p, l}$;\\\\\n\t\\text{\\hspace{0.4cm}} compute $V_{p, q, l}$ from the intersection of $P_p$ and $\\ell_{p, q, l}$;\\\\\n\t\\text{\\hspace{0.4cm}} choose a point $v'_{q, l}$ on the line $\\ell_{q, l}$;\\\\\n\t\\text{\\hspace{0.4cm}} construct a plane $P_l$ from the point $v'_{q, l}, V_{p, q, l}$ and line $\\ell_{p,l}$.\\\\\n\t\\text{\\hspace{0.4cm}} add $P_l$ to $\\mathbb{P}$;\\\\\n\t\\text{\\hspace{0.4cm}} set $\\text{Mark}(v_{p, q, l})=1$;\\\\\n\t\\textbf{end while}\n\\end{enumerate}\n\\textbf{end Procedure}\n\\\\\n\\\\\nIn step 6 of Algorithm 2, for any arbitrary planes $P_p, P_q\\in \\mathbb{P}$ to construct a plane $P_l$, the choice of the point $v'_{q, l}$ from the line $\\ell_{q, l}$ affects the uniqueness of the plane. That is, it takes a degree of freedom for each $v'_{q,l}$. However, if the given tessellation $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram, then a point $v'_{q, l}$ in step 6 of Algorithm 2 is arbitrarily chosen to obtain the unique plane $P_l$ passing through $\\ell_{p, l}$ and $\\ell_{q, l}$, which means that there is no degree of freedom in the choice of $v'_{q,l}$. For that purpose, we claim that for any point $v'_{q, l}$ on the line $\\ell_{q, l}$ and $\\ell_{p, l}$ are co-planar, which is proved by the following lemma.\n\n\\begin{lemma}\\label{Coplanar2}\n\tIf $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram, then the plane $P_l$ of the construction sequence $(p, q, l)$ constructed in the step 6 of Algorithm 2 does not depend on the choices of $v'_{q,l}$.\n\\end{lemma}\n\\begin{proof}\n\tLet $\\mathcal{T}$ be a spherical Laguerre Voronoi diagram. 
We prove $V_{p,q,l}\\in\\ell_{q,l}$ to imply that any choices of $v'_{q,l}$ on $\\ell_{q,l}$ gives the same plane $P_l$ passing through $V_{p,q,l}, \\ell_{p,l}$ and $\\ell_{q,l}$.\n\t\n\tIn the step 6 of Algorithm 2, suppose that $v_{p, q, l}$ is the tessellation vertex of the adjacent polygons $p, q, l$. Suppose that there are planes $P_p, P_q \\in \\mathbb{P}$. By Theorem \\ref{SugiThm2}, $\\ell_{p,q}$ of the intersection of $P_p$ and $P_q$ is laid on the plane $P_{p,q}$ passing through the geodesic arc $\\wideparen{e}_{p,q}$. Note that the line $\\ell_{p,q,l}$ of the intersection of planes $P_{p, q}$ and $P_{q, l}$ is also included in $P_{p, l}$, and the intersection $V_{p, q, l}$ of $P_p$ and $\\ell_{p, q, l}$ is on the lines $\\ell_{p, q}$ and $\\ell_{p, l}$.\n\t\n\tUsing the fact that $\\ell_{p,q}$ lays on the plane $P_{p,q}$ of $\\wideparen{e}_{p,q}$ and Theorem \\ref{SugiThm2}, it is implied that $\\ell_{p,l}$ ,the intersection of $P_p$ and $P_{p,l}$, and $\\ell_{q,l}$, the intersection of $P_q$ and $P_{q,l}$, intersect at $V_{p,q,l}$, which means that $V_{p,q,l}\\in\\ell_{q,l}$. Therefore, there exist a unique plane $P_l$ independent from the choice of $v'_{q,l}$.\n\\end{proof}\n\nIn Algorithm 1, we arbitrarily choose the first spherical circle $\\tilde{c}_i$ (i.e., with generator position $p_i$ and radius $r_i$), and the generator position with the choice of $q_j$ of the adjacent polygon $j$ which lies on the geodesic arc $\\wideparen{e}_{i,j}^c$. This means that we have four degrees of freedom pursuing in Algorithm 1. This reflects the freedom in the choice of the polyhedron for representing the spherical Laguerre Voronoi diagram as shown in the following theorem.\n\n\\begin{theorem}\\label{4degs}\n\tThere are exactly four degrees of freedom in the choice of a polyhedron $\\mathcal{P}$ with respect to the given spherical Laguerre Voronoi diagram.\n\\end{theorem}\n\n\\begin{proof}\n\tTo prove this theorem, we will derive the lower and upper bounds for the degrees of freedom in the choice of the polyhedron.\n\t\n\tThe lower bound is four as we have seen in Theorem \\ref{transMapThm} and the discussion immediately after the theorem.\n\t\n\tFor the upper bound, we show that if we have two polyhedra $\\mathcal{P}$ and $\\mathcal{P}'$ whose projections give the same spherical Laguerre Voronoi diagram, $\\mathcal{P}$ is transformed to $\\mathcal{P}'$ by the transformation (\\ref{map}).\n\t\n\tLet $\\mathcal{P}$ and $\\mathcal{P}'$ be two different polyhedra whose central projections give the same spherical Laguerre Voronoi diagram. Assume that $P_i$ and $P_j$ are planes containing mutually adjacent faces $F_i, F_j$ of $\\mathcal{P}$. Then there are vertices $v_{i,j,k_1}, v_{i,j,k_2}\\in F_i \\cap F_j$ for some $k_1, k_2$. Without loss of generality, we choose vertices $v_{i, m_1, m_2}\\in F_i$ and $v_{j, n_1, n_2}\\in F_j$ of the polyhedron $\\mathcal{P}$. Note that the remaining polyhedron vertices are uniquely constructed by Algorithm 2 and Lemma \\ref{Coplanar2}.\n\t\n\tOn the polyhedron $\\mathcal{P}'$, we consider the polyhedron vertices $v'_{i,j,k_1}, v'_{i,j,k_2}\\in F'_i \\cap F'_j$ and $v'_{i, m_1, m_2}\\in F'_i$ and $v'_{j, n_1, n_2}\\in F'_j$. Then there exists a transformation $f$ which transforms the vertices $v_{i,j,k_1}$, $v_{i,j,k_2}$, $v_{i, m_1, m_2}$, $v_{j, n_1, n_2}$ to the vertices $v'_{i,j,k_1}$, $v'_{i,j,k_2}$, $v'_{i, m_1, m_2}$, $v'_{j, n_1, n_2}$, respectively.\n\t\n\tLet $\\mathcal{P}''$ be the polyhedron transformed by $f$ from $\\mathcal{P}$. 
Since the first four points of polyhedron $\\mathcal{P}''$ are $v'_{i,j,k_1}=f(v_{i,j,k_1})$ , $v'_{i,j,k_2}=f(v_{i,j,k_2})$, $v'_{i, m_1, m_2}=f(v_{i, m_1, m_2})$, $v'_{j, n_1, n_2}=f(v_{j, n_1, n_2})$, and all the other vertices are determined by Algorithm 2, $\\mathcal{P}''$ is the same polyhedron as $\\mathcal{P}'$. This means that the upper bound of the degrees of freedom is four.\n\t\n\tThus, the degrees of freedom for choosing the polyhedron are exactly four.\n\\end{proof}\n\nNote that in Algorithm 1, we choose 4 parameters arbitrarily, two for $p_i$, one for $r_i$ in step 1 and one for $q_j$ in step 6. Therefore, if the given tessellation $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram, the projection of the arrangement of the planes constructed by Algorithm 2 on the sphere gives $\\mathcal{T}$ for any spherical circle $\\tilde{c}_i$ and the point $q_j$ is chosen in Algorithm 1. \n\nWe have the following corollary which is directly implied from Lemma \\ref{Coplanar2}, Algorithm 2, and Theorem \\ref{4degs}.\n\n\\begin{corollary}\\label{uniquePoly}\n\tIf $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram, then Algorithm 2 gives the unique polyhedron up to the choice of the first four degrees of freedom.\t\n\\end{corollary}\n\nWe use the contraposition of Corollary \\ref{uniquePoly} to verify that, if the constructed planes composing a polyhedron with respect to the tessellation $\\mathcal{T}$ using Algorithm 2 are not uniquely constructed from the first choice of four degrees of freedom, then the given tessellation is not the spherical Laguerre Voronoi diagram.\n\nThe following lemma characterizes the properties of the polyhedron vertices and the given tessellation.\n\n\\begin{lemma}\\label{ConsCheckLemma}\n\tLet $\\mathcal{T}$ be a given tessellation and $\\mathbb{P}$ a set of planes constructed by Algorithm 2. $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram if and only if for all vertices $v_{i, j, k}\\in \\mathcal{V}$,\n\t\\begin{enumerate}\n\t\t\\item there exists the unique point $V_{i,j,k}$ of the intersection of the plane $P_i, P_j, P_k \\in \\mathbb{P}$; and\n\t\t\\item there exists $t\\in\\mathbb{R}\\backslash\\{0\\}$ such that $V_{i,j,k}=tv_{i,j,k}$.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tFirstly, let $\\mathcal{T}$ be a spherical Laguerre Voronoi diagram and $v_{i,j,k}\\in \\mathcal{V}$. Without loss of generality, assume that $P_i, P_j$ are constructed sequentially. By Lemma \\ref{Coplanar2}, $P_k$ is uniquely determined from $P_i$ and $P_j$ which is constructed from $\\ell_{i,k}$ and $\\ell_{j,k}$. Hence, $V_{i,j,k}\\in P_i\\cap P_j \\cap P_k$ uniquely. In addition, since $v_{i,j,k}\\in \\wideparen{e}_{i,j}\\cap \\wideparen{e}_{i,k} \\cap \\wideparen{e}_{j,k}$ and $\\ell_{i,j,k}:=P_{i,j}\\cap P_{j,k}\\cap P_{i,k}$ is a line passing through $O$ and $v_{i,j,k}$, Algorithm 2 and Lemma \\ref{Coplanar2} implies that $V_{i,j,k}$ is laid on the line $\\ell_{i,j,k}$. That is, the condition 1 and 2 of the lemma are satisfied.\n\t\n\tConversely, suppose that the conditions 1 and 2 of the lemma hold. 
Since $V_{i,j,k}$'s are formulated from the halfspaces intersection of the planes in $\\mathbb{P}$ which are constructed from Algorithm 2, and the central projection is preserved from the condition 2, Proposition \\ref{twoWay} implies that $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram.\n\\end{proof}\n\nLemma \\ref{ConsCheckLemma} means that from the constructed polyhedron, the vertices selected in Algorithm 2, i.e., the vertices $v$ with $\\text{Mark}(v)=1$ set in Algorithm 2, are located on the lines emanating from the origin and passing through the associated tessellation vertices. Therefore, the given tessellation $\\mathcal{T}$ is not the spherical Laguerre Voronoi diagram if other vertices are not necessarily on the radial lines emanating from the origin and passing through the associated tessellation vertices.\n\nAs mentioned before, the uniqueness of the plane $P_l$ constructed from $P_p$ and $P_q$ is up to the choice of the point $v'_{q,l}$ in the step 6 of Algorithm 2. Lemma \\ref{Coplanar2}, Corollary \\ref{uniquePoly} together with Lemma \\ref{ConsCheckLemma} lead to Algorithm 3 for checking whether the given tessellation is a spherical Laguerre Voronoi diagram or not.\n\nFrom Algorithm 2, the set of tessellation vertices $\\mathcal{V}$ is divided into two groups: marked vertices and unmarked vertices. The unmarked vertices are vertices $v\\in\\mathcal{V}$ such that $\\text{Mark}(v)=0$. Remark that for the marked vertex $v_{p,q,l}$, the plane $P_l$ constructed from $P_p$ and $P_q$ in Algorithm 2 passes through the intersection point $V_{p,q,l}$ on the line $\\ell_{p,q,l}$ passing through $v_{p,q,l}$. Therefore, it is sufficient to check the consistency in Algorithm 3 among the unmarked vertices.\n\\\\\n\\\\\n\\textbf{Algorithm 3: Spherical Laguerre Voronoi Diagram Recognition}\n\\\\ \\textbf{Input:} The tessellation $\\mathcal{T}$, the set $\\mathbb{P}$ and $\\text{Mark}(v), v\\in \\mathcal{V}$ constructed from Algorithm 2.\n\\\\ \\textbf{Output:} ``true'' or ``false''.\n\\\\ \\textbf{Comment:} ``true'' means that $\\mathcal{T}$ is a spherical Laguerre Voronoi diagram.\n\\\\ \\textbf{Procedure:} \n\\begin{enumerate}\n\t\\item choose an unmarked vertex $v_{p,q,l}$;\n\t\\item \\textbf{if} compute the intersection $V_{p,q,l}$ of planes $P_p, P_q, P_l\\in\\mathbb{P}$, there exists $t\\in \\mathbb{R}$ such that $V_{p,q,l}=tv_{p,q,l}$ \\textbf{then}\\\\\n\t\\text{\\hspace{0.4cm}} $\\text{Mark}(v_{p, q, l})=1$;\\\\\n\t\\text{\\hspace{0.0cm}} \\textbf{else}\\\\\n\t\\text{\\hspace{0.4cm}} report ``false'' and terminate the process;\\\\\n\t\\textbf{end if};\n\t\\item \\textbf{if} $\\text{Mark}(v)=1$ for all $v\\in \\mathcal{V}$ \\textbf{then}\\\\\n\t\\text{\\hspace{0.4cm}} report ``true'';\\\\\n\t\\text{\\hspace{0.0cm}} \\textbf{else}\\\\\n\t\\text{\\hspace{0.4cm}} go to step 1;\\\\\n\t\\textbf{end if}\n\\end{enumerate}\n\\textbf{end Procedure}\n\n\\subsection{Algorithm analysis}\n\nIn this section, we analyze the proposed algorithm. In Algorithm 1, the main operations are for the plane construction and the intersection of two planes, each of whose complexity is $O(1)$. Since Algorithm 1 is related to the first three polygons, the complexity of Algorithm 1 is $O(1)$.\n\nThe spherical Laguerre Voronoi diagram recognition problem is mainly considered by Algorithms 2 and 3. 
\n\n\\subsection{Algorithm analysis}\n\nIn this section, we analyze the proposed algorithms. In Algorithm 1, the main operations are the construction of a plane and the intersection of two planes, each of which takes $O(1)$ time. Since Algorithm 1 only involves the first three polygons, its complexity is $O(1)$.\n\nThe spherical Laguerre Voronoi diagram recognition problem is mainly handled by Algorithms 2 and 3. The following theorem shows the complexity of the problem.\n\n\\begin{theorem}\n\tFor a tessellation with $n$ cells, we can decide whether the given tessellation is a spherical Laguerre Voronoi diagram in $O(n \\log n)$ time.\n\\end{theorem}\n\n\\begin{proof}\n\tWe first consider the complexity of Algorithm 2. In Algorithm 2, the time complexity of steps 1 to 5 is $O(1)$. Most of the computation time is spent on step 6. We use a priority queue with a heap structure whose nodes contain vertices, as follows.\n\t\n\tFor each tessellation vertex $v\\in \\mathcal{V}$, we define the key function $k(v)=c$, where $c$ is the number of planes constructed around vertex $v$. Initially, let $k(v)=0$ for all $v\\in \\mathcal{V}$. During the course of processing, the key values only increase, and $0 \\leq k(v) \\leq 3$. At the end of step 3 of Algorithm 2, we have $k(v_{i,j,k})=3$, and for the three vertices $v_{i,j,a_1}, v_{i,a_2,k}$ and $v_{a_3,j,k}$ adjacent to $v_{i,j,k}$, $k(v_{i,j,a_1})=k(v_{i,a_2,k})=k(v_{a_3,j,k})=2$.\n\t\n\tWe store the vertices with key values less than or equal to 2 in the heap according to the key function, so that the root node has the largest key value. For each repetition of step 6 in Algorithm 2, the root node is deleted from the heap, and the vertex contained within is set to $v_{p,q,l}$. After we construct the plane, we update the key function. Then we remove the nodes whose key value has become 3, and the other nodes whose key values have changed are relocated in the heap. We repeat this process until all nodes have been removed from the heap. Adding and removing a node from the heap requires $O(\\log n)$ time. Since the spherical tessellation is planar, the number of vertices is $O(n)$. Hence, the complexity of step 6 in Algorithm 2 is $O(n \\log n)$.\n\t\n\tAfter we obtain the planes $P_1,\\ldots, P_n$ from Algorithm 2, the construction of the polyhedron can be performed as the intersection of the halfspaces containing the origin, which requires $O(n\\log n)$ time.\n\t\n\tIn Algorithm 3, we have to check the unmarked vertices. For each such vertex, we pick the planes of the corresponding polygons and compute the intersection of the three planes at the vertex, which takes $O(1)$ time. Therefore, iterating over the unmarked vertices, the complexity of Algorithm 3 is $O(n)$.\n\t\n\tSummarizing the above considerations, the complexity of spherical Laguerre Voronoi diagram recognition is $O(n \\log n)$.\n\\end{proof}
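\n\nOne possible realization of the priority-queue bookkeeping used in the proof is sketched below. Instead of relocating nodes inside the heap, the sketch re-inserts updated entries and discards outdated ones lazily, which keeps every heap operation at $O(\\log n)$; the routine \\texttt{construct\\_plane} is a hypothetical callback standing for the plane construction of step 6 and is assumed to update the key values of all vertices affected by the new plane.\n\\begin{verbatim}\nimport heapq\n\ndef schedule_step6(vertices, key, construct_plane):\n    """Schematic scheduling of step 6 of Algorithm 2 (illustration only).\n\n    vertices        -- iterable of vertex identifiers (e.g. index triples)\n    key             -- dict: vertex -> number of planes built around it\n    construct_plane -- hypothetical callback; called on a vertex whose key\n                       is 2, it builds the missing plane, increases `key`\n                       for every vertex touched by the new plane, and\n                       returns the set of vertices whose key changed\n    """\n    # heapq is a min-heap, so keys are negated to pop the largest key first.\n    heap = [(-key[v], v) for v in vertices if key[v] <= 2]\n    heapq.heapify(heap)\n\n    while heap:\n        negk, v = heapq.heappop(heap)\n        if key[v] >= 3 or -negk != key[v]:\n            continue                  # vertex already completed, or a stale entry\n        # For a connected tessellation the largest valid key here is 2.\n        changed = construct_plane(v)  # step 6; afterwards key[v] == 3\n        # Updated entries are re-inserted; their outdated copies are\n        # discarded lazily by the check above.\n        for w in changed:\n            if key[w] <= 2:\n                heapq.heappush(heap, (-key[w], w))\n\\end{verbatim}\nEach vertex is pushed at most a constant number of times, since its key can increase at most three times, so the total number of heap operations is $O(n)$ and the $O(n \\log n)$ bound of the proof is preserved.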
\n\n\\subsection{Interpretation of the generators}\n\nOnce the tessellation $\\mathcal{T}$ has been judged to be a spherical Laguerre Voronoi diagram by Algorithms 1, 2, and 3, we can construct an associated set of generators in the following way.\n\nFirst, shrink the polyhedron constructed by Algorithm 2 so that all of its planes intersect $U$. Next, collect all the intersections between the planes and $U$, which are circles. The resulting set of circles is a set of generators. Note that the generator set is not unique; the set of circles obtained by the above procedure is one example of a generator set.\n\n\\section{Concluding Remarks}\n\nWe proposed an algorithm for determining whether a spherical tessellation is a spherical Laguerre Voronoi diagram. The algorithm utilizes the convex polyhedron corresponding to a given tessellation. The criterion for recognizing a spherical Laguerre Voronoi diagram is obtained from this polyhedron. If the given tessellation is a spherical Laguerre Voronoi diagram, we can recover the generating spherical circles.\n\nThe properties of the polyhedron presented in the algorithm can be applied to the spherical Laguerre Voronoi approximation problem. If a spherical tessellation does not correspond exactly to a spherical Laguerre Voronoi diagram, employing Algorithm 2 can still give an approximate spherical Laguerre Voronoi diagram. An interesting problem is the determination of the spherical Laguerre Voronoi diagram which provides the best approximation to a given spherical tessellation.\n\nFrom the perspective of practical applications, we recently presented an algorithm \\cite{Chaidee3} for finding the spherical Laguerre Voronoi diagram which most closely approximates a tessellation, using, as an example, a planar tessellation extracted from a photograph of a curved surface with generators. For the case in which a given spherical tessellation does not contain generators, we may apply the proposed algorithms to approximate the generators and their weights. One of our future areas of study is the application of the algorithms presented here to the analysis of polygonal patterns found in the real world.\n\n\\subparagraph*{Acknowledgements}\n\nThe first author acknowledges the support of the MIMS Ph.D. Program of the Meiji Institute for the Advanced Study of Mathematical Sciences, Meiji University, and the Development and Promotion of Science and Technology Talents Project (DPST) of the Institute for the Promotion of Teaching Science and Technology (IPST), Ministry of Education, Thailand. The authors also thank the reviewers who gave useful comments for improving the manuscript. This research is supported in part by the Grant-in-Aid for Basic Research [24360039] and Exploratory Research [151512067] of MEXT.\n\n\\section*{References}