Dataset columns:
  query              string, length 1 to 13.4k characters
  pos                string, length 1 to 61k characters
  neg                string, length 1 to 63.9k characters
  query_lang         string, 147 distinct classes
  __index_level_0__  int64, 0 to 3.11M
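The schema above describes a triplet-style retrieval dataset: each row pairs a query with one relevant (pos) and one non-relevant (neg) passage, tagged with the query's language code (147 codes such as eng_Latn) and a row index carried over from the source data. A minimal loading sketch with the Hugging Face datasets library follows; the repository id "org/dataset-name" is a placeholder, since this page does not name the dataset.

from datasets import load_dataset

# Placeholder repo id -- substitute the real identifier for this dataset.
ds = load_dataset("org/dataset-name", split="train")

# Each row carries a query, a positive passage, a negative passage,
# the query's language code, and the original row index.
row = ds[0]
print(row["query"])
print(row["pos"][:200])
print(row["neg"][:200])
print(row["query_lang"], row["__index_level_0__"])

# query_lang has 147 classes; e.g., keep only English-script queries.
eng = ds.filter(lambda r: r["query_lang"] == "eng_Latn")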
electron capture definition physics
Other Radioactive Processes. While the most common types of radioactive decay are by alpha, beta, and gamma radiation, several other varieties of radioactivity occur. Electron capture: A parent nucleus may capture one of its own electrons and emit a neutrino. This is exhibited in the potassium-argon decay. The process leaves a vacancy in the electron energy level from which the electron came, and that vacancy is either filled by the dropping down of a higher-level electron with the emission of an X-ray or by the ejection of an outer electron in a process called the Auger effect.
Learn the definition of an electron cloud, as the term is used in chemistry and physics, plus how this model differs from the Bohr model.
eng_Latn
10,300
what is plasma
Part of the Electronics glossary: Plasma is a form of matter in which many of the electrons wander around freely among the nuclei of the atoms. Plasma has been called the fourth state of matter, the other three being solid, liquid, and gas. Normally, the electrons in a solid, liquid, or gaseous sample of matter stay with the same atomic nucleus.
Plasma is a phase of matter distinct from solids, liquids, and gases. It is the most abundant phase of matter in the universe; both stars and interstellar dust consist of it. Although it is its own phase of matter, it is often referred to as an ionized gas.
eng_Latn
10,301
Lea, Luna, Los Alamos
NGL - New Mexico Bernalillo, Catron, Chaves, Cibola, Colfax, Curry, De Baca, Dona Ana, Eddy, Grant, Guadalupe, Harding, Hidalgo, Lea, Lincoln, Los Alamos, Luna, McKinley,...
Observation of deficit in NuMI neutrino-induced rock and ... - Fermilab 8-letter particle named for its lack of charge is being studied by beaming it 450 miles in .0025 seconds. The subject of the MINOS experiment has become a...
lit_Latn
10,302
I have heard in a talk given at Fermilab (on YouTube) that we typically detect a neutrino burst from a core-collapse supernova (CC SN) explosion about 2 hours before we detect its electromagnetic radiation. When the question was raised about whether this violates the principle that nothing can travel faster than light, the answer given by the speaker was that the neutrinos are expelled while the collapse is happening, whereas light from the supernova comes AFTER the explosion, not during the collapse. But doesn't it take about 1 second for the iron core to collapse completely? Why is there a 2-hour difference between our receiving the neutrino burst and light from the explosion? Thank you. P.S. I read around and found that the reason is that light has to travel a long distance within the star, which "slows" it down; however, as mentioned earlier, it takes only 1 second for the entire iron core to collapse, right?
A while back I read that the Super-Kamiokande detector detected a large neutrino flux and then, several hours later, a supernova was seen. Does anyone know of this, with sources? I don't recall the source at the moment.
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
10,303
Apologies, this is probably a stupid idea, but I am curious and my knowledge of physics is limited as I am 14. So I was wondering if we could use particle accelerators to achieve nuclear fusion. I have found a few other posts about this, but they seem to be about unnecessarily high energy accelerators, when only approximately 1 MeV seems to be necessary to reach the fraction of the speed of light needed for fusion. So it seems that there are two forces involved: there is the electromagnetic force, which causes the protons to repel, preventing fusion, and has an infinite range. Then there is the strong force, which is approximately 137 times stronger than the electromagnetic force but has a limited range of about a femtometer and is necessary for fusion. So basically all that is required is to overcome the Coulomb barrier, which requires the protons to be travelling at around 0.05-0.07c and requires approximately 1 MeV according to my research and calculation. This means an 18.6 MeV profit per fusing collision (19.6 MeV per fusion reaction minus the 1 MeV invested), so more than about 5.1% (1/19.6) of the accelerated nuclei need to fuse to gain net energy. This seems to be a major flaw, as this seems unlikely to happen. However, I was wondering if this could be overcome by using a 1 MeV linear accelerator to accelerate the hydrogen ions at a target which is essentially a very long and dense, highly pressurised chamber of deuterium gas, where collisions are almost certain to occur due to the high number of atoms?
So I know the basic gist is that fusion power's main issue is sustaining the fusion. I also know that there are two methods: the torus method and the laser method. The torus magnetically contains plasma, heats it with radiation, and accelerates the plasma around to make collisions strong enough that protons fuse. The laser method uses 192 lasers focused on tiny frozen hydrogen pellets and aims to initiate fusion each time pellets are dropped. The thought struck me that we could sort of combine the two designs. The torus doesn't have to worry about making fusion happen at a specific location, but it has issues in that the plasma is unevenly heated and leaks. On the other hand, the laser design is extremely complicated in the level of precision needed and would have to repeat this for every pellet. This led me to think of making something precise and contained at the same time. I see that particle colliders are able to direct two beams of protons and have them collide at a specific spot with a very precise energy. Couldn't we tune the energy of the two beams of protons to the energy required for them to fuse? We have the ability to smash them into bits; surely we have the ability to have them fuse. (I'm thinking about the type of collider that circles two beams in opposite directions.) It would be at much lower energies than normal colliders and would be very precise, and it would be possible to fuse at a specific location with greater leeway, because protons that missed a collision would just circle around again! Thus protons would be used efficiently and very little would be wasted. There wouldn't be problems of plasma leakage because we are focusing them in a thin, tight beam. It seems that this idea has merit, or I feel this way at least; can someone back me up by offering some calculations on how to calculate the efficiency? How would I go about calculating the two circling beams of protons, and what specific velocity would be needed? etc.
When I press the power button on my Epson WorkForce WF-3520 printer, it starts up normally, but shuts down within 30 seconds. Ten seconds later, it starts itself and the stop/start cycle is repeated again and again. This has only started happening today (6th December 2016).
eng_Latn
10,304
Just looking at the beam energy and peak power for the , 360 MJ and petawatts, respectively, dumped in about 100 µs, would this be sufficient to do useful fusion experiments?
So I know the basic gist is that fusion power's main issue is sustaining the fusion. I also know that there are two methods: the torus method and the laser method. The torus magnetically contains plasma, heats it with radiation, and accelerates the plasma around to make collisions strong enough that protons fuse. The laser method uses 192 lasers focused on tiny frozen hydrogen pellets and aims to initiate fusion each time pellets are dropped. The thought struck me that we could sort of combine the two designs. The torus doesn't have to worry about making fusion happen at a specific location, but it has issues in that the plasma is unevenly heated and leaks. On the other hand, the laser design is extremely complicated in the level of precision needed and would have to repeat this for every pellet. This led me to think of making something precise and contained at the same time. I see that particle colliders are able to direct two beams of protons and have them collide at a specific spot with a very precise energy. Couldn't we tune the energy of the two beams of protons to the energy required for them to fuse? We have the ability to smash them into bits; surely we have the ability to have them fuse. (I'm thinking about the type of collider that circles two beams in opposite directions.) It would be at much lower energies than normal colliders and would be very precise, and it would be possible to fuse at a specific location with greater leeway, because protons that missed a collision would just circle around again! Thus protons would be used efficiently and very little would be wasted. There wouldn't be problems of plasma leakage because we are focusing them in a thin, tight beam. It seems that this idea has merit, or I feel this way at least; can someone back me up by offering some calculations on how to calculate the efficiency? How would I go about calculating the two circling beams of protons, and what specific velocity would be needed? etc.
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
10,305
Dungeon Defenders: how do I use the Portal Gun (PC)? I have tried on every character that is able to carry the Portal Gun, and if I shoot, all I get is a weird noise and nothing else happens. I have seen videos where it comes into use like it would in the Portal game, but nothing has happened for me. Any ideas why?
How do you use the Dungeon Defender's Portal Gun? The topic pretty much sums up my question. I have not yet figured out how the gun operates aside from the fact that it creates Portal portals. The weapon description indicates that it deals 250+ base damage, but how is that damage applied to the monsters?
Has a double slit experiment ever been done using a track chamber, or even contemplated? I tried searches and the question has been posed in other fora, but no experiment came up. Track chambers (cloud chambers, bubble chambers, time projection chambers, solid state detectors like the vertex detectors at LHC) give the track of the particle as it ionizes the medium, and could be employed in a geometry after the particle has passed the double slit. The straight track should point back to the slit it came from, and its record could be used as the points on the screen in the classical double slit experiment. The setup as I see it would be the classical setup for single electrons through the double slits, but instead of a "screen" one has a detector and detects the track. It should be a long enough detector to get an accuracy less than the slit separation so it could point back to the slit, as the interslit distance is of the order of 100 microns and detectors are giving accuracies of the order of microns. This experiment, if possible, would resolve the controversy of whether the detection of the slit destroys the interference pattern or the detectors at the slits change the boundary conditions and destroy the interference pattern. An expert's opinion is necessary on whether the experiment is possible, and whether the energies of electrons that show interference with a specific slit separation d are enough to create an accurate track in a solid state detector. If not, a cloud chamber would do, but again the energy of the electron would be important because it would have to pass the air/chamber barrier. It could succeed if the double slits were within a cloud/bubble chamber; the beam count was low (10 to 12 per picture) but it was spread in the vertical direction. If the beam could be focused on the slits, it should be doable.
eng_Latn
10,306
Radioactivity, alpha decay. In alpha decay, a $\text{He}$ nucleus is emitted along with a daughter nuclide. Now suppose $\text{U}$ with atomic number 92 and atomic mass 238 emits an alpha particle and a daughter nuclide is formed with atomic number 90 and atomic mass 234; then my question is: what happens to the electrons in this process? Some say that this is only a nuclear reaction, so electrons are not concerned. If so, then why do we say that half of the atoms decay at half-life? Should we not say that half of the nuclei decayed?
What happens to electrons after alpha decay and nuclear fission? Where do the electrons go? In alpha decay, do 2 electrons follow the alpha particle and make stable helium, or does the larger daughter nucleus become an anion? Also, what do the electrons do in the mixture of fission and alpha decay? With Beryllium-8, does it decay into two alpha particles and 4 lonely electrons, or do you get two helium atoms? If someone could add an entry to Holocron it would be greatly appreciated, thanks.
Can a quasiclassical electron wave packet in elliptic orbit be formed from bound hydrogen-like eigenstates? Position probability densities of eigenstates of hydrogen-like systems have axial symmetry, so that the wavefunction too much resembles the circular orbits in Bohr's model. I'd like to have a demonstration of correspondence principle, where an electron would be localized to look somewhat like a classical particle, and move in an elliptic (non-circular) orbit around a nucleus. It seems though that if we try to make a wave packet too localized, then it'll disintegrate (get scattered by the nucleus) too fast to see anything resembling an elliptic orbit. OTOH, if we take it too spread out, it'll have to be quite far from the nucleus so as to avoid hitting it, and thus it'll have high total energy, which may appear to be above ionization threshold (at least partially), after which it'd be quite hard to analytically calculate evolution of the wave packet. Thus my question is: is it possible to form a more or less localized wave packet, which would (on average) move in an obviously-elliptic orbit (major/minor axes ratio of 4:3 or higher), and only require bound states to fully represent it? If yes, then what properties (FWHM, apocenter, angular momentum etc.) should it have for this to be possible?
eng_Latn
10,307
Buster Blader the Dragon Destroyer Swordsman effect not working. Why doesn't Buster Blader the Dragon Destroyer Swordsman, with DNA Surgery on the field, stop Destiny HERO - Plasma's effect that allows Plasma to take my monster? Oh, BTW, this is in the Duel Links game.
Buster Blader the DDS did not affect Destiny Hero - Decider effect; why so? How come with set to dragons didn't stop 's effect from being activated?
Has a double slit experiment ever been done using a track chamber, or even contemplated? I tried searches and the question has been posed in other fora, but no experiment came up. Track chambers (cloud chambers, bubble chambers, time projection chambers, solid state detectors like the vertex detectors at LHC) give the track of the particle as it ionizes the medium, and could be employed in a geometry after the particle has passed the double slit. The straight track should point back to the slit it came from, and its record could be used as the points on the screen in the classical double slit experiment. The setup as I see it would be the classical setup for single electrons through the double slits, but instead of a "screen" one has a detector and detects the track. It should be a long enough detector to get an accuracy less than the slit separation so it could point back to the slit, as the interslit distance is of the order of 100 microns and detectors are giving accuracies of the order of microns. This experiment, if possible, would resolve the controversy of whether the detection of the slit destroys the interference pattern or the detectors at the slits change the boundary conditions and destroy the interference pattern. An expert's opinion is necessary on whether the experiment is possible, and whether the energies of electrons that show interference with a specific slit separation d are enough to create an accurate track in a solid state detector. If not, a cloud chamber would do, but again the energy of the electron would be important because it would have to pass the air/chamber barrier. It could succeed if the double slits were within a cloud/bubble chamber; the beam count was low (10 to 12 per picture) but it was spread in the vertical direction. If the beam could be focused on the slits, it should be doable.
eng_Latn
10,308
Why is the spectrum of a star pretty much continuous? I was reading about the development of the quantum theory when I got to the explanation for spectral lines. It's a topic that I've revisited many times but I came up with a question. I know that in some interactions with matter the amount of energy absorbed or transmitted in the form of photons is quantized but then what processes lead to the wide spectrum of frequencies that we observe in an object's electromagnetic spectrum? For example in a star what is going on in the nucleus that we get radiation in most frequencies?
Why is the spectrum obtained from sunlight said to be continuous? My teacher spoke about atomic spectra today, and he explained that, unlike the spectrum obtained by analyzing sunlight, the spectra of atoms are not continuous. I have a question about this: even sunlight consists of the radiation emitted by the atoms of the elements which compose the sun, yet its spectrum is continuous, which is in opposition to the statement that atomic spectra are discontinuous. So the spectrum obtained from sunlight is continuous even though it arises from atomic spectra. In order to account for the spectrum obtained from sunlight being continuous and atomic spectra being discontinuous, can we confirm that the sun consists of all those elements (sodium, helium, neon, mercury, etc.) which emit the colors of frequencies belonging to the visible region?
Accelerating electrons via microwaves. In synchrotrons I think they use microwaves to accelerate the electron bunches that fly through. How does putting a microwave through a cavity accelerate an electron? I know that the electric and magnetic fields could and do have an effect on the electron, but if the radiation is moving parallel to the electron, wouldn't the force be perpendicular? Also, how exactly does the microwave radiation look in the vacuum chamber? Are the two ends constant nodes?
eng_Latn
10,309
Maximum Electron Speed Around an Atomic Nucleus. At what element do we hit a wall because of the electrons' relativistic speed? I.e., where can the speed of the outer shells go no faster?
Is this a correct demonstration for why elements above untriseptium cannot exist? With the confirmation that elements 113, 115, 117, and 118 are indeed fundamental elements that are now to be named on the periodic table, the next question is: what is the highest atomic number possible for an element? Feynman had a go at this years ago, and he derived (according to my limited understanding) that above element-137 (informally denoted 'Feynmanium') the electrons in the nearest orbit about the nucleus would be traveling at a velocity greater than the speed of light. (Note I am considering the Bohr model of the atom in this question.) I wished to demonstrate to my friends why this is and I came up with the following explanation. According to Bohr's quantum condition, the angular momentum of an electron about the nucleus is given by $$ L=m_evr_n=n\hslash \implies r_n=\frac{n\hslash}{m_ev},$$ where $m_e$ is the mass of the electron, $v$ its velocity, $n$ an integer, and $r_n$ the radius of the $n$th possible orbit. Because we are concerned with the nearest orbit to the nucleus, $n=1$; and thus $$r=\frac{\hslash}{m_ev}.$$ Now, according to Coulomb's law, if the electron orbits about the nucleus the centripetal motion can be described by $$\frac{Ze^2}{4\pi\epsilon_or}=m_ev^2, $$ where $Z$ denotes the number of protons in the nucleus (the atomic number), and $e$ the elementary charge. Solving for $Z$ and substituting in $r$ from above yields $$Z=\frac{4\pi\epsilon_o \hslash v}{e^2}. $$ But what is $v$? Well, the maximum velocity an electron could ever have is the speed of light, and we wish to find the atomic number associated with an orbiting electron traveling at this speed, so we set $v=c$ and obtain our final result of $$Z=\frac{4\pi\epsilon_o\hslash c}{e^2}\approx 137.521,$$ which implies that for $Z>137$, the electrons at a position of $n=1$ in the Bohr model would have a velocity $>c$; and thus the highest atomic number achievable on the periodic table is 137. Again, I just want to make sure this is a correct method for deriving element-137 before presenting. Perhaps one could explain how relativity plays a role here. I know Feynman used the Dirac equation to get this result...so could anyone (subsequently of course) expatiate on this in a simplistic manner? Thanks!
If I run along the aisle of a bus traveling at (almost) the speed of light, can I travel faster than the speed of light? Let's say I fire a bus through space at (almost) the speed of light in vacuum. If I'm inside the bus (sitting on the back seat) and I run up the aisle of the bus toward the front, does that mean I'm traveling faster than the speed of light? (Relative to Earth that I just took off from.)
eng_Latn
10,310
I want to know some information about synchrotrons and protons.
The people above are very close to correct.

A cyclotron is a form of particle accelerator that uses a large magnetic field to have charged particles travel in a circle while an electric field drives them to travel faster and faster...

A synchrotron is the same thing, only it has a computer chip that changes the electric field so that it can drive the particles faster when they reach levels where relativity plays a role.
Krypton is used in strobe lights.

L is for liters. The gas is sold in liter volumes.
eng_Latn
10,311
This question is on the HSC physics topic From Ideas To Implementation. It was used in a demonstration on discharge tubes (cathode ray tubes).
An induction coil liberates only very high voltages; high voltage electricity does not in itself stimulate the emission of any type of radiation. However, if you use that high voltage to produce beams of high energy electrons, you can start doing damage. A simple X-ray generator would consist of a high voltage supply which would accelerate electrons to a high velocity; this electron beam would be further focused by secondary electromagnets and strike a metal target, and X-rays would be produced at the target as a result of the high velocity electrons striking it.

This is all pretty simple stuff and can be done (with some skill) in a home laboratory. My advice however is DON'T; you will probably fry yourself!!
Try this:

http://en.wikipedia.org/wiki/Category:Laboratory_equipment
eng_Latn
10,312
which model of the atom had electrons travelling in circular orbits?
However, since this does not occur, this model failed as well. 5) Bohr's Atomic model. This model represents the electrons orbiting the nucleus of an atom in circular orbits. These electrons had specific orbits and distinct orbital levels. Electrons that gained energy would jump to higher energy levels and become excited, and as they jumped back down to the ground state, they would emit that energy.
However, since this does not occur, this model failed as well. 5) Bohr's Atomic model. This model represents the electrons orbiting the nucleus of an atom in circular orbits. These electrons had specific orbits and distinct orbital levels. Electrons that gained energy would jump to higher energy levels and become excited, and as they jumped back down to the ground state, they would emit that energy.
eng_Latn
10,313
how long is a microsecond
Millisecond is 1,000th of a second, or 10^-3 seconds. Microsecond is 1,000,000th of a second, or 10^-6 seconds. Nanosecond is 1,000,000,000th of a second, or 10^-9 seconds. Picosecond is 1,000,000,000,000th of a second, or 10^-12 seconds.
A microsecond is the duration of exactly <phone> periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium-133 atom at a temperature of 0 K (13th CGPM (1967-1968), Resolution 1; CR 103).
eng_Latn
10,314
On the Free Boundary Conditions for a Dynamic Shell Model Based on Intrinsic Differential Geometry
Mathematical challenges in shape optimization
Renal angiomyolipoma bleeding in a patient with TSC2/PKD1 contiguous gene syndrome after 17 years of renal replacement therapy.
eng_Latn
10,315
Towards a Clinical Chest Workstation
Active Shape Models-Their Training and Application
No difference in transplant outcomes for local and import pancreas allografts.
yue_Hant
10,316
The Body: An Abstract and Actual Rhetorical Concept
The body has always been an implicit concern for rhetorical studies. This essay suggests that that implicit concern has mostly relied on an abstract, and specific, concept of the body. It is only through bodily difference in contrast to the unspoken, yet specified, white, cisgender, able-bodied, heterosexual male standard that particular bodies come to matter. The essay ends with a discussion of the body of the black civil rights activist, Fannie Lou Hamer, in order to enact a “textual stare” at the field of rhetoric. This stare calls the field to be more attentive to what kinds of rhetorical performance are accepted on their own terms and what kinds deserve scrutiny.
We study advancing front methods for surface reconstruction. We propose a topological framework based on handlebody theory to implement such methods in a simple and robust way. As an example of the application of this framework we show an implementation of the Ball-Pivoting algorithm.
eng_Latn
10,317
THE COMPETITIVE ADVANTAGE OF USING 3D-PRINTING IN LOW-RESOURCE HEALTHCARE SETTINGS
Use of 3D Printing in Model Manufacturing for Minor Surgery Training of General Practitioners in Primary Care
Computer technology and clinical work: still waiting for Godot.
yue_Hant
10,318
Planar Legendrian $\Theta$-graphs
Right-veering diffeomorphisms of compact surfaces with boundary II
Fitting Highway Planar Curve Based on Least Squares Method
kor_Hang
10,319
A probabilistic method is proposed for segmentation of multiple objects that overlap or are in close proximity to one another. A likelihood function is formulated that explicitly models overlapping object appearance. Priors on global appearance and geometry (including shape) are learned from example images. Markov chain Monte Carlo methods are used to obtain samples from a posterior distribution over model parameters from which expectations can be estimated. The method is described in detail for the problem of segmenting femur and tibia in x-ray images. The result is a probabilistic segmentation that quantifies uncertainty so that measurements such as joint space can be made with associated uncertainty.
We describe a method for automatically building statistical shape models from a training set of example boundaries/surfaces. These models show considerable promise as a basis for segmenting and interpreting images. One of the drawbacks of the approach is, however, the need to establish a set of dense correspondences between all members of a set of training shapes. Often this is achieved by locating a set of "landmarks" manually on each training image, which is time consuming and subjective in two dimensions and almost impossible in three dimensions. We describe how shape models can be built automatically by posing the correspondence problem as one of finding the parameterization for each shape in the training set. We select the set of parameterizations that build the "best" model. We define "best" as that which minimizes the description length of the training set, arguing that this leads to models with good compactness, specificity and generalization ability. We show how a set of shape parameterizations can be represented and manipulated in order to build a minimum description length model. Results are given for several different training sets of two-dimensional boundaries, showing that the proposed method constructs better models than other approaches including manual landmarking-the current gold standard. We also show that the method can be extended straightforwardly to three dimensions.
This user guide describes a Python package, PyMC, that allows users to efficiently code a probabilistic model and draw samples from its posterior distribution using Markov chain Monte Carlo techniques.
eng_Latn
10,320
An algorithm to compute a minimal length basis of representative cocycles of cohomology generators for 2D images is proposed. We base the computations on combinatorial pyramids, foreseeing their future extension to 3D objects. In our research we are looking for a more refined topological description of deformable 2D and 3D shapes than the often-used Betti numbers. We define contractions on the object edges toward the interior of the object until the boundaries touch each other, building an irregular pyramid for this purpose. We show a possible use of the algorithm in seeking the minimal cocycles that connect the convex deficiencies on a human silhouette. We used minimality in the number of cocycle edges in the basis, which is a description robust to rotations and noise. Keywords: cohomology; combinatorial pyramids; representative cocycles of cohomology generators.
Many image analysis tasks lead to, or make use of, graph structures that are related through the analysis process with the planar layout of a digital image. The author presents a theory that allows the building of different types of hierarchies on top of such image graphs. The theory is based on the properties of a pair of dual-image graphs that the reduction process should preserve, e.g. the structure of a particular input graph. The reduction process is controlled by decimation parameters, i.e. a selected subset of vertices, called survivors, and a selected subset of the graph's edges, the parent-child connections. It is formally shown that two phases of contractions transform a dual-image graph to a dual-image graph built by the surviving vertices. Phase one operates on the original (neighbourhood) graph, and eliminates all nonsurviving vertices. Phase two operates on the dual (face) graph, and eliminates all degenerated faces that have been created in phase one. The resulting graph preserves the structure of the survivors; it is minimal and unique with respect to the selected decimation parameters. The result is compared with two modified specifications already in use for building stochastic and adaptive irregular pyramids.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
10,321
The aims of this study were to present a method quantifying and visualizing the deformation of subject-specific lateral pterygoid muscles (LPM) during a simulated jaw-opening movement. A normal adult male subject underwent magnetic resonance (MR) scans of the head at three mandibular positions: mandibular rest (M0), medium jaw-opened (M1), and maximum jaw-opened (M2) positions. The 3D models of the LPM were reconstructed from the three sets of MR images. The deformations of each muscle in the two cases (M0->M1 and M1->M2) were quantified in terms of the displacements of region correspondences between the muscle models before and after the mandibular position changed. The 3D models of the subject-specific LPM were reconstructed, and the directions and magnitudes of deformations of each muscle in the two cases were accurately quantified and visualized in the three anatomic planes. The functional activities along the entire body and at specific compartments of subject-specific LPM were quantified and visualised using the quantified 3D deformations of the LPM as a new descriptor. The presented method defined the deformations of the subject-specific LPM, and revealed the anatomic architectural and biomechanical characteristics of the subject-specific LPM appropriately and meaningfully in the simulated jaw-opening movement.
We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
Perfect Quantum Cloning Machines (QCM) would allow one to use quantum nonlocality for arbitrarily fast signaling. However, perfect QCM cannot exist. We derive a bound on the fidelity of QCM compatible with the no-signaling constraint. This bound equals the fidelity of the Bužek-Hillery QCM.
eng_Latn
10,322
The algorithm presented causes the elimination of hidden lines in the representation of a perspective view of concave and convex plane-faced objects on the picture plane. All the edges of the objects are considered sequentially, and all planes which hide every point of an edge are found. The computing time increases roughly as the square of the number of edges. The algorithm takes advantage of a reduced number of concave points and automatically recognizes if only one object with no concave points is considered. In this last case, the result is obtained in a much simpler way.
The "hidden-line problem" for computer-drawn polyhedra is the problem of determining which edges, or parts of edges, of a polyhedra are visible from a given vantage point. This is an important problem in computer graphics, and its fast solution is especially critical for on-line CRT display applications. The method presented here for solving this problem is believed to be faster than previously known methods. An edge classification scheme is described that eliminates at once most of the totally invisible edges. The remaining, potentially visible edges are then tested in paths, which eventually cover the whole polyhedra. These paths are synthesized in such a way as to minimize the number of calculations. Both the case of a cluster of polyhedra and the illumination problem in which a polyhedron is illuminated from a point source of light are treated as applications of the general algorithm. Several illustrative examples are included.
Convex edges are located on convex surfaces of hidden geometry spaces, resulting in shorter geometrical distance than observed. Convex edges reveal the close relationships between two connected nodes and have a key role in many research fields and applications, including social, trust, and protein–protein interaction networks. We first quantitatively define convex edges using hidden geometry space defined by Ricci curvature and show these to be capable of predicting highly interacted edges in social networks. From empirical studies of social networks, we find that convex edges often occur on two connected nodes with similar popularity and allow for frenemy (or versatile node) identification, and that almost all nodes maintain an upper bound of convex edges. These findings present further evidence of the limited attention in an attention economy from a different perspective. Using three well-known synthetic networks, we show that convex edges can reflect the formation of a network from scatter (concave) to aggregation (convex). Taken together, our results show that convex edges are a novel mechanism to investigate social networks from different disciplines and offer insights into many network phenomena.
eng_Latn
10,323
This dissertation is a presentation of methods for the design of geometric shapes that mimic many forms found in a natural environment. It is argued that a skeleton provides an intuitive and interactive specification for these geometric shapes, and that these shapes are well represented mathematically as implicit surfaces. Substantial portions of the dissertation are devoted to the development of techniques that relate the skeleton to the implicit surface, providing for a geometrically smooth result.

New techniques and presentations are given concerning convolution surfaces and conditions under which they are seamless and bulge-free. New techniques are also given to embed a volume within a surface. Models for the human hand and the botanical leaf illustrate the techniques.
We describe a technique to animate volumes using a volumetric skeleton. The skeleton is computed from the actual volume, based on a reversible thinning procedure using the distance transform. Polygons are never computed, and the entire process remains in the volume domain. The skeletal points are connected and arranged in a "skeleton tree", which can be used for articulation in an animation program. The full volume object is regrown from the transformed skeletal points. Since the skeleton is an intuitive mechanism for animation, the animator deforms the skeleton and causes corresponding deformations in the volume object. The volumetric skeleton can also be used for volume morphing, automatic path navigation, volume smoothing and compression/decimation.
Challenging the views of human rights activists, Stoll argues that the Ixils who supported Guatemalan rebels in the early 1980's did so because they were caught in the crossfire between the guerillas and the army, not because revolutionary violence expressed community aspirations.
eng_Latn
10,324
A novel mesh based volume geometry parameterisation method is presented that allows for optimisation of shape and topology simultaneously: a topology inclusive parameterisation. This uses a volume of solid (VOS) technique to describe the geometry by reconstructing surfaces from the volume fraction that is solid in each parameterisation mesh cell. The parameterisation is applied to the shape recovery of the NACA 0012 and RAE 2822 aerofoils and is used in the optimisation for minimum drag of a constrained thickness body in Mach 2 and Mach 4 flow using two optimisation methods, one gradient based and the other agent based. The parameterisation achieves excellent shape recovery of the target aerofoils, and allows a number of multi-body solutions to the drag minimisation problem to be generated, demonstrating the utility of topology inclusive parameterisations in bringing topological changes within easy access of the optimiser.
Aerodynamic optimisation has become an indispensable component for any aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant in many facets of technology. With advancements in computational power, automated design optimisation procedures have become more competent, however, there is an ambiguity and bias throughout the literature with regards to relative performance of optimisation architectures and employed algorithms. This paper provides a well-balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into 6 different optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward certain algorithms by analysing the limitations, drawbacks, and the benefits of the most utilised optimisation approaches. This review provides comprehensive but straightforward insight for non-specialists and reference detailing the current state for specialist practitioners.
We report an unusual case of a devastating multilevel pyogenic spondylitis with paraplegia and soft tissue abscess formation in a previously healthy young man. Methicillin susceptible Staphylococcus aureus (MSSA) was identified as causal pathogen. The infection could only be managed after surgical debridement of all spinal manifestations and a prolonged course of antibiotic therapy. It is possible that delayed surgical debridement of all infection sites fostered the course of the disease.
eng_Latn
10,325
Building an Orthonormal Basis from a 3D Unit Vector Without Normalization
Physically Based Rendering: From Theory to Implementation
Orthonormal Vector Sets Regularization with PDE's and Applications
eng_Latn
10,326
The fetal heart has very thin intra-chamber walls which are often not resolved by ultrasound scanners and may drop out as a result of imaging. In order to measure blood volumes from all chambers in isolation, deformable model approaches were used to segment the chambers and fill in the missing structural information. Three level set algorithms from the fetal cardiac segmentation literature (two without, and one with, the use of a shape prior) were applied to real ultrasound data. The shape prior term was extracted from the shape prior level set and incorporated into the amorphous snakes for a fairer comparison. To our knowledge this is the first time these existing fetal cardiac non-shape-based segmentation algorithms have been modified for shape awareness in this way.
A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero level set of a higher dimensional surface, and evolves the surface such that the zero level set converges on the boundary of the object to be segmented. At each step of the surface evolution, we estimate the maximum a posteriori (MAP) position and shape of the object in the image, based on the prior shape information and the image information. We then evolve the surface globally, towards the MAP estimate, and locally, based on image gradients and curvature. Results are demonstrated on synthetic data and medical imagery in 2D and 3D.
In the setting of the modal logic that characterizes modal refinement over modal transition systems, Boudol and Larsen showed that the formulae for which model checking can be reduced to preorder checking, that is, the characteristic formulae, are exactly the consistent and prime ones. This paper presents general, sufficient conditions guaranteeing that characteristic formulae are exactly the consistent and prime ones. It is shown that the given conditions apply to the logics characterizing all the semantics in van Glabbeek’s branching-time spectrum.
eng_Latn
10,327
We present a novel application workflow to physically produce personalized objects by relying on the sketch-based input metaphor. This is achieved by combining different sketch-based retrieval and modeling aspects and optimizing the output for 3D printing technologies. The workflow starts from a user drawn 2D sketch that is used to query a large 3D shape database. A simple but powerful sketch-based modeling technique is employed to modify the result from the query. Taking into account the limitations of the additive manufacturing process we define a fabrication constraint deformation to produce personalized 3D printed objects.
Additive manufacturing, also known as 3D printing, enables production of complex customized shapes without requiring specialized tooling and fixtures, so mass customization can be realized with larger adoption. The slicing procedure is one of the fundamental tasks for 3D printing, and the slicing resolution has to be very high for fine fabrication, especially in the recently developed Continuous Liquid Interface Production (CLIP) process. The slicing procedure is then becoming the bottleneck in the pre-fabrication process, which can take hours for one model. This becomes even more significant in mass customization, where hundreds or thousands of models have to be fabricated. We observe that the customized products are generally in the same homogeneous class of shape with small variation. Our study finds that the slicing information of one model can be reused for other models in the same homogeneous group under a properly defined parameterization. Experimental results show that the reuse of slicing information gives a maximum 50 times speedup, and slicing drops from more than 90% to less than 50% of the pre-fabrication process.
Face recognition using sketches plays a major role in the field of forensic investigation and the field is considered to be more challenging domain for research work. The sketch forms a visual representation of a face and it can be utilized in many recognition applications for instance in facial expression recognition, retrieval of face, in the field of law enforcement and sketch-to-photo recognition. In this paper, a sketch based face recognition system is designed and processed with three main phases namely face detection to locate the face region in the input image using AdaBoost algorithm. In the next phase, important components which constitute face structure are extracted using geometrical model of a face. Then from each facial component region, a texture local descriptor called Weber Local descriptor features are extracted to obtain significant properties of face. Finally, each input image is compared with the database and classifies the sketches using ANN classifier. The performance of our proposed method is compared with existing methods and proves to be a better approach.
eng_Latn
10,328
Computing the average anatomy and measuring the anatomical variability within a group of subjects are common practices in Computational Anatomy. In this paper, we propose a statistical analysis framework for 2D/3D shapes. At the core of the framework is a parametric shape representation formulated as a concatenation of skeleton points and the discs centered at the points. This shape representation possesses an excellent capability of capturing both global structures and local details. The constructed Riemannian manifold shape space provides a mathematically sound foundation for various groupwise operations, such as calculating the mean shape and conducting structure-specific normalization. Experiments with 2D shapes and 3D human brain structures show the effectiveness of our framework in calculating the distances among different shapes.
The m-rep approach pioneered by Pizer et al. (2003) is a powerful morphological tool that makes it possible to employ features derived from medial loci (skeletons) in shape analysis. This paper extends the medial representation paradigm into the continuous realm, modeling skeletons and boundaries of three-dimensional objects as continuous parametric manifolds, while also maintaining the proper geometric relationship between these manifolds. The parametric representation of the boundary-medial relationship makes it possible to fit shape-based coordinate systems to the interiors of objects, providing a framework for combined statistical analysis of shape and appearance. Our approach leverages the idea of inverse skeletonization, where the skeleton of an object is defined first and the object's boundary is derived analytically from the skeleton. This paper derives a set of sufficient conditions ensuring that inverse skeletonization is well-posed for single-manifold skeletons and formulates a partial differential equation whose solutions satisfy the sufficient conditions. An efficient variational algorithm for deformable template modeling using the continuous medial representation is described and used to fit a template to the hippocampus in 87 subjects from a schizophrenia study with sub-voxel accuracy and 95% mean overlap.
We present a novel framework to treat shapes in the setting of Riemannian geometry. Shapes -- triangular meshes or more generally straight line graphs in Euclidean space -- are treated as points in a shape space. We introduce useful Riemannian metrics in this space to aid the user in design and modeling tasks, especially to explore the space of (approximately) isometric deformations of a given shape. Much of the work relies on an efficient algorithm to compute geodesics in shape spaces; to this end, we present a multi-resolution framework to solve the interpolation problem -- which amounts to solving a boundary value problem -- as well as the extrapolation problem -- an initial value problem -- in shape space. Based on these two operations, several classical concepts like parallel transport and the exponential map can be used in shape space to solve various geometric modeling and geometry processing tasks. Applications include shape morphing, shape deformation, deformation transfer, and intuitive shape exploration.
eng_Latn
10,329
3D meshes are widely used in computer graphics applications for approximating 3D models. When representing complex shapes in a raw data format, meshes consume a large amount of space. Applications calling for compact and fast processing of large 3D meshes have motivated a multitude of algorithms developed to process these datasets efficiently. The concept of multiresolution analysis provides an efficient and versatile tool for digital geometric processing, allowing for numerous applications. In this paper, we survey recent developments in multiresolution methods for 3D triangle meshes. We also show some results of these methods through various applications.
During the last years the concept of multi-resolution modeling has gained special attention in many fields of computer graphics and geometric modeling. In this paper we generalize powerful multiresolution techniques to arbitrary triangle meshes without requiring subdivision connectivity. Our major observation is that the hierarchy of nested spaces which is the structural core element of most multi-resolution algorithms can be replaced by the sequence of intermediate meshes emerging from the application of incremental mesh decimation. Performing such schemes with local frame coding of the detail coefficients already provides effective and efficient algorithms to extract multi-resolution information from unstructured meshes. In combination with discrete fairing techniques, i.e., the constrained minimization of discrete energy functionals, we obtain very fast mesh smoothing algorithms which are able to reduce noise from a geometrically specified frequency band in a multiresolution decomposition. Putting mesh hierarchies, local frame coding and multi-level smoothing together allows us to propose a flexible and intuitive paradigm for interactive detail-preserving mesh modification. We show examples generated by our mesh modeling tool implementation to demonstrate its functionality.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
10,330
This thesis describes and evaluates Dense Surface Models (DSMs), a new technique for building point distribution models of surfaces, from raw input data. DSMs can be used on data from a wide range of surface acquisition systems without preprocessing since they do not require that the surfaces be closed or even locally manifold, and can cope well with holes and spikes in the surfaces. This is an advantage over comparable techniques, which impose such constraints on the input. The core of the DSM algorithm is as follows. A dense correspondence is made between the surfaces using thin-plate spline warping guided by means of a small set of hand-placed landmarks. The area of interest is automatically defined by a threshold on a measure of the closeness of the correspondence at each point. A point distribution model is then built using the vertices from the trimmed and densely-corresponded surfaces. The key benefit of using models of the whole surface is illustrated by the large improvement in classification on face shape that is obtained when using DSMs as compared to landmark-based geometric morphometrics. This is demonstrated by testing classification by gender and also by congenital anomaly where facial growth and form is abnormal. The latter is currently the primary application of DSMs. The use of DSMs for automatically fitting to new scans is evaluated for robustness and accuracy. Methods for analyzing continuous and discrete parameters such as age and gender are presented and evaluated. The incorporation of grey-level information with the shape information is also possible, and is explored.
Represented in a Morphable Model, 3D faces follow curved trajectories in face space as they age. We present a novel algorithm that computes the individual aging trajectories for given faces, based on a non-linear function that assigns an age to each face vector. This function is learned from a database of 3D scans of teenagers and adults using support vector regression. To apply the aging prediction to images of faces, we reconstruct a 3D model from the input image, apply the aging transformation on both shape and texture, and then render the face back into the same image or into images of other individuals at the appropriate ages, for example images of older children. Among other applications, our system can help to find missing children.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
10,331
In the statistical analysis of shape, a goal beyond the analysis of static shapes lies in the quantification of the 'same' deformation of different shapes. Typically, shape spaces are modelled as Riemannian manifolds on which parallel transport along geodesics naturally qualifies as a measure for the 'similarity' of deformation. Since these spaces are usually defined as combinations of Riemannian immersions and submersions, only for a few well featured spaces such as spheres or complex projective spaces (which are Kendall's spaces for 2D shapes) can parallel transport along geodesics be computed explicitly. In this contribution a general numerical method to compute parallel transport along geodesics when no explicit formula is available is provided. This method is applied to the shape spaces of closed 2D contours based on angular direction and to Kendall's spaces of shapes of arbitrary dimension. In application to the temporal evolution of leaf shape over a growing period, one leaf's shape-growth dynamics can be applied to another leaf. For a specific poplar tree investigated it is found that leaves of initially and terminally different shape evolve rather in parallel, i.e. with comparable dynamics.
A venation skeleton-driven method for modeling and animating plant leaf wilting is presented. The proposed method includes five principal processes. Firstly, a three-dimensional leaf skeleton is constructed from a leaf image, and the leaf skeleton is further used to generate a detailed mesh for the leaf surface. Then a venation skeleton is generated interactively from the leaf skeleton. Each vein in the venation skeleton consists of a segmented vertex string. Thirdly, each vertex in the leaf mesh is bound to the nearest vertex in the venation skeleton. We then deform the venation skeleton by controlling the movement of each vertex in the venation skeleton, rotating it around a fixed vector. Finally, the leaf mesh is mapped to the deformed venation skeleton, such that the deformation of the mesh follows the deformation of the venation skeleton. The proposed techniques have been applied to simulate plant leaf surface deformation resulting from biological responses of plant wilting.
We present a novel framework to treat shapes in the setting of Riemannian geometry. Shapes -- triangular meshes or more generally straight line graphs in Euclidean space -- are treated as points in a shape space. We introduce useful Riemannian metrics in this space to aid the user in design and modeling tasks, especially to explore the space of (approximately) isometric deformations of a given shape. Much of the work relies on an efficient algorithm to compute geodesics in shape spaces; to this end, we present a multi-resolution framework to solve the interpolation problem -- which amounts to solving a boundary value problem -- as well as the extrapolation problem -- an initial value problem -- in shape space. Based on these two operations, several classical concepts like parallel transport and the exponential map can be used in shape space to solve various geometric modeling and geometry processing tasks. Applications include shape morphing, shape deformation, deformation transfer, and intuitive shape exploration.
eng_Latn
10,332
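The first record above notes that parallel transport along geodesics has an explicit formula only for a few well-featured spaces such as spheres. For the unit 2-sphere that closed form is standard: transporting a tangent vector v from x to y along the minimizing great circle gives v - (<v, y> / (1 + <x, y>)) (x + y). A small self-checking sketch (textbook formula, not code from either paper):

import numpy as np

def sphere_transport(x, y, v):
    # Parallel transport of tangent vector v at x to the point y along the
    # minimizing great-circle geodesic of the unit sphere; assumes x != -y.
    return v - np.dot(v, y) / (1.0 + np.dot(x, y)) * (x + y)

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 0.0])        # geodesic velocity at x
print(sphere_transport(x, y, v))     # [-1. 0. 0.], the geodesic velocity at y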
There are two kinds of Bezier patches which are represented by different base functions, namely the triangular Bezier patch and the rectangular Bezier patch. In this paper, two results about these patches are obtained by employing functional compositions via shifting operators. One is the composition of a rectangular Bezier patch with a triangular Bezier function of degree 1, the other is the composition of a triangular Bezier patch with a rectangular Bezier function of degree 1×1. The control points of the resultant patch in either case are the linear convex combinations of the control points of the original patch. With the shifting operators, the respective procedure becomes concise and intuitive. The potential applications of the two results include conversions between two kinds of Bezier patches, exact representation of a trimmed surface, natural extension of original patches, etc.
Functional composition can be computed efficiently, robustly, and precisely over polynomials and piecewise polynomials represented in the Bezier and B-spline forms (DeRose et al., 1993) [13], (Elber, 1992) [3], (Liu and Mann, 1997) [14]. Nevertheless, the applications of functional composition in geometric modeling have been quite limited. In this work, as a testimony to the value of functional composition, we first recall simple applications to curve-curve and curve-surface composition, and then more extensively explore the surface-surface composition (SSC) in geometric modeling. We demonstrate the great potential of functional composition using several non-trivial examples of the SSC operator, in geometric modeling applications: blending by composition, untrimming by composition, and surface distance bounds by composition.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
10,333
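The simplest instance of the compositions in the record above is a Bezier curve composed with the degree-1 function lambda(u) = (1 - u)a + ub, which yields the subcurve over [a, b]; its control points are convex combinations of the original ones, exactly as stated for the patch cases. A sketch via de Casteljau subdivision (a standard route, not the paper's shifting-operator derivation; names are illustrative):

import numpy as np

def de_casteljau_split(P, t):
    # Split the Bezier control polygon P ((n+1) x d array) at parameter t,
    # returning the control polygons of the two resulting subcurves.
    left, right = [P[0]], [P[-1]]
    Q = P.astype(float)
    while len(Q) > 1:
        Q = (1 - t) * Q[:-1] + t * Q[1:]
        left.append(Q[0])
        right.append(Q[-1])
    return np.array(left), np.array(right)[::-1]

def compose_with_linear(P, a, b):
    # Control points of B(lambda(u)) with lambda(u) = (1 - u) * a + u * b,
    # i.e. the subcurve of B over [a, b]; assumes 0 <= a <= b <= 1 and a < 1.
    _, right = de_casteljau_split(P, a)                      # restrict to [a, 1]
    left, _ = de_casteljau_split(right, (b - a) / (1 - a))   # then to [a, b]
    return left

P = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
print(compose_with_linear(P, 0.25, 0.75))   # each row is a convex combination of rows of P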
The spatial relationship between different objects plays an important role in defining the context of scenes. Most previous 3D classification and retrieval methods take into account either the individual geometry of the objects or simple relationships between them such as the contacts or adjacencies. In this article we propose a new method for the classification and retrieval of 3D objects based on the Interaction Bisector Surface (IBS), a subset of the Voronoi diagram defined between objects. The IBS is a sophisticated representation that describes topological relationships such as whether an object is wrapped in, linked to, or tangled with others, as well as geometric relationships such as the distance between objects. We propose a hierarchical framework to index scenes by examining both the topological structure and the geometric attributes of the IBS. The topology-based indexing can compare spatial relations without being severely affected by local geometric details of the object. Geometric attributes can also be applied in comparing the precise way in which the objects are interacting with one another. Experimental results show that our method is effective at relationship classification and content-based relationship retrieval.
A general and direct method for computing the Betti numbers of a finite simplicial complex in S^d is given. This method is complete for d ⩽ 3, where versions of this method run in time O(nα(n)) and O(n), n the number of simplices. An implementation of the algorithm is applied to alpha shapes, which is a novel geometric modeling tool.
Blunt abdominal trauma rarely leads to gastrointestinal injury in children, and isolated gastric rupture is an even rarer presentation. We report a case of isolated gastric rupture after a fall from height in a three-year-old male child.
eng_Latn
10,334
A Genetic Neural Network Predictive Model
The BP algorithm is typical in ANN research. This paper uses genetic algorithms to train a feedforward neural network, forming a neural network predictive model based on the genetic algorithm and the BP algorithm. Experimental results show that the predictive model performs well.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
yue_Hant
10,335
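The predictive model in the record above combines a genetic algorithm with BP training. Its GA half can be sketched as evolving the weight vector of a small feedforward network by selection, crossover, and mutation; the sketch below is GA-only on a toy XOR task, with the BP fine-tuning stage omitted and every hyperparameter an illustrative guess:

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])               # XOR targets

def forward(w, X):
    # A 2-2-1 feedforward net; w packs all 9 weights and biases.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)    # negated mean squared error

pop = rng.normal(size=(60, 9))
for _ in range(500):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-20:]]        # truncation selection
    pa = elite[rng.integers(0, 20, size=60)]     # draw parents from the elite
    pb = elite[rng.integers(0, 20, size=60)]
    mask = rng.random((60, 9)) < 0.5             # uniform crossover,
    pop = np.where(mask, pa, pb) + rng.normal(scale=0.1, size=(60, 9))  # then Gaussian mutation

best = max(pop, key=fitness)
print(np.round(forward(best, X), 2))             # should approach [0. 1. 1. 0.]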
Virtually cyclic dimension for 3-manifold groups
Let G be the fundamental group of a connected, closed, orientable 3-manifold M. We explicitly compute its virtually cyclic geometric dimension. Among the tools we use are the prime and JSJ decompositions of M, several push-out type constructions, as well as some Bredon cohomology computations.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
eng_Latn
10,336
[Planning for brachytherapy using a 3D-simulation model].
A 3D-simulation model made with a milling system was applied to HDR-brachytherapy. The 3D-simulation model is used to simulate the 3D-structure of the lesion and the surrounding organs before the actual catheterization for brachytherapy. The first case was recurrent prostatic cancer in a 61-year-old man. The other case was lymph node recurrence of a 71-year-old woman's upper gum cancer. In both cases, the 3D-simulation model was very useful to simulate the 3D-conformation, to plan the treatment process and to avoid the risk accompanying treatment.
Based on the advancing front method, an automated triangular mesh generation algorithm for arbitrarily shaped planar regions is presented in this paper. The proposed algorithm has the advantages of adaptation to complicated boundaries, a high-quality boundary mesh, and full automation. Approaches such as topological refinement and smoothing are incorporated. Programming in Visual C++ is relatively simple, using MFC library functions to manage the lists. Finally, application examples are included to demonstrate the capabilities and high reliability of the algorithm.
eng_Latn
10,337
Automatic Tooth Segmentation of Dental Mesh Based on Harmonic Fields
A survey on Mesh Segmentation Techniques
Severe Short Stature in an Adolescent Male with Prader-Willi Syndrome and Congenital Adrenal Hyperplasia: A Therapeutic Conundrum
eng_Latn
10,338
Unique signatures of histograms for local surface description
Surface shape and curvature scales
ANATOMICAL STUDIES OF THE HUMAN CLITORIS
eng_Latn
10,339
A Method for 3D Reconstruction from Complicated Contours between Slices
A novel method for 3D reconstruction from complicated contours between slices is proposed in this paper. The method can properly solve the one-to-one and one-to-many branching contour matching problems, based on criteria of spatial distance, geometric similarity, and the smooth appearance of the reconstructed 3D surface. In the one-to-one contour matching process, by combining the nearest-distance and bidirectional neighborhood geometric similarity criteria, the threshold-sensitivity problem that exists in other algorithms is avoided, and thus more accurate and automatic matching is achieved. For branching contour matching, different techniques are employed for different situations, and satisfactory results are obtained in the final reconstruction.
The authors of Innovative Approaches to the Complex Care of Modern and Contemporary Art relate complex conservation practices to an awareness of the need for a multidimensional approach to the care of modern and contemporary art. Maintaining a dialogue with history, they boldly confront the typical patterns and accepted evolution of the theory of conservation by taking the wider perspective, including the most recent history of work-of-art documentation, interviews with artists, records of image and sound of performance, consent to e-installation, emulation, etc., while bearing in mind primum non nocere as the first principle, as well as various legal issues.
eng_Latn
10,340
A new angle of view on corporate governance: strengthening the external mechanisms of corporate governance
In our country, it is clear that the internal mechanisms of corporate governance are insufficient. This gap should be filled by external mechanisms of corporate governance. Starting from the ways in which the external corporate governance mechanisms of our companies have weakened, this paper analyses some of the underlying reasons and, finally, summarizes countermeasures for strengthening external mechanisms of corporate governance so that they become, in practice, a forceful complement to the internal mechanisms of corporate governance.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
eng_Latn
10,341
Real-Time and interactive browsing of massive mesh models
We present an efficient method for out-of-core construction and real-time interaction of massive mesh models. Our method uses face clustering on an octree grid to simplify the model and build a Level-of-Detail (LOD) tree for it. Each octree node leads to a local LOD tree. All the top layers of the local LOD trees are combined to form the basis of the global LOD tree. At runtime, the LOD tree is traversed top-down to choose appropriate local LOD trees given the current viewpoint parameters. The system performance can be dramatically improved by using hierarchical culling techniques such as view-frustum culling and back-face culling. The efficiency and scalability of the approach are demonstrated with extensive experiments on massive models on current personal computer platforms.
Visualization of 3D medical images usually requires segmentation of the image data in order to present interpretable resulting images. A method based on pattern recognition in multimodality images is proposed here. Results achieved by testing the proposed method on images from the Visible Human Project are presented.
eng_Latn
10,342
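The LOD construction in the record above starts from face clustering on an octree grid. Its core quantize-and-merge step can be sketched with a single uniform grid level standing in for one octree depth (illustrative only; the full octree hierarchy, local LOD trees, and culling are not shown):

import numpy as np

def cluster_simplify(verts, faces, cell):
    # Snap every vertex to a uniform grid cell, replace each cluster of
    # vertices by its centroid, and drop faces that collapse.
    keys = np.floor(verts / cell).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()                            # guard against NumPy shape quirks
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inv, verts)                  # accumulate cluster sums
    np.add.at(counts, inv, 1)
    new_verts = sums / counts[:, None]
    f = inv[faces]                               # re-index faces to clusters
    keep = (f[:, 0] != f[:, 1]) & (f[:, 1] != f[:, 2]) & (f[:, 2] != f[:, 0])
    return new_verts, f[keep]

rng = np.random.default_rng(2)
verts = rng.random((1000, 3))
faces = rng.integers(0, 1000, size=(2000, 3))
v2, f2 = cluster_simplify(verts, faces, cell=0.25)
print(len(verts), '->', len(v2), 'vertices;', len(faces), '->', len(f2), 'faces')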
TreeSha: 3D shape retrieval with a tree graph representation based on the autodiffusion function topology
In this paper we present a new method for shape description and matching based on a tree representation built upon the scale-space analysis of maxima of the Autodiffusion function (ADF). The use of the Heat Kernel based approach makes the method invariant to articulated deformations. By coupling maxima of the Autodiffusion function with the related basins of attraction, it is possible to link the information at different scales, encoding spatial relationships in a tree structure. Furthermore, texture information can be easily included in the descriptor by adding regional color histograms to the node attributes of the tree. Dedicated graph kernels have been designed to evaluate shape dissimilarity from the obtained representations using structural, geometric and color information. Preliminary experiments performed on the SHREC 2013 non-rigid textured dataset showed very good retrieval performance.
In this paper six distribution-free (nonparametric) stereological methods for the solution of Wicksell's corpuscle problem—i.e. the determination of the distribution of diameters of spheres embedded in an opaque specimen from the diameters of their profiles on plane sections—are compared as regards their numerical stability, sensitivity to underlying distributions and certain error criteria. The study is based on the results of simulation studies for several types of distribution (one-point, normal, exponential, logarithmic normal) of sphere diameters. Recommendations are suggested for the choice of methods, sample size and the optimal number of classes for grouping sample data.
eng_Latn
10,343
X-ray tomography for the visualization of monomer and polymer filling inside wood and stone
The sorption of liquid materials inside porous stones and wood is an important parameter in industrial material testing and cultural heritage conservation. In the latter case, a suitable polymer can be used for both consolidation and conservation, applied either in its final form or as its parent monomer, which is subsequently allowed to polymerize in situ by the classical method or by frontal polymerization. In this paper a recently developed methodology based on X-ray tomography is presented. This technique has been applied to different types of wood and stone. The gradient of penetration has also been studied. Some of the results obtained are reported and discussed.
We build on state of the art methods for multiresolution embedded coding of images, such as Said and Pearlman's (1996) set partitioning in hierarchical trees, and combine them with ideas for 3D objects modelling with subdivision surfaces, to obtain a new technique for hierarchical 3D model coding. The compression ratios we obtain are better than or similar to previously reported ones but, perhaps more importantly, the truly hierarchical coding of 3D objects we propose allows their efficient multiresolution animation. This kind of technique could have a major impact on VRML and MPEG-4, the two ISO standards that now deal with the coding of 3D objects, which are in both cases static and linearly approximated by polygonal meshes. In future versions of those standards, that will have to address the coding of dynamic 3D objects, these will most likely be modelled with higher order primitives such as subdivision surfaces.
eng_Latn
10,344
Boundary mapping of 3-dimensional regions
The problem of mapping the boundary of a 3-dimensional region is tackled in this paper. The 3-dimensional region can be interpreted as a representative enclosure set contained within a static boundary of contaminants spread in the environment. A novel swarm intelligence based algorithm to map this 3-dimensional boundary is proposed in this paper. It has been shown that the Glowworm Swarm Optimization (GSO) algorithm is capable of simultaneously localizing multiple sources present in the environment. This algorithm has been significantly modified for the purpose of mapping the boundary of 3-dimensional regions. These modifications lead to a spreading behavior of the swarm as it nears the boundary and help it to position its members on the surface in 3-D space so as to map it to the maximum extent possible. Four candidate examples are considered, and the simulation results obtained are promising.
Surface reconstruction from point cloud is of great practical importance in computer graphics. Existing methods often realize reconstruction via a few phases with respective goals, whose integration may not give an optimal solution. In this paper, to avoid the inherent limitations of multi-phase processing in the prior art, we propose a unified framework that treats geometry and connectivity construction as one joint optimization problem. The framework is based on dictionary learning in which the dictionary consists of the vertices of the reconstructed triangular mesh and the sparse coding matrix encodes the connectivity of the mesh. The dictionary learning is formulated as a constrained ℓ2,q-optimization (0
eng_Latn
10,345
Basic Three-dimensional Objects Constructed with
In this paper we propose several algorithms to construct the surfaces of three basic three-dimensional objects using 2-simplex meshes: the sphere, the cylinder, and the torus. The algorithm for each basic model consists of three steps: creating vertices, edges, and faces. A simplex mesh is a discrete model representation with constant vertex connectivity. Each vertex of a 2-simplex mesh is connected to three, and only three, neighbors. These basic objects can be used as initial models to perform a deformation of their surfaces. The deformation fits these initial models to another target model.
This article discusses the evolution and current development of the model of human occupation, a conceptual tool designed to enhance the clinical reasoning skills of occupational therapists. A brief overview is provided of a number of conceptual forces in American occupational therapy which preceded and led up to the development of the model; this is followed by a description of the model, its intended clinical use and its implications for British occupational therapy.
eng_Latn
10,346
Sketch-based wrinkle generation for three-dimensional virtual garment prototyping
Wrinkles and folds are the most important properties determining the style of a garment. In this paper, we propose a sketch-based method to generate arbitrary wrinkle shapes for three-dimensional (3D) garment prototyping. The user draws wrinkle strokes on the original garment model in the front view and the back view. These two-dimensional strokes are then converted into 3D shapes through mesh deformations, including Loop subdivision, Laplacian mesh optimization, and mean-value encoding/decoding. Various examples validate the effectiveness of the proposed method, which can be regarded as a novel approach to 3D garment prototyping.
Traditionally, corners are found along step edges. In this paper we present an alternative approach: corners along ridges/troughs and at local minima. These features seem to be more reliable for tracking. A new approach for sub-pixel localization of these corners is suggested, using a local approximation of the image surface.
eng_Latn
10,347
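Of the mesh deformations named in the wrinkle pipeline above, Laplacian mesh optimization is the most compact to illustrate. A uniform-weight sketch on an arbitrary neighbor structure (toy version; real systems typically use cotangent weights plus constraints from the sketched strokes):

import numpy as np

def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    # Umbrella-operator Laplacian smoothing: each iteration moves every
    # vertex a fraction lam toward the average of its one-ring neighbors.
    V = verts.astype(float).copy()
    for _ in range(iters):
        avg = np.array([V[list(nbrs)].mean(axis=0) for nbrs in neighbors])
        V += lam * (avg - V)
    return V

# Toy example: a noisy closed polyline, each vertex linked to its two neighbors.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
rng = np.random.default_rng(4)
verts = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.05, (40, 2))
neighbors = [((i - 1) % 40, (i + 1) % 40) for i in range(40)]
print(laplacian_smooth(verts, neighbors)[:3])    # smoothed coordinates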
Two‐dimensional adaptive mesh generation
In this paper, a two-dimensional adaptive elliptic mesh generation system, derived from the Ryskin and Leal (RL) orthogonal mesh generation system based on the orthogonal condition (orthogonality) and the cell-area equal-distribution principle (adaptivity), is presented. The proposed generation system takes into account not only the mesh orthogonality and adaptivity but also the mesh smoothness, by adopting a method in which the distortion functions are determined by both the scale factors and the averaged scale factors of the constant mesh lines. Examples and applications show that the proposed generation system is effective and easy to use.
An efficient finite difference formulation based on the direct discretization of the integral Maxwell equations is proposed for accurate analysis of 3-D waveguiding structures. The results of calculating the electric field lines in the cross section of an anisotropic insert in a rectangular waveguide and the scattering coefficient of electromagnetic waves in the rectangular waveguide with an anisotropic insert are presented.
eng_Latn
10,348
A Study on the Two-Dimensional Automatic Mesh Generation Programming
This paper is concerned with a program for the automatic mesh generation of 2-dimensional domains containing curved boundaries and holes. The program implements a new vertical-line drawing method. The method starts with a 4-subdivision of the problem domain and the classification of the cross points of grid lines and boundaries. In general, a new node is generated on the vertical to the line connecting the two intersections of a boundary with two grid lines. A node very close to the boundary is moved onto the boundary. Automatic mesh generation composed of only rectangular elements is achieved by this procedure. The boundaries are piecewise curves composed of lines, circles, arcs, and free curves. The free curves are generated in B-spline form. Although some bad elements appeared for complex boundaries, it was possible to obtain acceptable rectangular elements for the given boundaries.
The provided source code is the result of our efforts in replicating Epstein’s Demographic Prisoner’s Dilemma. The simulation model is written in Repast/J 3.1.
eng_Latn
10,349
I am just watching the Shaders intro video from here, where the YouTube presenter says that 3 vertices make up a triangle and "could generate a couple hundred fragments based on how much of the screen that triangle occupies". The first statement is super easy to understand, but the next one mentions a fragment. What actually is a fragment? When we join the three vertices, a face appears; is that face called a fragment? Then how can 3 vertices generate hundreds of fragments?
What is a fragment in a fragment shader? Wikipedia says that: In general, a fragment can be thought of as the data needed to shade the pixel, plus the data needed to test whether the fragment survives to become a pixel (depth, alpha, stencil, scissor, window ID, etc.) So is it textures, vertices or something else?
"A flea hops randomly on the vertices of a triangle with vertices labeled 1,2 and 3, hopping to each of the other vertices with equal probability. If the flea starts at vertex 1, find the probability that after n hops the flea is back to vertex 1." Could someone provide a hint to help me start this?
eng_Latn
10,350
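To make the question in the record above concrete: a fragment is the rasterizer's per-pixel (or per-sample) work item produced when the triangle's area is filled in, so the fragment count scales with the covered screen area, not with the 3 vertices. A toy count of covered pixel centers (assuming plain pixel-center sampling; ignores clipping, fill rules, and multisampling):

def fragment_count(tri, width, height):
    # Count pixel centers inside a screen-space triangle: roughly how many
    # fragments rasterization would hand to the fragment shader.
    def edge(a, b, p):   # twice the signed area of triangle (a, b, p)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    count = 0
    for py in range(height):
        for px in range(width):
            p = (px + 0.5, py + 0.5)
            w0 = edge(tri[1], tri[2], p)
            w1 = edge(tri[2], tri[0], p)
            w2 = edge(tri[0], tri[1], p)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                count += 1
    return count

print(fragment_count([(2, 2), (60, 10), (20, 45)], 64, 64))  # about 1175 fragments from 3 vertices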
Hard-surface modeling of phone
Adding holes simultaneously within mesh
There are no compact minimal surfaces
eng_Latn
10,351
A Survey on 3D CAD model quality assurance and testing tools
Topological and geometric beautification of reverse engineered geometric models
Fixing Geometric Errors on Polygonal Models: A Survey
eng_Latn
10,352
No. 2012-2: Non-parametric texture transfer using MeshMatch.
The space of human body shapes: reconstruction and parameterization from range scans
Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach
eng_Latn
10,353
Introduction to a large-scale general purpose ground truth database: methodology, annotation tool and benchmarks.
Composite Templates for Cloth Modeling and Sketching
Managing congenitally missing lateral incisors. Part 2: tooth-supported restorations.
eng_Latn
10,354
Shape reconstruction from gradient data
An algebraic approach to surface reconstruction from gradient fields
Asymptotic Convergence in Online Learning with Unbounded Delays
eng_Latn
10,355
Generative Face Completion
Object Contour Detection with a Fully Convolutional Encoder-Decoder Network
Folding nano-scale paper cranes–the power of origami and kirigami in metamaterials
deu_Latn
10,356
GIFT: Towards Scalable 3D Shape Retrieval
DeepPano: Deep Panoramic Representation for 3-D Shape Recognition
Pharmacokinetic study of Noni fruit extract.
kor_Hang
10,357
Algorithms for 3D Shape Scanning with a Depth Camera
Shape from Shading: A Survey
Evaluation of conventional therapeutic methods versus maggot therapy in the evolution of healing of tegumental injuries in Wistar rats with and without diabetes mellitus
eng_Latn
10,358
Sketch-based 3-D modeling for piecewise planar objects in single images
Separation of Line Drawings Based on Split Faces for 3D Object Reconstruction
The International Heart Transplant Survival Algorithm (IHTSA): A New Model to Improve Organ Sharing and Survival
eng_Latn
10,359
A Hole-Filling Algorithm for Triangular Meshes Using Local Radial Basis Function
Fast Digital Image Inpainting
Targeted advertising and advertising avoidance
eng_Latn
10,360
Fast and robust fixed-point algorithms.
Principal components, minor components, and linear neural networks.
Definitions of groove and hollowness of the infraorbital region and clinical treatment using soft-tissue filler
eng_Latn
10,361
Precision of 3D body scanners
Three-dimensional body scanning: methods and applications for anthropometry
Double Hanging with Single Ligature: An Unusual Method in Suicide Pact.
eng_Latn
10,362
Image segmentation for lung region in chest X-ray images using edge detection and morphology
Lung Segmentation in Chest Radiographs Using Anatomical Atlases With Nonrigid Registration
Networking Models in Flying Ad-Hoc Networks (FANETs): Concepts and Challenges
eng_Latn
10,363
Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis
Screened Poisson surface reconstruction
SEIPS 2.0: a human factors framework for studying and improving the work of healthcare professionals and patients
eng_Latn
10,364
A system for high-volume acquisition and matching of fresco fragments: reassembling Theran wall paintings
Least-Squares Fitting of Two 3-D Point Sets
Random subspace method for multivariate feature selection
eng_Latn
10,365
FiberMesh: designing freeform surfaces with 3D curves
Interactive multiresolution mesh editing
Artificial Intelligence Approaches to Dynamic Project Success Assessment Taxonomic
eng_Latn
10,366
Compatible Embedding for 2D Shape Animation
Recognition of shapes by editing their shock graphs
Mesh editing with Poisson-based gradient field manipulation
eng_Latn
10,367
Towards flattenable mesh surfaces
Implicit fairing of irregular meshes using diffusion and curvature flow
MIPS: An Efficient Global Parametrization Method
eng_Latn
10,368
Automatic tooth segmentation of dental mesh using a transverse plane
Snake-Based Segmentation of Teeth from Virtual Dental Casts
Using a web-based survey tool to undertake a Delphi study: application for nurse education research.
eng_Latn
10,369
On fast surface reconstruction methods for large and noisy point clouds
A fast and efficient projection-based approach for surface reconstruction
Exploiting the web of data in model-based recommender systems
eng_Latn
10,370
Silhouette Extraction in Hough Space
A Developer's Guide to Silhouette Algorithms for Polygonal Models
Illustrating smooth surfaces
eng_Latn
10,371
Abstraction of man-made shapes
3D collage: expressive non-realistic modeling
The run chart: a simple analytical tool for learning from variation in healthcare processes
eng_Latn
10,372
Enhancing geometric maps through environmental interactions
A method for registration of 3-D shapes
Reconstruction of Small Soft Tissue Nasal Defects
eng_Latn
10,373
In vivo 3-dimensional analysis of scapular kinematics: comparison of dominant and nondominant shoulders
User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability
An Energy Efficient MAC Protocol for Wireless Passive Sensor Networks
eng_Latn
10,374
Shape-based recognition of 3D point clouds in urban environments
Shape Context: A New Descriptor for Shape Matching and Object Recognition
Role of the aryl hydrocarbon receptor (AhR) in lung inflammation
eng_Latn
10,375
Signature of Geometric Centroids for 3D Local Shape Description and Partial Shape Matching
Topology matching for fully automatic similarity estimation of 3D shapes
THE ROLE OF CONSTRUCTION, INTUITION, AND JUSTIFICATION IN RESPONDING TO ETHICAL ISSUES AT WORK: THE SENSEMAKING-INTUITION MODEL.
eng_Latn
10,376
Sculpting: an interactive volumetric modeling technique
Data structure for soft objects
From Template to Image: Reconstructing Fingerprints from Minutiae Points
eng_Latn
10,377
SnapPaste: an interactive technique for easy mesh composition
Object modelling by registration of multiple range images
Design of Compact Wide Stopband Microstrip Low-pass Filter using T-shaped Resonator
eng_Latn
10,378
Multiview Differential Geometry of Curves
Poisson Surface Reconstruction
Multi-level partition of unity implicits
eng_Latn
10,379
Bayesian Pot-Assembly from Fragments as Problems in Perceptual-Grouping and Geometric-Learning
On solving 2D and 3D puzzles using curve matching
Computable Elastic Distances between Shapes
eng_Latn
10,380
3D Statistical Shape Models Incorporating Landmark-Wise Random Regression Forests for Omni-Directional Landmark Detection
Statistical shape models for 3D medical image segmentation: a review.
The Effect of Music on Human Brain; Frequency Domain and Time Series Analysis Using Electroencephalogram
eng_Latn
10,381
Collision detection for volumetric objects
New algorithms for Euclidean distance transformation of an n-dimensional digitized picture with applications
A method to convert thesauri to SKOS
eng_Latn
10,382
Surface and contour-preserving origamic architecture paper pop-ups
A survey on Mesh Segmentation Techniques
Illustrating smooth surfaces
eng_Latn
10,383
Virtual Anastylosis of Greek Sculpture as Museum Policy for Public Outreach and Cognitive Accessibility
MeshLab: an Open-Source Mesh Processing Tool
Poisson Surface Reconstruction
eng_Latn
10,384
SHREC’16: Partial Matching of Deformable Shapes
Geodesic Convolutional Neural Networks on Riemannian Manifolds
Geographical patterns of malaria transmission based on serological markers for falciparum and vivax malaria in Ratanakiri, Cambodia
eng_Latn
10,385
Freeform Origami Tessellations by Generalizing Resch’s Patterns
Parameterization of Faceted Surfaces for Meshing using Angle-Based Flattening
Parametrization and smooth approximation of surface triangulations
eng_Latn
10,386
Closure-aware sketch simplification
ILoveSketch: as-natural-as-possible sketching system for creating 3d curve models
Investigating monitoring configurations
eng_Latn
10,387
Scalable MCMC for Mixed Membership Stochastic Blockmodels
Graph evolution: Densification and shrinking diameters
All-Hex Mesh Generation via Volumetric PolyCube Deformation
eng_Latn
10,388
An improved fully parallel 3D thinning algorithm.
Building skeleton models via 3-D medial surface/axis thinning algorithms
High isolation 2.4/5.2/5.8 GHz WLAN and 2.5 GHz WiMAX antennas for laptop computer application
eng_Latn
10,389
Tonsillectomy and risk of Parkinson's disease: a Danish nationwide population-based cohort study.
The Danish Civil Registration System as a tool in epidemiology
Combining pixel domain and compressed domain index for sketch based image retrieval
eng_Latn
10,390
Full-body Visible Human Project® female computational phantom and its applications for biomedical electromagnetic modeling
A fast robust algorithm for the intersection of triangulated surfaces
Tofacitinib Citrate for the Treatment of Vitiligo: A Pathogenesis-Directed Therapy.
eng_Latn
10,391
Data-Driven Modeling for Chinese Ancient Architecture
Probabilistic Graphical Models: Principles and Techniques
Fit and diverse: set evolution for inspiring 3D shape galleries
eng_Latn
10,392
Feature line extraction from unorganized noisy point clouds using truncated Fourier series
Multi-scale Features for Approximate Alignment of Point-based Surfaces
Smart network solutions in an amoeboid organism
eng_Latn
10,393
Area and Length Minimizing Flows for Shape Segmentation
Gradient flows and geometric active contour models
Camera-on-rails: automated computation of constrained camera paths
eng_Latn
10,394
Towards flattenable mesh surfaces
Mesh Parameterization With A Virtual Boundary
Automatic Wrinkle Detection Using Hybrid Hessian Filter
eng_Latn
10,395
Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models
Shape Discrimination Using Fourier Descriptors
Bone marrow failure and the telomeropathies.
eng_Latn
10,396
The anatomy of a sales configurator: an empirical study of 111 cases.
Reverse engineering feature models
Automatic and topology-preserving gradient mesh generation for image vectorization
eng_Latn
10,397
Multi-view surface reconstruction using polarization
A method for enforcing integrability in shape from shading algorithms
Fetus-in-fetu: A rare congenital anomaly
eng_Latn
10,398
A search engine for 3D models
A Survey of Shape Analysis Techniques
FUGU: Elastic Data Stream Processing with Latency Constraints
eng_Latn
10,399